|author||intrigeri <email@example.com>||2015-07-01 16:50:04 +0000|
|committer||intrigeri <firstname.lastname@example.org>||2015-07-01 16:50:04 +0000|
Various typo fixing and nitpicking.
Diffstat (limited to 'wiki')
1 files changed, 8 insertions, 8 deletions
diff --git a/wiki/src/blueprint/automated_builds_and_tests/automated_tests_specs.mdwn b/wiki/src/blueprint/automated_builds_and_tests/automated_tests_specs.mdwn
index 3265ad4..fbe5714 100644
@@ -22,7 +22,7 @@ on the activity.
We usually build the _stable_, _devel_, _experimental_,
_feature/jessie_ (+ _testing_ sometimes) and a bunch of other
These numbers are expected to grow once the automated builds are
put in production. It's difficult to guess what the maximum
@@ -36,11 +36,11 @@ more isotesters. If a machine is dedicated to that usage, we can throw
in faster CPUs and run the test suite on bare metal, which would
speed up the test process. That's [[!tails_ticket 9264]].
-So in the discussion, we have to think to a deployment that might
-have two iterations with different computational powers (those
+So in this discussion, we have to think to a deployment that might
+have two iterations with different computational powers (and thus
different amounts of tests/day possible), and the defined
implementation should be modular enough to handle both of them without
-too much changes.
+too many changes.
@@ -62,17 +62,17 @@ and tested:
* for base branches, we could envisage running the full test suite on
every automatically built ISO (every git push and daily builds) if
- we think that is relevant.
+ we think that is relevant;
* for feature branches, we could run the full test suite only on the
  daily builds, and on every git push run either only the automated
  tests related to the branch, and/or a subset of the whole test suite.
-We can also consider testing only the feature branch that are marked
-as ReadyforQA as a beginning, even if that doesn't cover Scenario 2
+We can also consider testing only the feature branches that are marked
+as *Ready for QA* as a beginning, even if that doesn't cover Scenario 2
We can perhaps also find more ways to split the automated test suite into
-faster subsets of feature depending on the context, define priorities
+faster subsets of features depending on the context, define priorities
for built ISOs and/or tests.