- How to parallelize test runs
- How to group test runs
- Strategies for grouping test runs
- How load-balancing of tests works
- What test insights are available on the Dashboard
If your project has a large number of tests, it can take a long time for tests to complete running serially on one machine. Running tests in parallel across many virtual machines can save your team time and money when running tests in Continuous Integration (CI).
Since version 3.1.0, Cypress can run recorded tests in parallel across multiple machines. While parallel tests can technically also run on a single machine, we do not recommend it, since a single machine would require significant resources to run your tests efficiently.
Cypress’ parallelization strategy is file-based, so in order to utilize parallelization, your tests will need to be split across separate files.
Cypress will assign each spec file to an available machine based on our balance strategy. Due to this balance strategy, the run order of the spec files is not guaranteed when parallelized.
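For illustration, here is a minimal sketch of that kind of split, with two hypothetical spec files that could each be assigned to a different machine:

```typescript
// Hypothetical spec files -- the names and tests are placeholders.
// Because each file is a separate spec, each can be load-balanced
// to a different machine.

// cypress/integration/login.spec.ts
describe('login', () => {
  it('rejects an invalid password', () => {
    cy.visit('/login');
    // ... assertions for the login flow
  });
});

// cypress/integration/checkout.spec.ts
describe('checkout', () => {
  it('completes a purchase', () => {
    cy.visit('/checkout');
    // ... assertions for the checkout flow
  });
});
```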
Refer to your CI provider’s documentation on how to set up multiple machines to run in your CI environment.
cypress run --record --key=abc123 --parallel
Running tests in parallel requires the `--record` flag to be passed. This ensures Cypress can properly collect the data needed to parallelize future runs. This also gives you the full benefit of seeing the results of your parallelized tests in our Dashboard Service. If you have not set up your project to record, check out our setup guide.
During parallelization mode, the Cypress Dashboard Service interacts with your CI machines to orchestrate the parallelization of a test run, load-balancing specs across available CI machines by the following process:
- CI machines contact the Cypress Dashboard Service to indicate which spec files to run in the project.
- A machine opts in to receiving a spec file to run by contacting Cypress.
- Upon receiving requests from CI machines, Cypress calculates the estimated duration to test each spec file.
- Based on these estimations, Cypress distributes (load-balances) spec files one-by-one to each available machine in a way that minimizes overall test run time.
- As each CI machine finishes running its assigned spec file, more spec files are distributed to it. This process repeats until all spec files are complete.
- Upon completion of all spec files, Cypress waits for a configurable amount of time before considering the test run as fully complete. This is done to better support grouping of runs.
In short: each Test Runner sends a list of the spec files to the Dashboard Service, and the service sends back one spec at a time to each Test Runner to run.
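As a rough mental model of that flow (this is not the actual Dashboard API, just an illustrative sketch), each machine runs a loop like this:

```typescript
// A simplified, hypothetical model of the pull-based flow -- the real
// coordination happens over the Dashboard Service's own protocol.
type Spec = string;

// Stand-in for the Dashboard Service: hands out one spec per request.
function makeCoordinator(specs: Spec[]) {
  const remaining = [...specs];
  return {
    claimNextSpec: async (): Promise<Spec | undefined> => remaining.shift(),
  };
}

// Each Test Runner repeats: claim a spec, run it, ask again,
// until the coordinator has nothing left to hand out.
async function testRunnerLoop(coordinator: ReturnType<typeof makeCoordinator>) {
  for (let spec = await coordinator.claimNextSpec(); spec; spec = await coordinator.claimNextSpec()) {
    console.log(`running ${spec} on this machine`);
  }
}
```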
Cypress will automatically balance your spec files across the available machines in your CI provider. Cypress calculates which spec file to run based on the data collected from previous runs. This ensures that your spec files run as fast as possible, with no need for manual configuration.
As more and more tests are recorded to the Cypress Dashboard, Cypress can better predict how long a given spec file will take to run. To prevent irrelevant data from affecting the duration prediction, Cypress doesn’t use old historical run data regarding the spec file.
With a duration estimation for each spec file of a test run, Cypress can distribute spec files to available CI resources in descending order of spec run duration. In this manner, the most time-consuming specs start first, which minimizes the overall test run duration.
Duration estimation is done separately for every browser the spec file was tested against. This is helpful since performance characteristics vary by browser, and therefore it is perfectly acceptable to see different duration estimates for each browser a spec file was tested against.
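To make the balancing idea concrete, here is a sketch of a longest-first greedy heuristic. This is not Cypress internals -- the actual implementation hands specs out one at a time as machines finish -- but the net effect is similar:

```typescript
// An illustrative longest-first (greedy) scheduler -- not Cypress internals.
interface SpecEstimate {
  file: string;
  estimatedMs: number; // per-browser duration estimate from prior runs
}

function loadBalance(specs: SpecEstimate[], machineCount: number): string[][] {
  // Start the most time-consuming specs first...
  const longestFirst = [...specs].sort((a, b) => b.estimatedMs - a.estimatedMs);
  const assignments: string[][] = Array.from({ length: machineCount }, () => []);
  const busyUntilMs = new Array<number>(machineCount).fill(0);

  for (const spec of longestFirst) {
    // ...and always hand the next spec to the machine expected to free up soonest.
    const machine = busyUntilMs.indexOf(Math.min(...busyUntilMs));
    assignments[machine].push(spec.file);
    busyUntilMs[machine] += spec.estimatedMs;
  }
  return assignments;
}

// e.g. loadBalance([{ file: 'actions.spec.js', estimatedMs: 14000 }, /* ... */], 2)
```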
In this example, a single machine runs a job named `1x-electron`, defined in the project's `circle.yml` file. Cypress runs all 19 spec files one by one alphabetically in this job. It takes 1:51 to complete all of the tests.
```text
1x-electron, Machine #1
--------------------------------
-- actions.spec.js (14s)
-- aliasing.spec.js (1s)
-- assertions.spec.js (1s)
-- connectors.spec.js (2s)
-- cookies.spec.js (2s)
-- cypress_api.spec.js (3s)
-- files.spec.js (2s)
-- local_storage.spec.js (1s)
-- location.spec.js (1s)
-- misc.spec.js (4s)
-- navigation.spec.js (3s)
-- network_requests.spec.js (3s)
-- querying.spec.js (1s)
-- spies_stubs_clocks.spec.js (1s)
-- traversal.spec.js (4s)
-- utilities.spec.js (3s)
-- viewport.spec.js (3s)
-- waiting.spec.js (5s)
-- window.spec.js (1s)
```
Notice that the individual spec run times add up to only 0:55, less than the 1:51 total time for the run to complete. There is extra time in the run for each spec: starting the browser, encoding and uploading the video to the Dashboard, and requesting the next spec to run.
When we run the same tests with parallelization, Cypress uses its balance strategy to order the specs to run based on each spec's previous run history. During the same CI run as above, we ran all tests again, but this time with parallelization across 2 machines. This job was named `2x-electron` in the project's `circle.yml` file and it finished in 59 seconds.
```text
2x-electron, Machine #1, 9 specs    2x-electron, Machine #2, 10 specs
--------------------------------    -----------------------------------
-- actions.spec.js (14s)            -- waiting.spec.js (6s)
-- traversal.spec.js (4s)           -- navigation.spec.js (3s)
-- misc.spec.js (4s)                -- utilities.spec.js (3s)
-- cypress_api.spec.js (4s)         -- viewport.spec.js (4s)
-- cookies.spec.js (3s)             -- network_requests.spec.js (3s)
-- files.spec.js (3s)               -- connectors.spec.js (2s)
-- location.spec.js (2s)            -- assertions.spec.js (1s)
-- querying.spec.js (2s)            -- aliasing.spec.js (1s)
-- local_storage.spec.js (1s)       -- spies_stubs_clocks.spec.js (1s)
                                    -- window.spec.js (1s)
```
The difference in running times and machines used is very clear when looking at the Machines View on the Dashboard. Notice how the run parallelized across 2 machines automatically ran all specs based on their duration, while the run without parallelization did not.
Parallelizing our tests across 2 machines saved us almost 50% of the total run time (from 1:51 down to 0:59), and we can further decrease the build time by adding more machines.
`cypress run` calls can be labeled and associated to a single run by passing in the `--group <name>` flag, where `name` is an arbitrary reference label. The group name must be unique within the associated test run.
For multiple runs to be grouped into a single run, the CI machines must share a common CI build ID environment variable. Typically these CI machines run in parallel or within the same build workflow or pipeline, but Cypress parallelization is not required to group runs; grouping can be used independently of Cypress parallelization.
You can test your application against different browsers and view the results under a single run within the Dashboard. Below, we name our groups the same name as the browser being tested:
The first group can be called `Windows/Chrome-69`:
cypress run --record --group Windows/Chrome-69 --browser chrome
The second group can be called `Mac/Chrome-70`:
cypress run --record --group Mac/Chrome-70 --browser chrome
The third group can be called `Linux/Electron`. Electron is the default browser used in Cypress runs:
cypress run --record --group Linux/Electron
We also have the power of Cypress parallelization with our groups. For the sake of demonstration, let’s run a group to test against Chrome with 2 machines, a group to test against Electron with 4 machines, and another group to test against Electron again, but only with one machine:
cypress run --record --group 1x-electron
cypress run --record --group 2x-chrome --browser chrome --parallel
cypress run --record --group 4x-electron --parallel
The `1x`, `2x`, and `4x` group prefixes used here are an adopted convention to indicate the level of parallelism for each run; they are not required or essential.
The number of machines dedicated to each `cypress run` call is based on your CI configuration for the project.
Labeling these groups in this manner helps us later when we review our test runs in the Cypress Dashboard, as shown below:
Let’s say you have an application with a customer-facing portal, a guest-facing portal, and an administration-facing portal. You could organize and test these three parts of your application within the same run:
- One group can be called `package/admin`:
cypress run --record --group package/admin --spec 'cypress/integration/packages/admin/**/*'
- Another can be called `package/customer`:
cypress run --record --group package/customer --spec 'cypress/integration/packages/customer/**/*'
- The last group can be called `package/guest`:
cypress run --record --group package/guest --spec 'cypress/integration/packages/guest/**/*'
This pattern is especially useful for projects in a monorepo. Each segment of the monorepo can be assigned its own group, and larger segments can be parallelized to speed up their testing.
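For instance, a small Node script could kick off one grouped run per monorepo segment. The segment names and spec paths below are hypothetical placeholders:

```typescript
// Sketch: launch one grouped cypress run per monorepo segment.
// Segment names and spec paths are placeholders for your own layout.
import { execSync } from 'child_process';

const segments = ['admin', 'customer', 'guest'];

for (const segment of segments) {
  execSync(
    `cypress run --record --group package/${segment} ` +
      `--spec 'cypress/integration/packages/${segment}/**/*'`,
    { stdio: 'inherit' }, // stream Cypress output to this terminal
  );
}
```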
A CI build ID is used to associate multiple CI machines to one test run. This identifier is based on environment variables that are unique to each CI build, and vary based on CI provider. Cypress has out-of-the-box support for most of the commonly-used CI providers, so you would typically not need to directly set the CI build ID via the `--ci-build-id` flag.
Cypress currently uses CI environment variables that vary by provider to determine a CI build ID for a test run (for example, `BUILD_NUMBER` on Jenkins).
You can pass a different value to link agents to the same run. For example, if you are using Jenkins and think the environment variable `BUILD_TAG` is more unique than `BUILD_NUMBER`, pass the `BUILD_TAG` value via the CLI:
cypress run --record --parallel --ci-build-id $BUILD_TAG
During parallelization mode or when grouping runs, Cypress will wait for a specified amount of time before completing the test run in case any more relevant work remains. This is to compensate for various scenarios where CI machines could be backed up in a queue.
This waiting period is called the run completion delay and it begins after the last known CI machine has completed as shown in the diagram below:
This delay is 60 seconds by default, but is configurable within the Dashboard.
You can see the result of each spec file that ran within the Dashboard Service in the run’s Specs tab. Specs are visualized within a Timeline, Bar Chart, and Machines view.
The Timeline View charts your spec files as they ran relative to each other. This is especially helpful when you want to visualize how your tests ran chronologically across all available machines.
The Bar Chart View visualizes the duration of your spec files relative to each other.
The Machines View charts spec files by the machines that executed them. This view enables you to evaluate the contribution of each machine to the overall test run.