  1. What is the Dashboard?
  2. How do I record my tests?
  3. How much does it cost?
  4. What is the difference between public and private projects?
  5. How is this different than CI?
  6. Can I host the Dashboard data myself?
  7. Can I choose not to use the Dashboard?
  8. What does Cypress record?
  9. Why is test parallelization based on spec files and not on the individual functions?
  10. My CI setup is based on Docker, but is very custom. How can I load balance my test runs?

What is the Dashboard?

Dashboard Screenshot

The Dashboard is a Cypress service that gives you access to tests you’ve recorded - typically when running Cypress tests from your CI provider. The Dashboard provides you insight into what happened during your test runs.

You can read more here.

How do I record my tests?

  1. First set up the project to record.
  2. Then record your runs.

After recording your tests, you will see them in the Dashboard and in the “Runs” tab of the Test Runner.
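As a sketch, the recording step usually reduces to a single command; the record key below is a placeholder, not a real key - copy the real one from your project’s settings:

```shell
# Placeholder record key for illustration; use the key from your project settings.
RECORD_KEY="abc-123-placeholder"
# Print the command a CI step would run (remove the echo to actually run it):
echo "cypress run --record --key $RECORD_KEY"
```

In CI you would typically store the record key as a secret environment variable rather than hard-coding it in your scripts.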

How much does it cost?

Please see our Pricing Page for more details.

What is the difference between public and private projects?

A public project means that anyone can see the recorded runs for it. It’s similar to how public projects on GitHub, Travis, or Circle are handled. Anyone who knows your projectId will be able to see the recorded runs, screenshots, and videos for public projects.

A private project means that only users you explicitly invite to your organization can see its recorded runs. Even if someone knows your projectId, they will not have access to your runs unless you have invited them.

How is this different than CI?

Cypress is complementary to your CI provider, and plays a completely different role.

It doesn’t replace or change anything related to CI. You will simply run Cypress tests in your CI provider.

The difference between our Dashboard Service and your CI provider is that your CI provider has no idea what is going on inside of the Cypress process. It’s simply programmed to know whether or not a process failed - based on whether it exited with a non-zero exit code.
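The exit-code contract can be illustrated with a tiny shell sketch, using true and false as stand-ins for a passing and a failing Cypress process:

```shell
# A CI provider only sees the exit code of the process it launched.
true
echo "exit code: $?"   # prints: exit code: 0 - CI marks the step as passing
false
echo "exit code: $?"   # prints: exit code: 1 - CI marks the step as failing
```

Everything that happened inside the process - which test failed, and why - is invisible at this level.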

Our Dashboard Service provides you with the low level details of what happened during your run. Using both your CI provider + Cypress together gives the insight required to debug your test runs.

When a run happens and a test fails, instead of inspecting your CI provider’s stdout, you can log into the Dashboard and see the stdout as well as screenshots and video of the tests running. It should be instantly clear what the problem was.

Can I host the Dashboard data myself?

No, although we are looking to build an on-premise version of the Dashboard for use in private clouds. If you’re interested in our on-premise version, let us know!

Can I choose not to use the Dashboard?

Of course. The Dashboard Service is a separate service from the Test Runner and will always remain optional. We hope you’ll find a tremendous amount of value in it, but it is not coupled to being able to run your tests.

You can always run your tests in CI using cypress run without the --record flag, which does not communicate with our external servers and will not record any test results.
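One way to sketch this in a CI script - assuming you store the record key in the CYPRESS_RECORD_KEY environment variable, which Cypress reads when --record is passed - is to only enable recording when a key is present (the echo prints the command instead of running it):

```shell
# Only record when a key is configured; otherwise run without contacting
# any external servers. Replace echo with the real command in your CI step.
if [ -n "${CYPRESS_RECORD_KEY:-}" ]; then
  echo "cypress run --record"
else
  echo "cypress run"
fi
```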

What does Cypress record?

We capture the following:

  • Standard Output
  • Test Failures
  • Screenshots
  • Video

We have already begun the implementation for capturing even more things from your run, such as:

  • Commands
  • Network Traffic
  • Browser Console Logs

These will be added in subsequent releases.

Why is test parallelization based on spec files and not on the individual functions?

Cypress test parallelization is indeed based on specs. For each spec, the Test Runner scaffolds a new running context, in a sense isolating each spec file from any previous spec files and ensuring a clean slate for the next spec. Doing this for each individual test would be prohibitively expensive and would slow down test runs significantly.

Spec file durations are also more meaningful and consistent than the timings of individual tests, so we can order specs by the moving average of their previously recorded durations. This would be much less useful when load balancing individual tests that finish quickly.

Thus, to better load balance the specs, you would want more spec files with approximately the same running duration. Otherwise, a single very long-running spec might limit how quickly all your tests finish and the run completes. Because Cypress starts a new test execution context before each spec file and encodes and uploads video afterwards, making spec files run for less than approximately 10 seconds would also be fruitless - the Cypress overhead would eat any time savings.

My CI setup is based on Docker, but is very custom. How can I load balance my test runs?

Even if your CI setup is very different from the CI examples we have and run with our sample projects, you can still take advantage of test load balancing using the Dashboard. Find a variable across your containers that is the same for all of them, but different from run to run. For example, it could be an environment variable called CI_RUN_ID that you set when creating the containers that run Cypress. You can pass this variable via the CLI argument --ci-build-id when starting Cypress in each container:

cypress run --record --parallel --ci-build-id $CI_RUN_ID

For reference, here are the variables we extract from the popular CI providers; for most of them there is some variable that is set to the same value across multiple containers running in parallel. If there is no common variable, try using the commit SHA string. Assuming you do not run the same tests more than once against the same commit, it might just be good enough for the job.
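As a sketch of the commit-SHA fallback - the SHA value below is a placeholder, and in a real git checkout you could obtain it with git rev-parse HEAD - the echo prints the command instead of running it:

```shell
# All parallel containers for one run check out the same commit, so the SHA
# works as a shared --ci-build-id. Placeholder value for illustration:
COMMIT_SHA="0f5a9bd"
echo "cypress run --record --parallel --ci-build-id $COMMIT_SHA"
```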