# Detect and manage changes
Cypress Accessibility supports three main ways to observe and act on changes in the accessibility reports for your projects:

- Branch Review, for comparing the results of two runs
- The Results API, for failing builds or triggering alerts when results don't meet your standards
- Analytics, for high-level trend-spotting and analysis

In this guide, we'll discuss how each of these works and some common situations where you might use them together.
## Branch Review
Branch Review helps you spot new issues quickly by comparing accessibility results from two Cypress runs.
- Pinpoint when and where problems have been introduced
- Understand and debug easily with fully rendered page snapshots
- Share links to specific new issues with all the context needed to dive in
### Branch Review use cases
#### Speed up code reviews and deployments
Use Branch Review when reviewing a pull request made by another developer, or before you create one of your own. You can compare the report from your changed branch with the equivalent report on the main branch. The comparison shows any issues that your changes have improved or fixed completely, as well as net-new issues introduced and any new instances of existing failures.
Since this reduces the violations to review to just what was affected by the PR's code changes, you won't spend any time hunting for what's relevant. And if there is no difference, you'll know your PR does not affect the accessibility score and is safe to merge.
See the main Accessibility Branch Review docs for a video demo of this workflow.
#### Compare reports during local development (without waiting for CI)
You might expect that every Cypress Accessibility report requires a full Cypress test run in your Continuous Integration (CI) pipeline, but that's not the case.
A common accessibility workflow is to create small Cypress Cloud recordings from your local development environment to preview the effects of code changes. This allows for fast analysis of just a test or two, or as many as you need to be confident your changes are working, which is especially valuable when you are actively remediating accessibility issues.
To get a high-quality comparison for a small set of tests, record the tests locally from your main branch, and then again on the branch with your changes, as sketched below. In Cypress Cloud, you can then compare these two runs knowing they cover exactly the same scope.
This completely sidesteps the usual "push, build, run tests, report" cycle you might be used to, and is great when you need fast feedback.
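For example, a minimal command-line sketch of this workflow might look like the following, where the spec path, branch name, and record key environment variable are placeholders for your own values:

```sh
# Record the same small set of specs from each branch so the two
# reports cover an identical scope.
git checkout main
npx cypress run --record --key $CYPRESS_RECORD_KEY \
  --spec "cypress/e2e/checkout.cy.js"

git checkout my-feature-branch
npx cypress run --record --key $CYPRESS_RECORD_KEY \
  --spec "cypress/e2e/checkout.cy.js"
```

Both runs then appear in Cypress Cloud, ready to be selected for comparison in Branch Review.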
#### See what's changing between scheduled runs
Many teams have some Cypress projects that run on a schedule and target a QA, staging, or production environment. These runs produce Cypress Accessibility reports but are not connected to specific pull requests with code changes. Branch Review works perfectly in these situations, even if all runs are on the same branch.
Comparing reports on a regular schedule lets you see whether accessibility issues have been introduced between, say, this week and last week, and then find the specific run where the changes happened. That "diff report" can be forwarded to engineering or product teams as the basis of a ticket.
> **Note:** Comparing two runs in detail with Branch Review requires a good understanding of the runs themselves. The comparison is most useful if the runs test the same pages and workflows, and if each run fully passed all tests. This means that Cypress saw all the same states and variations of the application in both contexts. Your runs will likely never match perfectly, but the closer they are, the better.
## Results API
Cypress Accessibility's Results API provides rich data about the outcome of any accessibility report for a Cypress run. It lets you add a step to your CI process that blocks pull requests or triggers other side effects, and it comes with a helper function that automatically matches your CI build with the related run in Cypress Cloud.
Once you receive the results, you can perform any action you need to. Below are some of the common patterns customers have told us about, in which the results are compared with expectations.
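As a rough sketch of the general pattern, a CI step might fetch the results like this. The option and field names here are illustrative; check the Results API reference for the exact shape:

```ts
// Sketch of fetching accessibility results in a CI step.
// The helper matches this CI build to its recorded run in Cypress Cloud.
import { getAccessibilityResults } from '@cypress/extract-cloud-results'

getAccessibilityResults({
  projectId: process.env.CYPRESS_PROJECT_ID, // illustrative option names
  recordKey: process.env.CYPRESS_RECORD_KEY,
}).then((results) => {
  // `runUrl` is an assumed field name, used here for illustration.
  console.log(`Accessibility results: ${results.runUrl}`)
})
```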
### Results API use cases
#### Fail a build based on new failed accessibility rules
The Results API provides a combined report with all the failing rules included. This means you can easily maintain a list of known, permitted failing rules to compare with the results from any run, and choose to fail only if a previously passing rule has started to fail, as sketched below. This is described in detail in the guide to blocking pull requests and setting policies.
This concept is simple but powerful because it provides extremely high confidence that any failed build is related to a specific recent code, content, or testing change. When there are many existing failures, the number of failed elements detected can vary from run to run, especially if existing broken components are reused in new ways.
Rules that are fully passing do not have any of these problems, and for an extreme action like failing a pipeline, it's important to have high confidence that a problem is actually a new issue that can be traced to a specific build. With this confidence in place, the existing failed rules can be addressed one by one until everything passes.
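Continuing the sketch above, a "no new failing rules" policy might look like this, assuming `results.rules` lists the rules that failed in the run, each with a `name` (verify the exact shape against the Results API reference):

```ts
// Known failing rules you have agreed to tolerate for now.
// These example names are real axe-core rule IDs, used for illustration.
const rulesWithExistingViolations = ['color-contrast', 'image-alt']

// Fail only when a rule outside the allowlist has started failing.
const newFailingRules = results.rules.filter(
  (rule) => !rulesWithExistingViolations.includes(rule.name)
)

if (newFailingRules.length > 0) {
  for (const rule of newFailingRules) {
    console.error(`New failing accessibility rule: ${rule.name}`)
  }
  process.exit(1) // a non-zero exit fails the CI step, and so the build
}
```

As existing failures are remediated, you can shrink the allowlist until it's empty.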
#### Fail a build based on deviation from a desired score
The Results API includes the specific failed element count and the percentage-based accessibility score for the report. This makes it easy to set a score threshold and fail a build if the score falls below that expectation, as sketched below. This is especially useful when taking a monitoring approach, or when you want a broader, less rule-specific standard.
Changes in the percentage score are directly related to changes in the number of elements failing accessibility rules throughout the run. Alerting or failing a pipeline when the score drops is a great way to know when you need to jump into Branch Review to understand the underlying causes of the change.
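A threshold check on the same `results` object might look like the following; the `score` field name is an assumption for illustration:

```ts
// Minimum acceptable accessibility score, as a percentage.
// Choose a value that matches your current baseline.
const MINIMUM_SCORE = 95

// `results.score` is a placeholder field name for the report's
// accessibility percentage score.
if (results.score < MINIMUM_SCORE) {
  console.error(
    `Accessibility score ${results.score}% is below the required ${MINIMUM_SCORE}%`
  )
  process.exit(1)
}
```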
#### Alert specific teams about failures they own
In addition to the consolidated accessibility report, accessibility failures are listed per View in the Results API. A View is always related to a specific URL pattern or component test, which means in many projects it's possible to connect accessibility issues in the Results API with the team responsible for solving issues in that part of the application.
As an example, customers can use this functionality to trigger Slack webhooks that notify a particular channel of the issues in specific areas. The Results API provides direct links to the Views and to each failed rule within that View, making it easy to forward useful links to the correct places, or jump right into debugging from the CI output.
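A sketch of that kind of routing is shown below, assuming it runs in an async context with the `results` object from the earlier sketch. The View fields (`displayName`, `viewUrl`, `rules`) and the mapping from URL patterns to webhook environment variables are assumptions for illustration:

```ts
// Map URL patterns to the owning team's Slack incoming webhook URL.
// Patterns and environment variable names are placeholders.
const teamWebhooks: Record<string, string | undefined> = {
  '/checkout': process.env.CHECKOUT_TEAM_WEBHOOK,
  '/account': process.env.ACCOUNT_TEAM_WEBHOOK,
}

// `results.views`, `displayName`, `viewUrl`, and `rules` are assumed
// field names; check the Results API reference for the exact shape.
for (const view of results.views) {
  const entry = Object.entries(teamWebhooks).find(([pattern]) =>
    view.displayName.includes(pattern)
  )
  const webhook = entry?.[1]
  if (!webhook || view.rules.length === 0) continue

  // Slack incoming webhooks accept a simple JSON payload with `text`.
  // Node 18+ provides `fetch` globally.
  await fetch(webhook, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text: `${view.rules.length} failing accessibility rule(s) in ${view.displayName}: ${view.viewUrl}`,
    }),
  })
}
```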
## Analytics
Analytics live in the Cypress Accessibility tab of Enterprise Reporting in Cypress Cloud. They provide an overview of accessibility trends across projects, and also support filtering down to a single project and branch to see where scores have changed over time.
### Analytics use cases
#### Track progress and spot trends
This level of reporting is ideal for documenting progress and spotting unexpected trends. When you see a project's score dropping over the long term, you can explore further with Branch Review to understand which changes are driving the decline. You can compare reports from before and after any shift in score to determine which accessibility issues were introduced.
#### Set goals and minimum standards
Seeing all projects in one place gives a good sense of each project's relative level and helps inform what values you might want to use for comparison when implementing a policy using the Results API.
This also provides useful planning insight, because you can see your historical pace of improvement. If some projects are improving faster than others, you can recognize this and work out how to replicate that success, or set realistic OKRs on a per-project basis.
## Combine approaches
As the examples above show, there are times you may want to move between multiple systems to track down an issue. For example, if the Results API signals that a build on a feature branch has failed your accessibility standard, you might click through to the run itself and then compare it with the main branch in Branch Review. That might be the end of the journey, but if the main branch is also failing your policy, you can use the high-level analytics to see when the score originally changed, and find the specific cause by comparing two runs from around that time. This helps you detect where code was force-merged, or where content served in the application had an accessibility regression unrelated to code changes.
As you use these tools to understand what's important and what's changing, remember to hit the "Submit Feedback" button any time you see something that could be improved.