Accessibility Scans and Audits Work Together

Accessibility scans and audits work together by dividing the evaluation work between automated software and human evaluators. Scans quickly check a website’s code against a subset of WCAG criteria and flag approximately 25% of accessibility issues. Audits cover the remaining 75% through human evaluation of keyboard operation, screen reader output, visual presentation, and code context. Used in sequence and on an ongoing schedule, the two methods produce a more accurate picture of conformance than either method alone.

How Scans and Audits Complement Each Other
  • Scan: Automated review of HTML, CSS, and ARIA that flags approximately 25% of WCAG issues.
  • Audit: Human evaluation covering keyboard operation, screen reader output, and visual review for the remaining 75%.
  • Sequence: Scan first for quick detection, then audit to identify what automation cannot assess.
  • Cadence: Scans run on a recurring schedule; audits occur at key intervals and after significant changes.

What Scans Contribute

Accessibility scans load web pages and run checks against programmatic WCAG criteria. They evaluate code patterns: missing alternative text, empty links, form inputs without labels, document structure, and ARIA usage. The output is a list of flagged items with the page location and the related success criterion.
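To make the idea concrete, here is a minimal sketch of one such programmatic check, flagging images with no alt attribute, written with Python's standard-library HTML parser. It is an illustration only; production scanners such as axe-core apply hundreds of rules across HTML, CSS, and ARIA.

```python
from html.parser import HTMLParser

class AltTextCheck(HTMLParser):
    """Illustrative single-rule scan: flag <img> elements that have
    no alt attribute (related to WCAG 1.1.1, Non-text Content)."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            # Record the page location and the related success criterion,
            # mirroring the shape of a scan report.
            self.flagged.append({"line": self.getpos()[0],
                                 "criterion": "1.1.1 Non-text Content"})

page = ('<p><img src="chart.png"></p>\n'
        '<p><img src="logo.png" alt="Company logo"></p>')
checker = AltTextCheck()
checker.feed(page)
print(checker.flagged)  # only the first image is flagged
```

Note what this check cannot do: if the second image carried alt="photo", the check would still pass it, because whether the text actually describes the image is a judgment call. That boundary between presence (scan) and quality (audit) is exactly the coverage limit discussed below.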

Scans are fast and repeatable, which makes them useful for ongoing monitoring across many pages. Scheduled scans catch regressions introduced by content updates, template changes, or third-party scripts. They also give development teams a quick feedback loop during build cycles.

The limitation is coverage. Automated checks can only evaluate criteria that resolve to yes or no through code inspection. Anything requiring human judgment, context, or interaction falls outside what a scan can assess.

What Audits Contribute

An audit is a human evaluation conducted by an accessibility professional. It covers the criteria scans cannot reach: whether alternative text actually describes the image, whether headings reflect the page’s real structure, whether keyboard focus order matches visual order, whether screen reader output makes sense in context, and whether interactive components work without a mouse.

Auditors use assistive technologies including NVDA, JAWS, and VoiceOver across Chrome and Safari. They inspect code, operate the site with a keyboard, and evaluate behavior at 200% and 400% browser zoom. The audit identifies issues with specific locations, the related WCAG success criterion, and remediation guidance.

How the Two Fit Together

The most accurate picture comes from combining both methods in a defined sequence. A scan establishes a baseline and catches the issues that automation reliably detects. An audit then evaluates everything scans cannot, including keyboard operability, assistive technology compatibility, and content quality.

After remediation, scans verify that code-level issues have been addressed. Audit validation confirms that fixes actually work for users operating with assistive technologies. This closed loop prevents issues from being marked as resolved when the underlying problem remains.

Where Each Method Fits in an ADA Risk Reduction Program

Organizations working toward ADA Title II conformance with WCAG 2.1 AA, or reducing risk under Title III, use both methods for different purposes. Scans run continuously to monitor a site at scale and catch new issues quickly. Audits occur at defined intervals, such as at the start of a conformance program, after major releases, and on an annual or semiannual cadence.

Relying on scans alone leaves 75% of WCAG issues unevaluated. Relying on a single audit without ongoing monitoring means new issues accumulate between reviews. The two methods answer different questions, and a mature program uses both.

What to Look For in Combined Evaluation

Quality indicators when pairing scans with audits include:

  • Human-led audit work conducted by accessibility professionals, not automated reports labeled as audits.
  • Full WCAG 2.1 AA coverage across all 50 success criteria, not only those scans detect.
  • Prioritization based on user impact and legal risk, so remediation order reflects real-world consequences.
  • Specific reporting with issue locations, success criterion references, and remediation guidance.
  • Ongoing monitoring through scheduled scans between audit cycles.

Scans and audits answer different questions about the same website. Used together on a consistent schedule, they produce a more complete record of conformance than either method can deliver on its own.
