
Lighthouse Audit
Runs Lighthouse audits on web URLs to assess performance, accessibility, best practices, and SEO. Delivers JSON reports with numerical scores, diagnostics, and improvement recommendations. Web developers and front-end engineers use it to automate site-quality evaluations in CI/CD pipelines or pre-deployment checks.
Overview
The Lighthouse Audit MCP server integrates Google's Lighthouse tool into Model Context Protocol workflows. It allows programmatic execution of comprehensive web audits on any URL, producing structured JSON outputs with metrics across core categories.
Key Capabilities
- lighthouse_audit: Accepts a URL and optional configuration (e.g., category flags for performance, accessibility, best practices, and SEO). Executes the audit and returns a JSON report containing:
  - Numerical scores (0-100) per category.
  - A detailed audits array whose items include impact, descriptions, and remediation steps.
  - Timings, resource sizes, and pass/fail details.
This enables parsing and acting on results directly in AI-driven tools or scripts without manual browser interaction.
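As a sketch of that parsing step, the snippet below reads category scores out of a report. The report fragment is illustrative, but the field layout (a `categories` object whose entries carry a `score`) follows Lighthouse's standard report schema, where raw scores are 0-1 fractions conventionally displayed on a 0-100 scale; this server's exact output shape may differ.

```python
import json

# Illustrative report fragment in Lighthouse's standard schema:
# raw category scores are 0-1 fractions, displayed as 0-100.
report_json = """
{
  "categories": {
    "performance": {"score": 0.93},
    "accessibility": {"score": 0.87},
    "best-practices": {"score": 1.0},
    "seo": {"score": 0.95}
  }
}
"""

def category_scores(report: dict) -> dict:
    """Convert 0-1 category scores to the familiar 0-100 scale."""
    return {
        name: round(cat["score"] * 100)
        for name, cat in report["categories"].items()
        if cat.get("score") is not None  # some categories can be unscored
    }

report = json.loads(report_json)
print(category_scores(report))
```

Audits that error out report a `score` of `null`, which is why the comprehension skips `None` values rather than multiplying them.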
Use Cases
- CI/CD Integration: Run lighthouse_audit on pull requests to block merges if the performance score is below 90, enforcing standards automatically.
- Site Monitoring: Schedule daily audits on production URLs; parse the JSON to alert on accessibility regressions via score drops.
- SEO Optimization: Audit competitor sites with lighthouse_audit, extract SEO diagnostics, and compare against your site's best-practices report.
- Accessibility Audits: Target specific pages for accessibility-only runs, using the report's items to generate remediation tickets in Jira.
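The first and last use cases above can be sketched together: gate a merge on the performance score, and collect failing accessibility audits as ticket candidates. `SAMPLE_REPORT`, its audit IDs, and the threshold value are hypothetical stand-ins for the server's real output; the `auditRefs` and `audits` fields mirror Lighthouse's standard report schema, where binary audits score 1 (pass) or 0 (fail).

```python
# Hypothetical report fragment in Lighthouse's standard JSON layout.
SAMPLE_REPORT = {
    "categories": {
        "performance": {"score": 0.88},
        "accessibility": {
            "score": 0.81,
            "auditRefs": [{"id": "image-alt"}, {"id": "label"}],
        },
    },
    "audits": {
        "image-alt": {"score": 0, "title": "Image elements do not have [alt] attributes"},
        "label": {"score": 1, "title": "Form elements have associated labels"},
    },
}

def performance_gate(report: dict, threshold: int = 90) -> bool:
    """True if the performance score (scaled to 0-100) meets the threshold."""
    score = round(report["categories"]["performance"]["score"] * 100)
    return score >= threshold

def failing_audits(report: dict, category: str = "accessibility") -> list:
    """Titles of failing audits in a category, e.g. to file remediation tickets."""
    audits = report.get("audits", {})
    return [
        audits[ref["id"]]["title"]
        for ref in report["categories"][category].get("auditRefs", [])
        if audits.get(ref["id"], {}).get("score") == 0  # 0 == failed binary audit
    ]

gate_passed = performance_gate(SAMPLE_REPORT)
print("performance gate:", "pass" if gate_passed else "fail")
print("failing accessibility audits:", failing_audits(SAMPLE_REPORT))
```

In a real pipeline the gate result would drive the process exit code (non-zero to block the merge), and the failing-audit titles would feed a ticketing API.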
Who This Is For
Front-end developers automating quality gates, DevOps engineers embedding audits in pipelines, SEO analysts evaluating page health programmatically, and web agencies batch-auditing client sites. Requires basic familiarity with JSON parsing and web performance concepts.