Pages
Pages run scheduled, automated Lighthouse audits on your URLs. While Sites show what real users experience, Pages give you controlled, repeatable measurements — ideal for catching regressions, tracking optimization work, and monitoring metrics that CrUX doesn't cover (like Accessibility and SEO scores).
Manage your pages
The pages list shows all monitored URLs within a page group.

Organize with page groups
Use page groups to separate pages by project, team, or environment (e.g., production vs staging). Switch between groups using the dropdown in the header.
Choose a display mode
Select a view that matches what you're investigating:
- Metrics Overview (default) — LCP, TBT, CLS, and Performance score. Start here for a general health check.
- Loading — FCP, Speed Index, and LCP. Use when investigating loading performance.
- Interactivity — TTI and TBT. Use when investigating responsiveness and JavaScript issues.
- Scores — Accessibility, Best Practices, and SEO. Use for non-performance quality checks.
- Real-User Metrics — LCP, INP, and CLS from CrUX. Bridges the gap between synthetic and real-user data.
Each metric column includes a sparkline chart, so you can spot trends without opening individual reports.
Find problems fast
- Sort by any metric — click a column header to surface the worst performers.
- Filter by device — mobile and desktop audits often produce very different results.
- Switch aggregation — median smooths out noise across multiple runs; latest shows the most recent result.
Add a new page
Click "New page" and configure:
- URL — the page to audit.
- Device — mobile or desktop. Mobile uses throttled CPU and network to simulate a mid-range phone.
- Audit interval — every hour or every 8 hours. Hourly audits catch regressions faster but require a paid plan.
- Server — Treo (simulated throttling on Treo's infrastructure) or PageSpeed Insights (Google's servers).
- Location — AWS region for the audit. Choose a region close to your users for more realistic results.
- Static IP — fixed IP address for firewall whitelisting (available in select regions).
- Custom headers — HTTP headers sent with each audit (useful for authentication, cache bypass, or A/B test control).
Tip: Start with mobile audits — they use throttled conditions that reveal performance issues desktop audits might miss. Add a desktop audit for the same URL if you need to compare.
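Before saving custom headers in the audit configuration, it can help to confirm the target URL actually accepts them. A minimal sketch using Python's standard library — the header names and token below are illustrative placeholders, not Treo settings:

```python
import urllib.request

# Hypothetical headers you might configure for an audit:
# an auth token plus a cache-bypass flag (names are made up).
headers = {
    "Authorization": "Bearer <staging-token>",
    "X-Bypass-Cache": "1",
}

def check_url(url: str, hdrs: dict[str, str]) -> int:
    """Send a GET request with the audit headers and return the HTTP status."""
    req = urllib.request.Request(url, headers=hdrs)
    with urllib.request.urlopen(req) as resp:
        return resp.status

# check_url("https://staging.example.com/", headers)  # expect 200
```

If the request fails here (e.g., 401 or 403), the scheduled audits will fail the same way, so it's worth fixing before the first run.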
Analyze a page report
Click any page to open its monitoring report. This is where you track performance over time and catch regressions.

Performance overview
The top section shows the Performance score trend as a line chart, along with three key metrics — LCP, TBT, and CLS. Each metric displays the current value and the change compared to the previous period, so you can immediately see if things are getting better or worse.
Latest audits
The three most recent audit runs are shown as cards with:
- Performance score and timestamp.
- Screenshot timeline showing how the page loaded visually.
- Long tasks visualization highlighting JavaScript execution.
Click any card to open the full Lighthouse report. Use the "New Audit" button to trigger a run on demand — useful after a deploy or configuration change.
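The long-tasks view maps directly onto TBT: Total Blocking Time is the sum of each main-thread task's portion beyond 50 ms (measured between FCP and TTI). A small sketch of that calculation, using made-up task durations:

```python
BLOCKING_THRESHOLD_MS = 50  # a task counts as "long" past this duration

def total_blocking_time(task_durations_ms: list[float]) -> float:
    """Sum the blocking portion (duration - 50 ms) of each long task."""
    return sum(
        d - BLOCKING_THRESHOLD_MS
        for d in task_durations_ms
        if d > BLOCKING_THRESHOLD_MS
    )

# Example: three long tasks (250, 90, 120 ms) and one short one (35 ms).
tasks = [250, 90, 35, 120]
print(total_blocking_time(tasks))  # 200 + 40 + 70 = 310
```

This is why a few very long tasks hurt TBT far more than many short ones: only the excess over 50 ms counts.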
Real-user metrics
When CrUX data is available for the page, this section shows distribution charts for LCP, INP, and CLS based on real Chrome user data. This lets you compare synthetic Lighthouse results against actual user experience in the same view.
Tip: If Lighthouse shows good scores but CrUX shows poor real-user metrics, the difference is often caused by factors Lighthouse doesn't simulate — like third-party scripts, personalized content, or server-side variability.
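If you want the same CrUX numbers programmatically, Google's Chrome UX Report API (`records:queryRecord`) returns per-metric histograms and percentiles. A sketch that extracts the p75 LCP from a hand-written sample shaped like that response — the values here are made up:

```python
# Hand-written sample in the shape of a CrUX API records:queryRecord
# response (all numbers are illustrative, not real field data).
sample_response = {
    "record": {
        "key": {"url": "https://example.com/"},
        "metrics": {
            "largest_contentful_paint": {
                "histogram": [
                    {"start": 0, "end": 2500, "density": 0.78},
                    {"start": 2500, "end": 4000, "density": 0.15},
                    {"start": 4000, "density": 0.07},
                ],
                "percentiles": {"p75": 2350},
            }
        },
    }
}

def p75(response: dict, metric: str) -> float:
    """Pull the 75th-percentile value CrUX reports for a metric."""
    return float(response["record"]["metrics"][metric]["percentiles"]["p75"])

print(p75(sample_response, "largest_contentful_paint"))  # 2350.0
```

The 75th percentile is the value Core Web Vitals assessments are based on, which is why it's the number worth comparing against your synthetic runs.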
Detailed metric charts
Three chart sections give you deeper insight into specific aspects of performance:
- Loading — LCP, FCP, and Speed Index over time. Look for sudden jumps that correlate with deploys.
- Interactivity — TBT (as bars) and TTI (as a line). Rising TBT often points to newly added JavaScript.
- Visual Stability — CLS trend. Spikes often come from ads, lazy-loaded images without dimensions, or dynamic content injection.
Lighthouse checks
Scores for SEO, Accessibility, and Best Practices are tracked over time. Hover over any score card to see which specific audits are currently failing — this gives you a quick list of actionable fixes.
Time period and aggregation
Adjust the view using header controls:
- Time period — 3 days, week, month, quarter, year, or 2 years.
- Aggregation — median takes the middle value across all runs in the period (reducing noise from one-off flaky results); latest shows the most recent run.
Tip: Use "past week" with "median" aggregation for day-to-day monitoring. Switch to "past 3 days" with "latest" when investigating a specific regression.
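The difference between the two aggregation modes is easy to see on a small series of runs. A sketch with made-up LCP values, where one run was flaky:

```python
from statistics import median

# LCP values (ms) from a week of audit runs, oldest to newest;
# the 4100 is a one-off flaky result.
runs = [2400, 2350, 4100, 2420, 2380]

aggregated_median = median(runs)  # robust to the single outlier
latest = runs[-1]                 # most recent run only

print(aggregated_median, latest)  # 2400 2380
```

The median ignores the 4100 ms outlier entirely, while "latest" would have reported it as the page's LCP for as long as it was the newest run — which is exactly why median suits routine monitoring and latest suits regression hunting.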
Read an audit result
Click any data point in the report charts, or any latest audit card, to open the full Lighthouse report for that specific run.

The audit result includes:
- Category scores — Performance, Accessibility, Best Practices, and SEO, each scored 0–100.
- Diagnostics — specific audits with recommendations. Items are sorted by impact, so focus on what's at the top.
- Filmstrip — screenshot timeline showing how the page loaded frame by frame. Useful for understanding perceived loading experience.
- Treemap — visual breakdown of JavaScript bundle sizes. Helps identify oversized dependencies.
Use the download button in the header to export the full Lighthouse JSON report for further analysis or sharing with your team.
Tip: Don't chase a perfect 100 score. Focus on failing audits with the highest estimated impact — Lighthouse ranks them by potential savings in milliseconds or bytes.
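The exported Lighthouse JSON can be post-processed directly: category scores live under `categories` on a 0–1 scale, and individual checks under `audits`. A sketch against a trimmed, hand-written stand-in for a real report, converting scores to the familiar 0–100 numbers and listing audits below Lighthouse's 0.9 pass threshold:

```python
import json

# A trimmed, hand-written stand-in for an exported Lighthouse report.
report_json = """{
  "categories": {
    "performance": {"title": "Performance", "score": 0.82},
    "seo": {"title": "SEO", "score": 0.97}
  },
  "audits": {
    "render-blocking-resources": {"title": "Eliminate render-blocking resources", "score": 0.4},
    "uses-text-compression": {"title": "Enable text compression", "score": 1}
  }
}"""

report = json.loads(report_json)

# Category scores are stored on a 0-1 scale; multiply by 100 for display.
scores = {c["title"]: round(c["score"] * 100) for c in report["categories"].values()}

# Audits scoring below 0.9 are the ones Lighthouse flags for attention.
failing = [a["title"] for a in report["audits"].values()
           if a["score"] is not None and a["score"] < 0.9]

print(scores)   # {'Performance': 82, 'SEO': 97}
print(failing)  # ['Eliminate render-blocking resources']
```

From here it's straightforward to feed scores into a CI gate or a spreadsheet for sharing with your team.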