A Practical SEO Audit Template You Can Use Today (Free Download)

If your site audits feel chaotic and never turn into prioritized work, this free SEO audit template will fix that. It is a ready-to-use Google Sheets and Excel workbook with tabs for Technical, Performance, On Page, Content, Backlinks, and Prioritization, plus exact export instructions for Screaming Frog, Search Console, PageSpeed, Analytics, and Ahrefs. Use the step-by-step checklist to score and prioritize issues, then export top tasks into Ranklytics to turn audit findings into tracked content and technical work.

What the free SEO audit template contains and how to download it

Quick fact: the package is a working audit workbook, not an abstract checklist. It includes ready-to-fill Google Sheets and Microsoft Excel files, plus CSV import examples and a Trello/Jira-compatible CSV for pushing tasks into your workflow. Download it via the green button on the Ranklytics features page, or grab the sheet directly from Ranklytics.ai.

Files and formats included

  • Google Sheets workbook — collaborative copy with import macros and protected formula ranges
  • Microsoft Excel workbook — offline copy with full pivot tables and VBA-free formulas for compatibility
  • CSV examples — exact column layouts for Screaming Frog, Google Search Console, PageSpeed/Lighthouse, Google Analytics, and Ahrefs exports
  • Trello/Jira CSV — preformatted task export so you can batch-import prioritized fixes into your issue tracker

Tabs and purpose: the workbook contains Overview, Technical, Performance, On Page, Content Gap, Backlinks, Prioritization, Fix Log, and Weekly Check. Each tab expects a specific CSV structure so you can drop tool exports in without manual column mapping.

Required inputs and exact exports: plan to pull Screaming Frog crawl CSV (status, canonical, meta title, H1), Google Search Console Coverage and Links CSVs, PageSpeed/Lighthouse summary exports for representative URLs, Ahrefs referring domains/backlinks CSV, and a Google Analytics landing page report. If you automate, use the CSVs exactly as-is; the template has import rows that normalize common column names.
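
If you script this normalization outside the sheet, a minimal Python sketch might look like the following; the alias map and file name are illustrative assumptions, not the template's actual import logic:

```python
import pandas as pd

# Assumed aliases mapping common export headings to canonical names.
# Real headings vary by tool version; extend the map to match your files.
COLUMN_ALIASES = {
    "Address": "url",
    "URL": "url",
    "Status Code": "status",
    "Canonical Link Element 1": "canonical",
    "Meta Robots 1": "meta_robots",
    "Title 1": "title",
    "H1-1": "h1",
    "Inlinks": "inlinks",
}

def normalize_export(path: str) -> pd.DataFrame:
    """Load a tool export and rename known columns to canonical names."""
    df = pd.read_csv(path)
    return df.rename(
        columns={c: COLUMN_ALIASES[c] for c in df.columns if c in COLUMN_ALIASES}
    )

crawl = normalize_export("screaming_frog_internal_html.csv")
print(crawl.columns.tolist())
```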

Practical trade-off: Google Sheets is the fastest for collaboration and linking to cloud exports, but it hits performance limits once you import very large crawls (>100k rows). For large ecommerce or enterprise sites use Excel local files or push raw exports into BigQuery and import summarized views into the sheet.

Concrete example: auditing a 10,000-page ecommerce catalog: run a full Screaming Frog crawl and export to CSV, then load only the top 1,000 organic landing pages into the On Page and Performance tabs. Use the Backlinks tab to import Ahrefs referring domains for product-category pages, and populate Prioritization so the dev team gets the top 25 fixes as a Trello CSV.

Permission and version control tactic: make a working copy before you touch exports. Protect formula ranges, keep raw exports on a Raw Data tab, and create an Audit-Run tab for each audit date so you can compare trends in the Weekly Check without losing historical context.

One judgment worth stating: most teams waste time auditing every low-traffic page. The template is designed to force a sampling + prioritization approach — raw completeness feels thorough but rarely produces ROI. Start with pages that drive traffic or conversions and let the Prioritization tab scale fixes outward.

[Image: Google Sheets SEO audit template open on a laptop, showing the Overview, Technical, Performance, On Page, and Prioritization tabs, with CSV import areas and protected columns highlighted]

Next step: download the workbook, make a copy, then immediately export a Screaming Frog crawl and GSC Coverage CSV. If you need guidance on exports, see Screaming Frog and Google Search Central for exact report names and CSV formats.

Download, protect your working copy, import raw CSVs, and prioritize the top pages first — the template handles the rest.

Technical SEO checklist and exact exports to gather

Start with targeted exports, not exhaustive dumps. A focused set of CSVs and a couple of lightweight API pulls will expose the majority of technical blockers you need to fix this quarter — crawlability, indexation, redirect logic, and structured data.

Which reports to pull and what to capture

  1. Screaming Frog — Internal HTML CSV: export the Internal HTML (or crawl) CSV and capture Address, Status Code, Canonical Link Element, Meta Robots, Title 1, H1-1, Inlinks. Filter the crawl to indexable HTML and pagination templates to reduce noise.
  2. Google Search Console — Coverage + Performance (Pages): download the Coverage CSV and the Pages performance CSV (clicks, impressions, CTR, position). Use the Coverage export to find excluded but crawled URLs; use Pages to prioritize pages by real traffic impact. See Google Search Central for exact report names.
  3. PageSpeed / Lighthouse — batch reports: run Lighthouse or PageSpeed Insights for a sample set (homepage, top 10 landing pages, representative category and product/article pages). Export the summary CSV/JSON and record LCP, INP/FID, CLS, TTFB, and TTI per URL.
  4. Google Analytics (GA4 or Universal) — Landing pages: export the landing page report with sessions, users, bounce/engagement rate, and goal conversions. Use GA filters to isolate organic traffic.
  5. Backlinks — Ahrefs/Majestic CSVs: get referring domains and top backlinks CSVs. Capture referring domain, DR/TF, linked page, anchor text, and first seen. These files are for spotting low-quality clusters and spikes.
  6. Sitemap and robots check + Structured data: fetch the sitemap index and individual sitemaps, and GET robots.txt. From Search Console pull the Rich Results report and an hreflang export, or use Screaming Frog's hreflang audit to validate language annotations.

Practical trade-off: you will lose signal if you try to process every URL for very large sites. For catalogs and news sites, sample by template and by organic traffic rank (top 500–2,000 pages) and treat the rest as low-priority buckets.
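
One way to implement that sampling, sketched in Python under the assumption that your crawl export carries a template column and your GSC Pages export carries clicks (both column names are hypothetical):

```python
import pandas as pd

crawl = pd.read_csv("crawl.csv")    # assumed columns: url, template
gsc = pd.read_csv("gsc_pages.csv")  # assumed columns: url, clicks

pages = crawl.merge(gsc, on="url", how="left").fillna({"clicks": 0})

# Keep the top N pages per template by organic clicks; bucket the rest
# as low priority rather than processing every URL.
TOP_N = 200
pages["rank_in_template"] = pages.groupby("template")["clicks"].rank(
    method="first", ascending=False
)
sample = pages[pages["rank_in_template"] <= TOP_N]
low_priority = pages[pages["rank_in_template"] > TOP_N]

sample.to_csv("audit_sample.csv", index=False)
low_priority.to_csv("low_priority_bucket.csv", index=False)
```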

Limitation to watch: Search Console is sampled and slow to reflect fixes — do not rely on it alone to prove an indexing problem. Combine Screaming Frog, server logs, and Search Console to validate whether an excluded URL was actually crawled and why.

Concrete example: On a mid-size ecommerce site we exported Screaming Frog Internal CSV and GSC Coverage, then filtered Screaming Frog for pages with a canonical pointing to the homepage. Cross-referencing with GA landing pages revealed that high-intent category pages were being canonicalized away. After fixing the template-level canonical tags, those categories regained organic search visibility within 6–8 weeks.

Minimum columns to store for each import: Screaming Frog (URL, status, canonical, meta robots, title, H1, inlinks); GSC Coverage (URL, reason excluded); GSC Pages (URL, clicks, impressions, position); PageSpeed (URL, LCP, CLS, TTFB); GA (URL, sessions, conversions); Backlinks (referring domain, DR/TF, link URL, anchor).

Collect these exact exports first; any prioritization done without them is guesswork.

Next consideration: run the exports, load them into the template Technical tab, then create two views — Indexation issues and Redirects/Canonicals — and prioritize by traffic impact. If your site exceeds ~50k pages, plan to summarize with BigQuery or a database and import aggregated rows into the sheet rather than raw crawls.

Performance and Core Web Vitals auditing steps

Start with alignment: field data first, lab tests second. Mismatches between Chrome UX Report percentiles and a single Lighthouse run are the most common cause of wasted effort. Your first action is to confirm which pages fail field thresholds at the 75th percentile for LCP, INP/FID, and CLS before spending engineering time on synthetic tuning.
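
The Chrome UX Report API returns those p75 values directly, so the check can be scripted. Below is a minimal sketch; the API key placeholder and bare-bones error handling are assumptions for illustration, and URLs without enough field traffic will simply return no record:

```python
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder; create one in Google Cloud
ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def crux_p75(url: str, form_factor: str = "PHONE") -> dict:
    """Return 75th-percentile field values for a URL, or {} if no CrUX data."""
    resp = requests.post(
        f"{ENDPOINT}?key={API_KEY}",
        json={"url": url, "formFactor": form_factor},
    )
    if resp.status_code != 200:
        return {}  # the URL may not have enough field data in CrUX
    metrics = resp.json()["record"]["metrics"]
    return {
        name: metrics[name]["percentiles"]["p75"]
        for name in (
            "largest_contentful_paint",
            "interaction_to_next_paint",
            "cumulative_layout_shift",
        )
        if name in metrics
    }

print(crux_p75("https://example.com/"))
```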

Audit workflow

  1. Collect field signals. Pull the Chrome UX Report (CrUX) by URL or origin, and export the 75th-percentile values for LCP, INP/FID, and CLS. Use the GSC Core Web Vitals report for a quick site-level view.
  2. Map to traffic. Join field metrics to your landing page list from Google Analytics to focus on pages that actually move traffic or conversions.
  3. Run targeted lab tests. For the top 50 traffic pages run Lighthouse or PageSpeed Insights in controlled desktop and mobile modes. Export the JSON or CSV summaries so the template can capture LCP, TTFB, TTI, and Total Blocking Time.
  4. Diagnose root causes. Use filmstrip, network waterfall, and layout shift regions in DevTools or WebPageTest to determine whether slow LCP is images, server TTFB, render-blocking CSS, or third-party scripts.
  5. Score and bucket fixes. In the Performance tab assign each page a fix type: template-level, asset optimization, server/CDN, or third-party script. This guides whether you fix one page or the template powering hundreds of pages.
  6. Verify and measure. After deployment, compare new field percentiles and Ranklytics keyword groups for organic movement. Synthetic tests alone do not prove impact on real users or rankings.

Practical trade-off: fixing template level issues is higher upfront but scales; fixing page-level image sizes is low effort but only helps isolated high-value pages. If you have limited engineering bandwidth, prefer template and CDN caching fixes that improve dozens or hundreds of pages at once.

Common trap to avoid: chasing tiny CLS improvements across low-traffic pages while third-party tags add 300–500 ms of TBT site-wide. Third-party scripts often dominate blocking time and unpredictably affect INP – remove or lazy-load nonessential vendors first.

Concrete example: On a mid-size ecommerce site the audit showed high LCP on category pages caused by large hero images and synchronous tag manager calls. The team switched to responsive srcset images, added server-side compression, and deferred the tag manager load. Template changes applied to the category template yielded consistent LCP improvements across 120 category pages and simplified the Prioritization tab exports for implementation.

Implementation note: export Lighthouse/PageSpeed summaries as CSV or JSON and store raw files on a Raw Data tab. Use the template to calculate percent change and flag regressions automatically so releases include a performance gate.
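
As a sketch of that capture step, the following Python flattens one Lighthouse JSON report into a Raw Data row; the audit keys match recent Lighthouse report formats but have changed across versions, so verify them against your own files:

```python
import json

# Assumed audit keys -> sheet columns; confirm against your Lighthouse version.
METRICS = {
    "largest-contentful-paint": "lcp_ms",
    "total-blocking-time": "tbt_ms",
    "cumulative-layout-shift": "cls",
    "server-response-time": "ttfb_ms",
}

def summarize_lighthouse(path: str) -> dict:
    """Flatten one Lighthouse JSON report into a row for the Raw Data tab."""
    with open(path) as f:
        report = json.load(f)
    row = {"url": report.get("finalDisplayedUrl") or report.get("finalUrl")}
    for audit_key, column in METRICS.items():
        row[column] = report["audits"].get(audit_key, {}).get("numericValue")
    row["perf_score"] = report["categories"]["performance"]["score"]
    return row

print(summarize_lighthouse("lighthouse-report.json"))
```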

Measure with both field and lab data, prioritize template fixes for scale, and treat third-party scripts as the most common site-wide performance sink.

[Image: performance audit workflow showing a Chrome UX Report export, Lighthouse JSON, and a spreadsheet with columns for URL, LCP, INP, CLS, TTFB, and Fix Type]

On page and content audit using the template

Start with pages that move the needle. The On Page tab in the SEO audit template is designed to turn raw exports into decisions: keep, refresh, consolidate, or delete. Populate the tab with your Screaming Frog crawl rows, GA landing-page engagement metrics, and Ranklytics keyword groups so each URL has a signal set you can act on quickly.

What the On Page tab actually holds

Think of the tab as a decision worksheet rather than a checklist. Key columns the template uses are: page type (template/category/product/article), target keyword group (Ranklytics grouping), traffic trend (GA sessions delta), meta status (automated flag for missing/duplicate), content depth score (word count vs page-type benchmark), internal link weight, and an action recommendation with an estimated dev effort. The template auto-calculates a local priority score that feeds the Prioritization tab.

  • Signal aggregation: The template pulls CTR and position from Google Search Console and merges them with GA engagement to detect high-impression, low-CTR pages that need meta/title tests.
  • Cannibalization indicator: Pages mapped to the same Ranklytics keyword group get an overlap score so you can spot competing pages and choose consolidation versus differentiation.
  • Quality gating: Instead of raw word count, the template compares content depth to type-specific medians and flags pages under the 25th percentile for review (the sketch after this list shows the gating logic).
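
A minimal sketch of that percentile gating, assuming an On Page export with url, page_type, and word_count columns (the names are illustrative):

```python
import pandas as pd

pages = pd.read_csv("onpage.csv")  # assumed columns: url, page_type, word_count

# Flag pages below the 25th percentile of word count for their own page type,
# rather than applying one site-wide threshold.
q25 = pages.groupby("page_type")["word_count"].transform(lambda s: s.quantile(0.25))
pages["thin_content_flag"] = pages["word_count"] < q25

print(pages[pages["thin_content_flag"]][["url", "page_type", "word_count"]])
```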

Practical trade-off: Automated flags save time but over-call issues. For example, duplicate title tags flagged across a template may be intentional for paginated or faceted pages. Use the page-type column to suppress irrelevant alerts and avoid bulk edits that break templates.

Concrete example: On a SaaS docs site we exported top 500 landing pages and the template flagged 87 pages with thin content relative to other docs of the same type. Rather than rewriting each, the team grouped similar thin guides into four consolidation tasks, created combined pages with improved keyword coverage using Ranklytics content briefs, and measured a 20% net traffic gain to the consolidated pages within two months.

Minimum fields to make an on-page decision: URL, page type, target keyword group, GSC clicks/impr/position, GA sessions & conversions, content depth, internal links, meta/title duplicate flag, cannibalization score, recommended action.

A useful judgment: content pruning is not always the fastest route. Consolidation and canonicalization reduce index bloat but can temporarily drop long-tail impressions. If a page has unique impressions or conversions, prefer targeted refreshes guided by intent signals rather than outright removal.

Execution tip: Export the On Page tab's top priority rows as the Trello/Jira CSV included with the template and attach the Ranklytics content brief link. That keeps writers focused on intent and engineers focused on template-level changes that scale.

Prioritize actions that fix template-level problems first (they scale), then use targeted content work for unique high-intent pages.

Backlink audit and authority signal checks

Key point: a backlink audit is about patterns and context, not vanity numbers. Raw link counts lie; referring-domain topicality, anchor distribution, and link velocity reveal the real risk and opportunity you must act on.

Exact exports to collect

  • Ahrefs referring domains CSV: include referring domain, domain rating, first seen, last seen, and number of links to site.
  • Ahrefs backlinks CSV: export backlink URL, anchor text, target URL, status (live/lost), and traffic estimate for the target page.
  • Google Search Console Links export: to capture what Google sees natively and to cross-check odd anchors or domains.
  • Majestic Trust Flow / Citation Flow export: if available, for a second opinion on domain trust signals.
  • Historical new/lost links report: use it to detect sudden spikes – these are the most reliable early warning signs of manipulation.

Practical limitation: backlink providers have different crawls and update cadences so you must treat their outputs as overlapping samples, not a single truth. Correlate at domain level and do manual spot checks for any domain you plan to disavow.

How to triage quickly: cluster referring domains, then score by topical relevance, trust metric (DR or Trust Flow), anchor text risk, and link velocity. Prioritize manual review for domains with low topical match, low trust, and concentrated exact match anchors.
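
The anchor concentration part of that triage is easy to compute from the backlinks CSV. A short Python sketch, assuming columns named referring_domain and anchor (adjust to your export's actual headings):

```python
import pandas as pd

links = pd.read_csv("ahrefs_backlinks.csv")  # assumed: referring_domain, anchor

# Share of a domain's links that reuse its single most common anchor.
# Values near 1.0 combined with exact-match anchors are the pattern to
# review manually first.
def anchor_concentration(anchors: pd.Series) -> float:
    return anchors.value_counts(normalize=True).iloc[0]

scores = (
    links.groupby("referring_domain")["anchor"]
    .agg(anchor_concentration)
    .rename("anchor_concentration")
    .sort_values(ascending=False)
)
print(scores.head(20))
```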

Concrete example: during an audit for a niche B2B product the report showed a rapid influx of forum links using the product name as an exact-match anchor from 180 domains over 3 weeks. The team secured removals from 60 domains via outreach, disavowed 40 irredeemable domains after manual review, and monitored Ranklytics keyword groups for recovery. Rankings began to stabilize after 8 to 12 weeks.

Judgment call most teams get wrong: domain rating alone is a poor gatekeeper. Ten links from topically relevant, lower-DR sites often beat a single link from a high-DR directory. Focus on relevance and placement context (is the link editorial, or buried in user-generated content?) because the latter is where risk concentrates.

  • Action steps for the template: import referring domain CSV into the Backlinks tab, create a domain cluster column, add a topical relevance flag, compute an anchor concentration ratio, and mark domains for outreach/disavow with a review status.
  • Outreach first, disavow second: send targeted removal requests documented in the Fix Log tab, wait 4 to 8 weeks to see effects, and disavow only when outreach fails and a manual correlation to ranking decline exists.
  • Tracking impact: use referring domain count and Ranklytics keyword tracking to look for lift, but expect a time lag of 6 to 12 weeks and noisy attribution.

Risk mitigation tip: never bulk disavow without sampling. Disavow at domain granularity, keep a copy of the pre-disavow list in the Fix Log tab, and pair any disavow action with ongoing ranking and traffic monitoring.

[Image: spreadsheet Backlinks tab with columns for Referring Domain, DR, Trust Flow, Anchor Concentration, First Seen, and Action Status, beside an Ahrefs referring domains export in a browser window]

Next consideration: export your top 200 referring domains from Ahrefs, import them into the Backlinks tab, and run the topical relevance and anchor concentration filters to create your initial outreach queue.

Prioritization framework and scoring rules in the template

Bottom line: the template translates messy audit outputs into a single numeric priority so teams stop arguing about what to fix first. The approach blends impact (what the fix can move in traffic, conversions, or crawlability) with effort (engineering, QA, and content time) and a technical severity multiplier for issues that block indexing or cause direct penalties.

How the score is computed

Scoring formula: Priority = (TrafficPotentialScore × 0.40) + (BusinessValueScore × 0.25) + (TechnicalSeverity × 0.20) − (EffortScore × 0.15). All component scores are normalized to 0–100. The template then buckets Priority into High (75+), Medium (40–74), and Low (<40); a code sketch of this computation follows the factor table below.

Factor | What it captures | How the template measures it | Weight
TrafficPotentialScore | Potential organic sessions the page or page group can gain | GSC clicks/impressions trend and Ranklytics keyword volume aggregation | 0.40
BusinessValueScore | Conversion likelihood or strategic importance | GA conversions and manual business tags (ecommerce, lead form, docs funnel) | 0.25
TechnicalSeverity | Indexing/blocking issues and penalty risk | Binary and scaled flags from Screaming Frog + GSC Coverage (noindex, robots, redirect chains) | 0.20
EffortScore | Estimated dev/content hours to implement and QA | Team-provided estimate buckets (1, 3, 8, 20+ hours) mapped to 0–100 | 0.15
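
Expressed in code, the formula and buckets above reduce to a few lines. This sketch mirrors the template's published weights and thresholds; the example inputs are invented for illustration:

```python
# Priority scoring per the template: all components normalized to 0-100.
WEIGHTS = {"traffic": 0.40, "business": 0.25, "severity": 0.20, "effort": -0.15}

def priority(traffic: float, business: float, severity: float, effort: float) -> float:
    return (WEIGHTS["traffic"] * traffic
            + WEIGHTS["business"] * business
            + WEIGHTS["severity"] * severity
            + WEIGHTS["effort"] * effort)

def bucket(score: float) -> str:
    """Bucket thresholds match the template: High 75+, Medium 40-74, Low <40."""
    if score >= 75:
        return "High"
    if score >= 40:
        return "Medium"
    return "Low"

# Invented example: duplicate-title fix on high-impression category pages,
# moderate severity, cheap template-level effort.
score = priority(traffic=80, business=60, severity=50, effort=10)
print(round(score, 1), bucket(score))  # 55.5 Medium
```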

Reality check: raw traffic numbers bias you toward legacy pages. The template weights TrafficPotential against BusinessValueScore so a lower-traffic checkout page can outrank a high-traffic blog post when conversions matter. That judgment reduces noisy work and forces trade-offs between volume and value.

  1. Step 1: Populate component columns using the template import rows (GSC, GA, Screaming Frog, Ranklytics keyword groups).
  2. Step 2: Have an engineer or project lead set EffortScore buckets for each item; use template-level tags to apply the same effort to many pages at once.
  3. Step 3: Run the formula and review the top 50 High-priority rows; convert those to tracked tasks using the included Trello/Jira CSV export or push them into Ranklytics for content briefs and keyword experiments.

Example case: A site had 72 pages with duplicate title tags flagged by Screaming Frog. TrafficPotential aggregated from GSC showed these pages accounted for 18% of category impressions but low conversions. TechnicalSeverity was moderate (duplicate meta, not noindex) and EffortScore was low because the fix is a template change. The resulting Priority score pushed the task into High, and after a template update the pages saw improved CTRs and steady ranking gains within two months.

Trade-off to accept: scoring is only as good as your effort estimates and business-value tags. If teams under- or over-estimate effort, the priority ranking will skew. Use historical ticket data to calibrate EffortScore buckets and review actual time spent after the first run; adjust weights if fixes consistently take more or less time than estimated.

Implementation prompt: start by running the scoring on your top 500 landing pages. Export the High bucket to the template's Trello/Jira CSV, assign owners, and measure time-to-fix. After one sprint recalibrate EffortScore and reweight TrafficPotential if your top fixes are not delivering expected lift.

Next consideration: treat the priority score as a decision aid, not gospel. Use the Prioritization tab to create a short, actionable sprint of top fixes and re-evaluate scores after you get real implementation data.

Workflow: from audit to fixes to measurement using Ranklytics

Make the audit operational, not archival. Exported CSVs should not sit in a folder — they must flow through the template, become prioritized work, and land in Ranklytics where you can run experiments and measure outcomes. Treat the template as a staging area: normalize tool outputs, attach context, then convert rows into tracked tasks.

Core steps in the conveyor-belt workflow

  1. Normalize raw exports: drop Screaming Frog, GSC, PageSpeed, GA and backlink CSVs into the template Raw Data tab so formulas can classify issues automatically.
  2. Triage and group: use the template to cluster by page template, keyword group (Ranklytics groups), and fix type so you avoid one-off changes across hundreds of pages.
  3. Score and bucket: run the priority formula and move the High bucket into an implementation sprint. Export those rows as a Jira/Trello CSV or push them into Ranklytics as content briefs and tracked tasks.
  4. Prepare execution artifacts: for content fixes attach Ranklytics AI briefs and for technical fixes include a minimal repro, affected templates, and a QA checklist in the Fix Log tab.
  5. Deploy in controlled batches: roll template-level changes first (they scale) and page-level edits second to reduce regression risk.
  6. Measure with keyword groups: map affected pages to keyword groups in Ranklytics and monitor rank and click trends over multiple index cycles, logging correlated changes back into the Weekly Check tab.

Trade-off to accept: batching template changes saves engineering time but increases blast radius. If you lack a reliable staging/QA pipeline, prefer small, reversible releases and use the Fix Log to track rollbacks. The template forces grouping so you can make that decision deliberately rather than by accident.

Concrete example: A mid-market ecommerce SEO team exported their top category landing pages into the template, grouped 45 pages under a single category template, and created one Ranklytics content brief and one dev ticket for the template change. Engineers deployed the template update and writers refreshed product intro copy via Ranklytics briefs. The team tracked the affected keyword group in Ranklytics and documented ranking and CTR movement in the Weekly Check tab to validate the remediation.

Measurement reality check: expect noisy signals. Organic ranking moves after fixes are influenced by seasonality, other site releases, and index lag. Use Ranklytics experiments to compare keyword group performance and require at least several index cycles and consistent query-level checks before declaring success or failure.

Implementation tip: automate exports where possible. Use scheduled CSV reports from Ahrefs and PageSpeed, IMPORTDATA or an API connector into Google Sheets for GSC/GA, and configure Ranklytics to accept the Trello/Jira CSV so prioritized items appear as actionable briefs automatically.
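
For the PageSpeed side of that automation, the PageSpeed Insights v5 API can feed a scheduled CSV without manual exports. A hedged sketch, assuming an API key and the audit keys used by recent Lighthouse versions:

```python
import csv
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
PSI = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_summary(url: str) -> dict:
    """Fetch one PageSpeed Insights result and keep the fields the sheet needs."""
    resp = requests.get(PSI, params={"url": url, "strategy": "mobile", "key": API_KEY})
    resp.raise_for_status()
    audits = resp.json()["lighthouseResult"]["audits"]
    return {
        "url": url,
        "lcp_ms": audits["largest-contentful-paint"]["numericValue"],
        "cls": audits["cumulative-layout-shift"]["numericValue"],
        "tbt_ms": audits["total-blocking-time"]["numericValue"],
    }

urls = ["https://example.com/", "https://example.com/category"]
with open("psi_summary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "lcp_ms", "cls", "tbt_ms"])
    writer.writeheader()
    for u in urls:
        writer.writerow(psi_summary(u))
```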

[Image: Ranklytics interface open on a laptop beside a Google Sheets audit template populated with prioritized rows; exported CSV filenames, a Trello board, and a printed Fix Log visible]

Next consideration: pick a cadence and stick to it — the real benefit comes from repeating the pipeline, tightening effort estimates, and using Ranklytics to turn audit outputs into measurable experiments.

Download, apply, and next steps

Start fast and stay focused. Download the SEO audit template from the Ranklytics features page, create a dated Audit-Run sheet, and keep the original file untouched. Give your Raw Data imports a clear naming convention like AuditName_tool_YYYYMMDD so you can compare runs without guesswork.

Two-hour quick audit session

  1. Prepare the workspace: open the Google Sheets version and grant edit rights to the audit owner only.
  2. Pull three exports: Screaming Frog crawl CSV for indexable HTML, Google Search Console Pages CSV filtered to organic, and the PageSpeed summary JSON for three sampled pages.
  3. Sample pages for Lighthouse: run Lighthouse for the homepage, a representative category or landing page, and a high-value product or article page to capture template-level versus page-level issues.
  4. Load raw files: drop CSV/JSON into the Raw Data tab; let the sheet normalizers map columns automatically.
  5. Run the auto-classifiers: switch to Technical and On Page tabs and review flagged issues from the template's formulas.
  6. Score top items: use the Prioritization tab to calculate Priority for the High bucket and export that bucket as a Trello or Jira CSV.
  7. Create implementation artifacts: for each High item attach a minimal repro, affected template, and acceptance criteria in the Fix Log.
  8. Push top tasks to Ranklytics: import the Trello/Jira CSV or paste the exported rows into Ranklytics to create content briefs and tracking groups.
  9. Schedule work: assign owners and set a short SLA – template-level changes in the next sprint, page-level edits in follow-up sprints.
  10. Snapshot results: duplicate the Audit-Run tab and move it to Weekly Check so you have a baseline for measurement.

Practical trade-off: run this compact session instead of a full-site sweep when you need momentum. Full crawls of very large sites are valuable, but they often delay action. Treat a sampled, prioritized run as your Minimum Viable Audit and iterate from there.

Concrete example: a 25k page ecommerce team used this approach in a two-hour kickoff. They imported the top 500 organic landing pages, discovered a template-level canonical mistake affecting category templates, exported the High bucket to a Jira CSV, and rolled the fix in one sprint. The focused fix stopped further traffic decline and simplified subsequent content work.

Important: automation speeds audits but amplifies bad data. Validate imports for column drift and sample-check 10 rows before trusting priority scores.
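
A lightweight way to do that validation in Python; the expected column sets below are illustrative placeholders for your own import contracts:

```python
import pandas as pd

# Assumed import contracts: file name -> columns the template expects.
EXPECTED = {
    "gsc_pages.csv": {"url", "clicks", "impressions", "position"},
    "crawl.csv": {"url", "status", "canonical", "title", "h1"},
}

for path, expected_cols in EXPECTED.items():
    df = pd.read_csv(path, nrows=10)  # sample-check the first rows only
    missing = expected_cols - set(df.columns)
    if missing:
        print(f"{path}: column drift, missing {sorted(missing)}")
    else:
        print(f"{path}: OK")
        print(df.head(10))  # eyeball 10 rows before trusting priority scores
```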

Next reporting step: use the Overview tab to produce a one-page executive snapshot. Include three KPIs: High priority items count, estimated aggregate traffic at risk, and planned sprint for fixes. Attach the Prioritization CSV when you present to stakeholders.

