Enter a URL above and click Analyze to start the SEO analysis.
Run analysis to see SEO score.
Run analysis to see GEO score (AI readiness).
Run analysis to see HTTP headers.
Run analysis to see SSL certificate details.
Run analysis to see redirect chain.
Run analysis to see performance data.
Run analysis to see meta tags and headings.
Run analysis to see link analysis.
Run analysis to see image analysis.
Run analysis to see structured data.
Run analysis to see content metrics.
Run analysis to see the HTML5 document structure.
Run analysis to see accessibility checks.
Run analysis to see robots.txt data.
Run analysis to view HTML source.
A practical reference for SEO best practices, organized by category. Independent of any analysis — always available.
- HTTPS everywhere — Serve all pages over HTTPS. HTTP→HTTPS redirects must be permanent (301/308).
- Single canonical URL — Use `<link rel="canonical">` on every page. Avoid duplicate content via URL parameters, trailing slashes, or www/non-www.
- Crawlable robots.txt — Keep `robots.txt` accessible at the root. Declare your Sitemap URL there. Avoid accidentally blocking Googlebot.
- XML Sitemap — Submit to Google Search Console. Include only canonical, indexable URLs. Update automatically.
- Redirect chains — Keep redirect chains to 1 hop. Chains of 3+ slow crawling and lose link equity.
- HTTP/2 or HTTP/3 — Enables request multiplexing; improves performance for pages with many resources.
- Security headers — HSTS, CSP, X-Frame-Options, X-Content-Type-Options signal a well-maintained site.
- Structured data (JSON-LD) — Mark up key entities (Article, Product, FAQPage, BreadcrumbList) for rich results in SERPs.
- hreflang — For multi-language sites, include self-referencing hreflang tags and an `x-default` fallback (see the snippet after this list).
- Avoid noindex in production — Double-check that staging/dev `noindex` headers are not deployed to production.
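A minimal `<head>` sketch combining the canonical and hreflang items above; all domains and paths are placeholders:

```html
<head>
  <!-- One canonical URL per page (placeholder domain) -->
  <link rel="canonical" href="https://example.com/en/pricing">

  <!-- Self-referencing hreflang tags plus the x-default fallback -->
  <link rel="alternate" hreflang="en" href="https://example.com/en/pricing">
  <link rel="alternate" hreflang="de" href="https://example.com/de/preise">
  <link rel="alternate" hreflang="x-default" href="https://example.com/en/pricing">
</head>
```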
- Title tag — 30–60 characters, primary keyword near the start. Unique per page. Avoid keyword stuffing.
- Meta description — 70–160 characters. Compelling summary that increases click-through rate. Not a direct ranking factor, but indirectly important.
- Single H1 — One H1 per page, containing the primary topic. H2–H6 should follow a logical hierarchy.
- Word count — Aim for 300+ words for indexable pages. Thin content (<100 words) is rarely ranked.
- Readability — A Flesch Reading Ease score ≥ 60 (standard) is a good target for general audiences. Short paragraphs and sentences help.
- Keyword placement — Include primary keyword in title, H1, first paragraph, and naturally in body. Avoid repetitive exact-match stuffing.
- Fresh content — Update evergreen content regularly. Add a `dateModified` in JSON-LD to signal freshness to crawlers.
- Image alt text — Every meaningful image needs descriptive alt text. Decorative images: `alt=""`.
- Open Graph + Twitter Card — Required for controlling how pages appear when shared on social media. Minimum: title, description, image (1200×630 px). Combined example after this list.
- Descriptive anchor text — Use meaningful text like "SEO best practices" instead of "click here". Helps both users and crawlers understand link context.
- Internal linking — Link related pages from within your content. Spreads PageRank and helps crawlers discover pages.
- External links — Link out to authoritative sources. Use `rel="nofollow"` or `rel="sponsored"` for paid/UGC links (markup example after this list).
- Broken links — 404 links waste crawl budget and degrade user experience. Audit regularly with a crawler.
- Link depth — Important pages should be reachable within 3 clicks from the homepage. Deep pages are crawled less frequently.
- Backlinks — Links from authoritative, topically relevant domains are a major ranking signal. Focus on quality over quantity.
- Disavow sparingly — Only disavow links if you have evidence of a manual action or clearly toxic pattern. Do not disavow indiscriminately.
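A short sketch of descriptive anchor text and link qualification; the paths and domains are hypothetical:

```html
<!-- Descriptive anchor text instead of "click here" -->
<p>See our <a href="/guides/seo-best-practices">SEO best practices</a> guide for details.</p>

<!-- Qualify paid and user-generated links -->
<a href="https://partner.example.com/offer" rel="sponsored">Partner offer</a>
<a href="https://forum.example.com/post/123" rel="ugc nofollow">User-submitted link</a>
```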
- llms.txt — Add an `/llms.txt` file (plain text, Markdown format) to help AI systems understand what your site offers and how it should be cited. See llmstxt.org and the sketch after this list.
- AI Overview optimization — Google's AI Overviews often cite pages with clear, structured answers, concise prose, and authoritative signals (author, date, organization schema).
- Entity clarity — Name entities explicitly (organizations, people, products) and mark them up with Schema.org. This helps knowledge graph inclusion and AI citation.
- Conversational queries — AI-driven search handles natural language questions. Optimize for "who, what, where, why, how" question patterns.
- Robots.txt for AI crawlers — Common AI crawlers: `GPTBot`, `Claude-Web`, `PerplexityBot`, `CCBot`. Allow or block per use case.
- Perplexity / ChatGPT citations — These tools cite readable, well-structured HTML pages. Avoid heavy JavaScript-only rendering.
- AI-generated content — Provide value beyond what AI generates. Differentiate with original data, personal expertise, and direct experience (Google's "E-E-A-T").
- E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) — Key quality signals for Google's Quality Raters. Use author schema, About/Contact pages, and editorial standards.
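A minimal `/llms.txt` sketch following the conventions described at llmstxt.org (an H1 site name, a blockquote summary, then linked sections); the site name and URLs are placeholders:

```markdown
# Example Site

> SEO and GEO analyzer with a built-in best-practice reference.

## Docs

- [SEO checklist](https://example.com/docs/seo): technical and on-page checks
- [GEO guide](https://example.com/docs/geo): optimizing for AI-generated answers
```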
| Metric | Good | Needs work | Poor | What it measures |
|---|---|---|---|---|
| LCP (Largest Contentful Paint) | ≤ 2.5 s | ≤ 4 s | > 4 s | Time until the largest visible element is rendered |
| INP (Interaction to Next Paint) | ≤ 200 ms | ≤ 500 ms | > 500 ms | Responsiveness to user interactions |
| CLS (Cumulative Layout Shift) | ≤ 0.1 | ≤ 0.25 | > 0.25 | Visual stability — unexpected layout shifts |
- Improve LCP — Preload the LCP image (`<link rel="preload" as="image">`). Use a CDN. Eliminate render-blocking resources. Upgrade to HTTP/2. See the combined snippet after this list.
- Improve INP — Minimize long JavaScript tasks (>50 ms). Use `requestIdleCallback` for non-critical work. Avoid synchronous XHR.
- Improve CLS — Set explicit `width` and `height` on all images and iframes. Avoid inserting DOM elements above existing content without user interaction.
- TTFB — A server-side metric: < 200 ms is excellent, < 500 ms is acceptable. Use caching, CDN edge nodes, and faster server responses.
- Measure — Use PageSpeed Insights, web.dev/measure, or Chrome DevTools Performance panel.
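One combined sketch of the LCP, CLS, and INP fixes above; the file paths and the `initAnalytics` helper are hypothetical:

```html
<!-- LCP: fetch the hero image early -->
<link rel="preload" as="image" href="/img/hero.webp" fetchpriority="high">

<!-- CLS: explicit dimensions reserve layout space before the image loads -->
<img src="/img/chart.png" width="800" height="450" alt="Monthly traffic chart">

<script>
  // INP: push non-critical work off the input-handling path
  requestIdleCallback(() => {
    initAnalytics(); // hypothetical non-critical setup
  });
</script>
```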
A practical reference for Generative Engine Optimization (GEO) — optimizing your content for AI-powered search engines and LLM-based answers. Independent of any analysis — always available.
Generative Engine Optimization (GEO) is the practice of optimizing web content so it gets cited and referenced in AI-generated answers — from tools like ChatGPT, Google Gemini, Perplexity, and others.
Unlike traditional SEO, which focuses on ranking positions in search result pages, GEO focuses on citability — being the source that AI systems reference when answering user questions.
| Aspect | SEO (Classic) | GEO (AI Optimization) |
|---|---|---|
| Goal | Ranking in SERPs | Citation in AI answers |
| KPI | Position / CTR | AI-Citation Rate |
| Content | Keyword-focused | Structured, original, authentic |
| Technical | Basic markup sufficient | Comprehensive schema markup |
| Authority | Backlinks | Original data & studies |
| Format | Text-oriented | Multimodal (text, graphics, video) |
AI systems prefer content that is clear, structured, and factual. Generic, bloated text walls are rarely cited.
- Short paragraphs — Keep paragraphs under 150 words. AI models extract information more easily from concise blocks.
- Clear headings — Use descriptive H2–H6 headings that summarize the section content. AI uses these as navigational anchors.
- Facts, tables, and lists — Explicit data points, comparison tables, and structured lists are highly quotable by AI.
- Original data — AI heavily cites original sources: own studies, surveys, benchmarks, and unique datasets. Be the primary source.
- Authenticity over mass — Generic, AI-generated content is ignored. Show real expertise, personal experience, and unique insights.
- Conversational queries — Optimize for natural language questions: "who, what, where, why, how" patterns that AI users ask.
- FAQ sections — Dedicated FAQ blocks with clear Q&A format are directly extractable by AI systems.
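A minimal `FAQPage` JSON-LD sketch of the Q&A format described above; the question and answer text are illustrative only:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is Generative Engine Optimization (GEO)?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "GEO is the practice of optimizing web content so AI systems cite it when generating answers."
    }
  }]
}
</script>
```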
AI models rely on structured, machine-readable content. Unstructured text blocks are increasingly ignored.
- Schema Markup (JSON-LD) — Use `FAQPage`, `HowTo`, `Article`, `Review`, `Product` schemas. These feed directly into AI knowledge extraction.
- Organization & Person schema — Identify who is behind the content. AI uses this for E-E-A-T verification and citation attribution.
- Tables with `<th>` headers — Properly structured HTML tables are machine-readable and highly quotable (see the markup after this list).
- Definition lists — `<dl>`/`<dt>`/`<dd>` for glossaries and key-value explanations.
- llms.txt — Add an `/llms.txt` file (Markdown format) at your domain root. It helps AI systems understand your site's purpose and content structure. See llmstxt.org.
- Modular content — Design content in self-contained blocks that AI can extract independently without losing context.
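A small sketch of the machine-readable structures above, reusing the Core Web Vitals thresholds from this page; markup only, no styling:

```html
<!-- <th> headers make each column's meaning explicit -->
<table>
  <thead>
    <tr><th>Metric</th><th>Good</th><th>Poor</th></tr>
  </thead>
  <tbody>
    <tr><td>LCP</td><td>≤ 2.5 s</td><td>&gt; 4 s</td></tr>
  </tbody>
</table>

<!-- Definition list for glossary-style key-value facts -->
<dl>
  <dt>GEO</dt>
  <dd>Generative Engine Optimization: optimizing content for citation in AI-generated answers.</dd>
</dl>
```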
E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) is critical for AI visibility. Without verifiable authority, even high-quality content stays invisible.
- Author pages — Create dedicated author pages with credentials, bio, social links, and published works. Use `Person` schema (example after this list).
- Original research — Publish industry studies, surveys, and unique data. AI systems strongly prefer citing original sources.
- Expert content — Guest posts, podcast appearances, interviews, and speaking engagements build recognizable expertise signals.
- Reviews & testimonials — Customer reviews, case studies, and testimonials provide social proof that AI recognizes.
- About & Contact pages — Clear organizational identity with verifiable contact information builds trust.
- Editorial standards — Transparent sourcing, fact-checking processes, and editorial policies signal reliability.
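A sketch tying `Person` and `Organization` schema to an `Article`; every name, date, and URL below is a placeholder:

```html
<!-- All names, dates, and URLs are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "GEO Best Practices",
  "dateModified": "2025-01-15",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://example.com/authors/jane-doe",
    "sameAs": ["https://www.linkedin.com/in/janedoe"]
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Inc.",
    "url": "https://example.com"
  }
}
</script>
```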
- AI crawler management — Common AI crawlers: `GPTBot`, `ChatGPT-User`, `OAI-SearchBot`, `Google-Extended`, `Claude-Web`, `ClaudeBot`, `PerplexityBot`, `CCBot`, `Bytespider`. Allow or block selectively in `robots.txt` (sample after this list).
- llms.txt file — Place at domain root (`/llms.txt`). Describe your site, key content areas, and how AI should cite your content. Currently experimental but gaining adoption.
- Server-side rendering — AI crawlers struggle with heavy JavaScript-only sites. Ensure critical content is in the initial HTML response.
- Fast load times — AI crawlers respect crawl-delay and skip slow sites. Optimize TTFB, enable compression, use CDN.
- Clean URL structure — Logical, descriptive URLs help AI systems categorize content topically.
- Internal linking — Strong internal link structure helps AI understand your topical authority and content hierarchy.
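A sample `robots.txt` using crawler names from the list above; which bots to allow or block depends on your use case, and the sitemap URL is a placeholder:

```
# Allow AI crawlers you want citing your content
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Block scrapers you don't want (example)
User-agent: Bytespider
Disallow: /

Sitemap: https://example.com/sitemap.xml
```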
AI systems decompose user questions into multiple sub-queries (Query Fan-Out). Sites that comprehensively cover a topic are favored.
- Topic clusters — Build pillar pages with comprehensive subtopic coverage. Link related content together to demonstrate depth.
- Entity coverage — Explicitly name and describe all relevant entities (products, methods, people, places, studies). Use Schema.org markup.
- Multi-format content — Cover topics across text, video, infographics, and podcasts. AI aggregates from multiple formats.
- Fan-out analysis — For key topics, identify all possible sub-questions a user might ask. Create content that answers each one.
- Content depth — Aim for 500+ words on key pages. Thin content (<300 words) is rarely cited by AI.
Visibility is no longer limited to Google. AI systems aggregate information from across the web.
- Platform presence — Be present on YouTube, Reddit, TikTok, LinkedIn, and industry forums. AI crawls these for information.
- Social proof — Reviews, mentions, and discussions on third-party platforms increase your AI citation likelihood.
- Consistent identity — Use the same brand name, descriptions, and key messages across all platforms for entity recognition.
- Community engagement — Active participation in relevant communities (Reddit, Stack Overflow, industry forums) builds mention-based authority.
Search engines are evolving into AI agents that autonomously research, compare, and prepare decisions on behalf of users.
- Agent Optimization (AO) — A new discipline emerging beyond GEO. Optimizing for autonomous agents that evaluate and select sources without human intervention.
- Machine-readable data — APIs, structured data feeds, and clean data exports will become essential for agent accessibility.
- Trust signals — Verifiable credentials, consistent track records, and transparent provenance will be mandatory for agent trust.
- Decision-ready content — Content must provide clear comparisons, recommendations, and actionable conclusions that agents can directly use.
- Early mover advantage — Sites that structure data and build authority now will have a significant lead when AI agents become mainstream.
Import JSON
Load a previously exported JSON file to re-render all analysis tabs — no new network request needed.