Free Noindex Checker
Check if a web page is blocked from search engine indexing. This tool scans for noindex directives in both HTML meta tags and HTTP response headers, helping you identify why a page might not appear in search results.
Check Page Indexability
Why Checking for Noindex Matters
A single misplaced noindex tag can silently remove your most important pages from search results. This is one of the most common — and most costly — technical SEO mistakes. Pages can end up with noindex directives due to staging environment settings leaking into production, CMS plugins adding tags automatically, incorrect deployment configurations, or developer mistakes during maintenance.
Unlike other SEO issues that gradually reduce rankings, a noindex directive completely removes a page from the index. If your traffic suddenly drops, a rogue noindex tag should be one of the first things you check.
Where Noindex Can Be Set
Search engines check two locations for noindex directives:
- HTML Meta Tag: `<meta name="robots" content="noindex">` — placed in the page's `<head>` section
- HTTP Header: `X-Robots-Tag: noindex` — sent as a server response header; works for both HTML and non-HTML resources
- Bot-Specific Tags: `<meta name="googlebot" content="noindex">` — targets specific search engine crawlers
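For illustration, here is how these locations look in a raw response (hypothetical page and values — note this example carries the directive in both places at once):

```http
HTTP/1.1 200 OK
Content-Type: text/html
X-Robots-Tag: noindex

<html>
<head>
  <meta name="robots" content="noindex">
  <meta name="googlebot" content="noindex">
</head>
```

Either location alone is enough to keep the page out of the index, which is why a checker has to inspect both.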
How to Use This Tool
- Enter a URL — paste the full URL of the page you want to check, including the path (e.g., `https://example.com/blog/my-post`)
- Click "Check Noindex" — the tool fetches the page and inspects both the HTML source and HTTP response headers
- Review the results — you'll see a clear pass/warn/fail status for each directive source, along with the canonical tag status
What This Tool Checks
- HTTP Status Code: Verifies the page returns a 200 OK status and is accessible to crawlers
- Meta Robots Tag: Scans the HTML for `<meta name="robots">` tags with noindex, nofollow, none, and other directives
- Bot-Specific Meta Tags: Checks for googlebot, bingbot, and other crawler-specific meta tags
- X-Robots-Tag Header: Inspects the HTTP response headers for X-Robots-Tag directives
- Canonical Tag: Verifies whether a canonical tag exists and whether it references the same page or points elsewhere
- Related Directives: Detects nofollow, nosnippet, noarchive, and noimageindex settings
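The parsing side of these checks can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation; it assumes the page HTML and response headers have already been fetched, and the function and field names are invented for the example.

```python
# Hypothetical sketch of the meta-tag, header, and canonical checks.
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collect robots meta directives and the canonical link from HTML."""

    def __init__(self):
        super().__init__()
        self.directives = {}   # meta name -> content, e.g. "robots" -> "noindex, nofollow"
        self.canonical = None  # href of <link rel="canonical">, if present

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta":
            name = (a.get("name") or "").lower()
            if name in ("robots", "googlebot", "bingbot"):
                self.directives[name] = (a.get("content") or "").lower()
        elif tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")


def _has_noindex(value: str) -> bool:
    # "none" is shorthand for "noindex, nofollow".
    tokens = {t.strip() for t in value.split(",")}
    return "noindex" in tokens or "none" in tokens


def check_indexability(html: str, headers: dict) -> dict:
    parser = RobotsMetaParser()
    parser.feed(html)

    # HTTP header names are case-insensitive, so match them accordingly.
    xrt = next((v for k, v in headers.items() if k.lower() == "x-robots-tag"), "")

    return {
        "meta_noindex": _has_noindex(parser.directives.get("robots", "")),
        "bot_specific_noindex": any(
            _has_noindex(v) for k, v in parser.directives.items() if k != "robots"
        ),
        "header_noindex": _has_noindex(xrt.lower()),
        "canonical": parser.canonical,
    }
```

Using the standard-library `html.parser` rather than a regex keeps the check robust to attribute order, which varies across CMS output.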
Common Noindex Issues
- Staging settings in production: Many CMS platforms have a "discourage search engines" setting (e.g., WordPress). If this remains enabled after launching, it adds noindex to every page.
- Plugin or theme conflicts: SEO plugins like Yoast or Rank Math can inadvertently set noindex on pages, especially after updates or configuration changes.
- Conditional noindex logic: Some frameworks add noindex to paginated pages, tag archives, or search result pages — sometimes too aggressively.
- HTTP header from server config: Server configurations (Nginx, Apache, Cloudflare) can inject X-Robots-Tag headers that override HTML meta tags, making them harder to spot.
- Non-self-referencing canonical: While not a noindex directive, a canonical tag pointing to a different URL signals to search engines that the current page is a duplicate.
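As an example of the server-config case above, a hypothetical Nginx fragment like the following would attach a noindex header to every PDF, silently overriding anything set in the HTML of linked pages:

```nginx
# Hypothetical fragment: this injects an X-Robots-Tag header that
# search engines honor regardless of what the HTML meta tags say.
location ~* \.pdf$ {
    add_header X-Robots-Tag "noindex, nofollow";
}
```

Because the directive never appears in the page source, header-level rules like this are easy to miss unless you inspect the response headers directly.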
Frequently Asked Questions
What does noindex mean?
Noindex is a directive that tells search engines not to include a specific page in their search results. It can be set via a <meta name="robots" content="noindex"> HTML tag or an X-Robots-Tag: noindex HTTP header. When a search engine crawler encounters this directive, it will exclude the page from its index.
What is the difference between noindex in meta tags and HTTP headers?
Both methods achieve the same result — preventing search engine indexing. The meta robots tag is placed in the HTML <head> section, while the X-Robots-Tag is sent as an HTTP response header. The HTTP header method is particularly useful for non-HTML resources like PDFs, images, and video files. Both are fully respected by Google and Bing.
Can Google still index a page with noindex?
No. Google treats noindex as a strict directive, not a suggestion. If a page has a valid noindex directive and Google can crawl it, the page will be removed from search results. However, it may take some time for Google to re-crawl and process the change — removal can take days to weeks depending on crawl frequency.
How do I check if my page has a noindex tag?
Use this free Noindex Checker tool to instantly scan any URL for noindex directives in both meta tags and HTTP headers. Alternatively, you can manually view the page source (Ctrl+U or Cmd+Option+U) and search for "noindex", or check the HTTP headers in browser developer tools under the Network tab.
What is the difference between noindex and disallow in robots.txt?
Noindex prevents a page from appearing in search results but still allows crawling. Disallow in robots.txt prevents crawling entirely. Critically, if a page is disallowed in robots.txt, search engines cannot see a noindex tag on that page. The page might still appear in search results based on external signals like inbound links. For guaranteed de-indexing, use noindex and make sure the page is crawlable.
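To make the distinction concrete, consider a hypothetical robots.txt rule like this:

```
User-agent: *
Disallow: /private/
```

Crawlers will never fetch anything under `/private/`, so a noindex tag on those pages goes unseen — and the URLs can still surface in results if other sites link to them. Removing the Disallow rule (so the page is crawlable) while keeping the noindex tag is what actually gets the page de-indexed.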
Track Your Brand Across Google & AI
Spotted an indexing issue? QuickSEO helps you monitor your entire site's search performance. Track how your brand appears across Google Search and AI chatbots like ChatGPT, Claude, and Gemini — all in one dashboard.
Try QuickSEO Free →
Related Tools
- Check your robots.txt file for crawling and indexing issues.
- HTTP Header Checker — Inspect HTTP response headers for security, caching, and SEO issues.
- Canonical Tag Generator — Generate proper canonical tags to prevent duplicate content issues.
- Sitemap Validator — Validate your XML sitemap for protocol compliance and errors.