How to fix "Crawled – currently not indexed" in Google Search Console

Published April 20, 2026 · 7 min read

Short answer: Google visited the page, looked at it, and decided not to add it to the index. The fix is to make the page genuinely more useful than what Google already ranks for that query, then add internal links from a page Google already trusts, then request re-indexing.

The "Crawled – currently not indexed" status is the most common reason a page fails to show up in Google in 2026. It means Googlebot made it to your URL, read the HTML, and then quietly passed on listing it in search results. There is no error banner, no manual action, no broken technical setting — just a one-line note in the Page Indexing report that your page is not in the index.

What makes it frustrating is that the page worked. Google found it. It was not blocked by robots.txt. It rendered fine. Google simply did not think it was worth keeping. Below is what actually causes the status and a fix that does not rely on clicking "Request Indexing" and hoping.

What "Crawled – currently not indexed" actually means

Google crawls far more pages than it indexes. The index is a storage decision, not a discovery decision. Every URL Google indexes costs storage, retrieval, and ranking compute — so the index is curated. When the crawler visits a page, the quality algorithms ask a simple question: is this page adding enough unique value to justify a slot alongside what is already indexed for these queries?

If the answer is "not really," you get this status. The page is excluded, but it sits in a holding bucket. Google will revisit it periodically to see if anything changed. If you improve the page, the next crawl can flip the decision. If you do nothing, the page stays in limbo for months.

This is different from "Discovered – currently not indexed" (Google found the URL but has not crawled it yet, usually a crawl-budget issue) and different from "Excluded by 'noindex' tag" (a deliberate directive from you). Crawled but not indexed is purely a quality and value judgment.

Why Google skips crawled pages

After working through hundreds of these cases, I've found the reasons cluster into five buckets. At least one of these — usually more than one — is behind almost every instance.

1. Thin or shallow content

The page answers the query but less completely than competing pages already in the index. If the top three results each run 1,500 words with original screenshots and your page is 400 words of generic summary, Google will often skip it. Length alone is not the fix — depth is. A concise answer that nobody else has is still indexable. Reworded common knowledge is not.

2. Duplicate or near-duplicate content

Category pages with the same boilerplate, product variants that differ only by a spec, AI-generated articles that echo the same two or three source pieces — Google collapses these into one "canonical" and excludes the rest. Run any suspect page through a diff against its nearest sibling on your site. If 70%+ of the wording overlaps, consolidate.
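A quick way to eyeball that overlap is a token-level diff ratio. This is a minimal sketch using Python's standard-library difflib — the sample pages and the 70% threshold are illustrative, not a Google-defined cutoff:

```python
import difflib
import re

def wording_overlap(text_a: str, text_b: str) -> float:
    """Rough share of wording two pages have in common (0.0 to 1.0)."""
    # Normalize whitespace and case so formatting differences don't count.
    tokens_a = re.sub(r"\s+", " ", text_a.lower()).split()
    tokens_b = re.sub(r"\s+", " ", text_b.lower()).split()
    return difflib.SequenceMatcher(None, tokens_a, tokens_b).ratio()

# Two hypothetical product-variant pages that differ by one word.
page = "Our blue widget ships in three sizes and includes free returns."
sibling = "Our red widget ships in three sizes and includes free returns."

ratio = wording_overlap(page, sibling)
print(f"overlap: {ratio:.0%}")  # well above a 70% consolidation threshold
```

Extract the visible text of each page (strip navigation and boilerplate first, or every page on the site will look like a near-duplicate), then compare siblings pairwise.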

3. Weak internal linking

Orphan pages — ones reachable only through a sitemap, not through a link from any other page — often get stuck here. Google uses internal links as a proxy for "the site owner thinks this page matters." A page with zero internal links reads as an accident.
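Finding orphans is mechanical once you have a sitemap and a crawl of your own internal links: any sitemap URL that no other page links to is orphaned. A minimal sketch, with made-up paths standing in for real crawl data:

```python
def find_orphans(sitemap_urls: set[str], internal_links: dict[str, set[str]]) -> set[str]:
    """Return sitemap URLs that no other page on the site links to."""
    linked = set()
    for source, targets in internal_links.items():
        linked |= targets - {source}  # a self-link doesn't count as discovery
    return sitemap_urls - linked

# Hypothetical site: the sitemap lists four URLs, but the crawl
# shows only three of them are reachable through in-content links.
sitemap = {"/", "/pricing", "/blog/fix-indexing", "/old-promo"}
links = {
    "/": {"/pricing", "/blog/fix-indexing"},
    "/pricing": {"/"},
}

print(sorted(find_orphans(sitemap, links)))  # ['/old-promo']
```

Any crawler that exports a page-to-links mapping (Screaming Frog, a scripted crawl, server-side route data) can feed the `internal_links` dict.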

4. Site-wide quality drag

If a site publishes a lot of low-effort filler, Google's classifiers apply that signal site-wide. Good pages on low-quality sites get indexed less aggressively. This is the mechanism behind the March 2026 core update's heavier hits on content-farm domains — the classifier now weighs overall site expertise more than it used to.

5. Technical render issues

Heavy client-side rendering, critical content injected only after user interaction, or CSS/JS blocked by robots.txt can leave Googlebot with a mostly-empty page. The crawler saw the URL, rendered nothing meaningful, and decided there was nothing to index. Use the URL Inspection tool's "Test live URL" and look at the rendered screenshot — not the source HTML.
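A crude smoke test before reaching for the Inspection tool: fetch the raw HTML your server sends and check whether the phrases that matter are in it at all. This does not replace looking at the rendered screenshot, but it catches the worst case — an app shell with no content — in one line of logic. A sketch, with a hypothetical app-shell response:

```python
def missing_from_raw_html(raw_html: str, key_phrases: list[str]) -> list[str]:
    """Key phrases absent from the server-delivered HTML, i.e. content
    that only exists after client-side rendering."""
    lowered = raw_html.lower()
    return [p for p in key_phrases if p.lower() not in lowered]

# Hypothetical app-shell page: the article body is injected by JavaScript,
# so the raw response contains no readable content.
raw = "<html><body><div id='app'></div><script src='/bundle.js'></script></body></html>"
phrases = ["step-by-step fix", "request indexing"]

print(missing_from_raw_html(raw, phrases))
```

If every key phrase comes back missing, Googlebot's first pass over your page sees nothing meaningful either, and the rendering path needs fixing before any content work.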

The step-by-step fix that actually works

This is the sequence that resolves the status for most pages. Do not skip the steps. Requesting indexing on an unchanged page is the number-one reason people say "the fix doesn't work."

  1. Audit the page against the top 3 ranking results. Open an incognito tab, search your target query, and read the pages already ranking. What do they cover that you do not? Original data, screenshots, step-by-step walkthroughs, direct experience. Your page has to add at least one thing that is genuinely not on those pages.
  2. Rewrite, don't just extend. Padding a thin page to 1,500 words of the same information does not work — Google's quality signals have been explicitly tuned against this since 2023. Cut shallow sections, expand the unique ones, add sub-headings that match the specific sub-questions a reader has.
  3. Add 2-5 internal links from indexed, trafficked pages. Find pages on your own site that already rank and get traffic (check the Performance report in Search Console). Add descriptive contextual links from those pages to the excluded one. Navigation-menu links don't carry much weight here — in-content links do.
  4. Check render in URL Inspection. Paste the URL into Search Console's URL Inspection tool, click "Test live URL," and open the rendered HTML and screenshot tabs. Confirm the main content is present in the rendered view. If it isn't, fix the rendering path before anything else.
  5. Fix the canonical if it conflicts. In the same inspection view, check "User-declared canonical" vs "Google-selected canonical." If Google picked a different URL than you intended, consolidate the duplicates or fix the canonical tag.
  6. Request indexing — once. After the above is done, click "Request Indexing." Do not spam it. Submitting the same URL five times does not help and can flag you.
  7. Wait 1-4 weeks. Google recrawls on its own schedule. If the page is genuinely improved, the status typically flips within a month.
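For the canonical check in step 5, you can script the "user-declared" side across many URLs instead of inspecting them one by one. A minimal stdlib-only sketch that pulls the rel=canonical href out of a page's HTML — the example URL is a placeholder:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

def declared_canonical(html: str):
    """Return the user-declared canonical URL, or None if the page has none."""
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical

html = '<head><link rel="canonical" href="https://example.com/blog/fix-indexing"></head>'
print(declared_canonical(html))  # https://example.com/blog/fix-indexing
```

Compare the result against the URL you intended for each page; any mismatch, or a page group where several URLs all declare the same canonical unintentionally, is the consolidation problem from step 5. The Google-selected canonical still has to come from URL Inspection — Google does not expose it in the page source.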

Side note for new domains: if your whole site is fresh and most URLs are stuck in this status, the issue is usually site authority, not individual pages. The fix looks like a slower-burn version of the above — publish steadily, build external signals, and get the site discovered beyond Google. Our search-engine submission guide walks through the discovery side of that.

Cast a wider indexing net

Submit your URL to 500+ engines and AI crawlers in one click. Free, no signup.

Submit My Website Free →

How long before the status clears

For established sites with reasonable authority, most "Crawled – currently not indexed" cases clear within 1 to 4 weeks after the fix. High-traffic domains often see it in under a week because Googlebot recrawls them more frequently. Brand-new domains can take 4 to 8 weeks, sometimes longer. The signal to watch is the "Last crawled" date in URL Inspection — when that updates and the status changes, you know the recrawl has happened.

If six weeks pass with no change and you have genuinely improved the page, the issue is almost always site-wide. At that point the useful move is a content audit: find the weakest 20% of URLs on the site, and either rewrite them to a publishable standard or noindex/consolidate them. A leaner site of strong pages almost always outperforms a bloated site of mixed quality.

Entireweb and other syndication networks can help by pushing your URL into secondary indexes that AI crawlers and smaller engines draw from — useful while you wait on Google, and useful for surfacing in non-Google answer engines that are becoming real traffic sources. It does not fix the Google indexing decision directly, but it widens the discovery surface so the page has more inbound signal for Google to see on the next crawl.

FAQ

How long does "Crawled – currently not indexed" usually last?

For most pages, it clears within 1 to 4 weeks after you improve the content and add internal links. High-authority sites often see re-indexing in 3 to 7 days. Brand-new domains can sit in this state for 4 to 8 weeks because Google has less crawl budget allocated to them.

Does "Request Indexing" actually work for this error?

Request Indexing triggers a recrawl, but if the underlying quality or linking issue is unchanged, Google will land on the same decision. Fix the page first, then request indexing. Clicking the button on an unchanged page is the most common reason people complain it "does nothing."

Is "Crawled – currently not indexed" a penalty?

No. It is an exclusion, not a manual penalty. Google decided the page did not earn index space relative to what already exists for those queries. There is no notification, no review process — just an algorithmic decision you reverse by improving the page's value signals.

Can thin content from years ago still cause this on newer pages?

Yes. Google evaluates overall site quality, so a large stock of thin or templated pages can drag down index decisions on newer, better pages. Pruning, consolidating, or noindexing the weakest URLs often frees up crawl and index budget for the content you actually want ranking.

Need broader discovery while you wait?

Push your URL to 500+ engines and AI crawlers so the page gets seen outside Google too.

Submit My Website Free →