How to fix "Discovered – currently not indexed" in Google Search Console

Published April 21, 2026 · 8 min read

Quick answer: "Discovered – currently not indexed" means Google found the URL but has not crawled it yet — a queue problem, not a quality rejection. The fix is to raise crawl priority: tighten your sitemap, link to the page from a higher-traffic part of your site, prove demand with external signals, and prune the low-value URLs competing for the same crawl budget.

If you have spent any time inside the Page Indexing report in Google Search Console, you have probably seen URLs piling up under Discovered – currently not indexed. The exact wording is technical, but the situation is plain: Google knows your URL exists, has chosen not to fetch it yet, and has given you no timeline. For new sites and content-heavy sites this status can easily account for hundreds of URLs. This guide explains exactly what is happening, why, and what to do about it.

What "Discovered – currently not indexed" actually means

Google's official documentation describes the status this way: the page was found, but not yet crawled, and crawling was deferred to avoid overloading your site. That is technically correct but misleadingly polite. The deferral is not really about your server's load. Most small sites could handle a Googlebot visit on every page within an hour. The deferral is about Google's own decisions on how to spend its crawl budget.

Crawl budget is the number of URLs Googlebot is willing to fetch from your site within a given window. It is shaped by three things: how fast and reliably your server responds, how much fresh and valuable content Google believes lives on your site, and how much external demand (links, queries, traffic) suggests the URLs are worth fetching. When budget is tight, Google parks URLs in the Discovered bucket and waits.

This is fundamentally different from "Crawled – currently not indexed", which is a separate status with a separate fix. Crawled means Google visited the page, evaluated its content, and decided not to add it to the index — usually a quality or duplication signal. Discovered means Google never even bothered to look. They are two different problems, and the fix sequences barely overlap.

Why Google decided to skip your URL

In practice, "Discovered – currently not indexed" piles up for a handful of recurring reasons:

  - The site is new or has low overall authority, so its crawl budget is small to begin with.
  - The sitemap lists thin, near-duplicate, or low-value URLs that all compete for that small budget.
  - The stuck URL is reachable only from the sitemap, the footer, or deep pagination, with no contextual internal links pointing to it.
  - Nothing external (links, mentions, search demand) signals that the page is worth a visit.
  - The server responds slowly or inconsistently, which shrinks the budget further.

Most stuck URLs hit two or three of these at once. The fix sequence below addresses them in the order that gives the biggest return.

The five-step fix that actually works

1. Cut your sitemap before adding to it

Open your sitemap and look at it like a stranger. For every URL listed, ask: would a human deliberately search for this page, or share a link to it? If the honest answer is no, remove it. Tag pages holding only a handful of posts, paginated archives, near-duplicate filter combinations, and zombie product variants are all candidates. Submitting a tighter sitemap is a stronger signal of quality than submitting a long one.

This sounds counterintuitive — you are asking Google to index fewer URLs in order to get more indexed. But crawl budget is a scarce resource, and concentrating it on URLs that deserve the visit is how you move the queue forward.
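Trimming a sitemap down to a keep list is easy to script. Here is a minimal sketch using Python's standard library; the `trim_sitemap` function and the sample sitemap are illustrative, not part of any tool mentioned in this guide:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def trim_sitemap(sitemap_xml: str, keep: set[str]) -> str:
    """Return a copy of the sitemap containing only the URLs in `keep`."""
    ET.register_namespace("", SITEMAP_NS)  # serialize without ns prefixes
    root = ET.fromstring(sitemap_xml)
    for url_el in list(root):  # copy: we mutate root while iterating
        loc = url_el.find(f"{{{SITEMAP_NS}}}loc")
        if loc is None or loc.text is None or loc.text.strip() not in keep:
            root.remove(url_el)
    return ET.tostring(root, encoding="unicode")

# Hypothetical sample sitemap for illustration
SAMPLE = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/guide</loc></url>
  <url><loc>https://example.com/tag/misc</loc></url>
</urlset>"""
```

Build the `keep` set from your spreadsheet triage (step 2 of the checklist at the end of this article), regenerate the file, and resubmit it in Search Console.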

2. Strengthen the internal link to each stuck URL

Pull the list of Discovered – currently not indexed URLs from Search Console. For each one, find at least two existing pages on your site that could legitimately link to it from within the body content (not just the footer or sitewide nav). Add those links with descriptive anchor text.

Two body links from established pages are a stronger signal than ten footer links. A footer link looks like a navigation default; a body link looks like an editorial decision. Googlebot reads them differently.
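Finding the right host pages for those body links can also be semi-automated. A rough sketch of the idea, assuming you can dump your existing pages as a mapping of URL to body text (all names here are hypothetical): score each page by how often it already mentions the stuck page's topic, and link from the top hits.

```python
def link_candidates(pages: dict[str, str],
                    keywords: list[str],
                    limit: int = 2) -> list[str]:
    """Rank existing pages by how often their body text mentions the
    stuck URL's topic keywords; the top hits are natural link hosts."""
    scores = {
        url: sum(body.lower().count(kw.lower()) for kw in keywords)
        for url, body in pages.items()
    }
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [url for url, score in ranked if score > 0][:limit]
```

A page that already discusses the topic is exactly the page where a contextual link reads as an editorial decision rather than a navigation default.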

3. Push fresh signals through IndexNow and submission tools

For Bing, Yandex, and the long tail of engines that use the IndexNow protocol, you can notify them the moment a URL changes. We have a full IndexNow setup walkthrough that takes about five minutes. IndexNow does not directly affect Google's crawl scheduling — Google has not joined the protocol — but it gets the URL into Bing's index, which feeds ChatGPT and Copilot citations, which can in turn generate the kind of secondary traffic and brand signals that nudge Google to re-evaluate.
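If you want to script the ping yourself rather than use a walkthrough, the IndexNow protocol is a single JSON POST. A minimal sketch with the standard library; the host, key, and URLs are placeholders, and the key must also be published as a text file at your site root so the receiving engine can verify ownership:

```python
import json
import urllib.request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_payload(host: str, key: str, urls: list[str]) -> dict:
    # The same key must be served at https://<host>/<key>.txt
    # so the engine can verify you control the site.
    return {"host": host, "key": key, "urlList": urls}

def ping_indexnow(host: str, key: str, urls: list[str]) -> int:
    """POST changed URLs to the shared IndexNow endpoint; returns HTTP status."""
    req = urllib.request.Request(
        INDEXNOW_ENDPOINT,
        data=json.dumps(build_payload(host, key, urls)).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:  # 200/202 means accepted
        return resp.status
```

Participating engines share submissions with each other, so one ping covers Bing, Yandex, and the rest of the protocol's members.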

For broader discovery, push the URL through a syndication tool like Entireweb. The free FreeSiteSubmit wrapper uses Entireweb's pipeline to ping over five hundred downstream engines and directories. This will not move a URL out of Discovered on its own, but it puts the link into more crawler paths, which over time builds the kind of cross-web visibility that signals "this URL exists and is being referenced."

4. Earn at least one real external link

This is the hardest step and the one with the biggest impact. A single contextual backlink from a site Google already trusts is worth more than every internal-linking change combined. The link does not need to be authoritative; it needs to be real — a relevant blog post, a forum thread answer, a curated list, a tool directory.

If you cannot earn a link in the next two weeks, you can buy yourself a partial substitute by getting the URL referenced in places Googlebot crawls aggressively: a public GitHub README, a relevant Reddit comment (where allowed by the subreddit's rules), a Stack Overflow answer where the URL is a genuine reference, or a profile page on a high-authority site that allows a link.

5. Use URL Inspection sparingly to nudge specific URLs

The URL Inspection tool's "Request Indexing" button is rate-limited — Search Console will let you submit a small number of requests per day before silently throttling. Save it for the URLs that matter most after you have done steps one through four. Requesting indexing on a URL that still has none of the underlying issues fixed will produce no result; requesting it on a URL that now has tightened internal links and an external reference often gets it crawled within days.
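Because the button is rate-limited, it helps to work from a prioritized queue rather than clicking ad hoc. A trivial sketch: keep your stuck URLs sorted by priority, track which ones you have already requested, and take a small batch each day. The daily cap below is an assumption for illustration; Google does not publish the exact limit.

```python
def todays_batch(stuck_urls: list[str],
                 already_requested: set[str],
                 daily_cap: int = 10) -> list[str]:
    """Pick the next `daily_cap` URLs not yet requested.
    `stuck_urls` should be pre-sorted by priority, i.e. URLs that
    already have steps one through four done come first.
    The cap of 10 is an assumption, not a documented quota."""
    return [u for u in stuck_urls if u not in already_requested][:daily_cap]
```

Run it once a day, request indexing for the batch, add those URLs to the requested set, and stop when the queue is empty.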

Get your URLs into more indexes

While you wait for Google to clear its queue, push your site to 500+ search engines and AI crawler partner networks. Free, no signup.

Submit My Website Free →

When to escalate, when to wait

The fixes above are cumulative — none of them deliver instant results, and the queue clears unevenly. Some URLs move within a week, others take a month. Use these rough escalation triggers:

  - Two weeks after completing steps one through four, a priority URL is still Discovered: request indexing through URL Inspection.
  - A month in, the same URL still has not moved: re-examine the page itself and consider expanding it, merging it, or earning a stronger external link for it.
  - Most of the backlog has not budged after a month or two: treat it as a site-wide problem (sitemap bloat, low authority, thin content) rather than a page-level one.

One more thing worth saying out loud: if your site is genuinely thin or the URLs in question genuinely have no audience, no amount of technical SEO will get them indexed. Google is making a reasonable call on those. The Discovered status is sometimes useful diagnostic feedback that the page should not exist in the first place. Cutting it from the sitemap, redirecting it to a stronger page, or merging it into a more comprehensive guide is often the right answer.

What to do this week

If you only have a couple of hours, do these in order:

  1. Pull the full list of Discovered – currently not indexed URLs from the Page Indexing report. Export to a spreadsheet.
  2. Tag each URL: keep, merge, or delete. Be ruthless.
  3. Update or trim your sitemap to reflect the keep list only.
  4. Pick the top ten "keep" URLs and add two contextual internal links to each.
  5. Set up IndexNow if you have not already, so future content does not land in the same queue.
  6. Submit your homepage and most important section pages through a syndication tool to widen your discovery surface.
  7. Check back in 14 days and re-export the report. Most of the work shows up on this second look, not immediately.
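The 14-day re-check in step 7 boils down to a set comparison between the two exports. A minimal sketch, assuming each export has been reduced to a set of URL strings (the function name is illustrative):

```python
def indexing_progress(before: set[str], after: set[str]) -> dict[str, set[str]]:
    """Compare two exports of the Discovered report taken ~14 days apart."""
    return {
        "cleared": before - after,      # left the queue: crawled or removed
        "still_stuck": before & after,  # candidates for escalation
        "new": after - before,          # fresh URLs that landed in the queue
    }
```

URLs in the `cleared` bucket need no further action; anything in `still_stuck` after a second cycle is a candidate for the escalation triggers above.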

The status is recoverable in nearly every case. It just rarely fixes itself, and clicking "Request Indexing" on every stuck URL is not a strategy.

Ready to widen your discovery surface?

Get your site in front of 500+ engines and crawler networks in under 60 seconds. Free, no signup, no credit card.

Submit My Website Free →

FAQ

What does Discovered – currently not indexed mean?

Google has found a link to the URL — usually through your sitemap or an internal link — but has not yet visited the page itself. The URL sits in a queue. Google's published explanation is that they delayed the crawl to avoid overloading your site, but in practice the queue is also shaped by perceived importance, crawl budget, and your site's overall authority.

How long does Discovered – currently not indexed last?

There is no published timeline. Anecdotally it ranges from days to many months. Newer sites with low authority and dozens of similar URLs can sit in the queue indefinitely. The status is not an error — Google has not rejected your page — but it is also not a passive condition that fixes itself for low-priority URLs.

Is Discovered – currently not indexed the same as Crawled – currently not indexed?

No. Discovered means Google knows the URL exists but has not crawled it yet — a queue problem. Crawled means Google fetched the page, looked at it, and chose not to index it — a quality or duplication problem. Different statuses, different root causes, different fixes.

Will requesting indexing in Search Console fix it?

Sometimes, for one or two URLs at a time. Use it sparingly — the URL Inspection tool's request indexing button is rate-limited and works best as a nudge, not a workflow. If you have hundreds of URLs stuck, you need to address the root cause (crawl budget, internal linking, content quality), not click the button hundreds of times.