December 12, 2023

Website Pagination Issues: A Pagination Best Practices Case Study (With Examples!)

How we identified and resolved over 25 non-indexable blog URLs due to a pagination error, which resulted in our client multiplying their organic traffic 50x in 6 months.

Pagination Best Practices

Common Ways Pagination Can Hurt SEO Performance

How to Find Pagination Issues

The Importance of Internal Linking

The Results

Smooth navigation and an easily understandable website are crucial to ensuring a pleasant experience for both users and web crawlers. Among the many elements that contribute to accessible content is pagination.

Pagination seems simple at first. However, if implemented improperly, it can render pages completely uncrawlable and lead to broken site navigation, lost organic traffic, and damaged keyword rankings. This article is a real case study in which technical SEO allowed us to identify a pagination error on a client's blog and, upon fixing the issue, multiply their unique monthly organic visitors 50x in just 6 months.

What Is Pagination?

Pagination is a term SEOs and web developers use to describe a series of content that’s broken up into separate, sequential pages. This feature is typically found on digital blogs or category pages to organize products or articles across multiple pages. Sometimes, longer articles utilize pagination to break the article into smaller, more easily digestible mini-articles.

What Are Some Pagination Best Practices?

According to Google, best practices for implementing website pagination include:

• Linking pages sequentially, ensuring all of the pages are in the correct order

• Giving each page a unique URL and making sure each page has a self-referencing canonical tag

• Not using URL fragment identifiers (the text after a “#” in a URL) to load paginated content

• Not indexing URLs with filters or other sorting methods. If you have filters on your page, implement a noindex tag on the filtered URL variations, or disallow them in your site’s robots.txt file (keeping in mind that robots.txt blocks crawling, not indexing). For example, a URL generated by a filter would include something like “?order=size,” and that URL should not be indexed by Google, to prevent the URLs from competing with each other. See the sketch after this list.
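To make that last point concrete, here is a minimal sketch of the two options (the “?order=size” parameter is just an illustration). A noindex robots meta tag goes in the <head> of each filtered URL:

<!-- In the <head> of a filtered URL such as /blogs?order=size -->
<meta name="robots" content="noindex">

Alternatively, a robots.txt rule can keep crawlers away from the filtered variations entirely, though, again, robots.txt blocks crawling rather than indexing:

User-agent: *
Disallow: /*?order=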

In the past, Google used <link rel="next" href="..."> and <link rel="prev" href="..."> to identify paginated pages. Google announced in March 2019 that it no longer uses these tags because Googlebot is able to differentiate between the pages without these signals.

Google’s first post announcing it no longer needs rel=prev/next tags to identify pagination. 

After the official announcement, John Mueller clarified that Google treats paginated pages as “normal pages” and no longer needs an annotation marking them as “pagination or not.” However, Google still recommends implementing these tags because other search engines may use them, and having them there doesn’t harm your site. So, what’s the best way to format website pagination?

You should still include the rel="next" / rel="prev" tags, but complement them with a self-referencing canonical tag. For example, /blogs?page=2 should carry a rel="canonical" tag pointing to /blogs?page=2 itself.
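As a sketch (with a hypothetical example.com domain), the <head> of /blogs?page=2 would then contain something like:

<!-- In the <head> of https://www.example.com/blogs?page=2 -->
<link rel="canonical" href="https://www.example.com/blogs?page=2">
<link rel="prev" href="https://www.example.com/blogs?page=1">
<link rel="next" href="https://www.example.com/blogs?page=3">

The canonical tag references the page itself, while rel="prev" and rel="next" point to its neighbors in the sequence.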

What Was Our Client Doing Wrong?

Upon doing a technical audit of our client's site, we found they were going against several of the best practices mentioned above:

1. Each paginated blog page had a canonical tag that referenced the first page in the paginated sequence, effectively telling Google they were all the same page and to not worry about crawling the rest of them.

2. Even if each page had carried its own self-referencing canonical tag, we found that Google still wouldn’t have been able to crawl them. The URL structure on their blog used the fragment identifier “#” on each of the sequential pages, so the pages weren’t being read as separate URLs. Googlebot ignores everything in a URL after the “#” because it assumes it has already retrieved that page, meaning it won’t follow links if the only text that differs between two URLs is the fragment (see the sketch after this list).

3. Their site had very few internal links, so Googlebot had no additional links to follow to find the articles that were past the first page in the paginated sequence.
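To illustrate the first two problems, here is roughly what the broken setup looked like next to the fix (URLs are hypothetical, not the client’s actual paths):

<!-- Before: page 2 only differs after the “#,” so Googlebot treats it as /blog,
     and the canonical points every page back to the first one -->
<a href="https://www.example.com/blog#page=2">Next page</a>
<link rel="canonical" href="https://www.example.com/blog">

<!-- After: page 2 has its own crawlable URL and a self-referencing canonical -->
<a href="https://www.example.com/blog?page=2">Next page</a>
<link rel="canonical" href="https://www.example.com/blog?page=2">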

Because of these errors, our client had over 25 blog articles that Google was unaware of. As a result, they were missing out on tons of traffic, and they didn’t even know it!

How Can Pagination Hurt SEO Performance?

Now, I don’t mean to scare you! When done properly, pagination is a fantastic tool to encourage crawling on blogs and collections pages. However, as our client found out, if pagination is improperly implemented, it can keep your pages unnecessarily hidden from web crawlers.

Ensuring your site has SEO-friendly pagination is extremely important because, without it, you could be missing out on thousands of unique organic visitors every month. Let’s look at some common roadblocks that SEOs run into when dealing with pagination and how they can hinder performance in organic search, if not dealt with correctly. 

Pagination Can Cause Duplicate Content 

Incorrectly implemented pagination can indeed cause duplicate content. A common example is having a “View All” page alongside paginated pages without the correct canonical tag implementation. If you choose to go with paginated pages, follow Google’s best practices and make sure each page has unique content that differs from the last.
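One way to handle the “View All” scenario, sketched with hypothetical URLs: if you want the View All page to be the version Google indexes, point each paginated page’s canonical tag at it so the variations consolidate rather than compete:

<!-- In the <head> of /blog?page=2, consolidating signals on the View All page -->
<link rel="canonical" href="https://www.example.com/blog/all">

If you keep the paginated pages as the indexable versions instead, drop the View All page or give each paginated page a self-referencing canonical as described above.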

Pagination Can Create the Appearance of Crawlable Pages (Even if They Aren’t)

When pagination is implemented incorrectly, it can create an illusion of pages that are automatically crawlable. However, just because you are able to find the pages in the paginated sequence, that doesn’t mean Google is having the same experience. In the case of our client, the pagination was implemented incorrectly, and Google was completely unaware of over 25 blog articles. At first, our client was confused because they could manually find those pages themselves as users. But, after some explanation of how Google crawls paginated pages, we got them on the same page!

Pagination Can Create Thin Content 

This is true if you split a piece of content into too many mini-articles, leaving each page with too little content. It’s important that each page is helpful, unique, and has a good amount of content on it.

How To Find Pagination Issues 

You can manually inspect your site’s pagination for some of the linking, URL, or indexation issues described above. But, for a much more thorough analysis, I recommend utilizing webmaster tools like Google Search Console, Screaming Frog, or Ahrefs.

Each of those tools provides crawl stats that surface areas of the site that aren’t indexed by Google. Digging into those reports shows you where pages are missing from Google’s index. From there, you can drill down to determine why, whether it’s content blocked by the robots.txt file, sitemap problems, broken internal links, canonical tag errors, or pagination issues.

In this particular case, our client’s site had an issue with how the pagination was set up, coupled with poor internal linking. I believe a well-optimized internal linking strategy could have allowed at least a few of these articles, if not all of them, to be discovered and indexed by Google before GR0 stepped in. That’s why implementing internal linking best practices was a top priority!

What Is the Importance of Internal Linking? 

Along with pagination, this brings us to the importance of internal linking. It’s imperative for sites to have a well-developed internal linking strategy with optimized anchor text and link placements. That way, users and web crawlers have an easier time determining the relationships between different pages. Internal links also give SEOs the opportunity to pass authority to high-priority internal pages (such as product or collections pages, top-performing blogs, etc.).
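As a quick sketch (hypothetical URL and anchor text), the difference between a weak internal link and an optimized one comes down to how descriptive the anchor is:

<!-- Vague anchor text gives users and crawlers little context -->
<a href="https://www.example.com/blog/pagination-seo-guide">click here</a>

<!-- Descriptive anchor text signals what the destination page is about -->
<a href="https://www.example.com/blog/pagination-seo-guide">pagination best practices for SEO</a>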

Perhaps the most important benefit of internal links is that they open up additional “pathways” for crawlers to follow. Put simply, internal links help Google find, index, and understand all of the pages on your site. A thorough internal linking strategy makes crawlers’ jobs much easier and, in cases like our client’s, could have prevented pages from going undiscovered by Google.

Putting together an internal linking map was a high priority once we identified the blog articles that weren’t indexed. We knew that strategically placed links would open up those additional pathways for Google to find the undiscovered pages. That, in addition to fixing the pagination structure, allowed Google to discover, crawl, and index all of those articles, which opened the door to huge organic traffic growth!

Screenshot from Google Search Console’s Pages report, showing an increase in indexed pages (green) and impressions (blue) after the blog’s pagination structure was fixed and internal links were implemented. 

The Results

For our client, we fixed the “#” fragment identifiers in their blog’s URL structure so each page had its own crawlable URL, implemented self-referencing canonical tags on all paginated URLs, and put together a comprehensive internal linking strategy, focusing on linking from pages that were indexed to pages that weren’t.

After implementing these changes, our client gradually saw their pages enter the index and begin receiving organic clicks and impressions, resulting in more organic traffic than they’d ever seen! 

Growth in organic clicks and impressions after the blog’s pagination structure was made accessible for web crawlers and internal links were implemented. 

Here is another graph from GA4 showing a very similar growth trend in organic users. The number of unique organic visitors is still growing rapidly at the time of publishing this article, with no signs of slowing down.

Growth in organic users after the blog’s pagination structure was made accessible for web crawlers and internal links were implemented.

Showcasing technical SEO wins is typically more difficult than showing wins from other strategies, such as backlink acquisition and content marketing, because technical SEO is generally about preventing roadblocks rather than directly driving traffic growth. In this situation, however, our technical SEO expertise unlocked thousands of organic visitors that were previously unreachable.

From digging into the Pages Report in GSC and utilizing the URL Inspection tool, we found that this client had over 25 blog articles published a year prior that were not indexed by Google. Not only were they not in the index, but Google didn’t even know the pages existed because its crawlers were unable to access the paginated blog pages (where all of this content lived) due to pagination structure issues. In addition to this, we determined the site’s internal linking was lackluster at best, making it extremely difficult for Googlebot to discover, crawl, and index the content. 

Google Search Console’s URL Inspection Tool, which provides more information on why pages aren’t in Google’s index.

Conclusion

I hope you enjoyed this article and learned something from one of our most interesting case studies. This study highlights the importance of conducting regular site health audits and having experienced partners with technical SEO expertise. If you want to add GR0’s expertise to your team to unlock your site’s full traffic potential, shoot us a message! 

Sources:

What is Pagination? And How to Implement it on Your Website | SEOptimer

Pagination Best Practices for Google | Google Developers

English Google Webmaster Central office-hours hangout | YouTube

SEO-Friendly Pagination: A Complete Best Practices Guide | Search Engine Journal

How to Correctly Implement Pagination for SEO & User Experience | Amsive

Internal Linking for SEO: The Complete Guide | Backlinko
