rel="next" and rel="prev" are brand new link tags that Google just came up with to fix the dreaded pagination duplicate-content problem.
Here’s How to Use rel=next and rel=prev
The URLs below will serve as our example of how to implement these long-awaited Google link tags for paginated content:

http://www.yoursite.com/content-part1.html
http://www.yoursite.com/content-part2.html
http://www.yoursite.com/content-part3.html
http://www.yoursite.com/content-part4.html
Step 1. On the first page of your pagination, which in this case is http://www.yoursite.com/content-part1.html, put the following markup in your <head>:
<link rel="next" href="http://www.yoursite.com/content-part2.html">
Obviously, you don’t need to use rel="prev" on the first page, since there is no previous page.
Step 2. On the middle pages (pages 2 and 3 in our example), you point to both the previous page and the next page in the sequence by including both links in your markup. For page 2, that looks like this:
<link rel="prev" href="http://www.yoursite.com/content-part1.html">
<link rel="next" href="http://www.yoursite.com/content-part3.html">
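Likewise, following the same pattern, page 3 points back to page 2 and forward to page 4:

<link rel="prev" href="http://www.yoursite.com/content-part2.html">
<link rel="next" href="http://www.yoursite.com/content-part4.html">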
Step 3. Then on the last pagination page, which is http://www.yoursite.com/content-part4.html, you just use the previous link, as there is no next page.
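Its <head> therefore needs only the single link:

<link rel="prev" href="http://www.yoursite.com/content-part3.html">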
That’s it! No further adjustments are needed; just give it a few weeks for the duplicate titles to disappear from Google Webmaster Tools.
The Old School Solution
I initially wrote this post on August 8th, 2011, about pagination pages causing duplicate content because, before the new pagination tags that Google announced on August 15th, 2011, there was just too much misinformation out there on this topic. (Spooky, were they reading my blog???) Previously, not even Google gave straight answers on how to deal with it. Just look at the way a Google employee handled the question on their webmaster support forum about duplicate pagination content. Don’t you just love how Google always dances around the answers? Sometimes they make you read between the lines and sometimes they leave you completely clueless.

Moving on: pagination pages create duplicate content and earn you a bad mark in Google Webmaster Tools for duplicate titles. Just 5 days after I posted this article, Google added this pagination solution to the WMT help section. That being said, if you are interested in how I used to solve the problem, keep reading below.
Prevent Pagination Junk
Part of the reason the question is so difficult to answer is that not all sites are the same: in some cases, people might actually want their pagination pages indexed, and in other cases they just want them followed.
The objective is to guide the bots to the content that you want them to see and keep them away from redundant and duplicate content that is inevitably produced by many CMSs.
That being said, I suggest a couple of measures that, in combination, will work for at least 90% of all pagination duplicate-content problems:
1. Use the <meta name="robots" content="noindex, follow"> tag in the <head> of all your pagination pages except the first page, as you will want that one indexed as the category page. This lets the bots follow links through the pagination without actually indexing the pagination pages themselves.
2. Whether or not you want the pagination pages indexed, I still suggest you use a script that rewrites your pagination titles to legitimize them. Even if Google is not indexing them, it is still crawling them, following their links, and using the data in its algorithms. Using a technique I have seen favored in Google’s video search results, which they use on YouTube, simply rewrite your titles like this:
Domainname.com | Category Name | Page 2 of 49
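Putting the two measures together, the <head> of page 2 under this old-school approach might look like the sketch below (the domain, category name, and page count are just placeholders from the example title above):

<head>
<title>Domainname.com | Category Name | Page 2 of 49</title>
<meta name="robots" content="noindex, follow">
</head>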
Again, that was the old way of doing things prior to August 15th, 2011; now you can prevent duplicate content caused by pagination pages with the very simple pagination tags rel="next" and rel="prev".