Google SEO Tip #4 – Does PageRank Take Into Account Cross-Browser Compatibility?

Matt Cutts: The answer is no. I mentioned this in another video, but let me just reiterate: PageRank is based on the number of people who link to you and how reputable those incoming links are. It is completely independent of the content of your site.

So, PageRank doesn’t take into account cross-browser compatibility because it doesn’t take into account the content of the website or the webpage; it only takes into account the links. That’s the essence of PageRank: it reflects our opinion of the reputation of the links pointing to your site.
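To make that idea concrete, here is a minimal sketch of the classic PageRank calculation as described in the original Brin and Page paper: only the link graph goes in, never the page content. The damping factor, iteration count, and toy graph are illustrative assumptions, not Google's production values.

```python
# Minimal PageRank sketch: a page's score depends only on which pages link to
# it and how much score those linking pages carry; page content never appears.
# The damping factor and the toy graph are illustrative assumptions only.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                       # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Toy link graph: C is linked to by both A and B, so it earns the highest rank,
# regardless of what any of those pages actually say.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```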

So now the next question: if a site isn’t compatible with certain browsers, does that make a difference for Googlebot? Well, let’s play it through. Suppose Googlebot comes to your site and says, “I would like to crawl a page from your site. Please give it to me so I may index it.” We take that page, we look at it, and we look for textual content on that page. And we’re almost always crawling as Googlebot.

Maybe we’ll crawl as Googlebot Mobile or AdsBot or Googlebot Image or something like that, but we try to use very nice, descriptive user agents so that you can tell that Google is coming to your site, unless we’re doing a spam check, or someone’s coming to your site to see whether you are cloaking, or something like that.
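That self-identification is easy to see from the server side: spotting Google's crawlers in an access log comes down to a User-Agent check. Below is a small sketch; the bot tokens and the example string are commonly seen ones, not an exhaustive or guaranteed list.

```python
# Minimal sketch: Google's crawlers announce themselves in the User-Agent
# header, so a log filter only needs a substring check.  The tokens below are
# commonly seen crawler names, not an exhaustive list.
GOOGLE_CRAWLER_TOKENS = ("Googlebot", "Googlebot-Image", "Googlebot-Mobile", "AdsBot-Google")

def is_google_crawler(user_agent: str) -> bool:
    return any(token in user_agent for token in GOOGLE_CRAWLER_TOKENS)

# A typical Googlebot user-agent string, as it appears in access logs:
ua = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
print(is_google_crawler(ua))                                            # True
print(is_google_crawler("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))   # False
```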

So Googlebot comes to your page, it tells you it’s Googlebot and it tries to index the page that it gets. So it really doesn’t have much of a notion of, “How do things render differently for a mobile browser vs. Internet Explorer 6 vs. Netscape 2 vs. Firefox 4 or whatever?” We’re just going to take a look at the textual content and try to make sure that we index it.

Now, if you want to make sure that you don’t get in trouble in terms of cloaking or anything like that, make sure that you return the same page to Googlebot that you return to regular users. Just make sure that you don’t have any special code doing an “if Googlebot” check, whether that’s checking if the user agent is Googlebot or if the IP address is from Google.

If you’re not doing anything special for Google, and you’re just doing whatever you would normally do for your users, then you’re not cloaking and you shouldn’t be in any trouble as far as that goes.
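For illustration, here is roughly what the pattern Matt is warning against looks like, next to the safe behavior. This is a hedged sketch with hypothetical page functions, not real production code; the point is simply that the crawler-specific branch is what creates cloaking risk.

```python
# Hypothetical pages used only for illustration.
def normal_page():
    return "<html><body>The same article every visitor sees.</body></html>"

def keyword_stuffed_page():
    return "<html><body>Copy written only for crawlers.</body></html>"

# The "if Googlebot" branch to avoid: serving the crawler different content
# based on its user agent (or a Google IP range) is cloaking.
def render_page(user_agent: str) -> str:
    if "Googlebot" in user_agent:
        return keyword_stuffed_page()
    return normal_page()

# The safe version ignores who is asking and serves everyone the same page.
def render_page_safely(user_agent: str) -> str:
    return normal_page()

googlebot_ua = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
browser_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
print(render_page(googlebot_ua) == render_page(browser_ua))                # False: cloaking risk
print(render_page_safely(googlebot_ua) == render_page_safely(browser_ua))  # True: fine
```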

So, Google doesn’t look into cross-browser site compatibility or things like that. In fact, Google tries to be relatively liberal, accepting even somewhat broken HTML, because not everybody writes perfect HTML. Broken HTML doesn’t mean the information on the page isn’t good.

There were some studies showing that 40% of all webpages had at least some sort of syntax error, but if we threw out 40% of all pages, you’d be missing 40% of all the content on the web. So Google tries to interpret content even if it’s not syntactically valid, even if it’s not well formed, even if it doesn’t validate. For all of these sorts of reasons, we have to take the web as it is and try to return the best page to users, even if the pages that we see are kind of noisy.
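That kind of tolerance is easy to reproduce with any lenient HTML parser. Here is a small sketch using Python's standard html.parser: the snippet below has an unclosed <b>, an unquoted attribute, and no closing tags at all, yet the text content still comes out intact. The sample markup is made up for illustration.

```python
# Lenient parsing sketch: html.parser, like a search crawler, tolerates
# malformed markup and still yields the textual content.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

# Deliberately broken HTML: unclosed <b>, unquoted attribute, no closing tags.
broken_html = "<html><body><h1>Widget review<p class=intro>A <b>great widget, highly recommended."

extractor = TextExtractor()
extractor.feed(broken_html)
extractor.close()
print(" ".join(extractor.chunks))
# -> Widget review A great widget, highly recommended.
```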

Historically we haven’t applied any sort of penalty by saying, “oh, you didn’t validate” or “it’s not clean HTML.” To the best of my knowledge, we don’t have any sort of factor that looks at compatibility with certain browsers or cross-browser compatibility of a site.

