
Controlling Crawling and Indexing Explained


We all know the uses of the robots.txt file:
* To control the crawling and indexing of web pages (by disallowing files and folders)
* To let Google know about the site's pages by including the XML sitemap link (see the example after this list)
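As an illustration, here is a minimal robots.txt sketch covering both uses; the paths and sitemap URL are hypothetical placeholders, not taken from the documentation:

    User-agent: *
    Disallow: /private/          # keep crawlers out of this folder
    Disallow: /drafts/page.html  # block a single file

    Sitemap: https://www.example.com/sitemap.xml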

Details on how Google's crawler, Googlebot, handles conflicting directives in your robots.txt file, how to prevent a PDF file from being indexed, and lots of other information about controlling the crawling and indexing of your site are now available on code.google.com.
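For instance, since you cannot place a robots meta tag inside a PDF, the documented way to keep PDFs out of the index is the X-Robots-Tag HTTP header. A minimal sketch, assuming an Apache server with mod_headers enabled:

    # Send a noindex header with every PDF the server delivers
    <FilesMatch "\.pdf$">
        Header set X-Robots-Tag "noindex, nofollow"
    </FilesMatch>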

Now we have a comprehensive resource for learning about robots.txt files, robots meta tags, and X-Robots-Tag HTTP header directives.
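For HTML pages you control, the robots meta tag covers the same ground at the page level. A minimal sketch using the standard noindex and nofollow values that Googlebot recognizes:

    <!DOCTYPE html>
    <html>
    <head>
        <!-- Tell crawlers not to index this page or follow its links -->
        <meta name="robots" content="noindex, nofollow">
        <title>Example page</title>
    </head>
    <body>...</body>
    </html>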

{Via Google Webmaster Central Blog}
