Robots.txt Celebrates 20 Years of Blocking Search Engines

Today marks the 20th anniversary of robots.txt being available to webmasters to block search engines from crawling their pages.

The robots.txt protocol was created by Martijn Koster in 1994 while he was working at Nexor, after crawlers hit his sites too hard. All of the major search engines of the era, including WebCrawler, Lycos and AltaVista, quickly adopted it, and even 20 years later, all major search engines continue to support and obey it.
Brian Ussery posted on his blog about the 20-year anniversary and documented the most common robots.txt mistakes he has seen over his SEO tenure. It is well worth reading, because a robots.txt file, if implemented incorrectly, can be severely detrimental to your rankings and search marketing success.
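To illustrate the kind of mistake that matters, here is a minimal, hypothetical robots.txt sketch (the paths and rules are examples only; the right configuration depends entirely on your own site):

```
# Hypothetical example — adjust paths to your own site.
# Applies to all crawlers:
User-agent: *
Disallow: /private/

# A classic, costly mistake is the bare slash,
# which blocks crawlers from the ENTIRE site:
# Disallow: /
```

Because a single misplaced character can deindex a whole site, robots.txt changes are worth double-checking before deployment.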
For more details on robots.txt, see robotstxt.org, Wikipedia, and Google support. #SEO #SMO #Internet_Marketing