What Is Robots.txt?

In a nutshell: web site owners use the /robots.txt file to give instructions about their site to web robots; this is called the Robots Exclusion Protocol. ... The "User-agent: *" line means the section applies to all robots, and the "Disallow: /" line tells the robot that it should not visit any pages on the site.
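
For illustration, a complete robots.txt file using exactly those two directives looks like this (example.com stands in for your own domain; the file lives at https://example.com/robots.txt):

    # Block all robots from the entire site
    User-agent: *
    Disallow: /

Leaving the value empty ("Disallow:" with nothing after it) instead tells all robots they may visit every page.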
 
A robots.txt file is a file at the root of your site that indicates which parts of your site you don't want search engine crawlers to access. The file uses the Robots Exclusion Standard, a protocol with a small set of commands that can be used to indicate access to your site by section and by specific kinds of web crawlers (such as mobile crawlers versus desktop crawlers).
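
A sketch of a file with per-crawler sections, assuming the real Googlebot user-agent and hypothetical /drafts/ and /experiments/ directories:

    # Rules for all crawlers
    User-agent: *
    Disallow: /drafts/

    # A stricter section that applies only to Google's crawler
    User-agent: Googlebot
    Disallow: /drafts/
    Disallow: /experiments/

Note that a crawler obeys only the most specific group matching its user-agent, so Googlebot follows its own section rather than merging it with the "*" rules; that is why /drafts/ is repeated there.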
 
The robots exclusion standard, also known as the robots exclusion protocol or simply robots.txt, is a convention used by websites to communicate with web crawlers and other web robots; it specifies how to inform a robot which areas of the website should not be processed or scanned.
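
On the consuming side, a well-behaved robot reads this file before fetching any pages. A minimal sketch using Python's standard urllib.robotparser module; the domain and the "MyCrawler" user-agent string are placeholders:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the site's robots.txt
    parser = RobotFileParser()
    parser.set_url("https://example.com/robots.txt")
    parser.read()

    # can_fetch() returns True if the given user-agent may crawl the URL
    print(parser.can_fetch("MyCrawler", "https://example.com/private/page.html"))
    print(parser.can_fetch("*", "https://example.com/"))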
 
A robots.txt file is also used to guide search engines toward the pages you do want found and indexed. Most sites have directories and files that search engine robots have no need to visit, so creating a robots.txt file that steers crawlers away from them can help your SEO.
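
For example, a file along those lines might keep crawlers out of housekeeping areas and point them at a sitemap instead; the directory names here are hypothetical, and the Sitemap directive is supported by the major search engines:

    # Keep crawlers out of areas with no search value
    User-agent: *
    Disallow: /cgi-bin/
    Disallow: /tmp/
    Disallow: /admin/

    # Make the list of indexable pages easy to find
    Sitemap: https://example.com/sitemap.xml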
 
Robots.txt is a text (not HTML) file you put on your site to tell search robots which pages you would like them not to visit.
 
Robots.txt is a text file on your web site that tells search engine bots how to crawl your site and its pages. It is great when search engines frequently visit your site and index your content, but there are often cases in which having parts of your online content crawled and indexed is not what you want.
 