What is a robots.txt file?

A robots.txt file tells search engine crawlers which pages of a website are important to index in search engines such as Google. Pages that are not useful to visitors, or that should stay out of search results for security reasons (a login page, for example), are typically disallowed.
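For instance, a site that wants crawlers to index everything except its login area could use rules like the following (the /login/ path here is just an illustrative example, not a required name):

User-agent: *
Disallow: /login/

The first line applies the rule to all crawlers, and the Disallow line blocks only that one directory while leaving the rest of the site crawlable.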
 
Robots.txt is a text file placed on your website that contains instructions for search engine robots. The file lists which webpages are allowed and which are disallowed for search engine crawling.
 
Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called The Robots Exclusion Protocol.
 
The robots.txt file is a simple text file placed on your web server which tells web crawlers like Googlebot whether they should access a file or not.
To allow full access to the site, the robots.txt file would be:

User-agent: *
Disallow:

Learn more about robots.txt here:
https://varvy.com/robottxt.html
 
In a nutshell: the "User-agent: *" line means the section applies to all robots, and "Disallow: /" tells those robots that they should not visit any page on the site.
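You can see how a crawler interprets these rules using Python's standard-library robots.txt parser; the rules and URLs below are just example values:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt body that blocks every robot from the whole site.
rules = """
User-agent: *
Disallow: /
""".strip().splitlines()

parser = RobotFileParser()
parser.parse(rules)

# "Disallow: /" means no path may be fetched by any user agent.
print(parser.can_fetch("Googlebot", "https://example.com/page.html"))  # False
```

A well-behaved crawler performs exactly this check before requesting a page; if can_fetch returns False, it skips the URL.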
 
 