In a nutshell
Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called The Robots Exclusion Protocol.
It works like this: a robot wants to visit a Web site URL, say
http://www.example.com/welcome.html. Before it does so, it first checks for
http://www.example.com/robots.txt, and finds:
User-agent: *
Disallow: /
The "Client specialist: *" implies this area applies to all robots. The "Prohibit:/" tells the robot that it ought not visit any pages on the site.
There are two important considerations when using /robots.txt:
robots can ignore your /robots.txt. Especially malware robots that scan the web for security vulnerabilities, and email address harvesters used by spammers, will pay no attention.
the /robots.txt file is a publicly available file. Anyone can see what sections of your server you don't want robots to use.
So don't try to use /robots.txt to hide information.
The details
The /robots.txt is a de-facto standard, and is not owned by any standards body. There are two historical descriptions:
the original 1994 A Standard for Robot Exclusion document.
a 1997 Internet Draft specification A Method for Web Robots Control
In addition there are external resources:
HTML 4.01 specification, Appendix B.4.1
Wikipedia - Robots Exclusion Standard