Robots.txt is a text file webmasters create to instruct web robots (typically search engine robots) how to crawl pages on their website. The robots.txt file is part of the Robots Exclusion Protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users.
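As a minimal sketch of how these rules work in practice, the snippet below parses a hypothetical robots.txt file with Python's standard-library `urllib.robotparser` and checks whether a crawler is allowed to fetch particular URLs. The site name and paths are illustrative assumptions, not taken from any real file.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: block all robots from /private/,
# explicitly allow /public/ (site and paths are made up for illustration).
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /public/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check whether a generic crawler ("*") may fetch each URL.
print(parser.can_fetch("*", "https://example.com/public/page.html"))   # True
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
```

Well-behaved crawlers request `/robots.txt` before crawling a site and apply these rules; the protocol is advisory, so compliance depends on the robot honoring it.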