Robots are search engine programs that crawl web page source code to gather information for search engines. They are also known as spiders or crawlers.
The robots exclusion standard, also known as the robots exclusion protocol or simply robots.txt, is a convention websites use to communicate with web crawlers and other web robots. It specifies how to tell a robot which areas of the website should not be processed or scanned.
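As a minimal sketch, a robots.txt file placed at the root of a site (e.g. https://example.com/robots.txt) might look like this; the /private/ path is a hypothetical example:

    # Applies to all robots
    User-agent: *
    # Ask robots not to crawl anything under /private/
    Disallow: /private/
    # Everything else may be crawled
    Allow: /

Here, User-agent: * addresses every robot, and Disallow lists paths that should not be scanned. Note that compliance is voluntary: well-behaved crawlers honor these directives, but the standard itself cannot enforce them.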