Hey guys,
The answer is easy to understand, and you can find it in Google's own search results: a robots.txt file is a file at the root of your site that indicates the parts of your site you don't want accessed by search engine crawlers.
Robots.txt is the common name of a text file uploaded to a website's root directory. Crawlers fetch it directly from that root location (it does not need to be linked in the site's HTML), and it provides instructions about the website to web robots and spiders.
The robots.txt file tells crawlers which pages of a website should be crawled for indexing in search engines such as Google. We disallow pages that are not useful to users, or that should stay out of crawlers' reach for security reasons, like a login page.
Robots.txt is a text file placed on your website that contains information for search engine robots. The file lists which webpages are allowed and which are disallowed from search engine crawling.
The robots.txt file is a simple text file containing instructions for search engines about which files to crawl and which to ignore; following those instructions, Google skips the listed files or folders.
Robots.txt is a text file you put on your site to tell search robots which pages you would like them not to visit. Robots.txt is not mandatory for search engines, but reputable search engines generally obey what they are asked not to do.
The robots.txt file is a simple text file placed on your web server which tells webcrawlers like Googlebot if they should access a file or not.
To allow full access to the site, the robots.txt file will be:
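A minimal allow-all robots.txt looks like this (an empty Disallow value matches nothing, so every compliant crawler may fetch every page):

```
User-agent: *
Disallow:
```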
In a nutshell: website owners use the /robots.txt file to give instructions about their site to web robots; this is called the Robots Exclusion Protocol. ... The "User-agent: *" line means the section applies to all robots. The "Disallow: /" line tells the robot that it should not visit any pages on the site.
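Putting those two directives together, the deny-all file described above reads:

```
User-agent: *
Disallow: /
```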
Robots.txt is a simple text file, and it is one of the essential factors in on-page SEO.
Use of Robots.txt - The most common use of robots.txt is to keep crawlers out of private folders or content that gives them no additional information.
Robots.txt can also allow access to specific crawlers only.
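As a sketch (the choice of Googlebot as the permitted crawler is just an illustration), a file that admits one named crawler and excludes all others could look like:

```
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
```

Crawlers use the most specific User-agent section that matches their name, so Googlebot follows the first block and everyone else falls through to the second.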
It can also allow everything apart from certain patterns of URLs.
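For example (the paths below are hypothetical, and note that wildcard patterns such as * and $ are extensions honored by major search engines rather than part of the original standard):

```
User-agent: *
Disallow: /private/
Disallow: /*.pdf$
```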
Robots.txt is a file that describes the links the website owner does not want crawled by search engines. The owner simply lists the root-relative paths that should not be crawled. Every time a search engine robot comes to your website, it checks for this file first.
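That "check the file first" step can be sketched in Python with the standard library's urllib.robotparser (the rules and URLs below are made up for illustration):

```python
from urllib import robotparser

# Hypothetical robots.txt rules, as a polite crawler would parse them.
rules = [
    "User-agent: *",
    "Disallow: /login/",
]

parser = robotparser.RobotFileParser()
parser.parse(rules)

# Before fetching any URL, the crawler asks whether it is allowed.
print(parser.can_fetch("*", "https://example.com/login/"))  # False: disallowed
print(parser.can_fetch("*", "https://example.com/about/"))  # True: allowed
```

In a real crawler you would call parser.set_url("https://example.com/robots.txt") and parser.read() instead of parsing an inline list.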