A robots.txt file is a plain text file in a simple format that gives web robots (for example, search engine spiders) information about which parts of your site they are and aren't allowed to visit.
If you don't have a robots.txt, a web robot will assume it can go anywhere on your site. A simple robots.txt like the one below grants robots access to everywhere on your site. The main advantage of having one of these 'allow all' robots.txt files is that it stops 404 errors appearing in your log files when spiders look for your robots.txt and can't find it.
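A minimal 'allow all' robots.txt is just two lines; an empty Disallow rule means nothing is blocked:

    # Allow every robot to crawl the whole site
    User-agent: *
    Disallow: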
While robots.txt files are usually used to ask robots to stay away from a particular part of your site, a sitemap is used to give the robot a list of pages that it is welcome to visit.
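As a sketch, a robots.txt that asks all robots to avoid one directory and also points them at a sitemap could look like this (the /private/ path and the sitemap URL are placeholders for illustration):

    # Ask all robots to skip the private area
    User-agent: *
    Disallow: /private/

    # Tell robots where to find the sitemap
    Sitemap: https://www.example.com/sitemap.xml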
By giving the search engine a sitemap you can (hopefully) increase the number of pages that it indexes. As well as telling the search engine the URLs of your pages, the sitemap can also tell the robots when each page was last modified, the page's priority, and how often the page is likely to be updated.
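For illustration, a minimal XML sitemap with one entry carrying those optional fields might look like the sketch below; the page URL, date, and values are placeholders, while the urlset namespace comes from the sitemaps.org protocol:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/about.html</loc>
        <!-- date the page was last modified -->
        <lastmod>2024-01-15</lastmod>
        <!-- how often the page is likely to change -->
        <changefreq>monthly</changefreq>
        <!-- relative priority within this site, 0.0 to 1.0 -->
        <priority>0.8</priority>
      </url>
    </urlset>

Only the loc element is required; lastmod, changefreq, and priority are optional hints that search engines may take into account.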