Spider: Also known as a web crawler. It visits websites, webpages, and links, crawls them to index their content, and saves the results in the search engine's own database.
Robots.txt: A file in which we write directives instructing crawlers not to visit the webpages or parts of the website that we do not want indexed.
Robots.txt is a text file containing instructions for search engine robots. It lists which webpages are allowed and which are disallowed from search engine crawling.
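As a concrete sketch of how these allow/disallow rules are read, Python's standard library ships a robots.txt parser. The rules and the example.com URLs below are hypothetical, made up for illustration; a real file lives at the site root, e.g. https://example.com/robots.txt.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration only.
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /public/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks each URL before fetching it.
print(parser.can_fetch("*", "https://example.com/public/page.html"))   # True
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
```

Note that robots.txt is advisory: it relies on crawlers choosing to obey it, and does not technically block access.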
A spider is an automated search engine program that reads through webpage source code and provides that information to the search engine.
Spider, crawler, and robot all name the same kind of program: it reads text from the World Wide Web, finds any hyperlinks, follows those links, and records the collected data in the search engine's database for the use of visitors.
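The link-following step described above can be sketched with Python's built-in HTML parser. The hard-coded page below is a stand-in for a fetched document; a real spider would download it over HTTP before parsing.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, as a crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page content for illustration only.
page = '<html><body><a href="/about">About</a> <a href="https://example.com/">Home</a></body></html>'

extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['/about', 'https://example.com/']
```

A full crawler would repeat this cycle for each discovered link, fetching the page, extracting its links, and storing the text in the search engine's database.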