Robots.txt is a plain text file that lists the pages and folders you want to keep search engine spiders away from. These pages remain visible to human visitors, but compliant crawlers such as Google's will not fetch them, so their content is not cached; note that a blocked URL can still appear in search results if other sites link to it.
The User-agent: line specifies which crawler the rules that follow apply to, and * is a wildcard matching any user agent. Disallow: sets the files or folders that crawler is not allowed to access.
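For illustration, a minimal robots.txt placed at the site root (for example at https://example.com/robots.txt) might look like the sketch below; the folder names are hypothetical and would be replaced with the paths you actually want to block.

# Block every crawler from two hypothetical folders
User-agent: *
Disallow: /admin/
Disallow: /tmp/

# Give Googlebot its own rule set; an empty Disallow permits all paths
User-agent: Googlebot
Disallow:

Each User-agent line starts a new group of rules, and a crawler follows the most specific group that matches its name, falling back to the * group otherwise.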