A spider is a program that visits Web sites and reads their pages and other information in order to create entries for a search engine index. The major search engines on the Web all have such a program, which is also known as a "crawler" or a "bot."
Spiders, or Web crawlers, are normally used to make copies of every page they visit for later processing by the search engine, which indexes the downloaded pages so that searches can be answered quickly.
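To make this crawl-and-copy workflow concrete, here is a minimal sketch of a breadth-first crawler in Python. It assumes the third-party requests and BeautifulSoup libraries; the seed URL, page limit, and function name are illustrative rather than part of any real crawler, and a production system would also honor robots.txt, rate limits, and content-type checks.

```python
# A toy breadth-first crawler: it downloads each page it visits and keeps a
# local copy so that a separate indexing step can process the pages later.
# Seed URL and page limit are illustrative assumptions.
from collections import deque
from urllib.parse import urljoin, urldefrag

import requests
from bs4 import BeautifulSoup


def crawl(seed_url, max_pages=50):
    """Fetch up to max_pages starting from seed_url; return {url: html}."""
    frontier = deque([seed_url])   # URLs waiting to be visited
    seen = {seed_url}              # URLs already queued, to avoid revisits
    copies = {}                    # downloaded copies for later indexing

    while frontier and len(copies) < max_pages:
        url = frontier.popleft()
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
        except requests.RequestException:
            continue  # skip pages that fail to download

        copies[url] = response.text

        # Extract links from the page and queue any we have not seen yet.
        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            link, _ = urldefrag(urljoin(url, anchor["href"]))
            if link.startswith("http") and link not in seen:
                seen.add(link)
                frontier.append(link)

    return copies
```

The queue of discovered links is what lets the crawler move from one page to the next, which is exactly the "visiting" behavior described above; the dictionary of copies is the raw material the indexer works on.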
Search engine spiders have a much tougher time with dynamic sites. Some get stuck because they cannot supply the input, such as form values or session data, that the site needs to generate a page. Other spiders deliberately stay away from dynamic pages altogether to avoid getting trapped in an endless set of generated URLs.
Googlebot is the software Google uses to fetch and render web documents. It crawls each web page, reads its contents, and stores the page in the Google index under the relevant keywords. Googlebot is also referred to as a spider or a crawler.
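Google's real indexing pipeline is far more sophisticated, but the idea of storing pages "under the relevant keywords" can be sketched as a toy inverted index: each word maps to the set of pages that contain it. The function and variable names below are illustrative assumptions, and the input is the {url: html} dictionary produced by the crawler sketch above.

```python
import re
from collections import defaultdict


def build_index(copies):
    """Map each word to the set of URLs whose page text contains it.

    `copies` is a {url: html_text} dict like the one returned by the
    crawler sketch above. A real indexer would also strip markup,
    weight terms, and handle many languages.
    """
    index = defaultdict(set)
    for url, text in copies.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index


# Example: find every downloaded page that mentions the word "spider".
# index = build_index(crawl("https://example.com"))
# print(index.get("spider", set()))
```

Answering a query then amounts to looking up the query words in this mapping, which is why building the index ahead of time makes searches fast.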