A web server is a computer system that processes requests via HTTP, the basic network protocol used to distribute information on the World Wide Web.
A web crawler (also known as a web spider or web robot) is a program or automated script which browses the World Wide Web in a methodical, automated manner. This process is called Web crawling or spidering. Many legitimate sites, in particular search engines, use spidering as a means of providing up-to-date data.
When you search for information using a particular keyword, the search engine's spider/bot finds meaningful information on any page related to that keyword; this process is termed crawling.
Googlebot (or any search engine spider) crawls the web to process information. Until Google is able to capture the web through osmosis, this discovery phase will always be essential. Based on data generated during crawl-time discovery, Google sorts and analyzes URLs in real time to make indexation decisions.
A crawler is a program that visits websites and reads their pages and other information in order to create entries for a search engine index. The crawler for the AltaVista search engine and its website, for example, was called Scooter.
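A minimal sketch of how such index entries could be built, assuming a couple of already-fetched pages (the URLs and page texts below are purely hypothetical): each term is mapped to the set of pages that contain it.

```python
from collections import defaultdict

# Hypothetical pages a crawler has already fetched: URL -> page text.
pages = {
    "https://example.com/servers": "web servers distribute pages over http",
    "https://example.com/crawlers": "a crawler visits pages and follows hyperlinks",
}

# Build an inverted index: each term maps to the set of URLs containing it.
index = defaultdict(set)
for url, text in pages.items():
    for term in text.lower().split():
        index[term].add(url)

# A search for a keyword then becomes a lookup in the index.
print(sorted(index["pages"]))
```

Real search engines store far richer entries (term positions, anchor text, ranking signals), but this term-to-URL mapping is the core idea behind an index entry.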
A web server is a computer that runs websites. It is a computer program that delivers web pages as they are requested.
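As a rough illustration, Python's standard library can serve files from a directory over HTTP in a few lines. This is only a minimal sketch, not a production server, and the host and port below are arbitrary choices.

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve files from the current directory over HTTP on port 8000.
# Each incoming request is mapped to a local file and returned in an
# HTTP response, which is the essence of what a web server does.
if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler).serve_forever()
```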
The crawler is also known as a spider or Googlebot. It visits web pages and checks that everything on the site is in order; if it finds duplicate content, the search engine may exclude that page or site from its index. The search engine crawler visits each web page and identifies all the hyperlinks on the page, adding them to the list of places to crawl.
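A minimal sketch of that visit-and-follow loop in Python, assuming a reachable seed URL; a real crawler would also honor robots.txt, rate limits, and duplicate-content handling, none of which are shown here.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    """Fetch a page, extract its hyperlinks, and add unseen URLs
    to the frontier of places still to be crawled."""
    frontier = [seed_url]
    seen = {seed_url}
    fetched = 0
    while frontier and fetched < max_pages:
        url = frontier.pop(0)
        fetched += 1
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that fail to load
        extractor = LinkExtractor()
        extractor.feed(html)
        for href in extractor.links:
            absolute = urljoin(url, href)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return seen

if __name__ == "__main__":
    print(crawl("https://example.com"))
```

The frontier list is the "list of places to crawl" described above: every newly discovered hyperlink is queued there until the crawler visits it or the page limit is reached.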