A web crawler (also known as a web spider or web robot) is a program or automated script that browses the World Wide Web in a methodical, systematic way, typically for the purpose of web indexing (web spidering). As it visits each page, the crawler examines the keywords on the page, the kind of content the page contains, and the links it points to, and then returns that information to the search engine.
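To make this concrete, the following is a minimal sketch (using only Python's standard library; the URL is a placeholder, not part of the original text) of how a crawler might fetch a single page and pull out its text and outgoing links:

```python
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin

class PageParser(HTMLParser):
    """Collects the visible text and the link targets of a single page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

    def handle_data(self, data):
        if data.strip():
            self.text_parts.append(data.strip())

def fetch_page(url):
    """Download one page and return (text, links) for the indexer to use."""
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    parser = PageParser(url)
    parser.feed(html)
    return " ".join(parser.text_parts), parser.links

# Hypothetical example URL:
text, links = fetch_page("https://example.com/")
print(text[:200])
print(links)
```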
Crawling is the process a search engine's crawler performs when discovering pages to add to its index. For instance, Google constantly sends out "spiders" or "bots" (the search engine's automated navigators) to discover which websites contain the most relevant information for particular keywords.
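The sketch below (an illustrative assumption, not any particular search engine's implementation) shows the basic crawl loop behind this systematic browsing: a frontier queue of URLs to visit, a set of already-seen URLs, and breadth-first traversal of newly discovered links. It assumes a fetch_page(url) helper like the one sketched above.

```python
from collections import deque

def crawl(seed_urls, max_pages=50):
    """Breadth-first crawl: visit pages from a frontier queue,
    record their text, and enqueue newly found links."""
    frontier = deque(seed_urls)   # URLs still to visit
    visited = set()               # URLs already fetched
    index = {}                    # URL -> page text (stand-in for a real index)

    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            text, links = fetch_page(url)   # helper from the previous sketch
        except Exception:
            continue                        # skip unreachable or malformed pages
        index[url] = text                   # return the page's information to the search engine
        for link in links:
            if link not in visited:
                frontier.append(link)
    return index

# Hypothetical seed URL:
pages = crawl(["https://example.com/"])
print(len(pages), "pages crawled")
```

A production crawler would additionally respect robots.txt, throttle requests per host, and restrict which domains it follows, but the queue-plus-visited-set structure above is the core of the methodical, automated browsing described in this section.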