Search engine spiders, also called "robots" or "web crawlers," are small, automated software programs used by every search engine company. Think of the internet as a world-wide spiderweb, just like one in a dusty corner. Visualize the "spiders" that "crawl" across all of the interconnected links, moving quietly, disturbing nothing, yet visiting each and every corner of that web. (Spiders, the world wide web, crawling, robots... now you can see how these "technical" terms evolved. Computer geeks are great at explanatory analogies.) Whatever they are called, their assigned job is to constantly roam the world wide web searching for new or updated web pages. Thousands of new pages are added to the web every single day.
A search engine spider is simply a service tool.
Its function is to help the search engine index, or "catalog," every website correctly. The content the spiders come across is added to the engine's massive search index. The indexed web pages are then grouped according to their authority, trust, quality of content, and the number of backlinks pointing to them.
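To make the idea a little more concrete, here is a minimal, hypothetical sketch of what a crawler does at its core: fetch a page, pull out its links, record what it found in a tiny "index," and then follow those links to the next pages. This is only an illustration built on Python's standard library; the `crawl` and `LinkExtractor` names and the seed URL are made up for the example, and real search engine crawlers are vastly more sophisticated (they also respect rules like robots.txt and crawl at polite speeds).

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    """Breadth-first crawl starting from seed_url.

    Returns a toy 'index': a dict mapping each visited URL to the
    list of outgoing links found on that page.
    """
    index = {}
    queue = deque([seed_url])
    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in index:
            continue  # already visited this page
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except Exception:
            continue  # skip pages that fail to load
        parser = LinkExtractor()
        parser.feed(html)
        # Resolve relative links against the current page's address.
        outgoing = [urljoin(url, link) for link in parser.links]
        index[url] = outgoing
        queue.extend(outgoing)  # "crawl" onward along the links we found
    return index

if __name__ == "__main__":
    # Hypothetical seed URL; use any site you are permitted to crawl.
    pages = crawl("https://example.com")
    for page, links in pages.items():
        print(page, "->", len(links), "links found")
```

The real systems differ in scale and ranking, not in the basic loop: discover a page, record its content, and follow its links to discover more.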