Many search engines use programs called spiders to index web pages. Spiders are also known as crawlers or robots. They act as automated data-gathering tools that visit sites to discover new or updated pages and links, a procedure called web crawling. Spiders follow hyperlinks and collect text and link information for search engine databases, gathering as much as possible before transferring it to the search engine's servers.
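The crawl loop itself is simple: fetch a page, store its text, extract its links, and queue those links for later visits. Below is a minimal sketch of that idea in Python using only the standard library; the seed URL, the page limit, and the in-memory dictionary standing in for a search engine database are illustrative assumptions, not how any real search engine is implemented.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href target of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: fetch a page, record its text, queue its links."""
    queue = deque([seed_url])
    visited = set()
    index = {}  # URL -> raw page text; a stand-in for the real database

    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urlopen(url, timeout=5) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable pages and keep crawling

        index[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            queue.append(urljoin(url, link))  # resolve relative links

    return index


if __name__ == "__main__":
    pages = crawl("https://example.com", max_pages=3)
    print(f"Indexed {len(pages)} page(s):", list(pages))
```

A production spider would add politeness rules (robots.txt, rate limiting) and persistent storage, but the fetch-extract-queue cycle shown here is the core of the procedure described above.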
Spiders can also rate the content they index, helping the search engine determine relevance when answering a query. They are called spiders because they visit many sites at the same time, as if their legs were spread across a large area of the web. All major search engines rely on spiders to build and maintain their own indexes.
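The "many sites at the same time" point is typically achieved with concurrent fetching. A minimal sketch of that idea using Python's thread pool follows; the URL list and worker count are assumptions chosen purely for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen


def fetch(url):
    """Download one page; return (url, size in bytes) or (url, None) on failure."""
    try:
        with urlopen(url, timeout=5) as response:
            return url, len(response.read())
    except OSError:
        return url, None


# Hypothetical seed list; a real spider would draw these from its crawl queue.
urls = [
    "https://example.com",
    "https://example.org",
    "https://example.net",
]

# A thread pool lets the spider reach several sites at once instead of
# waiting on each download in turn.
with ThreadPoolExecutor(max_workers=3) as pool:
    for url, size in pool.map(fetch, urls):
        print(url, "->", size if size is not None else "unreachable")
```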