A search engine crawler is a computer program that visits websites, page by page, gathering information about the content of each page. That information goes into the search engine's index, the database the engine consults when answering search queries.
Crawlers discover new web pages by following links from one page to another. This works whether a link points to another page on the same site or to a page on a different site.
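This link-following process can be sketched as a breadth-first traversal: visit a page, record it in the index, and queue any links not yet seen. The sketch below simulates the web as an in-memory dictionary of pages and their outgoing links (all URLs are hypothetical), so it runs without any network access; a real crawler would fetch each page over HTTP and extract links from the HTML.

```python
from collections import deque

# Toy "web": each page maps to the links found on it.
# Hypothetical URLs; note both same-site and cross-site links.
WEB = {
    "https://example.com/": ["https://example.com/about", "https://other-site.com/"],
    "https://example.com/about": ["https://example.com/"],
    "https://other-site.com/": ["https://other-site.com/contact"],
    "https://other-site.com/contact": [],
}

def crawl(seed):
    """Breadth-first crawl: visit a page, index it, queue unseen links."""
    seen = {seed}
    queue = deque([seed])
    index = []  # stands in for the search engine's index
    while queue:
        page = queue.popleft()
        index.append(page)
        for link in WEB.get(page, []):
            if link not in seen:  # avoid revisiting pages
                seen.add(link)
                queue.append(link)
    return index
```

Starting from a single seed page, the crawl reaches every page in this toy web, including those on the other site, purely by following links.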
A crawler is also called a spider or a robot. The word "spider" refers, of course, to the fact that websites exist within the World Wide Web. The word "robot" refers to the fact that the crawling activity is automated.