Scalable Web Crawling

Sponsor: NSF

Abstract

Web crawling is a challenging problem in today's Internet due to many factors: the massive amount of content available to the crawler, the existence of highly branching spam farms, the prevalence of useless information, and the need to adhere to politeness constraints at each target host. This project investigates scalable and efficient web-crawling algorithms that can be used in high-performance search engines to crawl hundreds of billions of pages while keeping overhead manageable. Unlike commercial search engines, our focus is on enabling Internet-wide crawls and data mining without access to enormous server clusters or exotically expensive hardware.

Journal Publications
Conference Papers
Datasets

The PLD (i.e., domain-level) out-graphs used in our ranking analysis (INFOCOM 2011, TWEB 2018) are available below. Each graph record consists of an 8-byte source hash, followed by a 4-byte out-degree and a list of 8-byte neighbor hashes. Each map record contains an 8-byte hash, followed by a NULL-terminated string of the corresponding domain. All numbers are in LSB-first (little-endian) byte order. Source nodes with zero out-degree and those referring to syntactically invalid PLDs have been eliminated.
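
For reference, the following is a minimal Python sketch of how these records could be parsed, based solely on the layout described above; the file names and the usage lines at the bottom are hypothetical.

    import struct

    def read_pld_graph(path):
        # Each record: 8-byte source hash, 4-byte out-degree,
        # then that many 8-byte neighbor hashes, all little-endian.
        graph = {}
        with open(path, "rb") as f:
            while True:
                header = f.read(12)
                if len(header) < 12:
                    break  # end of file
                src, deg = struct.unpack("<QI", header)
                neighbors = struct.unpack("<%dQ" % deg, f.read(8 * deg))
                graph[src] = list(neighbors)
        return graph

    def read_pld_map(path):
        # Each record: 8-byte hash, then a NULL-terminated domain string.
        mapping = {}
        with open(path, "rb") as f:
            while True:
                raw = f.read(8)
                if len(raw) < 8:
                    break  # end of file
                (h,) = struct.unpack("<Q", raw)
                domain = bytearray()
                while True:
                    b = f.read(1)
                    if not b or b == b"\x00":
                        break  # NULL terminator (or EOF) ends the string
                    domain += b
                mapping[h] = domain.decode("ascii", errors="replace")
        return mapping

    # Hypothetical usage with assumed file names:
    # graph = read_pld_graph("pld-graph.bin")
    # names = read_pld_map("pld-map.bin")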