Data Collection and Web Crawling
Overview
• Data-intensive applications are likely to be powered by databases.
• How do you get the data into your database?
  • Your private, secret data source
  • Public data from the Internet
• In this tutorial, we introduce how to collect data from the Internet:
  • Use APIs
  • Web crawlers
Collecting data from Internet: Use APIs
• The easiest way to get data from the Internet.
• Steps:
  1. Make sure the data source provides APIs for data collection.
  2. Obtain an API key or other form of authorization.
  3. Read the documentation.
  4. Write the code.
Collecting data from Internet: Use APIs
• Example: Twitter Search API
• 1. Make sure the data source provides APIs for data collection.
  • "Search API is focused on relevance and not completeness"
  • "Requests to the Search API, hosted on search.twitter.com, do not count towards the REST API limit. However, all requests coming from an IP address are applied to a Search Rate Limit. The Search Rate Limit isn't made public to discourage unnecessary search usage and abuse, but it is higher than the REST Rate Limit. We feel the Search Rate Limit is both liberal and sufficient for most applications and know that many application vendors have found it suitable for their needs."
Collecting data from Internet: Use APIs
• 2. Obtain an API key or other form of authorization.
  • Read through https://dev.twitter.com/docs/auth/tokens-devtwittercom and obtain your keys.
• 3. Read the documentation.
  • Find a Java implementation of the Twitter API (Twitter4J) and read its documentation and sample code at http://twitter4j.org/en/index.html
Collecting data from Internet: Use APIs
• 4. Write the code.
  • Code against the documentation and the code samples.
  • Refer to our sample code (DataCollection/TweetsCollector.java).
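Step 4 might look roughly like the sketch below, using the Twitter4J library mentioned above. This is an illustration only, not the deck's actual TweetsCollector.java: the credential strings are placeholders for the keys obtained in step 2, and the query text is arbitrary.

```java
import twitter4j.Query;
import twitter4j.QueryResult;
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;
import twitter4j.conf.ConfigurationBuilder;

public class TweetsCollectorSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: fill in the keys obtained in step 2.
        ConfigurationBuilder cb = new ConfigurationBuilder()
                .setOAuthConsumerKey("YOUR_CONSUMER_KEY")
                .setOAuthConsumerSecret("YOUR_CONSUMER_SECRET")
                .setOAuthAccessToken("YOUR_ACCESS_TOKEN")
                .setOAuthAccessTokenSecret("YOUR_ACCESS_TOKEN_SECRET");
        Twitter twitter = new TwitterFactory(cb.build()).getInstance();

        // Run a search query and print the matching tweets.
        Query query = new Query("data mining");
        QueryResult result = twitter.search(query);
        for (Status status : result.getTweets()) {
            System.out.println("@" + status.getUser().getScreenName()
                    + ": " + status.getText());
        }
    }
}
```

Running this requires the Twitter4J jar on the classpath and valid credentials, so treat it as a starting point to compare against the provided sample code.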
Collecting data from Internet: Web Crawlers
• However, the provider hosting the data you are interested in may not offer an API.
• Example case: you want all movies' information from IMDB, but IMDB doesn't provide an API for programmers.
  • e.g. you want all the movie information reachable from the starting page http://www.imdb.com/features/video/browse/
• You need to develop your own crawler.
• Prerequisites: an HTTP client and regular expressions
Collecting data from Internet: Web Crawlers
• After browsing the website, you find that each movie's information can be found at http://www.imdb.com/title/tt******/, where ****** is the movie id.
• Pseudo code:
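The original pseudo code is not reproduced here, but the crawling loop it describes can be sketched in Java as below. The id range, the seven-digit id format, and the `<title>` pattern are assumptions for illustration; verify all three against the real pages before relying on them.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MovieSpiderSketch {
    public static void main(String[] args) throws Exception {
        // Assumed id range and format -- adjust for the real site.
        for (int id = 1; id <= 100; id++) {
            String pageUrl = String.format("http://www.imdb.com/title/tt%07d/", id);

            // Read the HTML page into one string.
            StringBuilder html = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(pageUrl).openStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    html.append(line).append('\n');
                }
            }

            // Extract the field of interest, here the page title as an example.
            Matcher m = Pattern.compile("<title>(.*?)</title>").matcher(html);
            if (m.find()) {
                System.out.println(id + "\t" + m.group(1));
            }

            // Wait a few seconds between requests to reduce
            // the risk of being detected and banned.
            Thread.sleep(3000);
        }
    }
}
```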
Collecting data from Internet: Web Crawlers
• Selected useful Java methods:
  • Read HTML files: e.g. a BufferedReader over URL.openStream()
  • Regex that finds specific patterns in a text: java.util.regex.Pattern and Matcher
  • Wait for several seconds to reduce the risk of being detected and banned: Thread.sleep()
Regular Expression
• Regex: an advanced form of search.
• "Normal search" only finds fixed character sequences.
• A regex can match flexible patterns.
• An interactive tutorial: http://regexone.com/
• A place to quickly test a written regex against a source text: http://regexpal.com/
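The contrast between a "normal search" and a regex search can be shown in a few lines of Java. The text and patterns below are made up for illustration:

```java
import java.util.regex.Pattern;

public class SearchDemo {
    // "Normal search": finds a fixed character sequence only.
    public static boolean plainContains(String text, String needle) {
        return text.contains(needle);
    }

    // Regex search: matches a pattern rather than a fixed string.
    public static boolean matchesPattern(String text, String regex) {
        return Pattern.compile(regex).matcher(text).find();
    }

    public static void main(String[] args) {
        String text = "Order id: 48213";
        System.out.println(plainContains(text, "48213"));     // true, but only for this exact id
        System.out.println(matchesPattern(text, "id: \\d+")); // true for any run of digits
    }
}
```

The fixed search breaks as soon as the id changes; the pattern `id: \d+` keeps matching.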
Regular Expression
• The most useful pattern for web crawlers: <tag>(.*?)</tag>
  • matches everything surrounded by <tag> and </tag>
Example HTML content:
Example
• Match the three names surrounded by <name> tags:
• <name size=\d>(.*?)</name>
Example
• Converting this regex into a Java expression:
  • We use \\d instead of \d in order to escape the escape character "\".
  • The parentheses () mark the group to be extracted.
• Feel the difference: what if we use (.*) instead of (.*?)?
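A runnable sketch of this slide is below. The original slide's sample HTML was not preserved, so the three names here are hypothetical placeholders; the regex is the one from the slide:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NameExtractor {
    // Extract the contents of every <name size=N>...</name> tag.
    // Note the double backslash: "\\d" in Java source is the regex \d.
    public static List<String> extract(String html) {
        List<String> names = new ArrayList<>();
        Matcher m = Pattern.compile("<name size=\\d>(.*?)</name>").matcher(html);
        while (m.find()) {
            names.add(m.group(1)); // group(1) is the parenthesized group
        }
        return names;
    }

    public static void main(String[] args) {
        // Hypothetical stand-in for the slide's example HTML.
        String html = "<name size=1>Alice</name>"
                    + "<name size=2>Bob</name>"
                    + "<name size=3>Carol</name>";

        // Lazy (.*?) stops at the first </name>, so each name is captured separately.
        System.out.println(extract(html)); // [Alice, Bob, Carol]

        // Greedy (.*) runs to the LAST </name>, swallowing the tags in between.
        Matcher greedy = Pattern.compile("<name size=\\d>(.*)</name>").matcher(html);
        if (greedy.find()) {
            System.out.println(greedy.group(1));
        }
    }
}
```

This makes the (.*) vs (.*?) difference concrete: the greedy version captures one giant match spanning all three tags, which is almost never what a crawler wants.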
Collecting data from Internet: Web Crawlers
• A complete sample program is provided in DataCollection/MovieSpider.java