
Introduction to Google MapReduce

An overview of Google's MapReduce programming model for processing large datasets on commodity computers, offering transparency and distributed processing. Learn about distributed grep and word count examples, along with partitioning functions and requirements for using MapReduce effectively.


Presentation Transcript


  1. Introduction to Google MapReduce WING Group Meeting 13 Oct 2006 Hendra Setiawan

  2. What is MapReduce?
  • A programming model (and its associated implementation)
  • For processing large data sets
  • Exploits large sets of commodity computers
  • Executes processes in a distributed manner
  • Offers a high degree of transparency
  • In other words: simple, and maybe suitable for your tasks!

  3. Distributed Grep
  [diagram: a very big dataset is split; grep runs over each split and produces its matches; the per-split matches are concatenated (cat) into all matches]

  4. Distributed Word Count
  [diagram: a very big dataset is split; count runs over each split; the per-split counts are merged into a single merged count]
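Both diagrams follow the same split / process-per-split / merge shape. A minimal plain-Java sketch of the distributed word count, run locally with splits standing in for machines (all names here are illustrative, not Hadoop API):

```java
import java.util.*;

public class SplitMergeCount {
    // Count words in one split (one "machine" in the diagram).
    static Map<String, Integer> countSplit(List<String> lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines)
            for (String w : line.split("\\s+"))
                if (!w.isEmpty())
                    counts.merge(w, 1, Integer::sum);
        return counts;
    }

    // Merge the per-split partial counts into one final count.
    static Map<String, Integer> merge(List<Map<String, Integer>> parts) {
        Map<String, Integer> total = new HashMap<>();
        for (Map<String, Integer> part : parts)
            part.forEach((w, c) -> total.merge(w, c, Integer::sum));
        return total;
    }

    public static void main(String[] args) {
        List<String> split1 = List.of("big data big");
        List<String> split2 = List.of("data data");
        Map<String, Integer> total =
            merge(List.of(countSplit(split1), countSplit(split2)));
        System.out.println(total.get("data")); // 3
    }
}
```

In the real system each countSplit call would run on a different machine; only the merge step needs to see all partial results.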

  5. MapReduce
  • Map: accepts an input key/value pair; emits intermediate key/value pairs
  • Reduce: accepts an intermediate key and the list of all its values (key/value*); emits output key/value pairs
  [diagram: very big data → Map tasks → partitioning function → Reduce tasks → result]
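The two signatures on this slide can be written down as plain Java interfaces; the sketch below also shows the grouping ("shuffle") step the framework performs between them. This is an illustration of the model, not the Hadoop API:

```java
import java.util.*;
import java.util.function.BiConsumer;

public class Model {
    // Map: accepts an input key/value pair, emits intermediate pairs.
    interface Mapper<K1, V1, K2, V2> {
        void map(K1 key, V1 value, BiConsumer<K2, V2> emit);
    }

    // Reduce: accepts an intermediate key with all of its values,
    // emits output pairs.
    interface Reducer<K2, V2, K3, V3> {
        void reduce(K2 key, Iterator<V2> values, BiConsumer<K3, V3> emit);
    }

    // Word count expressed against the two interfaces, run locally.
    static Map<String, Integer> wordCount(String line) {
        Mapper<Long, String, String, Integer> m = (offset, text, emit) -> {
            for (String w : text.split("\\s+")) emit.accept(w, 1);
        };
        Reducer<String, Integer, String, Integer> r = (k, vs, emit) -> {
            int sum = 0;
            while (vs.hasNext()) sum += vs.next();
            emit.accept(k, sum);
        };
        // "Shuffle": group the intermediate values by key.
        Map<String, List<Integer>> grouped = new HashMap<>();
        m.map(0L, line, (k, v) ->
            grouped.computeIfAbsent(k, x -> new ArrayList<>()).add(v));
        Map<String, Integer> out = new HashMap<>();
        grouped.forEach((k, vs) -> r.reduce(k, vs.iterator(), out::put));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(wordCount("big data big").get("big")); // 2
    }
}
```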

  6. Partitioning Function [diagram]

  7. Partitioning Function (2)
  • Default: hash(key) mod R
  • Guarantees: relatively well-balanced partitions; ordering guarantee within each partition
  • Distributed sort: Map: emit(key, value); Reduce (with R = 1): emit(key, value)
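The default hash(key) mod R rule fits in one line of Java. One detail worth making explicit: hashCode() can be negative, so a plain % R could return a negative partition index; masking with Integer.MAX_VALUE forces it non-negative. A plain-Java sketch, not the Hadoop class:

```java
public class Partitioner {
    // Default partitioning: hash(key) mod R, forced non-negative
    // so the result is always a valid partition index in [0, R).
    static int partition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int R = 4;
        // The same key always hashes to the same partition, so every
        // value for a key reaches the same reduce task.
        System.out.println(partition("mapreduce", R)
                           == partition("mapreduce", R)); // true
    }
}
```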

  8. MapReduce
  • Distributed grep: Map: if match(value, pattern) emit(value, 1); Reduce: emit(key, sum(value*))
  • Distributed word count: Map: for all w in value do emit(w, 1); Reduce: emit(key, sum(value*))
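The distributed grep pseudocode above can be emulated locally in a few lines: map emits (line, 1) for every matching line, and reduce sums the counts for each distinct line. Names and the substring-based match are illustrative, not from the paper:

```java
import java.util.*;

public class LocalGrep {
    // Map: emit (line, 1) for every line matching the pattern.
    static List<Map.Entry<String, Integer>> map(String line, String pattern) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        if (line.contains(pattern))
            out.add(Map.entry(line, 1));
        return out;
    }

    // Group by key and reduce: sum the counts of each matching line.
    static Map<String, Integer> run(List<String> lines, String pattern) {
        Map<String, Integer> grouped = new TreeMap<>();
        for (String line : lines)
            for (Map.Entry<String, Integer> kv : map(line, pattern))
                grouped.merge(kv.getKey(), kv.getValue(), Integer::sum);
        return grouped;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("map reduce", "just map", "map reduce");
        System.out.println(run(lines, "reduce")); // {map reduce=2}
    }
}
```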

  9. MapReduce Transparencies
  Together with the Google File System (GFS), the library transparently handles:
  • Parallelization
  • Fault tolerance
  • Locality optimization
  • Load balancing

  10. MapReduce is suitable for your task if:
  • You have a cluster
  • You are working with a large dataset
  • You are working with independent data (or can assume so)
  • Your task can be cast into map and reduce steps

  11. MapReduce outside Google
  • Hadoop (Java): implements MapReduce and a GFS-like distributed file system (DFS)
  • Both Hadoop MapReduce and its DFS use a master/slave architecture

  12. Example Word Count (1) • Map

  public static class MapClass extends MapReduceBase implements Mapper {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(WritableComparable key, Writable value,
                    OutputCollector output, Reporter reporter)
        throws IOException {
      String line = ((Text) value).toString();
      StringTokenizer itr = new StringTokenizer(line);
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        output.collect(word, one);   // emit (word, 1) for each token
      }
    }
  }

  13. Example Word Count (2) • Reduce

  public static class Reduce extends MapReduceBase implements Reducer {
    public void reduce(WritableComparable key, Iterator values,
                       OutputCollector output, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += ((IntWritable) values.next()).get();   // sum the 1s for this word
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  14. Example Word Count (3) • Main

  public static void main(String[] args) throws IOException {
    // argument checking goes here
    JobConf conf = new JobConf();
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(MapClass.class);
    conf.setCombinerClass(Reduce.class);   // combiner: local pre-aggregation on the map side
    conf.setReducerClass(Reduce.class);
    conf.setInputPath(new Path(args[0]));
    conf.setOutputPath(new Path(args[1]));
    JobClient.runJob(conf);
  }

  15. One-time setup
  • Set hadoop-site.xml and the slaves file
  • Initialize the namenode
  • Run Hadoop MapReduce and DFS
  • Upload your data to DFS
  • Run your process…
  • Download your results from DFS

  16. Summary
  • A simple programming model for processing large datasets on large clusters of computers
  • Fun to use: focus on the problem, and let the library deal with the messy details

  17. References
  • Original paper: http://labs.google.com/papers/mapreduce.html
  • Wikipedia: http://en.wikipedia.org/wiki/MapReduce
  • Hadoop, MapReduce in Java: http://lucene.apache.org/hadoop/
  • Starfish, MapReduce in Ruby: http://rufy.com/starfish/
