Using Hadoop & HBase to build content relevance & personalization
Tools to build your big data application
Ameya Kanitkar
Ameya Kanitkar – That’s me!
• Big Data Infrastructure Engineer @ Groupon, Palo Alto, USA (working on Deal Relevance & Personalization Systems)
• ameya.kanitkar@gmail.com
• http://www.linkedin.com/in/ameyakanitkar
• @aktwits
Agenda • Basics of Hadoop & HBase • How you can use Hadoop & HBase for a big data application • Case Study: Deal Relevance and Personalization Systems at Groupon with Hadoop & HBase
Big Data Application Examples • Recommendation Systems • Ad Targeting • Personalization Systems • BI/DW • Log Analysis • Natural Language Processing
So what is Hadoop? • General purpose framework for processing huge amounts of data. • Open Source • Batch / Offline Oriented
Hadoop - HDFS • Open source distributed file system. • Stores large files, which can easily be accessed via applications built on top of HDFS. • Data is distributed and replicated over multiple machines. • Linux-style commands, e.g. ls, cp, mv, touchz, etc.
Hadoop – HDFS • Example: hadoop fs -dus /data/ 185453399927478 bytes =~ 168 TB (one of the folders from one of our Hadoop clusters)
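The same check can be done programmatically. Here is a minimal sketch using the standard org.apache.hadoop.fs.FileSystem API; the /data/ path is just an illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUsage {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from core-site.xml on the classpath
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Equivalent of `hadoop fs -dus /data/`: total bytes under the path
        Path data = new Path("/data/");   // illustrative path
        long bytes = fs.getContentSummary(data).getLength();
        System.out.printf("%s uses %d bytes%n", data, bytes);
    }
}
```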
Hadoop – Map Reduce • Application Framework built on top of HDFS to process your big data • Operates on key-value pairs • Mappers filter and transform input data • Reducers aggregate mapper output
Example • Given web logs, calculate the landing page conversion rate for each product • So basically we need to see how many impressions each product received, and then calculate the conversion rate for each product
Map Reduce Example
• Map phase – Map 1 … Map N: each mapper processes a log file and outputs key = Product ID, value = Impression Count
• Reduce phase – the reducer receives all the data for a given product and just runs a simple loop to calculate the conversion rate (output: Product ID, Conversion Rate)
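As a rough illustration (not Groupon's actual job), a mapper/reducer pair for this could look like the sketch below. The tab-separated log format and field positions are assumptions; and where the slide's mappers emit only impression counts, this sketch also emits purchase events so the reducer has both numbers for the rate. Driver/job setup is omitted.

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: one log line in, (productId, eventType) out.
// Assumes a tab-separated log of: timestamp, productId, eventType.
public class ConversionMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split("\t");
        if (fields.length < 3) return;        // skip malformed lines
        String productId = fields[1];
        String eventType = fields[2];         // "impression" or "purchase"
        ctx.write(new Text(productId), new Text(eventType));
    }
}

// Reducer: all events for one product arrive together; a simple loop
// tallies impressions vs. purchases and emits the conversion rate.
class ConversionReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text productId, Iterable<Text> events, Context ctx)
            throws IOException, InterruptedException {
        long impressions = 0, purchases = 0;
        for (Text event : events) {
            if ("impression".equals(event.toString())) impressions++;
            else if ("purchase".equals(event.toString())) purchases++;
        }
        double rate = impressions == 0 ? 0.0 : (double) purchases / impressions;
        ctx.write(productId, new Text(Double.toString(rate)));
    }
}
```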
Recap • We just processed terabytes of data and calculated conversion rates across millions of products. • Note: this is a batch process only. It takes time. You cannot start this process after someone visits your website. How about we generate recommendations in a batch process and serve them in real time?
HBase
• Provides real-time random read/write access over HDFS
• Based on Google’s ‘Bigtable’ design
• Open source
• This is not an RDBMS, so there are no joins. Access patterns are generally simple, like get(key), put(key, value) etc.
• Dynamic column names: no need to define columns upfront
• Both rows and columns are lexicographically sorted
• Note: each row can have different columns, so think of this as a hash map rather than a table with fixed rows and columns
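A minimal sketch of the get/put access pattern, using the HBase Java client API of this era; the table, column family, qualifier, and row key are all hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseAccess {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "user_profiles");  // hypothetical table

        // put(key, value): write one cell under a column family + qualifier
        Put put = new Put(Bytes.toBytes("user42"));
        put.add(Bytes.toBytes("p"), Bytes.toBytes("profile"),
                Bytes.toBytes("{\"city\":\"Palo Alto\"}"));
        table.put(put);

        // get(key): fetch the row back; no joins, no secondary indexes
        Result row = table.get(new Get(Bytes.toBytes("user42")));
        byte[] profile = row.getValue(Bytes.toBytes("p"), Bytes.toBytes("profile"));
        System.out.println(Bytes.toString(profile));

        table.close();
    }
}
```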
Putting it all together
• Store data in HDFS
• Generate recommendations (Map Reduce)
• Analyze data (Map Reduce)
• Serve real-time requests to web and mobile clients (HBase)
• In short: do offline analysis in Hadoop, and serve real-time requests with HBase
Our Relevance Scenario
• How do we surface relevant deals?
• Deals are perishable (deals expire or sell out)
• No direct user intent (as in traditional search advertising)
• Relatively limited user information
• Deals are highly local
Two Sides to the Relevance Problem
• Algorithmic issues: how to find relevant deals for individual users given a set of optimization criteria
• Scaling issues: how to handle relevance for all users across multiple delivery platforms
Developing Deal Ranking Algorithms
• Exploring data: understanding signals, finding patterns
• Building models/heuristics: employ both classical machine learning techniques and heuristic adjustments to estimate user purchasing behavior
• Conducting experiments: try out ideas on real users and evaluate their effect
Data Infrastructure
• Growing deals and growing users, 2011 → 2012 → 2013 (growth chart: 20+ → 400+ → 2000+)
• 100 million+ subscribers
• We need to store data like user click history, email records, service logs, etc. This amounts to billions of data points and TBs of data
Deal Personalization Infrastructure Use Cases
• Offline system – deliver personalized emails: personalize billions of emails for hundreds of millions of users
• Online system – deliver a personalized website & mobile experience: personalize one of the most popular e-commerce mobile & web apps for hundreds of millions of users & page views
Architecture
• A data pipeline feeds an offline HBase system where the relevance Map/Reduce jobs run; results are replicated to an HBase cluster for the online system, which serves real-time relevance (e.g. for email)
• We can now maintain different SLAs on the online and offline systems
• We can tune each HBase cluster differently for its online or offline workload
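One hedged sketch of how a column family might be marked for cross-cluster replication with the HBase client API of this era; the table and family names are hypothetical, and registering the online cluster as a peer is the usual one-time shell step noted in the comment:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class ReplicatedTableSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor table = new HTableDescriptor("user_history"); // hypothetical
        HColumnDescriptor family = new HColumnDescriptor("email");
        family.setScope(1);  // REPLICATION_SCOPE=1: ship this family's edits to peers
        table.addFamily(family);
        admin.createTable(table);

        // The peer (the online cluster) is then registered once, e.g. from the
        // HBase shell: add_peer '1', 'online-zk-quorum:2181:/hbase'
        admin.close();
    }
}
```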
HBase Schema Design
• Most of our data access patterns are via a “user key”, which makes it easy to design the HBase schema
• The actual data is kept as JSON
• Email history is appended as a separate column for each day (on average each row has over 200 columns)
• User history and profile info are simply overwritten
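A hedged sketch of what such per-day columns could look like via the HBase client; the table, families, qualifiers, and JSON payloads are all illustrative, not the actual Groupon schema:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class EmailHistoryWriter {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "user_history");   // hypothetical table

        Put put = new Put(Bytes.toBytes("user42"));        // row key = user key

        // One new column per day: the qualifier is the date, the value is JSON.
        // Rows accumulate columns over time, like entries in a hash map.
        put.add(Bytes.toBytes("email"), Bytes.toBytes("2013-06-25"),
                Bytes.toBytes("{\"deals_sent\":[101,102],\"opened\":true}"));

        // Profile info lives under a fixed qualifier and is simply overwritten.
        put.add(Bytes.toBytes("profile"), Bytes.toBytes("info"),
                Bytes.toBytes("{\"city\":\"Palo Alto\",\"categories\":[\"food\"]}"));

        table.put(put);
        table.close();
    }
}
```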
Cluster Sizing
• Machine profile: 96 GB RAM (25 GB for HBase), 24 virtual CPU cores, 8 × 2 TB disks
• Data profile: 100 million+ records, 2 TB+ data, over 4.2 billion data points
• Hadoop + HBase cluster: a 100+ machine Hadoop cluster runs the heavy map reduce jobs; the same cluster also hosts a 15-node HBase cluster
• Online HBase cluster: a 10-machine dedicated HBase cluster serves the real-time SLA
• The two clusters are connected via HBase replication
Questions? Thank You! (We are hiring!) www.groupon.com/techjobs