Migrating from PostgreSQL to MySQL at Cocolog Naoto Yokoyama, NIFTY Corporation Garth Webb, Six Apart Lisa Phillips, Six Apart Credits: Kenji Hirohama, Sumisho Computer Systems Corp.
Agenda • 1. What is Cocolog • 2. History of Cocolog • 3. DBP: Database Partitioning • 4. Migration From PostgreSQL to MySQL
What is Cocolog • NIFTY Corporation • established in 1986 • a Fujitsu Group company • NIFTY-Serve (licensed and interconnected with CompuServe) • one of the largest ISPs in Japan • Cocolog • the first blog community at a Japanese ISP • based on TypePad technology by Six Apart • several hundred million page views/month • History • Dec 02, 2003: Cocolog launched for ISP users • Nov 24, 2005: Cocolog Free launched for non-subscribers • Apr 05, 2007: Cocolog for Mobile Phone launched
Cocolog (screenshot of home page) • as of 2008/04: 700 thousand users • powered by TypePad
Cocolog Growth (Users) — chart: ■ Cocolog ■ Cocolog Free, across phases 1–4
Cocolog Growth (Entries) — chart: ■ Cocolog ■ Cocolog Free, across phases 1–4
Technology at Cocolog • Core system • Linux 2.4/2.6 • Apache 1.3/2.0/2.2 & mod_perl • Perl 5.8 + CPAN • PostgreSQL 8.1 • MySQL 5.0 • memcached / TheSchwartz / cfengine • Ecosystem • LAMP, LAPP, Ruby + ActiveRecord, Capistrano • etc.
Monitoring • Management tool • proprietary in-house development with PostgreSQL, PHP, and Perl • Monitoring points (in order of priority) • response time of each post • number of spam comments/trackbacks • number of comments/trackbacks • source IP addresses of spam • number of entries • number of comments via mobile devices • page views via mobile devices • time of batch completion • amount of API usage • bandwidth usage • DB: disk I/O, memory and CPU usage, time of VACUUM ANALYZE • APP: number of active processes, CPU usage, memory usage • (Monitored layers: service, application, DB, hardware)
TypePad Phase 1: 2003/12~ (entries: 0.04 million), before DBP, 10 servers. Diagram: TypePad application with PostgreSQL handling registration; static contents published to NAS and served by web servers.
Phase 2: 2004/12~ (entries: 7 million), before DBP, 50 servers. Diagram: TypePad with PostgreSQL; added services (publish book, rich templates, telephone operator support, podcast, portal, profile, etc.; some added from 2005/5); static contents published to NAS and served by web servers.
Phase 2 — Problems • The system is tightly coupled. • The database server receives queries from multiple points. • It is difficult to change the system design and the database schema.
Phase 3: 2006/3~ (entries: 12 million), before DBP, 200 servers. Diagram: as phase 2, plus a Web API and memcached in front of PostgreSQL.
Phase 4: 2007/4~ (entries: 16 million), before DBP, 300 servers. Diagram: as phase 3, plus an Atom interface and mobile web servers.
Now: 2008/4~, after DBP, 150 servers. Diagram: the same services now talk to multiple MySQL databases instead of a single PostgreSQL instance.
Steps for Transitioning • Server Preparation: hardware and software setup • Global Write: write user information to the global DB • Global Read: read/write user information on the global DB • Move Sequence: table sequences served by the global DB • User Data Move: move user data to user partitions • New User Partition: all new users saved directly to user partition 1 • New User Strategy: decide on a strategy for the new user partition • Non-User Data Move: move all non-user-owned data
TypePad Overview (pre-DBP)
Diagram: blog readers and mobile blog readers reach the web servers over http(80); blog owners use https(443); mail flows over smtp(25)/pop(110).
• WebServer / ApplicationServer: serve static content (HTML, images, etc.) from storage via nfs(2049)
• TypeCastServer: dedicated server for TypeCast (via the Atom API, http(80))
• MailServer: smtp(25) / pop(110)
• ADMIN (cron) server: runs periodic asynchronous tasks
• memcached (11211): data caching servers to reduce DB load
• Database: PostgreSQL (5432)
Why Partition?
• Current setup: all queries go to a single DB (PostgreSQL).
• After DBP: queries are divided among several DBs (MySQL), split into a global role, a non-user role, and user roles (User0–User3).
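The routing idea behind DBP can be sketched as a small lookup layer. All names below (the map, the DSN strings, `dsn_for`) are hypothetical illustrations, not the actual TypePad code: per-user queries are routed via the global role's user map, and everything else goes to the non-user role.

```python
# Hypothetical sketch of partitioned query routing after DBP.
# The user map normally lives in the global-role DB; here it is a dict.

USER_MAP = {101: "user1", 102: "user2"}   # user_id -> partition name

ROLE_DSN = {
    "nonuser": "dbi:mysql:nonuser",       # session info and other shared data
    "user0":   "dbi:mysql:user0",
    "user1":   "dbi:mysql:user1",
    "user2":   "dbi:mysql:user2",
}

def dsn_for(user_id=None):
    """Return the data source that should serve this query."""
    if user_id is None:
        return ROLE_DSN["nonuser"]          # non-user-owned data
    role = USER_MAP.get(user_id, "user0")   # unmigrated users stay on user0
    return ROLE_DSN[role]
```

The key property is that no query fans out across partitions: each request is answered by exactly one role.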
Server Preparation
• Current setup: TypePad with a single PostgreSQL DB.
• New expanded setup adds:
• Global role: maintains the user mapping and primary key generation
• Non-user role: information that does not need to be partitioned (such as session information)
• User roles (User0–User3): partitioned user information, on MySQL
• Schwartz DB: stores job details
• Job server (TypePad + TheSchwartz): executes asynchronous jobs
Note: grey areas in the diagram are not used in the current step.
Global Write — creating the user map
Explanation:
(1) For new registrations only, uniquely identifying user data is written to the global DB.
(2) The same data continues to be written to the existing PostgreSQL DB.
Note: grey areas in the diagram are not used in the current step.
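The dual-write step above can be sketched as follows; the class and method names are hypothetical, and lists stand in for the two databases. The point is that registration touches both stores, while the legacy DB remains the source of truth.

```python
# Hypothetical sketch of the "Global Write" step: on registration, write the
# user-map entry to the global DB and keep writing full data to the old DB.

class DualWriter:
    def __init__(self, global_db, legacy_db):
        self.global_db = global_db   # global role (user map)
        self.legacy_db = legacy_db   # existing PostgreSQL DB

    def register_user(self, user_id, name):
        # (1) map entry: every user initially points at partition user0
        self.global_db.append(("user_map", user_id, "user0"))
        # (2) the same registration continues into the existing DB
        self.legacy_db.append(("user", user_id, name))

g, l = [], []
DualWriter(g, l).register_user(7, "alice")
```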
Global Read — use the user map to find the user partition
Explanation:
(1) Migrate existing user data to the global DB.
(2) At the start of each request, the application queries the global DB for the location of the user's data.
(3) The application then talks to that DB for all queries about this user. At this stage the global DB points to the User0 partition in all cases.
Note: grey areas in the diagram are not used in the current step.
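The per-request flow can be sketched like this (hypothetical names throughout): the partition is looked up once at request start and pinned for the request's lifetime, so the global DB is hit once rather than per query.

```python
# Hypothetical sketch of the "Global Read" step.

class GlobalMap:
    """Stands in for the user-map table on the global-role DB."""
    def __init__(self):
        self.map = {}                       # user_id -> partition name
    def locate(self, user_id):
        return self.map.get(user_id, "user0")   # everyone starts on user0

class Request:
    def __init__(self, global_map, user_id):
        # one global-DB lookup at the start of the request, then pinned
        self.partition = global_map.locate(user_id)
    def query(self, sql):
        # all queries about this user go to the pinned partition
        return (self.partition, sql)
```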
Move Sequence — migrating primary key generation
Explanation:
(1) PostgreSQL sequences (for generating unique primary keys) are migrated to tables on the global DB that act as “pseudo-sequences”.
(2) The application requests new primary keys from the global DB rather than from the user partition.
Note: grey areas in the diagram are not used in the current step.
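A "pseudo-sequence" is commonly implemented in MySQL as a counter table bumped atomically, e.g. `UPDATE sequences SET id = LAST_INSERT_ID(id + 1) WHERE name = ?` followed by `SELECT LAST_INSERT_ID()`; the slides do not show the exact mechanism, so this is an assumption. The in-memory sketch below only illustrates the contract: unique, monotonic IDs issued from one central place.

```python
# Hypothetical sketch of a pseudo-sequence service on the global DB.
import threading

class PseudoSequence:
    def __init__(self):
        self._lock = threading.Lock()
        self._counters = {}              # sequence name -> last issued id

    def nextval(self, name):
        # In MySQL the atomicity would come from the UPDATE statement above;
        # here a lock plays that role.
        with self._lock:
            self._counters[name] = self._counters.get(name, 0) + 1
            return self._counters[name]

seq = PseudoSequence()
```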
User Data Move — moving user data to the new user-role partitions
Explanation:
(1) Existing users to be migrated are submitted as new TheSchwartz jobs; the job server then migrates user data asynchronously.
(2) If a comment arrives while a user is being migrated, it is saved in the Schwartz DB to be published later.
(3) After migration, all user data lives on the user-role DB partitions.
(4) Once all user data is migrated, only non-user data remains on PostgreSQL.
Note: grey areas in the diagram are not used in the current step.
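The job-driven move can be sketched with a toy queue standing in for TheSchwartz (the real API differs; all names here are hypothetical): each user becomes a job, and the worker drains jobs until the legacy store is empty.

```python
# Hypothetical sketch of the "User Data Move" step with a toy job queue.
from collections import deque

class JobQueue:
    """Minimal stand-in for TheSchwartz: FIFO jobs run by a worker."""
    def __init__(self):
        self.jobs = deque()
    def submit(self, kind, payload):
        self.jobs.append((kind, payload))
    def run_all(self, handlers):
        while self.jobs:
            kind, payload = self.jobs.popleft()
            handlers[kind](payload)

def migrate_user(state, user_id):
    # copy this user's rows from the legacy DB to a user-role partition
    state["partitions"][user_id] = state["legacy"].pop(user_id)

q = JobQueue()
state = {"legacy": {1: "blog-rows"}, "partitions": {}}
q.submit("migrate_user", 1)
q.run_all({"migrate_user": lambda uid: migrate_user(state, uid)})
```

Comments arriving mid-migration would simply be submitted as further jobs and replayed after the copy, which is how the Schwartz DB is used in step (2) above.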
New User Partition — new registrations are created on one user-role partition
Explanation:
(1) When new users register, their data is written to a user-role partition.
(2) Non-user data continues to be served from PostgreSQL.
Note: grey areas in the diagram are not used in the current step.
New User Strategy — pick a scheme for distributing new users
Explanation:
(1) When new users register, their data is written to one of the user-role partitions, chosen by a set distribution method (round robin, random, etc.).
(2) Non-user data continues to be served from PostgreSQL.
Note: grey areas in the diagram are not used in the current step.
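A distribution policy such as the round robin mentioned above can be sketched in a few lines (hypothetical class; a random or load-weighted chooser would expose the same `place` interface):

```python
# Hypothetical sketch of a new-user placement policy (round robin).
import itertools

class RoundRobin:
    def __init__(self, partitions):
        self._cycle = itertools.cycle(partitions)
    def place(self, user_id):
        # the chosen partition is recorded in the global user map,
        # so the policy only matters at registration time
        return next(self._cycle)

rr = RoundRobin(["user1", "user2", "user3"])
```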
Non-User Data Move — migrate data that cannot be partitioned by user
Explanation:
(1) Migrate the non-user-role data remaining on PostgreSQL (such as session information) to the MySQL side.
Note: grey areas in the diagram are not used in the current step.
Data migration done
Explanation:
(1) All data access is now done through MySQL.
(2) TheSchwartz continues to be used for asynchronous jobs.
The new TypePad configuration
Diagram: as the pre-DBP overview, with these changes:
• Database: MySQL (3306) instead of PostgreSQL
• JobServer: TheSchwartz server for running ad-hoc jobs asynchronously
The web, application, TypeCast (Atom), mail, admin (cron), memcached, and storage (nfs) tiers are unchanged.
History of scaling up the PostgreSQL server, before DBP • DB node spec history
History of scaling up the PostgreSQL server, before DBP • DB disk array spec • [FUJITSU ETERNUS8000] • best I/O transaction performance in the world • 146 GB (15k rpm) × 32 disks in RAID-10 • multipath FibreChannel, 4 Gbps • QuickOPC (One Point Copy) • OPC copy functions let you create a duplicate copy of any data from the original at any chosen time. http://www.computers.us.fujitsu.com/www/products_storage.shtml?products/storage/fujitsu/e8000/e8000
Scale out MySQL servers, after DBP • Role configuration • each role is configured as an HA cluster • HA software: NEC ClusterPro • shared storage
Scale out MySQL servers, after DBP. Diagram: MySQL roles 1–3 as HA clusters with heartbeat, connected to a disk array over a FibreChannel SAN, alongside the remaining PostgreSQL instance; the TypePad application talks to each role.
Scale out MySQL servers, after DBP • Backup • replication with hot backup
Scale out MySQL servers, after DBP. Diagram: each MySQL role's mysqld replicates (rep) to a mysqld on a dedicated MySQL backup role; backups are taken from the backup role via OPC snapshots on the FibreChannel SAN disk array.
Troubles with PostgreSQL 7.4–8.1 • Data size • over 100 GB • 40% is index • severe data fragmentation • VACUUM • “VACUUM ANALYZE” causes performance problems • takes too long to VACUUM large amounts of data • dump/restore is the only solution for defragmentation • Autovacuum • we don't use autovacuum since we are worried about its impact on response time
Troubles with PostgreSQL 7.4–8.1 • Character set • PostgreSQL accepted out-of-range UTF-8 Japanese extended characters and multi-byte sequences that should have been rejected with an error.
“Cleaning” the data • removing character sequences that fall outside the valid UTF-8 boundaries • Steps • dump all of PostgreSQL • split the dump for piconv • convert UTF8 → UCS2 → UTF8 and merge • restore into PostgreSQL
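The effect of the UTF8 → UCS2 → UTF8 round trip can be sketched in Python (the slides used piconv on split dump files; `clean_utf8` is a hypothetical stand-in): invalid byte sequences are dropped on decode, and the UCS-2 leg drops any code point outside the Basic Multilingual Plane, since UCS-2 cannot represent it.

```python
# Hypothetical sketch of the dump-cleaning pass.

def clean_utf8(raw: bytes) -> bytes:
    # drop bytes that were never valid UTF-8
    text = raw.decode("utf-8", errors="ignore")
    # emulate the UCS-2 leg: only BMP code points (<= U+FFFF) survive
    bmp_only = "".join(ch for ch in text if ord(ch) <= 0xFFFF)
    return bmp_only.encode("utf-8")
```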
Migration from PostgreSQL to MySQL using the TypePad script • Steps • PostgreSQL -> Perl object & tmp publish -> MySQL -> Perl object & last publish • diff the tmp and last objects (data check) • diff the tmp and last published files (file check)
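The two-level verification above can be sketched as follows (hypothetical names; the real check diffed Perl objects and published documents): each entry is loaded from both databases, the objects are compared first, and only then are the published files compared.

```python
# Hypothetical sketch of the post-migration verification pass.

def verify(load_pg, load_my, publish):
    """Compare entries loaded from PostgreSQL and MySQL.

    load_pg / load_my map entry_id -> loaded object; publish renders an
    object to its published document.
    """
    mismatches = []
    for entry_id in load_pg:
        obj_pg, obj_my = load_pg[entry_id], load_my.get(entry_id)
        if obj_pg != obj_my:                       # data check
            mismatches.append((entry_id, "object"))
        elif publish(obj_pg) != publish(obj_my):   # file check
            mismatches.append((entry_id, "file"))
    return mismatches
```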
Troubles with MySQL • CONVERT_TZ function • does not support input values outside the range of Unix time • sort order • without an “ORDER BY” clause, row order differs from PostgreSQL
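Per the MySQL documentation, CONVERT_TZ only converts datetimes inside the Unix-time range (1970-01-01 00:00:00 through 2038-01-19 03:14:07 UTC) and returns the input unconverted outside it, so values need checking before the SQL layer is trusted with them. A guard like the following (hypothetical helper) is one way to handle it:

```python
# Hypothetical guard for the CONVERT_TZ limitation.
from datetime import datetime, timezone

UNIX_MIN = datetime(1970, 1, 1, 0, 0, 0, tzinfo=timezone.utc)
UNIX_MAX = datetime(2038, 1, 19, 3, 14, 7, tzinfo=timezone.utc)

def convert_tz_safe(dt: datetime) -> bool:
    """True if MySQL's CONVERT_TZ can be trusted with this value;
    otherwise convert in the application instead."""
    return UNIX_MIN <= dt <= UNIX_MAX
```

The sort-order issue needs no guard, only discipline: any query whose row order matters must carry an explicit ORDER BY, since MySQL and PostgreSQL return unordered results in different physical orders.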
Cocolog Future Plans • Dynamic • Job queue
Consulting by • Sumisho Computer Systems Corp. • system integrator • first and best partner of MySQL in Japan since 2003 • provides MySQL consulting, support, and training services • HA • maintenance • online backup • Japanese character support