SQL924: Migrating Your SQL Remote Environment to MobiLink Reg Domaratzki Sustaining Engineering rdomarat@ianywhere.com August 15-19, 2004
Outline • Assumptions • MobiLink Overview • Why Migrate? • Changes on the Remote • Changes at the Consolidated Database
My Foot • I turned my ankle, and the muscle that connects the fifth metatarsal (bone 12) to the cuboid (bone 7) was stronger than the bone • The fifth metatarsal broke just above the connection to the cuboid • Yes, it hurts
Assumptions • Working knowledge of SQL Remote, but not necessarily MobiLink • SQL Remote concepts I'll assume you understand • Consolidated versus Remote Database • Publications, remote users and subscriptions • Subscribe by Columns • Default Conflict Resolution • This talk deals primarily with migrating SQL Remote for ASE to using MobiLink with an ASE Consolidated Database • dbremote, ssremote and SQL Remote
The Enterprise. Unwired. • Industry and Cross Platform Solutions • Manage Information : Adaptive Server Enterprise, Adaptive Server Anywhere, Sybase IQ, Dynamic Archive, Dynamic ODS, Replication Server, OpenSwitch, Mirror Activator, PowerDesigner, Connectivity Options • Unwire Information : EAServer, Industry Warehouse Studio, Unwired Accelerator, Unwired Orchestrator, Unwired Toolkit, Enterprise Portal, Real Time Data Services • Unwire People : SQL Anywhere Studio, M-Business Anywhere, Pylon Family (Mobile Email), Mobile Sales, XcelleNet Frontline Solutions, PocketBuilder, PowerBuilder Family, AvantGo • Sybase Workspace
MobiLink Overview • Scripts define actions performed at the consolidated database at each stage or event during synchronization • begin_connection • for each synchronization: • begin_synchronization • receive and apply upload stream • prepare and send download stream • end_synchronization • end_connection
MobiLink Overview • Scripts are either connection-level scripts or table-level scripts • Some events occur once per synchronization, and others occur once for each table that is being synchronized • The scripts determine how data in the upload stream is processed and what data is gathered to be placed in the download stream
MobiLink Overview • There is no message system; the remote makes a direct connection to MobiLink, which is connected to the consolidated database using ODBC • The connection can use a variety of different communication streams including TCP/IP, HTTP, HTTPS, Serial, HotSync or ActiveSync • The remote sends changes up to the consolidated and receives changes from the consolidated in the same synchronization session • You define synchronization scripts that determine how MobiLink handles the data that is uploaded by the remote database, and what data needs to be downloaded to the remote database • MobiLink does NOT scan the log file on the consolidated database • The ASA MobiLink client at the remote database (dbmlsync.exe) does scan the remote log file to determine what data to send to the consolidated database • The download script you write for a given table returns a result set, and it is this result set that is sent down to the remote
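For illustration, a connection-level script is registered with the ml_add_connection_script procedure, the connection-level counterpart of the ml_add_table_script calls shown later in this talk; the sync_audit table below is purely hypothetical, and the single ? parameter is assumed to be the MobiLink user name passed to the begin_synchronization event:
exec ml_add_connection_script 'v1', 'begin_synchronization',
  'insert into sync_audit ( ml_user, sync_time ) values ( ?, getdate() )'
go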
Outline • Assumptions • MobiLink Overview • Why Migrate? • Changes on the Remote • Changes at the Consolidated Database
Why Migrate? • There is NO need to migrate a system that is in production and meets all your business needs • If it isn't broken, there's no need to fix it • There are many things that MobiLink can offer that SQL Remote does not • Any ODBC-compliant consolidated database • The currently supported consolidated databases are ASA, ASE, MS SQL Server, Oracle and DB2 • Can synchronize to an UltraLite application • More flexibility during upload and download of data • Less data latency in a MobiLink environment • Ability to easily encrypt the communication between remote and consolidated databases using 128-bit encryption • Server Initiated Synchronization
Why Migrate? • Sybase and iAnywhere Solutions provide three different technologies that allow you to move data from an ASE database to an ASA database • Replication Server : Connection Based • MobiLink : Session Based • SQL Remote for ASE : Message Based • For high-speed two-way replication between always-connected databases with low latency demands, Replication Server is the obvious choice • In the disconnected environment, MobiLink and SQL Remote for ASE share a very similar solution space
Why Migrate? • With the upcoming Jasper release of SQL Anywhere Studio, iAnywhere Solutions has decided to announce the end of life for SQL Remote for ASE • The migration path for current SQL Remote for ASE users is to use MobiLink instead of SQL Remote for ASE • For the few people using SQL Remote for ASE to replicate between two ASE databases, the migration path is to use the new SQL Replicator that shipped with ASE v12.5.1 • Please note that the end of life notification for SQL Remote for ASE in no way affects the status of SQL Remote for ASA • SQL Remote replication between ASA databases continues to be a strong and viable product in the SQL Anywhere Studio package
Migration Goals • Little or no impact on end users • Remote users should not know that anything has changed • No training costs associated with migration for end users • Gradual migration • Avoid a situation where current users cannot access new data until they upgrade • It is unreasonable to think that everyone will upgrade at the same point • No loss of current functionality • A migration path that limits functionality is not acceptable • The new system should be able to do everything the old system could, and hopefully more
Outline • Assumptions • MobiLink Overview • Why Migrate? • Changes on the Remote • Migration Strategies for Remote Users • Differences on the Remote • Migration Steps on the Remote • Changes to the Remote Application • Changes at the Consolidated Database
Migration Strategies for Remote Users • There should be no need to deploy a new database to the remote user when the switch is made from SQL Remote to MobiLink • It is possible that new ASA software may need to be deployed if the dbmlsync.exe executable does not already exist on the remote machine • As soon as you start talking about sending new software to the remote users, you may also want to consider whether you want to take this opportunity to upgrade to a more recent version of ASA • The only new ASA files that need to be deployed are dbmlsync.exe and the associated stream DLL (e.g. dbmlsock9.dll or dbmlhttp9.dll)
Differences on the Remote • Both the SQL Remote and ASA MobiLink clients make use of a publication to determine which data needs to be sent to the consolidated database • The same publication that you defined for SQL Remote at the remote database can also be used by the ASA MobiLink client • A SQL Remote client defines a consolidated user on the remote database and subscribes the consolidated user to a publication • An ASA MobiLink client defines a Synchronization User, and you define a Synchronization Subscription to link the Synchronization User to a Publication
Differences on the Remote • Message system parameters are defined on the remote database so that SQL Remote knows where and how messages are written to the shared message system • An ASA MobiLink client connects directly to the MobiLink server, so instead of specifying the location of the message system, you specify the communication stream type you are using and the location of the MobiLink server • This can be done either when defining the Synchronization User or when defining the Synchronization Subscription
Differences on the Remote • SQL Remote :
CREATE PUBLICATION p1 ( TABLE t1 );
GRANT PUBLISH TO u1;
CREATE REMOTE MESSAGE TYPE 'file' ADDRESS 'u1';
GRANT CONSOLIDATE TO cons TYPE file ADDRESS 'cons';
CREATE SUBSCRIPTION TO p1 FOR cons;
SET REMOTE file OPTION "public"."root_directory" = 'r:\\msgs';
Differences on the Remote • ASA MobiLink Client :
CREATE PUBLICATION p1 ( TABLE t1 );
CREATE SYNCHRONIZATION USER u1;
CREATE SYNCHRONIZATION SUBSCRIPTION TO p1 FOR u1
  TYPE 'TCPIP'
  ADDRESS 'host=10.2.3.1;port=600'
  OPTION 'sv=v1';
Migration Steps on the Remote • Run dbremote on the remote to send any outstanding changes to the consolidated database • This will also pick up any changes already sent from the consolidated and apply them to the remote database • Drop the consolidated user • No more SQL Remote messages can now be received or applied by this remote • Create a new Synchronization User and Subscription • All new changes made on the remote database will now be synchronized using MobiLink instead of SQL Remote • Use the CURRENT PUBLISHER of the remote database for the name of the Synchronization User • Run dbmlsync and perform your first synchronization
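As a sketch, after a final dbremote run on the remote (for example dbremote -c "dsn=remote") to flush outstanding messages, the switch-over SQL on the remote database could look like the following, using the illustrative user, publication and address values from the earlier slides; the first MobiLink synchronization would then be something like dbmlsync -c "dsn=remote" -n p1 :
REVOKE CONSOLIDATE FROM cons;      -- this remote can no longer exchange SQL Remote messages
CREATE SYNCHRONIZATION USER u1;    -- same name as the CURRENT PUBLISHER of the remote
CREATE SYNCHRONIZATION SUBSCRIPTION TO p1 FOR u1
  TYPE 'TCPIP'
  ADDRESS 'host=10.2.3.1;port=600'
  OPTION 'sv=v1';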
Possible Problems • If the last messages sent by SQL Remote are lost, the guaranteed message delivery system can no longer recover them, since the consolidated user was dropped • To get around this, you could choose to drop the consolidated user only if the log_sent value in the SYS.SYSREMOTEUSER table for the consolidated user is equal to the confirmed_received value • You could run dbremote in receive-only mode after sending messages, and before dropping your consolidated user, until log_sent equals confirmed_received • Only a viable option if the send frequency on the consolidated is short • This would guarantee that all messages sent had been received and applied on the consolidated database
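A sketch of that check, run against the remote database and assuming the consolidated user is named cons; only drop the consolidated user once the two values are equal:
SELECT r.log_sent, r.confirmed_received
  FROM SYS.SYSREMOTEUSER r
  JOIN SYS.SYSUSERPERM u ON u.user_id = r.user_id
 WHERE u.user_name = 'cons';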
Possible Problems • It's possible that between the final dbremote run that receives your last messages and the first dbmlsync run, SQL Remote on the consolidated could generate a message for the remote, which would never be applied • This entire process will likely be automated in a batch file, so the window is small, but the chance is not zero • The chance of this occurring is higher if the SQL Remote send interval on the consolidated database is very short • Those paranoid about this could lengthen the send interval on the consolidated to one hour, which would almost completely eliminate the chance of the problem occurring
Possible Problems • Those who were paying attention may have noticed that I presented two possible problems, and that one is mitigated by a short send interval on the consolidated while the other is mitigated by a long one • Obviously, you cannot have both • The best way to guarantee that NO data is lost is to extract a new remote database for the user • This talk attempts to give you an alternative that has a small chance of losing data, but very little impact on end users and very low administration costs
Changes to the Remote Application • Depending on how you were running dbremote on the remote clients, you may need to change the application that ran dbremote so that it runs dbmlsync instead • If dbremote was run as a service, a new service would have to be defined that runs dbmlsync instead of dbremote • If users clicked on a batch file, the batch file would need to be changed • If you were using the DBTOOLS API to call dbremote, you would need to change your code to call DBSynchronizeLog() and define a new structure to pass into DBSynchronizeLog() • If your application spawned an external process (i.e. dbremote), it would now spawn dbmlsync
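For example, a batch file that used to invoke dbremote might change roughly as follows; the connection string and log file name are illustrative only:
rem old: exchange SQL Remote messages
rem dbremote -c "dsn=remote;uid=dba;pwd=sql"
rem new: synchronize directly with the MobiLink server
dbmlsync -c "dsn=remote;uid=dba;pwd=sql" -n p1 -o dbmlsync.log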
Changes to the Remote Application • If you are making changes to your application, you should note that the DBMLSync Integration Component can be used to launch the synchronization process • The DBMLSync Integration Component is an ActiveX plug-in that can be imported into Visual Basic, PowerBuilder, Delphi, or any other application development tool that can utilize an ActiveX component • Note that there are third-party tools that allow you to import an ActiveX component into the .NET Compact Framework programming environment • There is both a visual and non-visual version of the DBMLSync Integration Component • The visual component will bring up a dialog box very similar to the dbmlsync executable and allows you to set the inputs to the component • The non-visual component requires you to do more programming work, but gives you access to much more customization, including access to every row being uploaded and downloaded during synchronization
Outline • Assumptions • MobiLink Overview • Why Migrate? • Changes on the Remote • Changes at the Consolidated Database • Setting up the Consolidated Database • Generating Upload Scripts • Generating Download Scripts • Downloading Deletes • Territory Re-alignment • Conflict Resolution • First Synchronizations
Setting Up the Consolidated for MobiLink • In the %ASANY%\MobiLink\setup directory, a file named syncase125.sql exists that must be run against the consolidated database • You should modify the script and add a "use db_name" command at the beginning to ensure the right database is modified • You should run this script connected as the same user that MobiLink will connect with • This user will need select, insert, update, and delete permissions on all tables involved in synchronization
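For example, the setup script could be applied with isql, connected as the user that MobiLink will use; the server name and credentials below are placeholders:
isql -S MY_ASE_SERVER -U ml_user -P ml_pwd -i %ASANY%\MobiLink\setup\syncase125.sql -o syncase125.out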
Setting Up the Consolidated for MobiLink • You should create a DSN that will be used by MobiLink to connect to the consolidated database • Not all ODBC drivers are created equal • Be sure to read over the Recommended ODBC Drivers for MobiLink web page at the iAnywhere Solutions web site before creating your DSN • http://www.ianywhere.com/developer/technotes/odbc_mobilink.html • The SQL Anywhere Studio install process will install iAnywhere-branded DataDirect ODBC drivers for most supported consolidated databases • These drivers are tested prior to release, and we would almost always recommend using them over any other ODBC driver
Generating Compatible Upload Scripts • With SQL Remote, the messages that are generated include actual SQL statements that are applied to the consolidated database • The upload stream that is generated by dbmlsync does not include SQL statements, but instead includes the data in rows that were modified, and the type of modification that was made (insert, update or delete) • You must define upload scripts that are used to determine what actions are taken for the data being passed up in the upload stream
Example Schema • Assume the following table structure exists at both the remote and the consolidated CREATE TABLE t1 ( PKEY INTEGER PRIMARY KEY, C1 INTEGER, C2 VARCHAR(100), C3 NUMERIC(10,2) ) go
Example Upload Scripts • The data that is passed up in the upload stream is passed to the upload scripts as parameters, which are represented with ? in the scripts • upload_insert script : insert into t1 (pkey,c1,c2,c3) values ( ?, ?, ?, ? ); • upload_update script : update t1 set c1 = ?, c2 = ?, c3 = ? where pkey = ?; • upload_delete script : delete from t1 where pkey = ?; • There are ways to automatically generate default scripts, which are usually acceptable for upload scripts
Defining Your Synchronization Scripts • Synchronization scripts can either be defined using Sybase Central, or by executing stored procedures created by the syncase125.sql setup script :
exec ml_add_table_script 'v1', 't1', 'upload_insert',
  'insert into t1 (pkey,c1,c2,c3) values ( ?, ?, ?, ? )'
go
exec ml_add_table_script 'v1', 't1', 'upload_update',
  'update t1 set c1 = ?, c2 = ?, c3 = ? where pkey = ?'
go
exec ml_add_table_script 'v1', 't1', 'upload_delete',
  'delete from t1 where pkey = ?'
go
Generating Compatible Download Scripts • Your download scripts determine what data to send down to the remote users • The simplest download script is simply a select statement that sends all rows down to every remote user • select pkey,c1,c2,c3 from t1
exec ml_add_table_script 'v1', 't1', 'download_cursor',
  'select pkey,c1,c2,c3 from t1'
go
• The problem with the above download_cursor is that every row is sent to every remote user on each synchronization, not just rows that were modified since the last download
Filtering Out Previously Downloaded Rows • There are two other pieces of information that are included in the upload stream in each synchronization request from the remote • The last time the user successfully downloaded data • The name of the synchronization user • You can use the last successful download time to ensure that only modified rows are sent down to the remote for each synchronization
Filtering Out Previously Downloaded Rows • You need to add a last_modified column of type datetime to each synchronizing table so that rows aren’t downloaded each time • Ensure that the column has a DEFAULT value, which is defined as the current time ALTER TABLE t1 ADD last_modified datetime default getdate() go • You should set the initial value for this column as ‘1900-01-01 00:01:00’ for all rows
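Backdating the existing rows is then a single statement, for example:
UPDATE t1 SET last_modified = '1900-01-01 00:01:00'
go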
Filtering Out Previously Downloaded Rows • You'll also need to add an update trigger to the table to ensure that the last_modified column is updated each time the row is modified
CREATE TRIGGER au_t1 ON t1 FOR UPDATE AS
BEGIN
  UPDATE t1 SET last_modified = getdate()
   WHERE pkey IN ( SELECT pkey FROM inserted )
END
go
Filtering Out Previously Downloaded Rows • You can now re-write your download_cursor to only download rows modified since the last successful synchronization
exec ml_add_table_script 'v1', 't1', 'download_cursor',
  'select pkey,c1,c2,c3 from t1 where last_modified >= ?'
go
Filtering Out Previously Downloaded Rows • It's important to note that the last_modified column ONLY exists at the consolidated, and is NOT synchronized down to the remote • The schema has changed on the consolidated, but not on the remote • Also note that you may now need to modify the publication definition on the consolidated database for current SQL Remote users • Since all columns are no longer being replicated, it's now necessary to list explicitly the columns that are replicated to SQL Remote users
Filtering Out Previously Downloaded Rows • In some situations, you cannot modify the base table because of the way existing applications access the database • In this scenario, instead of modifying the base table, you can add a shadow table that stores the last modified time
CREATE TABLE t1_st (
  pkey integer primary key,
  last_modified datetime default getdate()
)
go
• This table should be populated with a row for each existing row in t1, with last_modified set to '1900-01-01 00:01:00'
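Seeding the shadow table from the existing rows of t1 might look like this:
INSERT INTO t1_st ( pkey, last_modified )
SELECT pkey, '1900-01-01 00:01:00' FROM t1
go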
Filtering Out Previously Downloaded Rows • Three triggers would need to be defined to track changes on the base table if you are using shadow tables (each CREATE TRIGGER must be in its own batch)
create trigger ai_t1 on t1 for insert as
begin
  insert into t1_st select pkey, getdate() from inserted
end
go
create trigger au_t1 on t1 for update as
begin
  update t1_st set last_modified = getdate()
   where pkey in ( select pkey from inserted )
end
go
create trigger ad_t1 on t1 for delete as
begin
  delete from t1_st where pkey in ( select pkey from deleted )
end
go
Filtering Out Previously Downloaded Rows • When using shadow tables, the download_cursor now has to join the two tables to filter out rows that have been previously downloaded
exec ml_add_table_script 'v1', 't1', 'download_cursor',
  'select t1.pkey,c1,c2,c3 from t1,t1_st where t1.pkey = t1_st.pkey and t1_st.last_modified >= ?'
go
Partitioning Rows by Remote User • With SQL Remote for ASE, you could partition rows by remote user by specifying a subscribe by column in the publication definition on the consolidated database
CREATE TABLE t2 (
  pkey integer primary key,
  data varchar(100),
  rem_user varchar(128) NOT NULL,
  last_modified datetime default getdate()
)
go
exec sp_create_publication p1
exec sp_add_remote_table t2
exec sp_add_article p1, t2, NULL, rem_user
exec sp_add_article_col p1, t2, pkey
exec sp_add_article_col p1, t2, data
exec sp_add_article_col p1, t2, rem_user
go
Partitioning Rows by Remote User • With MobiLink, the name of the MobiLink user is passed up in the upload stream, and this value is passed into most synchronization scripts, including the download_cursor
exec ml_add_table_script 'v1', 't2', 'download_cursor',
  'select pkey,data,rem_user from t2 where last_modified >= ? and rem_user = ?'
go
• With the above download_cursor, only rows meant for a given remote user that have been modified since the last download are synchronized to the remote database, just like SQL Remote • Note that if you only want to use the ML User parameter, there must still be TWO question marks in your script, because the ML User is the second parameter
exec ml_add_table_script 'v1', 't2', 'download_cursor',
  'select pkey,data,rem_user from t2 where ? is not null and rem_user = ?'
go
Downloading Deletes • Since MobiLink does not scan the transaction log of the consolidated database, and since downloaded data is based on a SQL statement, another mechanism was needed if you wanted to be able to send deletes to the remote databases in the download stream • Another download cursor called the download_delete_cursor is used to track which rows should be deleted from the remote database • A table we refer to as a “Shadow Table” must be maintained to track which rows should be deleted on the remote database
Downloading Deletes • Using the same table t2 from the previous slides, we’d create a table called t2_del CREATE TABLE t2_del ( pkey integer primary key, del_time datetime default getdate() ) go • Note that if you used a shadow table for tracking the last_modified value, you could simply add the del_time column to this table instead of having two separate tables
Downloading Deletes • We would also define a delete trigger on t2 that inserts a row into t2_del each time a row is deleted, recording the time the row was deleted
create trigger ad_t2 on t2 for delete as
begin
  insert into t2_del select pkey, getdate() from deleted
end
go
Downloading Deletes • This now allows us to define our download_delete_cursor for table t2
exec ml_add_table_script 'v1', 't2', 'download_delete_cursor',
  'select pkey from t2_del where del_time >= ?'
go
• Note that allowing deletes on shared rows at the remote databases introduces the same possible RI problems that exist in SQL Remote
Territory Re-alignment • Territory re-alignment for SQL Remote for ASE is done by maintaining another subscribe by column on the child table that is maintained using triggers on the parent table • Since all the gear is already in place to maintain the subscribe by column on the child table, the territory re-alignment problem is already solved for MobiLink as well • It’s simply a matter of adding a where clause on the download_cursor that includes the subscribe by column that is being maintained on the consolidated
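As a rough sketch, assume a hypothetical child table t2_child ( pkey, t2_pkey, data, rem_user, last_modified ) whose rows belong to whichever remote user owns the parent row in t2; an update trigger on the parent keeps the redundant rem_user column (and last_modified) current on the child, and the child's download_cursor then filters on rem_user exactly like the parent's:
create trigger au_t2_realign on t2 for update as
begin
  -- re-align child rows when the owning remote user changes on the parent
  update t2_child
     set rem_user = i.rem_user,
         last_modified = getdate()
    from t2_child, inserted i
   where t2_child.t2_pkey = i.pkey
end
go
exec ml_add_table_script 'v1', 't2_child', 'download_cursor',
  'select pkey, t2_pkey, data from t2_child where last_modified >= ? and rem_user = ?'
go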