An overview of the Unified Migration Analytical System (presented in Tbilisi, 2016), which centralizes immigration data from various state agencies to ensure efficient processing and analysis. The system aims to standardize data management for effective migration control.
Unified Migration Analytical System
Ministry of Justice, Public Service Development Agency
Secretariat of the State Commission on Migration Issues
Tbilisi, 2016
Coordinating & Implementing Bodies
General supervision: State Commission on Migration Issues (SCMI)
Implementing/responsible body: SCMI Working Group for Setting up a Unified Migration Analytical System
1st Phase of the Project
The 1st phase of the project deals with gathering and processing data on immigration (foreign nationals and stateless persons). This data is scattered across several state agencies. As a first step of the 1st phase, consolidation of the data of the following agencies is planned:
- Public Service Development Agency;
- Ministry of Internal Affairs;
- Ministry of Foreign Affairs;
- Ministry of IDPs, Refugees and Accommodation;
- National Agency of Public Registry;
- Ministry of Education and Science;
- Ministry of Finance, Revenue Service;
- Ministry of Labour, Health and Social Affairs;
- Ministry for Diaspora Issues;
- State Security Service.
Introduction
The Unified Migration Analytical System is a set of electronic tools ensuring that the data necessary for migration management is collected, processed and analyzed in a centralized way.
General Overview of UMAS
[Architecture diagram: Georgian public- and private-sector sources (government/public entities, migrant registration, border control) send data for the migration system through a gateway into the processing application and processing DB, where all data is collected and structured; a statistics application draws on the warehouse DB and produces the published data.]
Data Management
• At the 1st stage it is necessary to register and systematize the data stored at each state agency. Data quality should be analyzed, and possible grey areas should be revealed and corrected.
• Each state agency should prepare event-based data for sending, meaning that a separate dataset should be created for each event, e.g.:
- receiving an application for a residence card;
- completing the application;
- issuing the residence card (in case of a positive decision).
• It is advisable to collect data asynchronously, for instance once a day at a pre-defined time.
• Data will be updated incrementally, based on the date of the last update.
• To reduce the load on the data systems of participating state agencies, the transfer itself should run as fast as possible, without any validation at this stage.
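The event-based, incremental collection described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation; the `MigrationEvent` fields and event names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event record: one dataset entry per event, as the slide suggests.
@dataclass
class MigrationEvent:
    event_type: str        # e.g. "application_received", "residence_card_issued"
    person_ref: str        # agency-internal reference, not yet unified
    occurred_at: datetime

def incremental_pull(events, last_update):
    """Return only events newer than the last successful update, so each
    scheduled run (e.g. nightly) transfers an increment, not a full dump."""
    return [e for e in events if e.occurred_at > last_update]

events = [
    MigrationEvent("application_received", "PSDA-001", datetime(2016, 5, 1)),
    MigrationEvent("residence_card_issued", "PSDA-001", datetime(2016, 5, 20)),
]
new = incremental_pull(events, last_update=datetime(2016, 5, 10))
# Only the May 20 event is newer than the last update, so only it is sent.
```

Keeping the pull logic this simple (no validation on the agency side) matches the slide's point about minimizing load on the source systems.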
Data Standardization
Collected data will be sent to a so-called unstructured storage. Participating state agencies store and process data using different technical standards; therefore the data must be brought under a single unified standard. This involves a 3-stage processing of the data (ETL) before it is stored in the main, high-performance data storage (the analytical module). A powerful ETL tool is needed to manage this most important phase of data collection and processing effectively, without tying up expensive development resources.
Data Standardization Process
• Extracting necessary data
Extracting the necessary information from the dataset.
• Data transformation
A complex process that includes simple technical actions (e.g., schema mapping, type conversion) as well as complex business analysis.
• Loading structured data into the storage
At this stage the personal data can be anonymized (first and last names can be replaced). Technically, this can be done with a cryptographic hash or by assigning a conditional code.
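The three ETL stages, including hash-based pseudonymization at load time, could look roughly like this. The field names, the country-code table, and the 16-character person code are illustrative assumptions, not the deck's specification.

```python
import hashlib

def extract(raw_record):
    # Stage 1: keep only the fields the migration system needs.
    return {k: raw_record[k] for k in ("first_name", "last_name", "citizenship", "event")}

def transform(record):
    # Stage 2: schema mapping / type conversion, e.g. normalising
    # country names to ISO 3166 alpha-2 codes (illustrative subset).
    country_codes = {"Georgia": "GE", "Germany": "DE"}
    record["citizenship"] = country_codes.get(record["citizenship"], record["citizenship"])
    return record

def load(record):
    # Stage 3: pseudonymise personal identifiers before storage by replacing
    # name fields with a one-way hash (a "conditional code" in the slide's terms).
    full_name = f'{record.pop("first_name")}|{record.pop("last_name")}'
    record["person_code"] = hashlib.sha256(full_name.encode()).hexdigest()[:16]
    return record

raw = {"first_name": "Nino", "last_name": "K.", "citizenship": "Georgia",
       "event": "residence_card_issued", "internal_note": "not needed downstream"}
stored = load(transform(extract(raw)))
# stored contains no name fields, only the derived person_code.
```

A salted or keyed hash (HMAC) would be preferable in practice, since a plain hash of a name can be reversed by dictionary attack; the plain hash here only illustrates the replacement step.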
Data Standardization Process
• Proper and effective implementation of the process has a direct impact on the quality of the end product and the output of the system.
• Technical processing phase:
Technical processing of the data means bringing the information provided by different agencies under one standard (country codes, dates, gender, etc.).
• Logical processing phase:
Primary validation (based on the dataset alone) to reveal "rough" errors.
The next step is data cleaning, which can be divided into 5 methods, used separately or in combination:
- Data enrichment (e.g., deriving the state of destination from the border crossing point);
- Completing/editing incorrect data by transferring data from other systems;
- Substituting standard default values for incorrect data;
- Removing/filtering incorrect data;
- Identifying duplications.
Data Standardization Process
Personalization: matching/correlation of records is a critically important factor in ensuring the final quality of the product. A guideline has to be elaborated defining the rules for automatic matching/correlation and for creating a new person, with the relevant software developed accordingly. Information coming from the agencies is either added to the profile of an existing person or substantiates the creation of a new one. The rule for creating a new person has to be defined per data source (e.g., a new person should not be created from PSDA data, since that person should already be known from, for example, border-crossing data provided by the Ministry of Internal Affairs).
Data validation in the context of the complete profile of the person: rules have to be elaborated (in accordance with the norms established by law) that new information added to a person's profile must not contradict.
Upon completing the process, the original data should be stored as well; otherwise cleaning/enrichment will not be "memorable", i.e. the complete picture of how the data was created will be lost. This will, however, increase storage requirements, since the same data, structured differently, is stored twice.
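A per-source matching rule of the kind described above might be sketched like this. The match key (document number plus birth date), the profile layout, and the handling of an unmatched PSDA record are all hypothetical choices, not the guideline the deck calls for.

```python
def match_person(profiles, incoming):
    """Attach an incoming record to an existing profile when document number
    and birth date coincide; otherwise apply the per-source creation rule."""
    for p in profiles:
        if (p["doc_no"], p["birth_date"]) == (incoming["doc_no"], incoming["birth_date"]):
            p["events"].append(incoming["event"])
            return p, False          # matched an existing profile
    # Per-source rule: PSDA data should not create a new person, because the
    # person should already be known from e.g. border-crossing data.
    if incoming["source"] == "PSDA":
        raise ValueError("PSDA record without an existing profile needs manual review")
    new = {"doc_no": incoming["doc_no"], "birth_date": incoming["birth_date"],
           "events": [incoming["event"]]}
    profiles.append(new)
    return new, True                 # created a new profile

profiles = [{"doc_no": "A1", "birth_date": "1990-01-01", "events": ["border_entry"]}]
p, created = match_person(profiles, {"doc_no": "A1", "birth_date": "1990-01-01",
                                     "event": "application", "source": "PSDA"})
```

Routing the unmatched PSDA case to manual review, rather than silently creating a person, reflects the slide's point that creation rules depend on the data source.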