Category: Software Engineering
How Do You Migrate Systems?
Even well-prepared system migrations can go smoothly and exactly as planned, or just the opposite: they can go sideways. They can take minutes, or they can drag on for hours or even days. In the worst case, everybody involved is unhappy and the system stays frozen.
Read the article to find out:
- Examples of system migration projects
- Recurring migration problems
- Principles to follow when writing data migrators
Examples of system migration projects:
Let’s take a look at three IT projects where I led the system migration, and the problems I encountered:
- The HRD System is a domain registration system with a very large database, which was migrated all at once. During the migration the system had to be offline, so I had to migrate as quickly as possible. Even so, migrating such a big data set took the whole weekend. After the switch it turned out that the new system wasn’t stable, but there was no going back to the old version: the deadline had passed and decisions had been made. The resulting issues spilled into the next six months.
- In the case of the Kozaczek portal, instead of a one-shot migration, I prepared a synchronization. Thanks to this, the new portal could be launched in parallel with the existing old one, operating in “read-only” mode. Data from the old site was injected into the new database on a regular basis until the time came to switch to the new system. Everything was well tested, and the final switch took us a few minutes. You can read more about the project in our case study.
- The Zeberka project reused the synchronization implemented for the Kozaczek portal and improved on it. Instead of copying the data directly into the new database, I used the new system’s API to save it. This solved a lot of problems with firing system hooks and creating autogenerated content: the new portal could easily generate all its required content and caches during the data injection period. In this project I had already applied the best migration practices described below.
Recurring system migration problems:
During the migrations in the projects described above, I noticed some recurring problems:
- When implementing a migration, you often get access to only part of the data. Production data usually contains non-standard cases that a sample won’t reveal. Make sure you can test your migration on the whole data set before the actual migration, not only on a data sample.
- After migrating the data, final tests often reveal that something is missing or wrong. You then have to correct the migration and repeat it. It pays to be able to migrate only what is missing, without starting the whole process over again.
- Some problems with missing data won’t be noticed until after the production launch of the application. By then the database already contains new data that wasn’t in the old system, so you can’t simply run the migration again. Prepare for this situation up front by making it possible to find a record in the data source based on values stored in the new system.
- A long shutdown of the system during the upgrade annoys both the client and the development team, especially when new problems keep appearing while switching systems. It is best to run migrations before switching systems.
Principles to follow when writing data migrators
Fortunately, the experience from these projects helped me enumerate the significant, repetitive problems. In response, I’ve created a few principles that I always follow when writing data migrators. I’m happy to share them here:
PRINCIPLE 1: Always record the data’s source clearly and concisely in the database structure.
That is because you need a way to connect a record in the new system with its source in the old one. You never know when that will save your nerves. How much information to save depends on the type of data, but most often we save:
- The name of the resource the data originates from. This column doesn’t matter if all data comes from a single resource, but it really matters when we import data from multiple sources. It’s best to define an enum type with a unique value for each resource.
- The external identifier, that is, the id of the object in the resource it comes from. Most often it is an id or UUID column; for users it is often an email. It is important that this value is unique and allows us to find the object in the source data set.
- The URL of the object. It isn’t strictly necessary and is a redundant value, since you can always generate it, but it often turns out to be useful: opening such a link gives an immediate preview of the object in the old system. This is most useful when the resource objects are files.
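As a minimal sketch of this principle, the schema below (using SQLite and hypothetical table and column names) keeps the three provenance columns next to the business data, with a uniqueness constraint so each old-system object maps to exactly one new row:

```python
import sqlite3

# Hypothetical schema sketch: every migrated row records where it came from.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE articles (
        id          INTEGER PRIMARY KEY,
        title       TEXT NOT NULL,
        -- provenance columns for the migration:
        source      TEXT NOT NULL,    -- enum-like resource name: 'old_portal', ...
        external_id TEXT NOT NULL,    -- id/UUID/email of the object in the old system
        source_url  TEXT,             -- redundant preview link; can be regenerated
        UNIQUE (source, external_id)  -- exactly one row per old-system object
    )
""")
conn.execute(
    "INSERT INTO articles (title, source, external_id, source_url) VALUES (?, ?, ?, ?)",
    ("Hello", "old_portal", "12345", "https://old.example.com/articles/12345"),
)

# Later, a migrated record can be traced back by its origin:
row = conn.execute(
    "SELECT title FROM articles WHERE source = ? AND external_id = ?",
    ("old_portal", "12345"),
).fetchone()
print(row[0])  # -> Hello
```

The `UNIQUE (source, external_id)` constraint is what makes re-runs and lookups safe: the pair acts as a stable foreign key into the old system.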
PRINCIPLE 2: Prepare a synchronization, not a migration. Simply check whether the migrated object already exists in the new system; if it does, update the existing object instead of adding a new one.
Thanks to this solution, we can run the same migration several times without risking duplicated data. The migration becomes a synchronization of the new system with the old one, and the ability to restart it gives us many additional advantages:
- If there is an error in the migration and some of the data has been omitted, we can fix the migration and run it again without having to wipe the data we’ve already imported into the new system.
- If we add new hooks or triggers to the new system, we don’t have to worry about the data we’ve already imported; we only need to run the migration again.
- In the final stage we can run the migration repeatedly and synchronize the new system with the old data source online. This lets us test and view data in the new system while we are still using the old one.
- We can migrate data to the new system well before its production launch, meaning we are better prepared to switch systems.
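The core of this principle is an upsert keyed on the provenance columns. The sketch below uses a toy in-memory “database” and a hypothetical `sync_record` helper to show why re-running the same migration never duplicates data:

```python
def sync_record(db, source, external_id, fields):
    """Upsert: update the row if it was migrated before, insert it otherwise."""
    key = (source, external_id)        # provenance pair identifies the old object
    existing = db.get(key)
    if existing is not None:
        existing.update(fields)        # re-running the migration refreshes the row
    else:
        db[key] = dict(fields)         # first run inserts it

# Toy in-memory "database" keyed by (source, external_id):
db = {}
sync_record(db, "old_portal", "42", {"title": "Draft"})
sync_record(db, "old_portal", "42", {"title": "Final"})  # second run updates, no duplicate
print(len(db), db[("old_portal", "42")]["title"])  # -> 1 Final
```

Because every run converges on the same end state, the “migration” can be scheduled repeatedly while both systems are live.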
PRINCIPLE 3: While migrating data, use the new application’s API instead of injecting directly into the database.
Using the API ensures that all hooks and triggers in the new system are invoked. If we inject data directly into the database, we have to implement and call all these procedures manually. Saving data through the API may be slower than writing directly to the database, but that is far less of a problem than the data inconsistency that can arise when we forget to call an important procedure.
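To illustrate the difference, here is a small sketch with an invented `ArticleService` class standing in for the new application’s API layer. Saving through it fires the side effects (slug generation, cache warming) that a raw database insert would silently skip:

```python
# Sketch: push old-system rows through the new application's service layer
# so its hooks fire. ArticleService and its hooks are illustrative names,
# not a real library.

class ArticleService:
    def __init__(self):
        self.store = {}

    def save_article(self, source, external_id, title):
        key = (source, external_id)
        article = {"title": title,
                   "slug": title.lower().replace(" ", "-")}  # hook: slug generated
        self.store[key] = article
        self._rebuild_cache(article)                         # hook: cache warmed
        return article

    def _rebuild_cache(self, article):
        article["cached"] = True

service = ArticleService()
old_rows = [{"id": "7", "title": "Breaking News"}]  # rows read from the old system
for old_row in old_rows:
    service.save_article("old_portal", old_row["id"], old_row["title"])

print(service.store[("old_portal", "7")])
```

A direct `INSERT` would have copied only the title; the slug and cache would be missing until someone remembered to regenerate them by hand.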
The main determinant of a migration’s complexity is the amount of data in the system: the more data, the longer the migration takes, and the more special cases occur, the longer it takes to fix migration-related errors. When writing information systems, we often deal with clients who are already using the application they want to replace. When writing the new application from scratch, we design a new database structure for it, which is almost never consistent with the previous system’s. At the end of the project, the old system has to be replaced with the new one, and the client expects the data from the old system to appear in the new application, which means it still has to be imported.
We usually measure migrated data in hundreds of megabytes, and often in hundreds of gigabytes, so automating the process is the obvious option. Since it is such an important process, remember to write the migration scripts carefully, prepare them in advance, and test them accordingly. Otherwise the switching process will prove long and stressful for those preparing the new environment, and you will find that the customer isn’t satisfied that they and their clients have been frozen out of their system.
The migration principles developed over the three projects mentioned above worked perfectly in the last one, Zeberka. The migration was smooth, and both the developers and the client were stress-free. Before Zeberka launched in production, there was a four-week phase when the new system ran in synchronization with the old portal. That was enough time to find and fix most issues while the old portal was still available to end users. After that we switched the systems, completing the entire migration process with great success.