FileMaker 17: Data Migration Tool Overview Part Three

Managing a File Migration – Considerations and Best Practices

Robin Story, FileMaker Consultant, IT Solutions

As we mentioned in the Data Migration Tool (DMT) Overview: Part One, the DMT available in FileMaker 17 offers significant benefits over homebrew scripted imports, but it's not a silver bullet. In some FileMaker environments, a scripted import migration may still be preferable to the DMT, depending on the needs of the developer.

Why might you migrate? 

If you have a large, complex system with a lot of users, adding new features and modules might not be something you can do on the fly while the system is in use. In such instances, a development copy is placed in a separate environment so that the necessary changes can be implemented and tested prior to release. When it comes time to push the changes out to the production environment, you need to get the up-to-date data out of the current production system, push it into the development system, and then that development system becomes the new production system. For very complex systems, you might even add a third iteration of your database system in between the production and development system as a staging or beta testing system, where users can go in and try out new features and add an extra layer of review and quality control prior to release. 

Data Migration considerations 

The DMT is a great way to push the data, security layer, and value lists from a populated file into an unopened clone. But precisely because it moves all of these wholesale, it's important to take into account what is happening with your development environment before you throw out a scripted import and move to the DMT.
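For reference, here is a minimal sketch of driving the DMT from Python's subprocess module. The flag names follow FileMaker 17's FMDataMigration command-line documentation; every path, account name, and password shown is a placeholder for illustration.

```python
import subprocess

# Placeholder paths and credentials; both accounts must have full access.
cmd = [
    "./FMDataMigration",
    "-src_path", "/migrations/Production.fmp12",         # populated source file
    "-src_account", "admin", "-src_pwd", "secret",
    "-clone_path", "/migrations/DevClone.fmp12",         # unopened development clone
    "-clone_account", "admin", "-clone_pwd", "secret",
    "-target_path", "/migrations/Production_new.fmp12",  # migrated output file
    # "-ignore_valuelists",  # keep the clone's value lists instead of the source's
    # "-ignore_accounts",    # keep the clone's accounts instead of the source's
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
result.check_returncode()  # raises CalledProcessError if the migration failed
```

Wrapping the call this way makes it easy to log the tool's output and halt a larger release script the moment a migration reports an error.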

Consideration #1: Container Fields 

Externally stored container fields are a powerful tool in FileMaker, but they can be tricky when you're moving from one file to another. The DMT will not migrate externally stored container files; the migrated file instead continues to use the existing external storage folders from the source file. If your solution makes heavy use of externally stored container fields, take extra care to back up your container directories. Add to that a hosted solution using secure external storage encrypted with a file-specific key, and things can get messy quickly if you don't mind your p's and q's. It is important, for example, not to use the Upload to Server functionality on your newly migrated file, but rather to replace the file on the server in order to preserve the encryption key. Furthermore, as an extra precaution when preparing to migrate a file that has container data, I often export records from tables with externally stored container fields, including the primary key field and any container fields, to a FileMaker-format file. This creates a handy restore option should the migration break the container data.
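As a rough illustration of that backup step, a small script like the following could snapshot the external container folder before a migration. The RC_Data_FMS path shown is typical of a macOS FileMaker Server install, but it is an assumption here; confirm the location against your own deployment.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Assumed FileMaker Server external container location; verify for your server.
CONTAINER_DIR = Path("/Library/FileMaker Server/Data/Databases/RC_Data_FMS/Production")
BACKUP_ROOT = Path("/migrations/container_backups")  # hypothetical backup target

def backup_containers() -> Path:
    """Copy the external container folder aside before running the DMT."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dest = BACKUP_ROOT / f"Production_containers_{stamp}"
    shutil.copytree(CONTAINER_DIR, dest)  # fails loudly if dest already exists
    return dest

if __name__ == "__main__":
    print(f"Container data backed up to {backup_containers()}")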

Consideration #2: Static Value Lists 

The Data Migration Tool allows you to toggle whether the value lists in your clone file are updated with the value lists from the source file. This can be extremely helpful, particularly for files where users are allowed to maintain one or more static value lists and the development cycles are long. But the reverse can also be true: there may be a number of value lists in the system that are maintained by the developer because they drive programmatic behaviors (for example, a value list that triggers a Go to Object script step, or a value list that allows a user in a self-service account management module to select the privilege set for a new user). Changes to these value lists are part of the development cycle, and you don't want those program-driving changes wiped out on upgrade.

It is very likely, particularly in larger systems, that you have value lists from both camps: some maintained specifically by the end users, and others maintained specifically by the developer. It is important to take an inventory of these value lists, as you will likely have to manually copy some values from one file to another once the data migration is complete.
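A lightweight way to keep that inventory is a simple manifest. The value list names below are hypothetical, but the idea is to record a system of record for every list so you know which ones to re-check after the migration:

```python
# Hypothetical inventory marking the system of record for each value list.
# After a DMT run that brings over the source file's lists, anything owned
# by "developer" should be re-checked against the development file; after a
# run with -ignore_valuelists, it's the user-owned lists that need attention.
VALUE_LISTS = {
    "Job Categories": "user",       # users maintain this list in production
    "Vendor Names":   "user",
    "Nav Targets":    "developer",  # drives Go to Object behavior
    "Privilege Sets": "developer",  # drives the account-creation module
}

def lists_to_review(migrated_with_source_lists: bool) -> list[str]:
    owner = "developer" if migrated_with_source_lists else "user"
    return [name for name, who in VALUE_LISTS.items() if who == owner]

print(lists_to_review(migrated_with_source_lists=True))
# -> ['Nav Targets', 'Privilege Sets']
```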

Consideration #3: Security Layer 

Like the previous item, the security layer should be an important consideration in any migration plan, and the DMT shines in this regard because it simply pushes the accounts from the old system into the new system. Previously, for longer development cycles, you had to assume that there would be new users, removed users, and password changes between when you took the development snapshot and when you performed the release. This meant that in order to reduce headaches, you would have to either:

1. Store users’ passwords on their staff record (this is not recommended).  

-or-  

2. Create an import process that would loop through the development environment's users table and delete any user accounts, then import the production data and re-create the accounts with a standard reset password, which isn't great, as it forces every user to reset their password.

Using the DMT removes this frustration. 

Consideration #4: System and Developer-Owned Tables vs. User Data 

With a smaller system, typically all tables are user data-entry tables. But as your system grows in complexity, and particularly as you begin to create modular, responsive environments, value lists that drive complex business rules, and other data-driven processes, you might find yourself in a mixed environment. Now, in addition to the tables that contain user-entered data, you are also maintaining a number of back-end data tables containing the data needed to drive your modules and features. Bringing over the data from the production file wholesale as part of the migration process would wipe out records added to those tables during the development cycle to drive the new features, creating a big headache.

As with value lists, it's always a good idea to review your data tables and ask yourself: Who is responsible for the data in this table? Is it the end user entering the data that drives this organization, or is it the developer entering data that drives the work environment itself? The former's data can come from the production environment on a migration; the latter should probably be maintained in the development file itself. Often, scripted imports will involve three files: the production snapshot for user data and the development snapshot containing the development data, both filtered into an empty clone receptacle file for the new release. This allows the developer to indicate, table by table, where the system of record for that data lies: user or developer.
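The same inventory idea works at the table level. As a sketch (with hypothetical table and file names), a scripted import can consult a mapping like this to decide which snapshot is the system of record for each table:

```python
# Hypothetical table-by-table manifest for a three-file scripted import:
# each table's system of record decides which snapshot feeds the new clone.
TABLE_MANIFEST = {
    "Customers":      "production",   # user-entered data
    "Invoices":       "production",
    "ModuleSettings": "development",  # developer-owned, drives new features
    "NavRegistry":    "development",
}

def source_file_for(table: str) -> str:
    snapshot = TABLE_MANIFEST[table]
    return f"{snapshot}_snapshot.fmp12"  # placeholder file-naming scheme

for table in TABLE_MANIFEST:
    print(f"Import {table} from {source_file_for(table)}")
```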

Consideration #5: Data Massaging and Modification Fields 

Finally, when building out features in the development system, you might find yourself creating or modifying fields in ways that will necessarily require updates to the data coming from the production system: perhaps you've created a new field that needs its values looked up from a new relationship, or a field value that drove some of the interface options has changed and is now split out from or merged with another option. As a result, the data from the production file needs to get with the times as part of your migration process.

Unfortunately, what this usually means is that you have to run Relookup and Replace Field Contents actions, which requires thinking about your modification auto-entry fields. If you're simply adjusting the value of an existing field, you have the luxury of turning off the modification timestamp auto-enter in the old production file and performing the replace or relookup there prior to running the import. However, if there's a new field that needs populating, there isn't much for it but to bite the bullet: perform the import, turn off the modification auto-entry fields, perform your necessary data massaging, and then turn those fields back on when you're done. That is a lot to remember on top of what is already a hectic migration schedule. One of the benefits of using a scripted process is that (provided you turn off the modification auto-enters before the migration) you can script the data massaging as part of the process, allowing you to declare the business rules for those necessary data changes at the time you build the feature into your system, rather than having to remember them after the fact.

Consideration #6: Corrupted Data 

In a perfect world, there would never be a network interruption, disk error, or other adverse event that corrupts data in your production system. The real world we all live in, however, is a different matter. Broken indexes, data fragments, and hard drive degradation are real and can be a real pain to deal with. Fortunately, FileMaker's onboard Recovery tool is a pretty reliable way to fix your systems. As mentioned in Part One, the DMT is not a data recovery tool. As such, if you notice that some records and indexes are corrupted in your production file, you will want to include a recovery step on your production data prior to performing your migration, to get the cleanest possible version of your data over.

Miscellaneous Considerations for a Scripted Import Process

  • The DMT will automatically set the next serial value on a migrated file. For a scripted migration, you will need to set next serial values as a script step for every table migrated. This means sorting your records by their primary key, going to the last record, and then setting the next serial value to that value +1, or whatever your increment scheme is (see the sketch after this list).
  • When running a scripted import, you will need to take inventory of new tables that have been added since the previous migration, and confirm that they are accounted for in your import. 
  • Similarly, it’s always a good idea to check your field matching on your import records script steps when migrating. Typically, Matching Names is pretty stable, but it’s always a good idea to make sure that fields that were new in the previous migration are being properly mapped for the current migration. 
  • How large is the file, and how long will the migration take? Can it happen overnight, or should it take place over the weekend? Always migrate into a clone file when possible: this starts you from the smallest possible file size, whereas a "Delete All Records" step on a large file can slow your migration process to a crawl.
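As a sketch of that serial-value arithmetic, assuming plain integer serials exported to a CSV of primary keys (adjust for prefixed keys or custom increments; UUID keys need no serial at all):

```python
import csv

def next_serial(exported_keys_csv: str) -> int:
    """Compute the next serial number from an export of a table's primary keys."""
    with open(exported_keys_csv, newline="") as fh:
        keys = [int(row[0]) for row in csv.reader(fh) if row]
    return max(keys) + 1  # taking the max avoids having to pre-sort the export

# e.g. feed the result into a scripted "Set Next Serial Value" step per table
print(next_serial("customers_keys.csv"))  # hypothetical export file
```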

Migration as part of Deployment 

Developers can be forgiven if their top priority for migration is getting the right data into the right places with a minimal amount of stomach acid. But it's important to keep in mind that migrations are stressful for end users as well. When we talk about migration, we're really talking about a release sub-process that is part of a larger deployment process, and a well-planned deployment strategy should take the potential headaches of end users into account too.

While a developer’s migration plan is more likely to look like a game-day checklist (close file off server, move file to desktop, perform migration, check data, rehost file), a proper deployment plan should allow all users to feel that they are ready and comfortable with the new version, and may involve multiple people in various departments who should have clear guidelines for what is required of them to make the process as bump-free as possible. For example: 

One week prior to release 

One week out, let your power users know that a deployment is on its way, and include a brief description of the features they can look forward to in the new version.

If possible, this is an excellent time to do a practice run of your migration script by creating a staging environment where power users can check out the new system prior to the scheduled migration. This is also an excellent time to set up training sessions, ensuring that your power users are comfortable behind the wheel of the new version.

2-3 days prior to release 

By now, development should have identified and corrected any late-breaking minor issues with the migration process uncovered by your power users' final acceptance testing. If larger issues have cropped up, you can still postpone the release without having to send out a company-wide announcement.

This is when user training should be performed, so that the users don’t have too many days between poking around in the new version of the system and actually using it in production. This allows what they’ve learned in training to stay relatively fresh and minimize the need for review during an already stressful process. Depending on the size of your organization, you may not be able to train every user, but you should train at least one person per department so that they can be a resource to others. 

As part of the training, you should prepare short-but-sweet “what’s new” handouts. Take the brief feature descriptions that you sent out earlier in the week and add bullet points describing the business rules behind them so that the users can refer to that as they are navigating a new module or changed rules. 

Eve of release 

As you’re prepping the release, you’ll want to send an email out to all of your users that the release is scheduled and there will be expected downtime of the database. Give them a clear “stay out of the pool” window, while allowing yourself wiggle room for the unforeseen. If your training focused on a subset of power users, make sure that the rest of your users receive the handouts you prepared as part of this announcement. 

Release 

Remaining users are given a warning to exit the system to avoid losing work; after a reasonable interval, the production file is closed on the server, and any scheduled scripts for that file are temporarily disabled. The production file might be zipped and transferred to the developer's machine at this point, or the developer might perform the migration right on the server and save that time. Either way, the migration is run. When the migration is finished, the developer compares the files to confirm that the record counts are correct, spot-checks the data, and then performs any after-the-fact data massaging (container field updating, value list maintenance, security layer cleanup, etc.).

Once the developer is satisfied that the release has gone smoothly, the new production file is re-opened on the server. The developer checks the file to make sure everything is where it needs to be. Depending on how the release is being handled, the Developer may send an “all set” email to a subset of users so that they can confirm that the system is working as expected, before notifying the entire user base that the system is once again live. 

Post-release 

Once the system has been inspected and the go-ahead to let all users back in has been given, any disabled server schedules are re-enabled. The developer may implement some form of issue tracking system so that users can organize and coordinate feedback should there be any problems with the new release. A simple system can allow users to submit and track bugfix requests, whereas a more complex system might involve end users submitting bugfix requests that are reviewed and approved by a supervisor, allowing for training issues and out-of-scope features to be managed locally. 

We’ve created a sample deployment schedule spreadsheet that will help you coordinate your release schedule. For example, if Amalgamated Widgets Corporation was upgrading their Job Tracking software on May 11, with Robin being the Release Developer and Len being the QA lead to coordinate training and testing, it might look like this: 
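Date      Owner   Task
May 4     Robin   Announce the upcoming release and summarize the new features
May 4     Robin   Stand up the staging copy for power-user testing
May 8     Len     Complete final acceptance testing with power users
May 8–9   Len     Run training sessions and distribute "what's new" handouts
May 10    Robin   Send the downtime announcement to all users
May 11    Robin   Close the file, run the migration, verify the data, rehost
May 11    Len     Coordinate power-user spot checks and the "all clear" email
May 12    Robin   Re-enable server schedules and open post-release issue tracking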

Note that the developer might keep a separate Migration Checklist for things like backing up container data, turning off modification timestamps, comparing table record counts, and checking specific views to confirm the data moved the way it was supposed to.

Your deployment schedule will depend heavily on the size of your system, the complexity of your organization, and your standard operating procedures for change management.

Migrations are a lot of work, and they have the potential for a lot of frustration and worry. But carefully planning your migration and considering the best tools for the job should do a lot to manage those frustrations. In our next article, Colin will put all of this together and present our final version of the Data Migration Tool Helper file. 
