Schema migration tips and FAQs

Unison Cloud's native storage layer lets you save arbitrary data in typed tables without SQL adapters or serialization formats; however, the freedom to save arbitrary types comes with some pitfalls. It's important to plan how you'll manage your tables as your types evolve, because data already written to a table in the Cloud isn't covered by the same update process as the data in your UCM codebase. Here are some tips and answers to common questions about schema evolution.

How should I iterate on datatypes that will ultimately get saved to Cloud storage as a schema?

It’s pretty common to start saving a data type to a table and then realize it’s missing a field. The safest approach is to use the Cloud.run.local or Cloud.main.local.serve interpreters while you're developing your app; they provide a nice, ephemeral sandbox for iterating on your schemas.
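A local iteration loop might look like the sketch below. The names Database.named and OrderedTable.named, and the exact shape of Cloud.run.local, are assumptions here; check the current Cloud API docs for the real signatures.

```
-- Sketch only: Database.named / OrderedTable.named and the shape of
-- Cloud.run.local are assumptions, not the definitive API.
devMain : '{IO, Exception} ()
devMain = do
  Cloud.run.local do
    db = Database.named "dev-db"            -- ephemeral in the local sandbox
    users = OrderedTable.named db "users"   -- free to redefine as the type evolves
    OrderedTable.write users "alice" "Alice"
```

Because the sandbox is ephemeral, you can change your types freely between runs without worrying about stale data.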

As a quick and easy workaround, if your previously deployed data isn't important, include a version number in the table name when you create it; then you can increment that number as you make type changes. Unfortunately, the compiler will not stop you from updating data types after you’ve already written earlier versions into Cloud storage; fortunately, making new tables is cheap. 😉
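The versioned-name convention might look like this; OrderedTable.named is an assumption, and the table names are illustrative.

```
-- Sketch: bake a version number into the table name so a type change
-- means a fresh table, not a silent mismatch with old data.
ordersV1 = OrderedTable.named db "orders_v1"
-- after adding a field to the Order type, write to a new table:
ordersV2 = OrderedTable.named db "orders_v2"
```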

How do I run a basic offline schema migration from one type to another type?

If you have an OrderedTable and you want to do an offline migration, the process is simple:

  • Take down (undeploy) all the services that depend upon the database you intend to migrate.
  • Create a new OrderedTable, making sure to change or increment the table name so it does not conflict with the old OrderedTable.
  • Stream out the existing data from the old table with OrderedTable.toStream, writing it into the new table with OrderedTable.write.
  • Bring the dependent services back up, this time using your new table name.
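The copy step in the middle might be sketched as follows. This assumes OrderedTable.toStream emits key-value pairs via the Stream ability, that Stream.toList! and List.foreach exist with these shapes, and that the Storage ability is what OrderedTable.write requires; verify all of these against the current Cloud API before relying on them.

```
-- Offline migration sketch, under the assumptions named above.
migrate : OrderedTable k v1 -> OrderedTable k v2 -> (v1 -> v2) ->{Storage, Exception} ()
migrate old new upgrade =
  -- pull every (key, value) pair out of the old table
  rows = Stream.toList! do OrderedTable.toStream old
  -- write each row into the new table under the same key, upgraded
  List.foreach rows cases (k, v) -> OrderedTable.write new k (upgrade v)
```

For large tables you'd likely want to process the stream incrementally rather than collecting it into a list first, but the shape of the migration is the same.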

What should I do if I need a zero-downtime migration?

We have a provisional design for online migrations of services, but zero-downtime migration is not something we support right now.

How do I run a schema migration from an older version of a type to a newer version of the type?

This is an unfortunate consequence of working with natively typed schemas: you really can’t upgrade from an old version of a type to its new version, since the old version of the type is no longer available. If you know that your domain models will change over time, consider writing them as unique types with a version indicator either in the namespace path or the type name itself, like unique type db.v1.User = User Text. When it comes time to add a new "version" of the type, rather than updating in place, create a new type with a different version number: unique type db.v2.User = User Text Text.
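Putting the two versions side by side, with a hypothetical upgrade function between them (the default value for the new field is just an illustration):

```
-- Versioned types coexist; v1 is never updated in place.
unique type db.v1.User = User Text
unique type db.v2.User = User Text Text

-- Hypothetical upgrade function; here the new field gets a default.
db.v2.User.fromV1 : db.v1.User -> db.v2.User
db.v2.User.fromV1 = cases db.v1.User name -> db.v2.User name ""
```

An upgrade function like this is also exactly what you'd pass to an offline migration that copies rows from the v1 table into the v2 table.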

What happens if heterogeneous data types get written to a typed table by accident?

Alas, once you’ve written two versions of the same data type to a table, you can’t easily recover. Fallback logic that reads a record from Cloud storage as type A_V2 and, when deserialization fails, retries it as type A_V1 isn't possible with our current reflection primitives. We’re aware that this is an annoying limitation and plan to change it in an upcoming release. For now, follow the tips in this doc to avoid the situation.

How should I clean up old tables or previous schema versions whose data can just be deleted?

If your data is in an OrderedTable and you want to delete entities in your old tables, you can easily scan the table, get its keys, and delete items from there. We don’t currently have an OrderedTable.drop function, which would drop the entirety of an OrderedTable’s data, so getting the keys with OrderedTable.toStream.keys and deleting the old records with OrderedTable.delete is your best option.
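That cleanup might be sketched like this. It assumes OrderedTable.toStream.keys emits keys via the Stream ability and that Stream.toList! and List.foreach exist with these shapes; check the current Cloud API for the real signatures.

```
-- Cleanup sketch, under the assumptions named above: scan out the keys,
-- then delete each record individually.
clearTable : OrderedTable k v ->{Storage, Exception} ()
clearTable table =
  ks = Stream.toList! do OrderedTable.toStream.keys table
  List.foreach ks (k -> OrderedTable.delete table k)
```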

We do have a Database.delete function, akin to “dropping a database.” You can use that for a complete reset of all the tables associated with the database.
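If a full reset is what you want, it might look like the following; Database.named is an assumption here, so substitute however you obtain your Database handle.

```
-- Full reset sketch: Database.delete drops all tables associated with
-- the database. Database.named is an assumption, not the definitive API.
resetAll = do
  db = Database.named "my-app-db"
  Database.delete db
```

Use this with care: unlike the per-table cleanup above, it removes every table associated with the database.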