Amazon Glacier Improvements


Amazon has pushed out a couple of important updates related to Glacier recently. The most notable is a new S3 feature that automatically migrates data from S3 to Glacier based on lifecycle rules such as object age and key prefix. The other update deals with partial retrievals of large archives; for instance, restoring a large file in several ranged operations to keep retrieval costs under control.
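To make the lifecycle feature concrete, here is a minimal sketch using boto3 that transitions everything under a given prefix to Glacier after 90 days. The bucket name, prefix, and 90-day threshold are my own placeholders, not anything Amazon prescribes:

    import boto3

    # Hypothetical bucket and prefix; the 90-day threshold is only an example.
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-backup-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-old-media",
                    "Filter": {"Prefix": "photos/"},
                    "Status": "Enabled",
                    # Objects move from S3 to the Glacier storage class
                    # 90 days after they are created.
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )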

Now, as much as I love the idea that S3-based structures can be automatically archived or migrated to Glacier, I would really prefer a storage client that offers intelligent use and configuration of these services. The two services have very different price points and usage patterns, and a healthy combination would probably make sense for most backup scenarios. For example, read/write data like a backup manifest or database needs a service like S3, whereas stable data like photos and videos benefits from Glacier storage. Data used in sync scenarios would probably also need to live on S3.

Browsing through various developer forums, it is obvious that a lot of developers are trying to find a reasonable user-interaction model for both the retrieval delay and the retrieval throttling that Glacier requires. There is little doubt in my mind that there is ample room for a good Amazon AWS client in the backup and sync space. I would really like to have my entire backup on this platform, with the ability to offer certain parts of it for instant file sync and other parts for slow file sync. Under the current Glacier pricing model, keeping file history around for a long time becomes very appealing, so good purge options would also make a lot of sense. Even better if those purge options respected Glacier's early-deletion penalty model.
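On the retrieval side, a client could throttle its own restores by issuing ranged archive-retrieval jobs rather than pulling an entire archive at once. A rough sketch with boto3, where the vault name and archive ID are placeholders and the 1 GiB range is an arbitrary choice (ranges must be megabyte-aligned):

    import boto3

    glacier = boto3.client("glacier")

    # Placeholder vault and archive identifiers.
    job = glacier.initiate_job(
        accountId="-",  # "-" means the account making the request
        vaultName="my-backup-vault",
        jobParameters={
            "Type": "archive-retrieval",
            "ArchiveId": "EXAMPLE_ARCHIVE_ID",
            # Restore only the first 1 GiB now; further ranges can be
            # requested later to spread retrieval cost over time.
            "RetrievalByteRange": "0-1073741823",
        },
    )
    print(job["jobId"])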

In short, all I need now is a great client, and if that is not available, then at least a decent one.
