A common use case is as a compile, build, and/or test server. I often have projects that require significant customization and setup simply to compile/build/test. These projects go in and out of activity, and it’s convenient to do the necessary setup on a VM once, and then just use the VM as needed.
I’m familiar with Chef/Ansible/Puppet, etc., but these all take setup work. They are great and represent a more robust approach, but sometimes it’s convenient to simply set up a machine manually and then snapshot/thaw it periodically as needed. And in some cases, especially when setting up more esoteric environments, it can be particularly difficult to use the automatic provisioning tools.
Another use case is part-time development projects. For example, I have a project that requires loading a large PostgreSQL database. I only work on this project periodically, so I can’t justify running a machine 24x7. At the same time, it takes so long to get the data loaded that if I had to do it every time I worked on the project, even using Chef/Ansible/etc., I’d never make progress.
These are use cases for which I currently use AWS EC2 instances. Usually, I just shut down the EC2 instance when it’s not needed, at which point I’m only paying the very small EBS storage costs. I can then restart it as needed. I’ve never given much thought to how the OS network configuration is handled when an instance restarts, because it just works. I believe in the past I’ve also launched new EC2 instances from volume snapshots I’d taken, without any additional effort, but I’d have to try that again to be sure. It’s possible I was creating and launching AMIs instead.
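For concreteness, the stop/restart cycle described above can be sketched with the AWS CLI — this is just an illustration, and the instance ID shown is hypothetical:

```shell
# Stop the instance when the project goes dormant; billing drops to
# EBS storage only (the instance ID below is a placeholder).
aws ec2 stop-instances --instance-ids i-0abc1234def567890

# Later, when picking the project back up, start it again.
aws ec2 start-instances --instance-ids i-0abc1234def567890

# Check that it came back up.
aws ec2 describe-instances --instance-ids i-0abc1234def567890 \
    --query 'Reservations[].Instances[].State.Name'
```

Note that a stopped-and-restarted instance may come back with a different public IP unless an Elastic IP is attached, which is one of the network-configuration details that “just works” from inside the OS.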
If it’s difficult to launch a new DreamCompute/OpenStack instance from a snapshot, then I’m curious what the main value of snapshots is. Backups?