How do I know if my instance is ephemeral?

dreamcompute

#1

So I’m looking over the guide to migrating instances and it mentions different methods depending upon whether my instance is ephemeral or not.
However, I don’t remember what I selected as it was nearly a year ago to the day; it doesn’t seem like something I’d have selected, but I’d rather be certain before I risk it.
But what the guide neglects to mention is how to check whether an instance is ephemeral or not; there doesn’t appear to be anything indicating it in the instance list, or in the output of the nova show command.
How can I determine whether my instance is ephemeral?

On the issue of migration though, my instance is a web server, so I’d like to keep downtime to a minimum. What I was thinking of doing is something like this:
[list=1]
[*]Set up a new instance on US-East-2 with nginx and my SSL certificates so I can provide a temporarily-unavailable error page, with details of what’s happening, for anyone who stops by (roughly sketched below).
[*]Add the IP of the new instance to my DNS records.
[*]When I’m ready, shut down my existing instance (or just its services, if ephemeral) and copy it across to a new image on US-East-2.
[*]Reboot my new instance from the new image, allowing it to resume exactly where my old instance left off.
[*]Remove the IP of the old instance from DNS, and eventually get rid of the instance itself.
[/list]
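For step 1, this is roughly the kind of minimal nginx server block I have in mind; the domain and certificate paths are just placeholders:

[code]server {
    listen 80;
    listen 443 ssl;
    server_name example.com;

    # Placeholder certificate paths -- substitute the real files
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # Directory holding a static "temporarily unavailable" index.html
    root /var/www/maintenance;
    error_page 503 /index.html;

    # Answer every request with 503 plus the maintenance page
    location / {
        return 503;
    }

    # Served from the maintenance root via the error_page redirect above
    location = /index.html {
    }
}[/code]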
Does this seem reasonable? Anyone able to flesh out the details of what I’ll need to do to achieve this?


#2

You can tell if your instance is ephemeral or volume-backed by looking at the Volumes section of the DreamCompute dashboard. If you see a volume with “Attached to my-server on vda” then you know that instance was booted from that particular volume. If you don’t have any volumes, or there is no volume attached to that instance in the list, then it’s an ephemeral instance.
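If you prefer the command line, something like this should show the same thing (this assumes the nova/openstack clients are configured with your DreamCompute credentials; “my-server” is a placeholder name):

[code]# An ephemeral instance shows its image in the "image" field and has no boot
# volume attached; a volume-backed one lists a volume under volumes_attached.
nova show my-server | grep -Ei 'image|volumes_attached'

# Or check the volume list and look for your instance in the "Attached to" column.
openstack volume list[/code]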

As for your migration plan, it sounds pretty solid. Since you’re running a web server, I’m not sure you actually need to bother with images. You could complete steps 1 and 2. Then instead of creating a new image, you could just copy the site data across to the new instance with your favorite method (scp, ftp, copying a tar file, etc.). Once the site data is copied over, you’d make sure it works on the new instance and complete step 5.
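For example, a rough sketch of the tar-and-copy approach (the paths, user, and IP are placeholders):

[code]# On the old instance: bundle up the site data (example paths only)
sudo tar czf /tmp/site-data.tar.gz /var/www /etc/nginx /etc/letsencrypt

# Copy the archive to the new instance and unpack it there
scp /tmp/site-data.tar.gz user@NEW_INSTANCE_IP:/tmp/
ssh user@NEW_INSTANCE_IP "sudo tar xzf /tmp/site-data.tar.gz -C /"[/code]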


#3

Thanks!

Actually there’s a lot more to migrate than that; installing nginx and the certificates is just the bare minimum to present something to users (without having to learn something else, at least). I also have PHP, all the various components of a mail server and so on, and I’m not eager to go through all the hassle of setting everything up all over again :wink:


#4

@haravikk: this could be the chance to put all your configuration in source control, so that you can re-install and reconfigure your systems in minutes. Something like Ansible is quite easy to understand and once you learn the basics, you’ll save a lot of time … and you’ll sleep better at night knowing that you can re-install a server in a short time.

Read some of the tutorials mentioning Ansible on the knowledge base to get an idea: https://help.dreamhost.com/hc/en-us/sections/203167038-Advanced-Tutorials
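For example, even a simple ad-hoc run can configure a fresh instance; the inventory IP and SSH user below are just placeholders:

[code]# Hypothetical example: point Ansible at one new instance and install nginx via the apt module
echo "203.0.113.10" > hosts
ansible all -i hosts -u ubuntu -b -m apt -a "name=nginx state=present update_cache=yes"[/code]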

I’d be happy to give more details on this thread for your specific use cases if you’re interested.


#5

Thanks for the suggestion, but I think that’s something I’d be better investigating later; I just want my instance on US-East-1 to be on US-East-2 as easily as possible, everything else can wait :wink:

On that note, I just created an instance on US-East-2 to try, but even if I suspend it or shut it off I can’t detach its volume; this seems like a wrinkle in the plan as I was hoping to keep the instance but swap its volume for the new one I’ll be copying across.

Is there something I’m missing; some way to swap volumes, or hold onto a public IP after deleting an instance?


#6

So is it possible to do this? For an easy changeover, it seems like what I need to be able to do is create an instance with a temporary volume (or start it from an image only?), then, once I have a new volume created from my old server, shut down the new instance, switch it over to the new volume, and reboot, thus making the new instance an exact replica of what I copied across.

But I’m confused as to what the process for this would be; the DreamHost migration guide just assumes that creating the new instance will be the last step, which means a delay while DNS propagates. If I can create the instance in advance, thus reserving a public IP, I can have it in my DNS records ahead of time to handle the downtime, ready to switch over the moment everything is copied.


#7

So is it not possible to swap storage for an instance, or to hold onto an IP address?

If it isn’t, then it doesn’t look like I’ll be able to reserve an IP address after all, and I’ll just have to put up with the DNS TTL delay?


#8

I understand you don’t want to put too much into one migration… if you need help getting started with Ansible we can help you out.

Getting to your questions: the IP address of the instances will have to change between US-East 1 and 2, as these clusters have separate networks.

Virtual networking also works differently between the two clusters: in US-East 2 all instances get a public IP by default, but that IP is not reserved to your account (unlike in US-East 1). The IP will be assigned to the instance, but if you need to destroy it and create a new one from scratch, the IP may change.

If you need to ‘own’ the IP addresses, you can have Private Networking enabled. Read https://help.dreamhost.com/hc/en-us/articles/229789688-What-is-private-networking- to learn more about this option.

Swapping storage is also not an option: the storage clusters are totally separate, so you need to download the volume first and upload it again.
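Roughly, that download-and-reupload flow with the OpenStack CLI looks something like this; the names are placeholders, the disk format is an assumption, and the migration guide has the authoritative steps:

[code]# Against US-East 1: snapshot the instance and download the resulting image
openstack server image create --name my-snapshot my-instance
openstack image save --file my-snapshot.img my-snapshot

# Against US-East 2 (credentials pointed at the new cluster): upload it again
openstack image create --disk-format qcow2 --container-format bare \
  --file my-snapshot.img my-snapshot[/code]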

I understand it’s a complicated thing to do and there may also be other/better ways to do it. Maybe you can specify what kind of applications you’re running on US-East 1 and we can give you more precise suggestions to help you migrate.


#9

That’s not quite what I meant.

Okay, let’s say my US-East-1 instance is called A, and I set up an instance on US-East-2 called B, which will act as a placeholder (so I’m still serving up a page to those whose DNS updates quickly).

I will then copy A to an image on US-East-2, and once that’s done use it to create a new storage volume. What I want to do then is shut down B, then tell it to start using my new storage volume, effectively causing it to pick up immediately where A left off. Is it possible to do this (tell an instance to restart with a different storage volume)?

It’s looking like I may not need to worry too much about DNS delays though, as my sites at least are served through Cloudflare where the TTL can be set as low as two minutes, though obviously that may not account for ISPs, local caches etc. that enforce a minimum TTL of their own. Everything else shouldn’t be overly affected by longer delays.


#10

If you want to use the virtual image, like the migration guide suggests, you should start instance B after you’ve taken the image of A. This will allow you to run instance B identical to A at the time you took the image.

I assume you’re concerned that by the time instance B comes live, instance A will already have diverged from the image because it’s a live system. Fair point: once you have done the bulk of the replication, you may be able to rsync files between the two virtual machines, test that B has the correct data and works well, and then flip the DNS.
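For example, something along these lines run from A once B is up; the paths, user, and IP are placeholders, and you’d want to test the site on B before flipping DNS:

[code]# Push the web root from instance A to instance B, removing files that no longer exist on A
rsync -azP --delete /var/www/ user@INSTANCE_B_IP:/var/www/[/code]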

What kind of application do you run on the instance?


#11

That’s not it either, but it doesn’t matter.

I just tried to transfer across today, and although I successfully created the image, created a volume from it and then started an instance using it, it won’t boot.

Looking at the log (in the dashboard) the instance (named “WebHost”) is stuck at:

It seems like it’s expecting some kind of login information that I am unable to provide; as a result it isn’t even starting up, meaning I can’t connect to it normally (the IP isn’t reachable via ping).

I was able to mount the volume by attaching it to another instance, and everything looks like it’s there; fsck returns no errors, etc. Are there steps missing from the guide?
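For reference, this is roughly what I did to inspect it; the instance and volume names and the device path are placeholders (the volume may appear as /dev/vdb or /dev/vdb1 depending on partitioning):

[code]# Attach the problem volume to a working instance
openstack server add volume rescue-instance webhost-volume

# Then, on that instance: read-only check, then mount it to look around
sudo fsck -n /dev/vdb1
sudo mkdir -p /mnt/webhost
sudo mount /dev/vdb1 /mnt/webhost[/code]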


#12

By the look of it, something went wrong when the image booted; most likely it’s not getting the IP address. This has happened in the past and the issue was solved … maybe it’s back. We’ll investigate further; I’ve opened an internal ticket. What operating system and version are you running?

Meanwhile, there are two alternative measures you can take to solve the issue:

  1. On the instance to migrate (on US-East 1), edit /etc/network/interfaces (or the equivalent file on other operating systems) and change the interface to dhcp instead of static, then take your snapshot and migrate it, or…
  2. Set a root password, so you can log in to the console on the new instance and manually edit the network config with the right info if it gets stuck again.

We’re going to run the migration again to see what’s wrong. Thanks for helping out.


#13

Ubuntu Server 16.04.

What do you mean by run the migration again? I’d rather not have my US-East-1 instance shut down again without advance warning.
[hr]
I tried changing the network interface to dhcp instead of static on the volume I created, and now the log is just filling up with entries of “A start job is running for Raise network interfaces”.

Other services do seem to be starting, but the instance is still inaccessible remotely.

I have to give up for now; my transferred volume is the only thing I really have on US-East-2 right now, plus a suspended instance I was trying to boot from it, so if someone can fix it then be my guest.
Like I say, it’s possible to attach the volume to another instance as an additional volume, so any editing required can be performed on the US-East-2 copy without touching the US-East-1 original; if it can be made to work, then all I should have to do is rsync a few folders over.


#14

That should work too: boot another instance on US-East 2 and mount the volume that fails to boot, edit /etc/network/interfaces, save and try to boot again. Let us know how that goes.


#15

As I already said: while editing from static to dhcp helped a bit, the server is still not remotely accessible and the log is full of messages to the effect of “A start job is running for Raise network interfaces”.


#16

Could you please post the content of the file /etc/network/interfaces?


#17

I’ve been watching this thread for a while. Why isn’t this being done through a support ticket?


#18

These sorts of questions do often arrive at Support via tickets, indeed. In general, any request that is time-sensitive or requires sharing private/sensitive details is diverted to tech support.

Personally, I find that requests that are neither time-sensitive nor private are better handled in a public forum. Conversations in public get more people involved, often resulting in more knowledge being shared.


#19

I do have a support ticket open for this now, but at the time I was hoping there might be a quick fix I could apply myself, and as smaffuli says, if the knowledge can be shared it may help others who run into the same problem; I haven’t done anything unusual that I can think of, so I don’t imagine I’m going to be the last person to run into this as the January deadline approaches.

I’m not at home atm so I can’t grab the /etc/network/interfaces file, but it looked pretty normal: just a single “ethernet” interface with a MAC address and some other details. It was set to static, but as suggested I changed that to dhcp, which as I say made some improvement but didn’t fix it. I’ll try to grab it later if I remember, but I’m not sure what it can show. Is there any other way to refresh it, perhaps? Can it be safely deleted, or does OpenStack provide it? I’ve never had to solve network interface issues on Linux before; it’s always just worked in the past.


#20

Apologies for the delay, but here’s what my /etc/network/interfaces file looks like:

[code]# Injected by Nova on instance boot
#
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
hwaddress ether fa:16:3e:98:cf:35
address 10.10.10.2
netmask 255.255.255.0
broadcast 10.10.10.255
gateway 10.10.10.1[/code]

The thing I’m suspicious of is the IP addresses, as it makes me wonder whether the issue has something to do with the fact that US-East-2 doesn’t provide a private network by default. But I’m at a loss as to what to do about it: with the interface configured as dhcp, the IP addresses listed there shouldn’t matter, so I’m not sure what else to change.

I had hoped to complete my migration tomorrow, but it’s looking increasingly like I won’t be able to. From the sounds of it, support staff have pretty limited access to instances, which is frustrating, as I’m not sure why migration should even be my responsibility; I was forced to do all the grunt work to move to US-East-1 in the first place by the ridiculous two-week VPS root-access ultimatum (despite the fact US-East-1 was a beta service which I would not have moved to if I’d had a choice), and now I’m being forced to do all the grunt work to move onto US-East-2 because US-East-1 was apparently never fit for purpose, judging by the massive outage a short time ago.

Edit: *facepalms* So it turns out the IP lines were the problem, as they should be omitted after setting to dhcp. Looks like it’s another classic case of a terrible error message, as “A start job is running for Raise network interfaces” doesn’t exactly scream “you’ve left IP details in your dhcp settings”. At least now my US-East-2 instance boots up normally, so the next challenge is to rsync across all the stuff that’s changed since I first tried to migrate.
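For anyone else who hits this, the working eth0 stanza ended up being roughly just the dhcp lines with the static IP details removed:

[code]auto eth0
iface eth0 inet dhcp
hwaddress ether fa:16:3e:98:cf:35[/code]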