Creating larger instances

dreamcompute

#1

Something I’ve used Google Compute for in the past is spinning up ridiculously large instances to process data I’ve been working on that I couldn’t handle on a local computer. Something like 32 CPUs with 64GB of memory.

Right now, the biggest DreamCompute instance I can launch is 8 vCPUs (with no mention of how fast those are) and 16GB of RAM. That’s nearly equivalent to my laptop.

Today, is it possible to go larger than 8 CPUs and 16GB of memory with DreamCompute? Are “super-sized” instances a use case that DreamCompute hopes to target?
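
For reference, here’s roughly how I’ve been checking what sizes are on offer. DreamCompute exposes the standard OpenStack API, so this is a minimal sketch using the openstacksdk Python library; the “dreamcompute” cloud name is just whatever entry is in my clouds.yaml, not anything official.

```python
# Minimal sketch using openstacksdk (pip install openstacksdk).
# Assumes a "dreamcompute" entry exists in your clouds.yaml; adjust to taste.
import openstack

conn = openstack.connect(cloud="dreamcompute")

# List every flavor (instance size) the cloud offers, largest first.
for flavor in sorted(conn.compute.flavors(), key=lambda f: f.vcpus, reverse=True):
    print(f"{flavor.name}: {flavor.vcpus} vCPUs, {flavor.ram} MB RAM, {flavor.disk} GB disk")
```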


#2

Hi Chris,

In our beta cloud, we had instance sizes ranging all the way up to 128GB of RAM. What we learned during the beta period was that most people want smaller instances. That’s proven to be the case with our production cloud as well, and we’ve sized our hypervisor hardware primarily for those users. We don’t currently have any plans to offer larger sizes, but I wouldn’t rule it out. I’d love to hear more about your use case for the larger sizes if you don’t mind sharing.


#3

Hi, appreciate the reply.

Understandable that large instances aren’t what most people are looking for. I’m probably not a compelling business case for you, as I do this infrequently and the money comes from occasional grants.

For my use case: several times now I have written programs to process data, starting small initially. Once everything was working, I could scale up by just putting my program on a huge Google Compute instance. After a couple of days the results I needed were complete, I’d shut the instance off, and I’d be on my way.

I know there are other methods for distributing data processing work, but the architecture of the computer I develop on is essentially the same as that of the Google Compute instance, so it’s just a really easy way to test small and then go big.
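
To make that concrete, the pattern is usually nothing fancier than letting the standard library fan the work out across however many cores the machine has, so the exact same script runs on my laptop and on a 32-CPU instance. A rough sketch; process_chunk and the inputs are placeholders for whatever the real job is:

```python
# Rough sketch of the "test small, then go big" pattern: the same script
# scales to however many CPUs the machine has. process_chunk and the
# chunks below are placeholders for the real workload.
import os
from multiprocessing import Pool

def process_chunk(chunk):
    # Placeholder: the actual per-chunk computation goes here.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    chunks = [range(i * 1_000_000, (i + 1) * 1_000_000) for i in range(64)]
    # os.cpu_count() is 8 on my laptop and 32+ on a big cloud instance,
    # so no code changes are needed between the two.
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(process_chunk, chunks)
    print(sum(results))
```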


#4

Scientific researcher by chance? :slight_smile: I’m curious: do your applications also need large amounts of data? Do you transfer the data somewhere close to GCE before processing starts, or do you rely on remote network access?


#5

Strangely, no, the data being processed so far has been relatively small (under 100GB). The raw output, which I only keep in case I need to go back but don’t want to regenerate it, was again around 100GB.

I tried doing remote database access for one project, but in my particular case the latency really threw everything off (which is one reason why I like sticking with one big “computer”).
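
The arithmetic behind that is brutal: if each row lookup is its own round trip, the network latency dominates no matter how fast the remote database is. A quick back-of-envelope sketch; the row count and round-trip time are made-up but representative numbers, not measurements from my project:

```python
# Back-of-envelope: why per-row remote queries fall over. The numbers
# here are illustrative assumptions, not real measurements.
rows = 10_000_000          # rows to process
rtt = 0.030                # 30 ms round trip to a remote database

per_row_seconds = rows * rtt
print(f"one query per row: {per_row_seconds / 3600:.1f} hours of pure latency")

batch_size = 10_000        # fetch rows in big batches instead
batched_seconds = (rows / batch_size) * rtt
print(f"batched fetches:   {batched_seconds:.0f} seconds of latency")

# Local disk or RAM on one big instance removes the round trip entirely,
# which is why one big "computer" is so appealing.
```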