Are EC2 Instances Resizable?
When launching an Elastic Compute Cloud (EC2) instance with Amazon Web Services, you need to select a type and size for the instance. This selection dictates how many CPU cores, how much memory, and how much storage the virtual machine will have. What happens if your initial selection was wrong for the task at hand?
Are EC2 Instances Resizable? An AWS EC2 instance is resizable as long as it is an Elastic Block Store (EBS) backed instance. The instance type, or even the instance family, can be updated, meaning the number of cores, the amount of memory, the amount of storage, and other aspects of the instance can all be changed.
Given that this is possible in the Amazon Web Services cloud, it makes sense to know how it is accomplished, and to learn more about what exactly can be changed within the instance configuration.
After using an instance that was deployed to an availability zone in the Amazon Web Services cloud, it may become apparent that the instance has either too few resources or too many for the current workload. At this point it makes a lot of sense to change the instance type or instance family to get the proper resource allocation for that instance.
This update requires some downtime for the virtual machine, so it would be best either to have another instance take over while the EC2 instance is down, or at least to let any user or service communicating with the instance know there will be some downtime. Once ready, the first step is to stop the instance that needs its properties changed.
Once the instance has been properly shut down, the cloud customer is then able to change the instance type or instance family of the virtual machine. After waiting for this update to be applied by the EC2 service, the machine is ready to be started again. At this point, starting the machine will locate new hardware within the availability zone that matches the new instance type or instance family of the instance and use that to start up the virtual machine for the cloud customer.
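The stop, modify, and start cycle described above can be sketched with boto3 in Python. This is a minimal sketch rather than production code: the EC2 client is passed in, and the instance ID and target type shown in the usage note are placeholder examples.

```python
def resize_instance(ec2, instance_id, new_type):
    """Stop an EBS-backed instance, change its type, and start it again.

    `ec2` is a boto3 EC2 client, e.g. boto3.client("ec2").
    """
    ec2.stop_instances(InstanceIds=[instance_id])
    # The instance type can only be modified while the instance is stopped.
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": new_type},
    )
    # Starting again places the instance on hardware matching the new type.
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```

With real credentials this might be called as `resize_instance(boto3.client("ec2"), "i-0123456789abcdef0", "m5.2xlarge")`, where both arguments are made-up examples.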
Even though the previous description gives details on how to change the properties of a running instance, one thing the cloud customer should be aware of is that this update will only work if the original virtual machine was an Elastic Block Store backed instance. This is because the Elastic Block Store essentially acts as a scalable network drive that is attached to the hardware selected for the instance.
When the instance starts up on new hardware in one of the region's availability zones, this elastic block store, which contains all of the setup and configuration of the operating system and any installed software, can simply be attached to the newly chosen hardware. If the machine was instead configured with an instance store OS drive, which is local storage on the physical host, all data on those instance store drives is lost the moment the machine is stopped and is not recoverable.
This is one very good reason why it makes a lot of sense to run virtual machines backed by Elastic Block Stores within the Amazon Web Services cloud. It essentially helps make the compute hardware independent of the storage that is attached to that virtual machine, and allows you to move that storage to another instance if needed.
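Before attempting a resize, it can be worth checking programmatically that the instance really is EBS backed. A hedged sketch using a boto3 EC2 client, with the response shape following `describe_instances`:

```python
def is_ebs_backed(ec2, instance_id):
    """Return True if the instance's root device is an EBS volume.

    `ec2` is a boto3 EC2 client. Instance-store-backed instances report
    a RootDeviceType of "instance-store" instead of "ebs".
    """
    resp = ec2.describe_instances(InstanceIds=[instance_id])
    instance = resp["Reservations"][0]["Instances"][0]
    return instance["RootDeviceType"] == "ebs"
```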
Within the Amazon Web Services cloud there are many different instance families. These families are usually built around a particular feature of the instance and are optimized for it. For example, there are compute optimized instances, memory optimized instances, storage optimized instances, and specialized instances that include hardware like GPUs or FPGAs.
Within each of these families there are also many different instance types. Moving from a lower instance type to the next one up usually means an approximate doubling of the number of CPU cores, memory, and storage, with an emphasis on the feature the family is optimized for. For example, the smallest instance type of a family might have a single core, 2 GiB of RAM, and 200 GiB of local storage. The next instance type up might have 2 CPU cores, 4 GiB of RAM, and 400 GiB of local storage. The one after that might have 4 CPU cores, 8 GiB of RAM, and 800 GiB of storage, and so on. The price usually scales the same way, doubling for each doubling in capacity available to the node.
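As a concrete illustration, the general-purpose m5 family follows this doubling pattern almost exactly. The vCPU and RAM figures below are the published m5 specs at the time of writing:

```python
# (vCPUs, RAM in GiB) for the first few m5 instance types.
m5_sizes = {
    "m5.large":   (2, 8),
    "m5.xlarge":  (4, 16),
    "m5.2xlarge": (8, 32),
    "m5.4xlarge": (16, 64),
}

# Each step up the family doubles both dimensions.
sizes = list(m5_sizes.values())
for (cpu, ram), (next_cpu, next_ram) in zip(sizes, sizes[1:]):
    assert next_cpu == 2 * cpu and next_ram == 2 * ram
```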
So using this as a baseline, if the requirements of a virtual machine double, it could make sense to move up to the next instance type from a given family that the virtual machine is currently using.
One major reason that someone might want to change the properties of a running virtual machine is that it may not have enough CPU cores, or may have too many, for the current project. When the virtual machine has more cores than the job needs, the cloud customer is essentially paying more for the machine than necessary. On the other side, if the virtual machine isn't provisioned with enough cores, the work is likely being completed at a much slower rate than desired.
There are many different CPU configurations available to virtual machines running in the Amazon Web Services cloud, as well as different styles of CPUs that may be better or worse for certain use cases. When resizing, you can scale the machine up to have nearly one hundred cores available for processing if the workload requires that many, or scale it down all the way to a single core.
Other tasks that run in the cloud need different amounts of memory, or RAM, in order to process their workloads properly. The amount of memory that a virtual machine needs to have to properly run a task can be hard to determine at the outset of a project. This is one reason why being able to resize a virtual machine within the Amazon Web Services cloud can be very beneficial.
If the current workload needs more memory to complete the task, it would make sense to stop the machine and update its type or family to one with more RAM than the current configuration. Conversely, if the virtual machine has more memory than the current workload needs, it makes sense to stop the machine and scale it down to an instance type or family with less memory. The main reason to do so is to save costs and not pay for something that isn't being used.
The instances currently available in the Amazon cloud allow customers to run memory configurations starting at 2 GiB and going all the way up to 24,576 GiB. So really, there should be something available for most workloads that people would like to run.
Some workloads that run in the cloud environment may need a lot of storage to successfully complete the tasks assigned to the instance. The Amazon Web Service instances usually have a certain amount of local storage available to them with some instance types and families having more available than others.
When considering resizing a virtual machine for storage, resizing really only makes sense when the local storage drives need to be used for the work. This is because extra EBS volumes can be added to an EC2 instance pretty much any time, on demand. The only real time you might resize the machine for local storage is when you need the raw speed of an internal NVMe type drive instead of networked storage like EBS.
If this applies to the work that would be running on your virtual machine, then there are many different local storage options available. For example, the i3en instance family can have up to 60 TB of local NVMe solid state drives attached. For most high storage jobs that need very fast local performance, this can be a really good option to look at.
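For comparison, adding EBS capacity to a running instance needs no resize at all. A hedged boto3-style sketch, where the gp3 volume type and `/dev/sdf` device name are example choices:

```python
def attach_scratch_volume(ec2, instance_id, availability_zone, size_gib,
                          device="/dev/sdf"):
    """Create a new gp3 EBS volume and attach it to a running instance.

    `ec2` is a boto3 EC2 client. The volume must be created in the same
    availability zone as the instance.
    """
    volume = ec2.create_volume(
        AvailabilityZone=availability_zone,
        Size=size_gib,
        VolumeType="gp3",
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId=instance_id,
        Device=device,
    )
    return volume["VolumeId"]
```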
Other users may be more concerned about the networking configuration of their instance. If more or less networking resources are required to accomplish the task at hand, the instance can be resized to meet those needs. If for example the current virtual machine being looked at is configured for low network capacity to reduce costs but the work needs faster networking to meet the demands of the users, it could make a lot of sense to stop the virtual machine and change the instance type or instance family so that it has medium or high network capacity available to it.
The instance type or family can also have an effect on the network bandwidth available to the instance. For example some instances may only be able to use up to 25 Gbps while others can use 100 Gbps. If the higher bandwidth is needed it can make sense to change the instance type to the kind that has this higher bandwidth allotment. Otherwise, all else being equal, if the bandwidth isn’t a real requirement for the current workload, it can make sense to scale the instance type down to a lower bandwidth version instance to save on cost, if the amount of CPU, RAM, and storage is good enough on the smaller sized instance.
What if, during the process of working with a virtual machine, it becomes apparent that certain specialized hardware is required? If this is the case, then it would likely make sense to resize the machine to another instance type or instance family that includes the required specialized hardware, such as GPUs or FPGAs.
For a GPU type workload, Amazon does have the ability to dynamically attach GPUs to an instance using their Elastic Graphics capability, but for some workloads it may make more sense to have a locally attached GPU for better performance. This is when you would want to resize the EC2 instance to another style of instance that has the locally attached GPU or GPUs. Within the Amazon Web Services cloud, instances are currently able to have up to eight local GPUs connected to them via NVIDIA NVLink.
However, if the task had started out with the thought that a local GPU was needed, but then it was later found out that this actually wasn’t the case, it would make sense to resize the instance so that it did not have a local GPU attached to it. The main reason for doing this would be to reduce the cost of the running virtual machine as the machines with local GPUs can be quite expensive to run.
One thing to note when resizing a virtual machine is that as soon as the instance is stopped, the underlying hardware the cloud customer was using is released back to the Amazon cloud to be recycled for another customer. This means that any data stored on the local instance storage will be gone when the virtual machine is started again. For some types of workloads this can be catastrophic; those workloads should use EBS backed storage instead, since that data survives the stop and start cycle when resizing an instance.
However, it can still make sense to use this local storage in certain use cases. For example if this space is just used for local scratch space to store intermediate calculations or temporary results, it would be alright to lose that data if the virtual machine was stopped. A good example of this type of use case is with the Apache Spark framework. As the framework is computing the different jobs, it may need some temporary storage, or a place to spill data to disk if there isn’t enough memory to keep everything in RAM. This data isn’t really needed for the final result as it is thrown away after the final result is calculated. This is the perfect use case for when the local ephemeral storage can be useful. This is especially true when looking at the speed and IOPS available to the NVMe drives in the cloud.
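As a sketch, pointing Spark's scratch space at the instance store drives is a one-line setting in `spark-defaults.conf`. The NVMe mount paths shown here are assumed examples and depend on how the drives were mounted on the instance:

```
# spark-defaults.conf: spill and shuffle data go to the local NVMe drives
spark.local.dir  /mnt/nvme0,/mnt/nvme1
```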
Another thing to consider when resizing a virtual machine in the AWS cloud is that if the virtual machine is not using an elastic IP, the public IP that is associated with the instance could be lost. This is because once the virtual machine is stopped in order to resize it, the public IP is released as it is no longer being used. This lets the Amazon cloud recycle the IP address to another customer that may need a public IP address at that point in time.
If you need to keep the public IP address associated with the virtual machine through the entire resize process, then you should attach an Elastic IP to the virtual machine. An Elastic IP is a static public IPv4 address allocated to the customer's AWS account rather than to any particular instance. The customer keeps the address until it is released, and is charged for the Elastic IP whenever it is not attached to a running virtual machine.
So when stopping the virtual machine that is using an Elastic IP for its public IP address, it can be assigned to the new underlying hardware that is started up after the machine instance type or instance family has changed. There will still be downtime during this process, but once the virtual machine is started again, it will still have the previous public IP address that was assigned to the Elastic IP.
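Allocating an Elastic IP and attaching it to an instance before the resize can also be sketched with boto3, again with the client passed in. Since the address belongs to the account, the association carries over the stop/start cycle:

```python
def give_instance_static_ip(ec2, instance_id):
    """Allocate a new Elastic IP and associate it with an instance.

    `ec2` is a boto3 EC2 client. Returns the allocated public IP, which
    stays with the account (and is billed while unattached), so the
    instance keeps it through a resize's stop and start.
    """
    allocation = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        InstanceId=instance_id,
        AllocationId=allocation["AllocationId"],
    )
    return allocation["PublicIp"]
```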
As you can see, based on all of this information, Amazon EC2 instances really can be resized in many different dimensions due to all of the available instance types and families that Amazon Web Services provides. As the needs and goals of a project change, the instance running the work for those goals can change with them. This is great news for anyone running workloads in the cloud as it makes things very flexible and it means that requirements for a virtual machine don’t need to be set in stone.
However, there are a few caveats that go along with this resizing: the instance must have its main operating system installed on an Elastic Block Store device, any data stored on the local ephemeral drives will be lost in the process, and the public IP address will change if it is not assigned via an Elastic IP. But even with these restrictions, resizing an EC2 instance in AWS is quite flexible.