If you migrate to the cloud as a pure “lift & shift”, you end up paying pointlessly for hot air.
If someone moves their on-premises servers to the cloud with the configuration unchanged, they will needlessly overpay for their entire time there. Cloud servers are sized differently, and for the same capacity you need far fewer compute resources. Getting the same performance from smaller servers is one of the many ways the cloud can save you money.
At present, servers in most companies are overdimensioned. For example, when we measured the load on the local infrastructure of two Slovak manufacturing companies, at the medium-sized one we found a median CPU load of just 10% across 30 servers. Such low values are common: when sizing your own equipment you have to allow for swings in load and for future growth, so you buy hardware with a significant reserve. In most corporations the load is even lower; at the second company we measured a median processor utilization below 3% across 300 servers.
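The median figures above come from utilization samples collected over a longer period. As a minimal sketch (the server names and percentages below are invented for illustration), the fleet-wide median can be computed like this:

```python
import statistics

# Hypothetical per-server average CPU % over the measurement window.
cpu_by_server = {
    "app01": 12.0,
    "app02": 8.5,
    "db01": 35.0,   # one busy database server does not move the median much
    "web01": 4.0,
    "web02": 6.5,
}

median_load = statistics.median(cpu_by_server.values())
print(f"median CPU load: {median_load}%")  # 8.5% for this sample fleet
```

The median, unlike the mean, is not dragged up by a handful of busy machines, which is why it paints a truer picture of how idle a typical server is.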
The cloud lets you add and remove processors or disk storage according to current need, so there is no reason to procure reserve capacity in advance. Why pay for hot air? Admittedly, adding a processor or memory requires a restart of the operating system, but that takes only a few minutes. With horizontally scalable architectures spread across several servers, you instead add or remove whole servers, and no outage occurs at all. Some public clouds can even create and delete servers automatically on demand; Azure, for instance, calls this feature “scale sets”.
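The automatic scaling described above typically boils down to threshold rules evaluated against a load metric. The sketch below is a toy version of such a rule, not Azure's actual scale-set logic; the function name, thresholds, and limits are all illustrative assumptions.

```python
def desired_instance_count(current: int, cpu_pct: float,
                           min_n: int = 2, max_n: int = 10,
                           scale_out_at: float = 70.0,
                           scale_in_at: float = 30.0) -> int:
    """Toy threshold rule: add a server when hot, remove one when idle."""
    if cpu_pct > scale_out_at:
        return min(current + 1, max_n)   # scale out, capped at max_n
    if cpu_pct < scale_in_at:
        return max(current - 1, min_n)   # scale in, floored at min_n
    return current                       # within the comfort band, do nothing
```

Because whole servers are added or removed behind a load balancer, the remaining servers keep serving traffic and no restart of a running machine is needed.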
Another reason we do not recommend simply flipping your servers into the cloud as a “lift & shift” is that per-core CPU performance in the cloud is often higher than on your own equipment, because cloud providers are constantly buying new machines.
Example: in your own environment you have a virtual server with four CPU cores at 17% load on an E5-2620 v2 processor, whose PassMark benchmark score is 1,299. In Azure you are considering the general-purpose D-series with an E5-2673 v4 processor, where the benchmark is substantially higher: 1,805. You therefore don't need four cores but just one, and the resulting load will be around 4 × 17% × 1,299 ÷ 1,805 = 49%, which is perfectly fine. We recommend that customers size their cloud infrastructure so that roughly 30 to 70% of total capacity is used. We also recommend measuring the existing load over a longer period and taking its standard deviation into account; for a server, a value above roughly 10 calls for a more conservative approach.
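The arithmetic in this example generalizes to any pair of processors once you have comparable benchmark scores. A minimal sketch (the function name and the 70% default target are our assumptions, the benchmark figures come from the example above):

```python
import math

def cloud_cores_needed(on_prem_cores: int, on_prem_load_pct: float,
                       on_prem_benchmark: float, cloud_benchmark: float,
                       target_load_pct: float = 70.0) -> tuple[int, int]:
    """Map an on-premises server onto cloud cores of a different speed.

    Returns (cloud cores required, resulting load % on those cores).
    """
    # Actual work done, expressed in benchmark units.
    work = on_prem_cores * (on_prem_load_pct / 100) * on_prem_benchmark
    # Fewest cloud cores that keep the load at or under the target.
    cores = math.ceil(work / (cloud_benchmark * target_load_pct / 100))
    load = round(100 * work / (cores * cloud_benchmark))
    return cores, load

# The example from the text: 4 cores at 17% on an E5-2620 v2 (1,299)
# mapped onto E5-2673 v4 cores (1,805).
print(cloud_cores_needed(4, 17, 1299, 1805))  # (1, 49)
```

Note that this maps average load only; before committing, widen the target for servers whose load swings a lot, as the standard-deviation rule above suggests.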
How can you reduce the price further? Cloud billing is often per minute, so if a server is used for testing or development you can switch it off at night and on weekends and save up to two thirds of its cost. Many cloud providers also let you reserve or prepay capacity for a fixed period, typically one or three years, in exchange for a substantial discount. You should also ask your provider whether you can transfer your existing Windows OS licences, which can likewise yield major savings; Azure, for instance, allows this.
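The "up to two thirds" figure follows directly from the hours in a week. A quick sketch (the function name and the 12-hour weekday schedule are illustrative assumptions):

```python
def weekly_hours(hours_per_weekday: int = 24, weekend_on: bool = True) -> int:
    """Billable hours per week for a server on the given schedule."""
    return 5 * hours_per_weekday + (2 * 24 if weekend_on else 0)

always_on = weekly_hours()                                   # 168 h/week
business  = weekly_hours(hours_per_weekday=12, weekend_on=False)  # 60 h/week

saving = 1 - business / always_on
print(f"saving: {saving:.0%}")  # 64%, i.e. close to two thirds
```

With per-minute billing the saving tracks the schedule almost exactly, which is why this trick is so effective for development and test environments that nobody uses outside working hours.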
There is one more new way to save. In a scenario resembling a Russian matryoshka, some clouds now let you run further virtual servers inside a virtual server. In Azure this feature is called “nested virtualization”, and it is well suited to running a large number of lightly loaded servers economically.
The cloud can deliver big savings, but you have to know how to map your existing hardware correctly onto cloud infrastructure. For the customer mentioned above, we initially proposed replacing 750 on-premises CPU cores with 380 cores in the cloud, which would raise the load to just 3.6%. How would it work out for you?