Cloud Vision – anno 2016 - TriNimbus


Thank you Kima! Originally posted February 25, 2013 on Operating Dev Blog.

It is the year 2016 and Amazon is getting ready for their fifth re:Invent conference. The conference has attracted a lot of attention already – after all, one of the keynotes will be broadcast from aboard MV Ushuaia as Amazon prepares to launch their newest data centre, built next door to Palmer Station on Anvers Island in the Antarctic Peninsula region. With this, they will now have data centres on every continent on the planet! (Some say they are building one under water in the Pacific too, but these are only rumours for now.)

Interest in the conference has been growing since the first one in 2012, and this year Amazon decided to hold simultaneous events in several locations around the world. The audiences will follow the broadcast of the keynote from Antarctica using the latest video streaming service the AWS team plans to officially announce at the conference – AWS Cineplex – which is seriously threatening Netflix as the main provider of online movie streaming.

But the most interesting part is not the fact that Amazon built a data centre in Antarctica, nor that this could be the end of Netflix. Rumour has it that Amazon has a new trick up their sleeve that will leave all of their competitors scrambling to keep their market share. Ironically, if the rumour is true, it is something Netflix’s former CEO Reed Hastings, who mysteriously left the company in 2015 to start a new business dealing with big data, would have approved of. (Some folks think his departure from Netflix last year is a clear sign the company is going down after re:Invent.)

At the first re:Invent in 2012, Mr. Hastings compared the state of AWS and cloud computing to the state of programming in the early days, when programmers were expected to use assembly language instructions to code their applications. As he put it, “as wonderful as AWS is, we are still in the assembly language phase of cloud computing”. He went on to add that “developers shouldn’t have to be picking individual instance types, just as they no longer need to worry about CPU register allocation because compilers handle that for them.”

If the rumour is to be believed, it seems Amazon has finally managed to push cloud computing out of the assembly level into a completely new category. True, improvements had been made in the past so people no longer had to care about instance and storage types. Still, some regarded the ability to directly adjust the CPU speed or available memory as more complicated than choosing an instance type. After all, already back in 2012 Joyent had been able to provide a form of vertical scaling that made similar adjustments automatically to handle peak loads without any involvement from IT.

The days when IT folks with specialized performance and scalability testing and tuning skills were needed to size instances properly might be gone forever. We may all now be able to control our cloud environments by simply telling AWS how much we can afford to spend in $$. (Unfortunately, Arthur C. Clarke’s prediction that by 2016 all existing currencies would be abolished and a universal currency based on the “megawatt-hour” adopted has not come true yet.)

My sources tell me that AWS will announce a new service at re:Invent that will go under the name TruScale. AWS TruScale is an evolution of the old AWS AutoScaling service that was capable of automatically adding or removing instances in a group based on performance metrics that could be pre-configured by the IT operators. AutoScaling would then use the monitoring and alerting service CloudWatch to keep an eye on the performance of all instances in the group and could decide to add more to handle increased load or remove some to reduce cost when the load decreased.
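The AutoScaling loop described above can be sketched as a simple threshold policy. This is a toy model for illustration only – the thresholds, group sizes and the one-instance-at-a-time rule are my own assumptions, not the actual AWS implementation:

```python
def desired_capacity(current_instances, avg_cpu_percent,
                     scale_up_at=70, scale_down_at=30,
                     min_size=2, max_size=10):
    """Toy model of an AutoScaling policy.

    Add an instance when the group's average CPU (as a CloudWatch-style
    monitor would report it) crosses the upper threshold; remove one
    when it drops below the lower threshold; otherwise hold steady.
    The group always stays within [min_size, max_size].
    """
    if avg_cpu_percent > scale_up_at and current_instances < max_size:
        return current_instances + 1   # scale out to handle load
    if avg_cpu_percent < scale_down_at and current_instances > min_size:
        return current_instances - 1   # scale in to reduce cost
    return current_instances
```

For example, a group of 4 instances averaging 85% CPU would grow to 5, while the same group averaging 20% would shrink to 3.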

Six years ago, AutoScaling put AWS ahead of its competitors, including that unicorn of the cloud arena – the private cloud – which simply couldn’t compete with the pay-as-you-go model and was later discredited as the snake oil of the cloud. By providing a true ability to adjust resources as needed and keep cost and performance in balance with load, AutoScaling was a major contributor to the proliferation of startups that built products for the cloud and could handle the growth of their customer base as they scaled their business – without having to invest in infrastructure upfront or to learn about clustering, load balancing and other techniques outside their core competence.

The major difference between TruScale and AutoScaling is its capability to scale both vertically and horizontally at the same time. It can initially load your app onto a small number of instances with limited CPU and memory capacity, then constantly measure the use of computing resources along with load metrics such as connections and active sessions, and decide either to add more capacity to those instances (vertical scaling) or to add more instances (horizontal scaling). It can even deal with I/O and storage capacity needs and provision more IOPS or add more storage to your instances as needed.
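Since TruScale is (still) fiction, here is one way to imagine its decision logic: grow each instance while it can still get bigger, and fall back to adding instances once the largest size is reached. Every name, size and threshold below is invented for illustration; none of this is a real AWS API:

```python
# Hypothetical sketch of TruScale-style combined scaling.
SIZES_GB = [4, 8, 16, 32]  # per-instance memory steps (the vertical axis)

def truscale_step(instances, size_index, load_per_instance,
                  high=0.8, low=0.3):
    """Return a new (instance_count, size_index) pair.

    Prefer vertical scaling (bigger boxes) until the largest size is
    reached, then scale horizontally (more boxes). When the fleet is
    mostly idle, shed an instance instead of shrinking them.
    """
    if load_per_instance > high:
        if size_index < len(SIZES_GB) - 1:
            return instances, size_index + 1   # vertical: bigger instances
        return instances + 1, size_index       # horizontal: more instances
    if load_per_instance < low and instances > 1:
        return instances - 1, size_index       # idle: drop an instance
    return instances, size_index               # load is in range: no change
```

Under this rule, two busy 4 GB instances would first be bumped to 8 GB; only once they were already at 32 GB would a third instance be added.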

To configure TruScale you choose an initial load profile for your app that gives the service a model of its expected usage. On top of that you specify the minimum cost you can accept when the system is idle, the maximum hourly rate you’re willing to pay when the system scales up, and how you want to handle load beyond that limit – e.g. you can engage the AWS Throttle service to keep track of requests and slow down clients whose requests are more frequent or carry larger payloads.
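A configuration along those lines might look like the following. The keys, the Throttle hook and indeed the whole service are imagined; only the knobs themselves (a cost floor, a cost ceiling, an overflow policy) come from the description above:

```python
# Imagined TruScale configuration: none of these keys exist in any real
# AWS API; they simply mirror the knobs described in the article.
truscale_config = {
    "load_profile": "web-diurnal",   # model of the app's expected usage
    "min_cost_per_hour": 0.50,       # acceptable spend when idle ($)
    "max_cost_per_hour": 40.00,      # ceiling when fully scaled up ($)
    "beyond_limit": {                # what to do past the ceiling
        "action": "throttle",        # hand off to the (fictional) AWS Throttle
        "slow_down": ["frequent_clients", "large_payloads"],
    },
}

def hourly_budget_ok(projected_cost, config):
    """Check a projected hourly spend against the configured ceiling."""
    return projected_cost <= config["max_cost_per_hour"]
```

With this configuration, a projected spend of $12/hour would be allowed to scale further, while $55/hour would trigger the throttling policy instead.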

Cloud computing has come a long way since its debut over ten years ago. With services like TruScale, AWS is proving that one can build and run applications under the assumption of unlimited resources (at least if you can afford the cost of ‘unlimited’). We have already seen architectures built with this assumption in mind that allowed applications to be self-healing, to auto-scale to meet the load, and to be resilient to failures within individual components or services. Who knows what services like TruScale will do for new architectures in the years to come.

My goal in writing a fictional article about the cloud in the near future is to raise an important point that is easily missed by many thinking about the cloud. While moving CapEx to OpEx and reducing cost in general is an important driver for many to adopt the cloud, the true value is in its ability to realize the concept of unlimited resources. Today, in 2013, services like AutoScaling and Elastic Load Balancer already allow us to implement architectures with this assumption in mind, but they’re not easy for the average AWS user to configure and they leave much to be desired. I hope the article helps you appreciate the potential behind this assumption and makes it easy to see why private clouds can never deliver a true cloud experience.