If you care at all about the cloud, you probably know that the largest cloud provider, Amazon Web Services, held its annual re:Invent conference in Las Vegas this week. This was my second year attending, and given AWS's tendency to launch new services during the conference, I was eagerly anticipating what goodies they had in store this year. Oh boy, was I in for a treat!
Aptly named, re:Invent is unlike most big conferences held by other technology organizations because it focuses on education instead of sales. AWS uses the opportunity to offer participants plenty of breakout sessions, mixed with hackathons, keynote presentations and more, during which they learn not only how to use the plethora of services the platform has to offer but also how to build great scalable, fault-tolerant and secure architectures on the AWS cloud (lessons that often apply elsewhere too). The goal of the conference is to ensure that users of the platform strive to make something great with it and are given an opportunity to learn how others build their products on AWS, so they can improve their own solutions too.
This year’s re:Invent was much more special than the previous ones, however. AWS introduced so many new services focused on DevOps and the development lifecycle that I realized the company no longer sees itself as merely the largest cloud infrastructure provider; it has a new vision for the platform, and one that I feel very happy about. AWS has decided to do everything in its power to make it easier for developers to write, test and deploy their code, to ensure their deployments are secure, scalable and fault-tolerant, and to give them tools to review, monitor and manage their resources. A whole lineup of new services – CodeDeploy, CodePipeline, CodeCommit, Key Management Service and Config – joined established services like VPC, EC2, EBS, ELB, Auto Scaling, RDS, ElastiCache and Redshift. Those existing services already did a great job of providing value to organizations using the platform, but AWS wasn’t content with greatness: it has decided to clearly show that it wants to help customers achieve excellence by letting them focus on their products instead of worrying about how to get new features in front of their own customers faster.
This was only the tip of the iceberg, though. AWS revealed that it has been working on a new SQL database engine, Aurora, which not only delivers commercial-grade or better performance with no licensing fees or lock-in and is compatible with the open source MySQL engine, but also features near-mystical qualities like auto-scaling storage, mind-boggling fault-tolerance and fully automated, lightning-fast disaster recovery. All that at a price around 50% cheaper than an equivalent MySQL RDS deployment (once you account for the Multi-AZ cost of implementing fault-tolerance, and for the fact that you need to allocate more storage than you currently need to accommodate future growth, since adding storage disrupts the operation of the database).
To top it all off, AWS revealed what I strongly believe is the future of how cloud applications will be developed and deployed. The first service pointing in this direction is the EC2 Container Service, a management service that makes it easy to deploy Docker containers across groups of EC2 instances. It is a first step towards abstracting the deployed stack from the underlying infrastructure, and a smart way to utilize compute capacity cost-effectively without trading off performance. The second service, which I truly think is a window into the future of cloud computing, is called Lambda.
Deriving its name from the Greek letter used in the lambda calculus, the model of computation based on function abstraction, Lambda is perhaps best described as a platform that gives Excel macro-like functions magical powers to run in a distributed fashion, bringing along the event-driven model they were built on, the model that allowed Excel to be used in applications ranging from simple spreadsheets to complex form processing, accounting applications and even analytics and business intelligence. Another way to understand Lambda is to compare it to trigger-based database programming, which breathed life into the dry, static records stored in tables and made databases behave like living organisms that react to outside changes to their internal state: automatically generating new fields, aggregating data, versioning and controlling records, validating updates, creating a historical record for each update, and so on.
I know that both Excel macros and database triggers have a bad reputation because it is easy to abuse them and create solutions that are riddled with bugs, difficult to manage, or that cause the underlying Excel documents or SQL databases to slow down or even grind to a halt. But I think AWS Lambda holds great promise for taking the same event-driven, rule-based programming model implemented in Excel and SQL databases and extending it to any resource in the cloud, without the limitations and problems those earlier implementations ran into. Lambda not only frees you from worrying about compute resources by letting you simply focus on your code; it actually promises the same performance for your functions whether they are executed once a week or millions of times every hour. It also provides a programming model and built-in support for testing and managing your code deployments that should greatly reduce the opportunities for mistakes.
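To make the event-driven model concrete, here is a minimal sketch of a Lambda-style handler in Python. The event shape loosely follows an S3 "object created" notification; the bucket and object names are hypothetical, and a real function would of course do useful work (resize an image, index a document, update an aggregate) instead of returning a summary string.

```python
# A minimal sketch of an event-driven, Lambda-style handler.
# The event structure loosely mirrors an S3 object-created notification;
# bucket/key names below are hypothetical placeholders.

def handler(event, context=None):
    """React to each record in the event and return a summary per record."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # In a real deployment, this is where the reaction to the event
        # would happen: transform the object, update an index, notify, etc.
        results.append("processed s3://{0}/{1}".format(bucket, key))
    return results

# Invoked locally with a sample event to illustrate the model:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "photos"}, "object": {"key": "cat.jpg"}}}
    ]
}
print(handler(sample_event))  # → ['processed s3://photos/cat.jpg']
```

The appeal of the model is visible even in this toy: the function knows nothing about servers or scaling; it only declares how to react when an event arrives, and the platform handles everything else.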
As the conference wraps up and I pack to go home, I can’t help but reflect on what all of these services will mean for us here at TriNimbus, as well as for our customers who already run various workloads on AWS or are in the process of migrating there. I believe this week in Vegas brought a tectonic shift, moving the cloud to a new level of abstraction above the infrastructure roots it started from. Things like auto-scaling storage, effortless fault-tolerance, smart and efficient automated deployments across compute clusters, and the ability to produce and run code in an integrated fashion within a cloud deployment are only the beginning of the new cloud chapter. Whether we end up calling it Cloud 2.0 or coin another name for marketing departments to buzz about and everyone to claim support for, AWS has clearly shown it prefers to be a pioneer and innovator, not merely a leader.
I think the future looks very exciting and I am happy to be on this journey along with AWS and all of their partners and customers. I hope to see you join in too if you haven’t already.
At TriNimbus we have successfully implemented a DevOps as a Service model with our clients that allows us to partner closely with them and together design, implement and support processes and tools for building, deploying, monitoring and tuning their application stack. Our goal is to improve their ability to innovate and rapidly deliver new functionality to their customers. We’re excited to bring all of the new AWS services to them and take advantage of the new opportunities that AWS has opened for all of us this week.