It is always exciting to sit at the re:Invent keynote and hear about all the new services and features. This year is especially remarkable for me and my continuing work with container orchestration on AWS, thanks to the advent of two compelling new services: Elastic Container Service for Kubernetes (EKS), a managed Kubernetes control plane, and Fargate, container orchestration without provisioning or managing servers.
With the two newly announced services, there are now at least four ways to run container workloads on AWS. Let’s take a look at these two new services, compare them to what exists today, and see how you might take advantage of them.
The following is a quick overview of what we have seen so far about the new services and how they compare to the existing options (Elastic Container Service and Kubernetes on EC2). Each has its own specific advantages. Between the two new services and the two existing container orchestration options, AWS demonstrates its commitment to supporting all different types of container workloads.
Elastic Container Service for Kubernetes (EKS): Easily Operating Kubernetes on AWS
Elastic Container Service for Kubernetes is the managed Kubernetes control plane. Many have wanted it, many have expected it, and many cheered at its announcement. Here are a few signals leading up to the advent of Elastic Container Service for Kubernetes:
- Arun Gupta (@arungupta) joined AWS as the Principal Open Source Evangelist earlier in April and wrote a blog post that mentioned:
“Yes, we’d like you to use EC2 Container Service. But if you want to use Docker, Kubernetes, DC/OS or any other open source orchestration framework, so be it! We will continue to work with our partners and the open source community, including contributing to these projects, to make sure AWS remains the best place to run your containerized workloads.”
- Adrian Cockcroft (@adrianco) joined the governing board of the Cloud Native Computing Foundation (CNCF), famous for hosting Kubernetes along with other widely popular tools such as Prometheus and fluentd, when AWS joined the organisation as a platinum member back in August
- Kubernetes’ main alternatives, Docker and DC/OS, started embracing Kubernetes themselves
All of this led to speculation that the ‘customer-obsessed’ Amazon Web Services would answer the market’s obvious desire for managed Kubernetes on AWS, a request fulfilled by the EKS announcement today. With a managed Kubernetes control plane, the indispensable, and often most challenging, pieces of a Kubernetes cluster (the etcd configuration persistence tier and processes such as kube-apiserver, kube-scheduler and the controller manager) no longer carry an expensive management overhead, giving teams more time to focus on the valuable work of developing products instead of operating infrastructure, thereby reducing mean time to market.
Kubernetes on AWS – What’s the Appeal?
Running Kubernetes on AWS is attractive for numerous reasons. Arguably the most attractive characteristic of Kubernetes is its open architecture: because of the pluggable provider interface, Kubernetes can run both on premises and across different cloud providers, making the deployment code reusable regardless of the underlying infrastructure. It is a feature-rich container orchestration platform that supports widely different workloads, ranging from highly fluctuating stateless services to containerized persistence backends. Furthermore, it fully aligns with Linux security paradigms such as namespaces, cgroups and capabilities to create a highly scalable yet secure container deployment experience.
While Kubernetes has internal support for container autoscaling, service discovery and secrets management within the platform, running Kubernetes on AWS allows the platform to leverage some key AWS services such as autoscaling groups (underlying container host instances), ELB (load balancing), EBS volumes (storage) and Route53 (DNS). For an organisation that is already operating on AWS, many of the existing operating paradigms and provisioning tools can be extended to also run Kubernetes with relative ease.
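As an illustration of that AWS integration, declaring a Kubernetes Service of type LoadBalancer is enough for the cloud provider plugin to provision an ELB in front of the pods. The sketch below expresses a minimal Service manifest as a Python dict for readability; the app name and ports are placeholders, not anything from a real cluster:

```python
import json

# Minimal Kubernetes Service manifest, expressed as a Python dict.
# On AWS, 'type: LoadBalancer' makes the cloud provider integration
# create an ELB that routes to the matching pods.
# The name, selector and ports below are hypothetical placeholders.
service_manifest = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-frontend"},
    "spec": {
        "type": "LoadBalancer",  # triggers ELB provisioning on AWS
        "selector": {"app": "web-frontend"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

# Serialize to JSON, which kubectl accepts as readily as YAML
print(json.dumps(service_manifest, indent=2))
```

The same pattern holds for the other integrations mentioned above: a PersistentVolumeClaim can be satisfied by an EBS volume, and worker nodes typically live in an autoscaling group.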
Running Kubernetes on AWS is an opportunity to take advantage of the feature-rich container platform and robust scalable on-demand infrastructure – it seems to be the best of both worlds.
Managed Kubernetes on AWS – Drastic Reduction of Operating Pains
Many organisations have adopted Kubernetes for its promised advantages. However, during my diverse experience working with organisations implementing Kubernetes on AWS, I have seen too many engineers flummoxed by the steep learning curve. While Kops certainly helps ease the creation and upgrade of Kubernetes clusters on AWS, configuring the master nodes, troubleshooting network errors, managing etcd backup/restoration and performing other control-plane-related tasks to keep a cluster functional takes a lot of work and learning. When Kops cannot be used for various compliance reasons, it gets even more challenging. The mere exercise of working through the options to configure kube-apiserver, and deciding which of the myriad alpha and beta features to activate, is enough to make one’s head spin.
Without fully appreciating the complexity of maintaining a highly available Kubernetes control plane before adopting Kubernetes, engineers can quickly find themselves descending into a rabbit hole that is neither foreseen nor wanted. Unfortunately, there is no real magic in DevOps. Many engineers working on Kubernetes spend too much time reading documentation and user blogs, working through tutorials and source code, and catching up on the latest messages on Kubernetes Slack channels. On top of that, the rapidly changing platform, with its multitude of features, enhancements and add-ons, makes navigating the Kubernetes world even more difficult. To be sure, everyone likes the euphoria of solving a puzzle, but it is an abominably high price to pay for the initial requirement “to just deploy a few Docker images on something cloud agnostic.”
That is exactly why the announcement of EKS is such good tidings for people with only conventional “I want some way to run containers in a portable way” Kubernetes needs. With the managed Kubernetes control plane, a sizeable chunk of the headache suddenly disappears (no more etcd troubleshooting, hurrah!). Moreover, with official support on AWS, the plugins running on the platform are considerably more closely aligned with the AWS core design. IAM is ingrained into the EKS workflow, such that pods can assume finely tuned IAM roles without compromising the security of the underlying worker instances. Traffic across pods on different worker nodes closely follows the VPC network and security model, making the process very efficient. It will no longer require professional network engineering knowledge just to debug simple pod networking.
EKS is not just another managed Kubernetes service, but a Kubernetes setup that advertises the best way to run Kubernetes on AWS, with minimal fuss and maximal efficiency.
EKS will remain in limited preview for the next few months, with no GA date set. By General Availability, the latest stable version of Kubernetes is expected to be available along with the two previous minor versions (e.g. 1.9 along with 1.8 and 1.7). Automated patching is expected for patch releases (e.g. 1.8.0 to 1.8.1), while in-place upgrades between minor versions remain completely under the user’s control. Although worker nodes are not managed by EKS, CloudFormation templates and scripts will be provided to create and upgrade them accordingly. As EKS stays closely aligned with open-source Kubernetes and AWS are contributing to the Kubernetes project, new Kubernetes releases should be made available in EKS quickly. If a workload runs on an on-premises Kubernetes cluster on a version supported by EKS, it is expected to run smoothly on EKS too.
It almost sounds too good to be true, but I like this early Christmas present – a lot!
Fargate: the Marriage of Serverless and Containers
Fargate is announced as container orchestration with no infrastructure management. At launch it is supported on Elastic Container Service (ECS), and it will be supported in EKS in 2018. Even though it is not announced as such, given that the only resource requirements to run a workload on Fargate are CPU and memory settings, I cannot help associating the new service with AWS Lambda.
I first heard about Lambda back in 2014, when I was still working at my previous job and pretty new to AWS services. After learning about Lambda, my normally stoic manager suddenly started singing the service’s praises and evangelising all the workflows that could be run on demand with practically zero management (which was quite an eye-opening experience, on many different counts). “Serverless” and event-driven programming have since occupied an increasingly large part of my work at TriNimbus; because it is so easy to set up and trigger, Lambda has become the go-to service when one needs processing without having to care much about scaling and availability. It has been my favourite AWS service for a while now.
Fargate, in a nutshell, is Lambda for containers. Instead of developing function code in one of the supported languages, one can now run a container image directly in a serverless fashion. Provided a task definition, in the same format as the one that drives workloads on ECS, Fargate will run the containers purely driven by triggering events, without the user having to care about container instance types, volume sizes, or how and when to scale.
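To make that concrete, here is a sketch of what such a task definition could look like, expressed as a Python dict whose field names follow the ECS RegisterTaskDefinition API. The Fargate-specific values (task-level cpu/memory, the FARGATE compatibility flag) are illustrative assumptions based on the announcement, and the image and sizes are placeholders:

```python
import json

# Sketch of an ECS task definition intended for Fargate.
# 'requiresCompatibilities' plus task-level cpu/memory are the
# Fargate-relevant settings; image name and sizes are placeholders.
task_definition = {
    "family": "hello-fargate",
    "networkMode": "awsvpc",               # Fargate tasks use VPC networking
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",                          # task-level CPU units
    "memory": "512",                       # task-level memory (MiB)
    "containerDefinitions": [
        {
            "name": "app",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80}],
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```

Notice what is absent: no instance type, no AMI, no autoscaling group. Those are exactly the pieces Fargate takes off your plate.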
To develop a Lambda function, it is most effective to set up SAM Local to simulate the cloud environment locally. To develop a workload for Fargate, one does not have to set up anything extra: if you can run the image with a local docker run, you should be able to run the container on Fargate.
Simple is indeed beautiful.
Elastic Container Service (ECS): Born and Raised in AWS
So where does that leave Elastic Container Service?
While Fargate serves some straightforward event-driven workloads, Elastic Container Service provides a rich experience for running containers. ECS, a product born and raised in AWS, has the DNA of succinct integration with other AWS services embedded within it. Logging needs can be answered by the awslogs option (amongst other choices), which points straight to CloudWatch Logs; autoscaling and monitoring are concisely integrated with CloudWatch; and permissions and security are heavily backed by IAM and security groups. Since container instances are specialized EC2 instances, spot instances and spot fleets are also supported by ECS. On top of that, if one is looking for a CI/CD experience within AWS (as most people should be), Elastic Beanstalk can be used to manage ECS workloads easily.
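For example, pointing a container’s stdout/stderr at CloudWatch Logs is just a matter of a logConfiguration stanza in the container definition. The sketch below uses the real awslogs driver option names; the image, log group and region values are placeholders:

```python
import json

# Fragment of an ECS container definition routing container logs to
# CloudWatch Logs via the awslogs log driver.
# Image, log group, region and prefix are hypothetical placeholders.
container_definition = {
    "name": "app",
    "image": "myorg/app:latest",
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/app",          # CloudWatch Logs group
            "awslogs-region": "us-east-1",        # region of the log group
            "awslogs-stream-prefix": "app",       # prefix for log streams
        },
    },
}

print(json.dumps(container_definition, indent=2))
```

No log-shipping sidecar, no agent to install on the instance: the driver does the forwarding, and the CloudWatch integrations (metrics, alarms, retention) come along for free.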
Recent announcements reinforce how beautifully integrated ECS is with the rest of the AWS ecosystem. The awsvpc networking mode, which attaches a dedicated Elastic Network Interface (ENI) directly to a container, paves the road to more fine-grained network segregation in support of compliance requirements. Surely there are more features to come to make ECS the go-to container orchestration service on AWS.
K8s on EC2: Specific Requirements Need Specific Tools
Earlier in the blog, I wrote about the sometimes harrowing experience of operating Kubernetes on EC2. While EKS is going to address a notable share of Kubernetes needs on AWS, it will not, at the time of writing, support all possible requirements. The same pattern can be seen in other managed services, e.g. RDS. If you need a SQL Server database that fits within the operating parameters of RDS for SQL Server, you can and should leverage RDS; but if you have specific requirements for SQL Server Analysis or Reporting Services, which RDS for SQL Server does not support, the right solution is still running SQL Server on EC2 instances. It is the usual balance between control and delegation: managed services come with more limitations in exchange for peace of mind, while flexibility means assuming more operational responsibility. The same logic applies to running Kubernetes on AWS.
If a nondescript Kubernetes cluster works for you (“I only need an up-and-running cluster to support my Kubernetes deployment”), EKS is the most effective way to run it. If you have specific compliance (e.g. HIPAA), performance or management requirements, or anything that requires setting up specific features and behaviour of the cluster by passing parameters to the Kube API server, Kubernetes on EC2 is still your optimal solution.
However, as we have seen from ECS, new features and enhancements are added to the service at an ever increasing pace. What we see today in EKS will likely be greatly enhanced in the next few months.
Until then, the recommendation is to continue your great work running Kubernetes on EC2. At the same time, do apply for preview access and keep an eye on the evolution of EKS. Better still, since the EKS plugins are open-source projects, contribute to that evolution! Even when EKS does not work for you, running Kubernetes on EC2 is a rewarding experience because of the armful of knowledge you will gain across the full stack. There is also the very active Kubernetes-on-AWS community that continues delivering solutions to lower the entry barrier. Beyond that, there are AWS partners, TriNimbus being one, who can help you navigate the choices and challenges associated with Kubernetes. With the very sizable user base for Kubernetes on EC2, there is increasing convergence towards best-practice patterns. Even though it can be tricky, it is gratifying when one wades through the mud and emerges triumphant, exclaiming “I got it!!” You know what I mean, right?
All Container Workloads Are Supported on AWS
As is the norm for AWS, every service in the increasingly dynamic portfolio serves a specific workload. While there is no one-size-fits-all, each service can be customized into a fitting solution for your needs. With the two newly announced container orchestration services, EKS and Fargate, AWS is doubling down on supporting container workflows and making sure that all container workloads are welcome on AWS. There are different services for different workflows; just pick the one that works for you.
It is AWS’ annual re:Invent after all, and with so much exciting news to absorb, this is a quick impression of these two cool new services. I can’t wait for the following weeks and months to be filled with experiments, deep dives and future knowledge sharing. AWS have proclaimed that containers are here to stay by providing a myriad of choices to serve your workload. If you are curious or perplexed about where to start or how to evolve, consider getting in touch with external experts like TriNimbus who live and breathe containers and all things AWS. The fast evolution of containerised workflows, which is only going to get faster from here, holds a lot of excitement and benefit. If you have been wondering whether it is for you, just give it a try; you will most likely be pleasantly surprised by what you see!