
Choosing the Right AWS Container Service to Run Your Applications and Workloads

Writer: Dibyojyoti Sanyal

In the previous post I explained the options a developer has when choosing an AWS compute service and which one to select in which scenario. The compute services differ mostly in where the compute instance runs and what purpose it serves. The developer is still responsible for deploying code on the compute instance and preparing the environment where the workload executes. Be it a web server, a static web site, a microservice or a batch job, the developer still needs to install all the software and supporting libraries, set the environment variables, deploy and run the application or service, and do much more besides. AWS container services provide different kinds of containerization solutions (e.g. Docker, Kubernetes) to make developers' lives easier. The benefit of containers is that you can bundle your code together with the environment it needs to run and deploy it anywhere. You can repeat the deployment as many times as you like, so you don't have to prepare a machine beforehand to run your workload. In this post I explain what your options are among the AWS container services and which one you may choose depending on your needs.


 



 

Elastic Container Registry (ECR)

As you might already know, containers are created from container images. These images need to be kept "somewhere" so that they can be pulled and containers launched when needed. This "somewhere" is called a container registry. Inside a registry you can create several repositories, which are like folders, and keep your images in them. One well-known public container registry is Docker Hub. Similarly, AWS provides its own container registry, named Elastic Container Registry (ECR). ECR lets you group repositories using namespaces, and repositories can be replicated across regions and across several AWS accounts.
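To make this concrete, here is a minimal sketch of creating a namespaced repository and pushing an image to it with the AWS CLI and Docker. The region, account ID, and the `team-a/my-service` name are placeholders, not values from this post:

```shell
# Create a repository under the "team-a" namespace (placeholder names).
aws ecr create-repository --repository-name team-a/my-service --region eu-central-1

# Authenticate Docker against the private ECR registry of this account.
aws ecr get-login-password --region eu-central-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com

# Tag the locally built image with the registry URI and push it.
docker tag my-service:latest 123456789012.dkr.ecr.eu-central-1.amazonaws.com/team-a/my-service:latest
docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/team-a/my-service:latest
```

Once pushed, the image URI can be referenced from ECS task definitions, EKS manifests, or App Runner.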


Elastic Container Service (ECS)

Elastic Container Service is a container management solution. It gives you a highly scalable environment to manage containers. You specify the containers you want to run in a task definition or in a service definition. A service definition allows you to run tasks (and in turn containers) continuously; web servers are typically run as services. In a task you specify which image(s) from ECR repositories should be used to launch the containers. An image contains the application or service (or part of it) together with the environment it needs to run, and you can run any kind of workload in these containers. You are responsible for managing the interactions between the containers, and between the containers and the outside world or other AWS services.

Tasks and services can be launched in a cluster of EC2 instances managed by you, or serverlessly in a cluster using AWS Fargate. With Fargate, the servers the containers run on are managed by AWS; as a developer you don't have to know where the containers are running, because the underlying infrastructure is entirely managed by AWS. In ECS, container orchestration and the container nodes are managed by ECS, and you can also choose whether to use auto scaling and load balancing. A cool feature of ECS is capacity providers and the capacity provider strategy. You can associate one or more capacity providers with an ECS cluster, and the capacity provider strategy defines how tasks are distributed across the capacity providers. A capacity provider determines how tasks are launched in the cluster, for example how the underlying infrastructure scales.
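The task-definition/service flow described above can be sketched with the AWS CLI. All names (cluster, family, subnet, image URI) are hypothetical placeholders, and the cluster is assumed to already exist:

```shell
# Write a minimal Fargate-compatible task definition (placeholder values).
cat > webserver-task.json <<'EOF'
{
  "family": "webserver",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/team-a/my-service:latest",
      "portMappings": [{"containerPort": 80}],
      "essential": true
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://webserver-task.json

# Run the task continuously as a service, letting Fargate manage the servers.
aws ecs create-service \
  --cluster demo-cluster \
  --service-name webserver \
  --task-definition webserver \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc],assignPublicIp=ENABLED}"
```

Swapping `--launch-type FARGATE` for a `--capacity-provider-strategy` is how you would hand placement decisions to capacity providers instead.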


Amazon ECS Anywhere

Amazon ECS Anywhere enables organizations to extend Amazon ECS capabilities to their on-premises infrastructure, reducing operational overhead while maintaining a familiar container orchestration experience. It allows external instances, such as on-premises servers or VMs, to be registered to an ECS cluster via the new EXTERNAL launch type. These instances are ideal for outbound data processing workloads but lack support for Elastic Load Balancing and service discovery. To integrate external instances, an IAM role (ECSAnywhereRole) and the AWS Systems Manager (SSM) Agent are required, ensuring secure communication with AWS APIs. The SSM Agent also refreshes IAM credentials every 30 minutes, maintaining security even during network disruptions. ECS Anywhere supports edge data processing, GPU-based workloads like ML and big data, and allows Windows container workloads using existing licenses. Pricing includes potential charges for AWS Systems Manager when managing over 1,000 instances per region, as well as data transfer fees. However, a free tier provides 2,200 instance hours per month for six months. Amazon ECS also supports container instance draining, ensuring seamless workload transitions by stopping pending tasks and replacing running tasks efficiently. Despite some limitations, ECS Anywhere empowers businesses to maintain consistency in container management across cloud and on-premises environments.
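A rough sketch of the registration flow for an external instance follows, assuming the `ECSAnywhereRole` IAM role mentioned above already exists; the cluster name and region are placeholders:

```shell
# Create (or reuse) the cluster that will hold the external instances.
aws ecs create-cluster --cluster-name hybrid-cluster

# Create an SSM activation so the on-premises server can register securely;
# this returns an activation ID and code.
aws ssm create-activation --iam-role ECSAnywhereRole

# On the on-premises server or VM, AWS's install script sets up the SSM Agent
# and ECS agent and registers the machine with the EXTERNAL launch type.
curl -o ecs-anywhere-install.sh \
  "https://amazon-ecs-agent.s3.amazonaws.com/ecs-anywhere-install-latest.sh"
sudo bash ecs-anywhere-install.sh \
  --cluster hybrid-cluster \
  --activation-id <id> --activation-code <code> \
  --region eu-central-1
```

After registration, tasks targeted at the EXTERNAL launch type run on the on-premises machine while being managed from the ECS control plane.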


Elastic Kubernetes Service (EKS)

EKS is a managed service that simplifies running Kubernetes on AWS without requiring users to install, operate, or maintain their own control plane or nodes. It ensures high availability by running and scaling the Kubernetes control plane across multiple AWS Availability Zones. Amazon EKS integrates with AWS services like Amazon ECR for container image management, Elastic Load Balancing for efficient traffic distribution, IAM for authentication, and Amazon VPC for network isolation. It runs up-to-date versions of Kubernetes and offers different deployment options, including EKS, EKS on Outposts, EKS Anywhere, and EKS Distro. Users can create and manage clusters using the AWS CLI, eksctl, and kubectl, or leverage the AWS Management Console. Proper IAM permissions are required to work with Amazon EKS roles and CloudFormation resources.
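As a hedged sketch of the eksctl workflow mentioned above (cluster and node-group names are hypothetical):

```shell
# Create a managed EKS cluster with a small worker node group.
# eksctl provisions the control plane and nodes and updates kubeconfig.
eksctl create cluster \
  --name demo-eks \
  --region eu-central-1 \
  --nodegroup-name workers \
  --nodes 2

# Verify the worker nodes have joined the cluster.
kubectl get nodes
```

The same cluster can equally be defined declaratively in an eksctl YAML config file and created with `eksctl create cluster -f`.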

An Amazon EKS cluster consists of a control plane and worker nodes. The control plane, managed by AWS, runs essential Kubernetes components like etcd and the Kubernetes API server. It is single-tenant, unique to each cluster, and runs on dedicated EC2 instances. The worker nodes, running in a user’s AWS account, connect to the control plane via an API server endpoint secured through IAM and Kubernetes Role-Based Access Control (RBAC). The control plane is provisioned across multiple Availability Zones, using an Elastic Network Load Balancer and elastic network interfaces in VPC subnets for connectivity.

The API server endpoint is publicly accessible but secured through IAM and RBAC. Users can enable private access, which provisions a Route 53 private hosted zone for secure internal communication. Amazon EKS provides a scalable and resilient platform for deploying Kubernetes workloads, leveraging AWS's cloud infrastructure while maintaining flexibility and security. EKS also supports autoscaling, for example scaling worker nodes with the Kubernetes Cluster Autoscaler or Karpenter.
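Switching the endpoint access modes described above is a single CLI call; the cluster name here is a placeholder:

```shell
# Restrict the Kubernetes API server endpoint to private access only,
# so worker nodes and clients reach it through the VPC.
aws eks update-cluster-config \
  --name demo-eks \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
```

Both flags can also be enabled together, keeping a public endpoint while letting in-VPC traffic stay internal.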


Amazon EKS Anywhere

Amazon EKS Anywhere is a deployment option for Amazon EKS that enables you to easily create and operate Kubernetes clusters on-premises. Both Amazon EKS and Amazon EKS Anywhere are built on the Amazon EKS Distro.


Amazon EKS Distro

Amazon EKS uses Amazon EKS Distro, a Kubernetes distribution built and maintained by AWS. Amazon EKS Distro makes it easier to create reliable and secure clusters. While EKS is a platform fully managed by AWS, you can use EKS Distro to install and manage a Kubernetes cluster yourself.


AWS App Runner

AWS App Runner is a fully managed service for deploying containerized web applications and APIs at scale without requiring infrastructure expertise. It seamlessly integrates with AWS services like databases, caching, and messaging queues to support applications. Developers can define and configure deployments using the App Runner console, API, CLI, or SDKs. The service automatically scales container instances up and down, with a default minimum of one instance and a configurable maximum limit to manage costs. App Runner allows users to specify memory, vCPU, and concurrency settings to optimize performance and cost efficiency. When idle, applications incur charges for provisioned memory, keeping them warm and preventing cold starts, while active instances are billed based on CPU and memory consumption. Users can easily pause and resume applications, reducing compute costs when not in use.
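The sizing and pause/resume behavior described above can be sketched with the App Runner CLI. The service name, image URI, and role ARN are all placeholders; a private ECR source needs an access role:

```shell
# Create a service from a private ECR image with explicit vCPU/memory sizing.
aws apprunner create-service \
  --service-name my-api \
  --source-configuration '{
    "ImageRepository": {
      "ImageIdentifier": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/team-a/my-service:latest",
      "ImageRepositoryType": "ECR"
    },
    "AuthenticationConfiguration": {"AccessRoleArn": "<ecr-access-role-arn>"},
    "AutoDeploymentsEnabled": true
  }' \
  --instance-configuration '{"Cpu": "1 vCPU", "Memory": "2 GB"}'

# Pause when not in use to cut compute costs, resume when needed.
aws apprunner pause-service --service-arn <service-arn>
aws apprunner resume-service --service-arn <service-arn>
```

With `AutoDeploymentsEnabled` set, pushing a new image tag to the repository triggers a fresh deployment automatically.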

App Runner supports automatic deployments by linking to code repositories or container registries, ensuring continuous integration. It includes built-in load balancing, auto-scaling, and logging with Amazon CloudWatch for monitoring performance. Fully managed TLS certificate management ensures secure communication without manual setup, with automatic certificate renewals. App Runner also provides networking flexibility, allowing services to connect securely within Amazon VPC environments. For custom domains, App Runner provides DNS configuration options, enabling custom URLs with validated SSL certificates through ACM. Amazon Route 53 can be used for domain management and traffic routing. With its automation, security, and scalability features, AWS App Runner is ideal for running microservices, APIs, and web applications efficiently while minimizing operational complexity.


AWS Fargate

AWS Fargate is a serverless compute engine for running containers without managing the underlying infrastructure. It eliminates the need to provision and maintain EC2 instances, allowing users to focus on deploying and running containerized applications. Fargate works with both Amazon ECS and EKS, providing a fully managed experience where AWS handles scaling, security, and networking configurations. Fargate is ideal for workloads that require flexibility, scalability, and minimal operational overhead. It is best suited for short-lived, burstable workloads, microservices architectures, and applications that need rapid scaling without manual intervention. Compared to Amazon ECS or EKS with EC2 instances, Fargate is preferable when users want to avoid capacity planning and infrastructure management. However, for workloads requiring greater control over instance configurations, GPU support, or cost optimization for consistently high workloads, ECS or EKS on EC2 may be more suitable.

Fargate is particularly useful for applications with unpredictable traffic, batch processing jobs, and event-driven applications. It integrates with AWS security services like IAM and VPC networking, ensuring isolation and compliance. By offering a pay-as-you-go pricing model based on vCPU and memory usage, AWS Fargate optimizes cost efficiency for dynamic container workloads.
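For the batch-processing use case above, a one-off Fargate task can be launched without provisioning any servers; cluster, task-definition, and subnet names are placeholders:

```shell
# Run a short-lived batch job on Fargate; AWS provisions and tears down
# the compute, and billing covers only the vCPU and memory consumed.
aws ecs run-task \
  --cluster demo-cluster \
  --task-definition batch-job \
  --count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc],assignPublicIp=DISABLED}"
```

Contrast this with the EC2 launch type, where the same command would require registered container instances with enough free capacity.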


With this information you will be better equipped to choose a container service, depending on whether you want maximum control over your infrastructure and deployment or maximum ease of use.



