There are a lot of AWS customers who run Kubernetes on AWS. In fact, according to published survey data, 63% of Kubernetes workloads run on AWS. While AWS is a popular place to run Kubernetes, there is still a lot of manual configuration that customers need to manage their Kubernetes clusters. The user has to install and operate the Kubernetes masters and configure a cluster of Kubernetes workers. To achieve high availability, the user has to run at least three Kubernetes masters across different Availability Zones (AZs). Each master needs to be configured consistently, share information, load balance, and fail over to the other masters if one experiences a failure. Then, once everything is set up, the user still has to deal with upgrades and patches of the master and worker software. Explore this article to learn more about the Amazon EKS release.
ABOUT AMAZON EKS
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a fully managed service that makes it easy for the user to run Kubernetes on AWS without having to be an expert in managing Kubernetes clusters. There are a few things developers will really like about this service. Firstly, Amazon EKS runs the upstream version of the open-source Kubernetes software, so the user can use all the existing plugins and tooling from the Kubernetes community. Users can easily migrate their Kubernetes applications to Amazon EKS with zero code changes. Secondly, Amazon EKS automatically runs Kubernetes with three masters across three AZs to protect against a single point of failure. This multi-AZ architecture provides resiliency against the loss of an AWS Availability Zone.
Thirdly, Amazon EKS automatically detects and replaces unhealthy masters, and it provides automatic version upgrades and patching for the masters. Amazon EKS also integrates with AWS services such as Elastic Load Balancing for load distribution, IAM for authentication, Amazon VPC for isolation, and AWS CloudTrail for logging.
HOW IT ACTUALLY WORKS
Amazon EKS integrates IAM authentication with Kubernetes RBAC, in collaboration with Heptio. The user can assign RBAC roles directly to each individual IAM entity, allowing fine-grained control over access permissions to their Kubernetes masters. This allows users to easily manage their Kubernetes clusters using standard Kubernetes tools such as kubectl.
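As a rough illustration of the IAM-to-RBAC mapping described above, EKS reads entries of the following shape from the cluster's aws-auth ConfigMap. This small Python sketch only builds such an entry as a plain dictionary; the role ARN, username, and group below are hypothetical examples, and nothing here calls AWS or Kubernetes.

```python
# Sketch: build a mapRoles entry of the kind EKS reads from the
# aws-auth ConfigMap to map an IAM role onto Kubernetes RBAC groups.
# The ARN, username, and group below are hypothetical examples.

def map_iam_role(role_arn, username, groups):
    """Return a mapRoles entry mapping an IAM role to RBAC groups."""
    return {
        "rolearn": role_arn,
        "username": username,
        "groups": list(groups),
    }

entry = map_iam_role(
    "arn:aws:iam::111122223333:role/EKSAdminRole",  # hypothetical ARN
    "eks-admin",
    ["system:masters"],
)
print(entry["groups"])  # → ['system:masters']
```

Once an entry like this is in place, any principal assuming that IAM role is treated by the cluster as the mapped Kubernetes user with those RBAC groups.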
The user can also use AWS PrivateLink if they want to access their Kubernetes masters directly from their Amazon Virtual Private Cloud. With PrivateLink, users can reach their Kubernetes masters and the Amazon EKS service endpoint straight from their own Amazon VPC without using public IP addresses.
Lastly, AWS has released an open-source CNI (Container Network Interface) plugin that anyone can use with their Kubernetes clusters on AWS. This allows the user to natively use Amazon VPC networking with their Kubernetes pods. With Amazon EKS, launching a Kubernetes cluster is as easy as a few clicks in the AWS Management Console. Amazon EKS handles the rest: the upgrades, the patching, and high availability.
AWS Greengrass is software that extends AWS cloud capabilities to local devices, making it possible to collect and analyze data closer to the source of information, while devices also communicate securely with each other on local networks. More precisely, developers who use AWS Greengrass can author serverless code in the cloud and conveniently deploy it to devices for local execution of applications.
When Andy Jassy announced the release of AWS Greengrass in Limited Preview back in November at AWS re:Invent 2016, the show floor was vibrant for hours. AWS Greengrass promises to power what is known as “the edge” by allowing connected devices to run local compute, messaging, data caching, and sync capabilities securely, even when they are not connected to the Internet.
And while connectivity stands at the core of IoT, a user who is developing or deploying connected devices cannot always count on a steady connection. AWS Greengrass also lets one device act as a hub for others, so they can save energy by keeping data connections within a local network.
The latest announcement of general availability for AWS Greengrass is a big deal for IoT developers. It is one of the important new components in the cloud toolkit that powers IoT, and it gives us a few things to consider.
KEY TAKEAWAYS FROM THE AWS GREENGRASS ANNOUNCEMENT
SOFTWARE SKILLS ARE NOW PORTABLE TO IOT
By moving more of what software developers do to the edge of the network, Greengrass marks a major inflection point.
The enormous software developer community is already moving into IoT, and tools like AWS Greengrass will help it shape the space over time. Developers' familiarity with programming in languages such as Python and writing Lambdas to process streaming data in-flow now becomes applicable on the other side of the “choke point.”
CONNECTIVITY IS MORE IMPORTANT
When it comes to connectivity, IoT developers need solutions that are simple, reliable, and flexible.
Existing product offerings rarely deliver on all three, and they often burden projects with significant commercial and technical constraints even during development and testing. Signing complicated contracts in the prototype stage, when there is no guarantee that devices will even work as designed, can still require years-long upfront commitments.
The communication between a device and the cloud comes with the burden of retransmitting repetitive, undifferentiated data related to security, encryption, and packet headers.
A significant portion of IoT cellular opex (25% to as much as 75% or more in many cases) is related directly to this overhead, or “connectivity tax.”
From both a performance and a cost perspective, it is highly preferable to transmit only the meaningful data.
THE TOTAL PACKAGE FOR IOT
Device-based environments such as AWS Greengrass promise a new, easier IoT featuring:
Familiar development tools.
Pre-processing on constrained devices.
Decreased reliance on cellular data connectivity (or newer variants such as RPMA, NB-IoT, CAT-M).
A flexible, take-what-you-need, pay-as-you-go fee structure.
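The pre-processing point above, and the “connectivity tax” discussed earlier, can be made concrete with a small sketch: instead of retransmitting every raw reading, an edge function keeps only the readings that carry new information. The reading format and threshold here are illustrative assumptions, not part of any AWS Greengrass API.

```python
# Sketch: local pre-processing on an edge device. Instead of
# retransmitting every raw reading, keep only readings that differ
# from the last transmitted value by more than a threshold.
# The reading format and threshold are illustrative assumptions.

def filter_readings(readings, threshold=0.5):
    """Return only the readings worth sending upstream."""
    meaningful = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            meaningful.append(value)
            last_sent = value
    return meaningful

# Six raw temperature samples collapse to three meaningful ones:
print(filter_readings([20.0, 20.1, 20.2, 21.0, 21.1, 25.0]))
# → [20.0, 21.0, 25.0]
```

Running logic like this locally is exactly what lets a device cut its cellular payload to the data that actually matters.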
If you are around DevOps people or anyone working with deployments in your industry, then you must have heard the name blue-green deployment. Most organizations around the world use this technique to achieve minimal downtime for their respective products. The technique is old but still one of the finest in use. So, explore this article and learn what blue-green deployment actually is.
WHAT EXACTLY IS BLUE-GREEN DEPLOYMENT?
A blue-green deployment is a change management strategy for releasing software code. Blue-green deployment is also referred to as A/B deployment; it requires two identical hardware environments that are configured exactly the same way. While one environment is active and serving end users, the other environment remains idle.
Blue-green deployments are often used for consumer-facing applications and applications with a serious uptime requirement. New code is released to the inactive environment, where it is thoroughly tested. Once the code has been assessed, the team makes the idle environment active, typically by adjusting a router configuration to redirect application program traffic. The process gets reversed when the next software iteration is ready for release.
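The release-and-flip cycle described above can be sketched as a tiny state machine. This is purely illustrative: real deployments flip a router or load balancer, not a Python dict, and the deploy-and-test step is a placeholder comment here.

```python
# Sketch of the blue-green flip as a tiny state machine: "active"
# names the environment serving users; the idle one receives the
# new release, and then the roles swap. Purely illustrative.

def release(state, new_version):
    """Deploy new_version to the idle environment, then flip traffic."""
    idle = "green" if state["active"] == "blue" else "blue"
    state[idle] = new_version   # in reality: deploy, then test here
    state["active"] = idle      # in reality: update router/DNS
    return state

env = {"active": "blue", "blue": "v1", "green": None}
release(env, "v2")
print(env["active"])  # → green
```

A second call to `release` reverses the roles again, which is exactly the "process gets reversed" step for the next iteration.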
THE STEP-BY-STEP PROCESS OF BLUE-GREEN DEPLOYMENT
To demonstrate this concept, the user first needs to set up two server environments, each with a web server installed. Here, the web server stands in for an entire application stack, which could include a load balancer, multiple web servers, and distributed or replicated databases in the backend; a single web server is simply the smallest environment that can demonstrate the release pattern.
CREATE A LOCAL APPLICATION
We will start by creating the “application”. This is simply an index page that the web servers can display; it allows the user to demonstrate different “versions” of the app without the overhead of actual development. On the local system, install Git using the platform's preferred method. If the user's local machine is running Ubuntu, the user can install it by typing:
local$ sudo apt-get update
local$ sudo apt-get install git
The user needs to set a few configuration options in order to commit to a Git repository. The user can set a name and email address by typing:
local$ git config --global user.name "Your Name"
local$ git config --global user.email "your_email@example.com"
With the configuration set, the user can create a directory for their new application and move into it:
local$ mkdir ~/sample_app
local$ cd ~/sample_app
Initialize a git repository in our application directory by typing:
local$ git init
Now, create the index.html file that represents the application:
local$ nano index.html
Inside the file, add a simple HTML page that identifies this as “version 1” of the application. Save and close the file when finished.
To finish up, the user can add the index.html file to the git staging area and then commit by typing:
local$ git add .
local$ git commit -m "initializing repository with version 1"
CONFIGURE THE BLUE AND GREEN WEB SERVERS
Next, work on setting up green and blue environments with functional web servers. Log into your servers with your sudo user to get started.
HOW DOES BLUE GREEN DEPLOYMENT WORK WITH AWS?
DNS routing is a common method for blue-green deployments. With DNS, the user can easily switch traffic from the blue environment to the green and vice versa if a rollback is needed. Route 53 can be used to implement the switch when bringing up the new “green” environment. The target of the switch could be a single EC2 instance or an entire ELB; the resource record set has to be updated so that it points to the domain or subdomain of the new instance or the new ELB. This works for a wide variety of environment configurations, as long as the endpoint is a DNS name or an IP address.
As an alternative to this DNS approach, the user can also use Route 53 with designated resource record sets. Traffic can be switched from the blue environment to the green environment by updating the designated record of the record set, and the user can easily roll back to the blue deployment in case of an error by updating the DNS record again.
Another approach to performing the blue-green switch is using weighted distribution with Route 53. Here the user can shift traffic based on the weighting of each environment. Amazon Route 53 enables the user to define a percentage of traffic for the green environment and gradually update the weights until the green environment carries the full production traffic. This method provides the ability to perform canary analysis, slowly introducing a small percentage of production traffic to the new environment.
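As a sketch of the weighted approach, the Python below only builds the change batch one would pass to Route 53's ChangeResourceRecordSets API; no AWS call is made, and the domain names, record targets, and TTL are hypothetical examples.

```python
# Sketch: build the UPSERT change batch for a weighted blue/green
# traffic shift with Route 53. Only the request body is constructed;
# the domain, targets, and TTL are hypothetical examples.

def weighted_change_batch(domain, blue_weight, green_weight):
    def record(identifier, weight, target):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "CNAME",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": target}],
            },
        }
    return {"Changes": [
        record("blue", blue_weight, "blue-elb.example.com"),
        record("green", green_weight, "green-elb.example.com"),
    ]}

# Start canarying 10% of traffic onto the green environment:
batch = weighted_change_batch("app.example.com", 90, 10)
print(len(batch["Changes"]))  # → 2
```

Gradually re-running this with weights of 75/25, 50/50, and finally 0/100 is the canary progression the paragraph above describes.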
AWS offers several methods to cost-effectively deliver live video content on the cloud. One approach combines AWS Elemental Cloud, a service that enables customers to rapidly deploy multiscreen offerings for live and on-demand content, with other AWS services to build a highly resilient and scalable architecture that delivers live content worldwide.
AWS provides a range of services to help the user develop mobile applications that can scale to hundreds of millions of users and reach global audiences. With AWS, the user can get started quickly, ensure high quality by testing on real devices in the cloud, and measure and improve user engagement. So let's dive deep and learn what AWS Mobile Hub is and how to set up a Mobile Hub project.
What exactly AWS Mobile Hub is:
AWS Mobile Hub provides an integrated console experience that enables the user to swiftly create and configure powerful mobile app backend features and integrate them into their mobile app. The user can create a project by selecting the features to add to their application.
When the user builds their project for iOS Objective-C, iOS Swift, or Android, Mobile Hub automatically provisions and configures all of the AWS service resources that the app's features require. Mobile Hub then guides the user through integrating the features into their app code and downloading a fully working quickstart app project that demonstrates those features.
After the mobile app is built, the user can use Mobile Hub to test the app, then monitor and visualize how it is being used.
AWS Mobile Hub enables the user to select the region in which their project’s resources will be created.
When using AWS Mobile Hub, the user pays only for the underlying services that Mobile Hub provisions, based on the features they have chosen in the Mobile Hub console.
Setting up AWS Mobile Hub: Sign up for AWS
To use AWS Mobile Hub, the user needs an AWS account. The account has access to all available services, but users are charged only for the services they use. If the user is a new AWS customer, they can get started with the AWS Free Tier.
Creating an IAM user
For better security, it is recommended that the user not use their AWS root account to access Mobile Hub. Instead, the user can create an AWS Identity and Access Management (IAM) user, or use an existing IAM user, in their AWS account and then access Mobile Hub with it. If the user has already signed up for AWS but has not created an IAM user for themselves, they can create one using the IAM console. First, create an IAM administrator group, then create a new IAM user and assign it to that group.
Enabling AWS Mobile Hub
AWS Mobile Hub administers AWS resources for mobile app projects on behalf of the customer. This includes automation that creates AWS Identity and Access Management roles for mobile app users and updates their permissions based on the features that are enabled in a mobile app project. Because these operations require administrative privileges, only a user with administrative privileges may enable Mobile Hub. These are the steps an administrative user must take to enable AWS Mobile Hub in an AWS account; this only needs to be done once.
To enable Mobile Hub in an AWS account:
Navigate to the AWS Mobile Hub console at https://console.aws.amazon.com/mobilehub/.
Select Get Started.
Select Yes, grant permissions.
Signing in to Mobile Hub and Creating the user’s Project:
A Mobile Hub project is a logical workspace which contains the features which the user chooses to incorporate into their mobile app. The user can create as many projects as they wish.
To create a Mobile Hub project
Select Get Started or Create new project.
For Project name, type a name for the project.
Select Create project.
Getting Started: App Content Delivery:
The App Content Delivery feature enables the user to store app assets, such as resource or media files, so that a user can download and cache them within their app. Mobile Hub offers two choices for distributing these files: from a single location using an Amazon S3 bucket, or distributed through a global content delivery network using Amazon CloudFront.
The Cloud Logic feature lets the user build backend services using AWS Lambda functions that the user can call from their mobile app. Using Cloud Logic, a user can run code in the cloud to process business logic for their apps and share the same code for both iOS and Android apps. The Cloud Logic feature is powered by AWS Lambda functions, which allow the user to write code without worrying about managing frameworks and scaling backend infrastructure.
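A Cloud Logic function of the kind described above boils down to a single handler that both the iOS and the Android client call with the same payload. The minimal Python sketch below assumes a hypothetical event shape with "operation" and "payload" keys; it is not a Mobile Hub contract, just an illustration of sharing one backend across platforms.

```python
import json

# Sketch: a single Lambda-style handler that both an iOS and an
# Android client can call with the same JSON payload. The event
# shape ("operation", "payload") is a hypothetical example.

def handler(event, context=None):
    operation = event.get("operation")
    if operation == "greet":
        name = event.get("payload", {}).get("name", "world")
        return {"statusCode": 200,
                "body": json.dumps({"message": "Hello, " + name})}
    return {"statusCode": 400,
            "body": json.dumps({"error": "unknown operation"})}

result = handler({"operation": "greet", "payload": {"name": "iOS client"}})
print(result["statusCode"])  # → 200
```

Because the business logic lives behind one handler, neither mobile client needs its own backend implementation.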
The Push Notifications feature enables the user to send push notification messages to their iOS and Android apps using Amazon Simple Notification Service (SNS). The user can integrate with Apple and Google messaging services by providing credentials that are issued by those services. They can send messages directly to individual devices, or publish messages to SNS topics.
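The per-platform message shaping can be sketched as follows. This builds the multi-platform JSON body SNS accepts when publishing with MessageStructure set to "json", where each platform key maps to a JSON-encoded string; the alert text is an illustrative example and no SNS call is made here.

```python
import json

# Sketch: build the multi-platform message body that Amazon SNS
# accepts when MessageStructure="json". Each platform key maps to a
# JSON-encoded string. The alert text is an illustrative example.

def push_message(text):
    return json.dumps({
        "default": text,                                   # fallback
        "APNS": json.dumps({"aps": {"alert": text}}),      # Apple
        "GCM": json.dumps({"data": {"message": text}}),    # Google
    })

msg = push_message("New content available")
print("APNS" in json.loads(msg))  # → True
```

One call to SNS with a body like this delivers a correctly formatted notification to both platforms.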
User Data Storage
The Mobile Hub User Data Storage feature creates and configures four folders for each user inside an Amazon S3 bucket belonging to the app, with distinct permission policies provisioned for each folder type.
Once a Mobile Hub project has been created, Mobile Hub allows the user to return and modify its features and configurations. All of these features, combined with AWS services, client SDKs, and sample code, make it fast and easy to add new capabilities to the user's mobile app. With the fast turnaround of information, whether it is social, media, or business, AWS Mobile Hub provides an ideal platform for mobile apps and services to work seamlessly together.
AWS Lambda is a compute service that lets the user run code without provisioning or managing servers. AWS Lambda executes the code only when it is needed and scales automatically. The user pays only for the compute time they consume; there is no charge when the code is not running. With AWS Lambda, one can run code for virtually any type of application or backend service. AWS Lambda runs the code on a high-availability compute infrastructure and performs all the administration of the compute resources, such as server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring, and logging. All the user needs to do is supply the code in one of the languages that AWS Lambda supports (such as Node.js, Java, C#, and Python).
The user can use AWS Lambda to run code in response to events, such as changes to data in an Amazon S3 bucket or an Amazon DynamoDB table; to run code in response to HTTP requests using Amazon API Gateway; or to invoke their code using API calls made through the AWS SDKs. With these capabilities, the user can use Lambda to easily build data processing triggers for AWS services like Amazon S3 and Amazon DynamoDB, process streaming data stored in Amazon Kinesis, or create their own backend that operates at AWS scale, performance, and security.
AWS LAMBDA RUNS JAVA CODE IN RESPONSE TO EVENTS
Many AWS users are using AWS Lambda to build clean and straightforward applications that handle image and document uploads, process log files from AWS CloudTrail, handle data streamed from Amazon Kinesis, and so forth. With the newly launched synchronous invocation capability, Lambda is becoming a favourite choice for building mobile, web, and IoT backends.
LAMBDA FUNCTION IN JAVA
AWS Lambda has become even more useful by giving users the ability to write their Lambda functions in Java.
The user’s code can make use of Java 8 features along with any desired Java libraries. The user can also use the AWS SDK for Java to make calls to the AWS APIs.
AWS provides two libraries specific to Lambda: aws-lambda-java-core, with interfaces for Lambda function handlers and the context object, and aws-lambda-java-events, containing type definitions for AWS event sources (Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), Amazon DynamoDB, Amazon Kinesis, and Amazon Cognito). Users can author their Lambda functions in one of two ways. Firstly, they can use a high-level model that uses input and output objects:
public OutputType lambdaHandler(InputType input, Context context) throws IOException;
public OutputType lambdaHandler(InputType input) throws IOException;
If the user does not want to use POJOs or if Lambda’s serialization model does not meet the user’s needs, they can use the Stream model. This is a bit lower-level:
public void lambdaHandler(InputStream input, OutputStream output, Context context)
The class in which the Lambda function is defined should include a public zero-argument constructor, or the handler method should be defined as static. Alternatively, the user can implement one of the handler interfaces (RequestHandler::handleRequest or RequestStreamHandler::handleRequest) available within the Lambda core Java library.
PACKAGING, DEPLOYING, AND UPLOADING
Users can continue to use their existing development tools. In order to prepare their compiled code for Lambda, users must create a ZIP or JAR file that contains their compiled code (CLASS files) and any desired JAR files. The handler functions should be stored in the usual Java directory structure, and the JAR files must be inside a lib subdirectory. To make this process easy, AWS has published build approaches using popular Java deployment tools such as Maven and Gradle.
Specify a runtime of “java8” when the user uploads their ZIP file. If they implement one of the handler interfaces, then they have to provide the class name. Otherwise, they have to provide the fully qualified method reference (e.g. com.mypackage.LambdaHandler::functionHandler).
AWS Lambda (Amazon Web Services Lambda) is a compute service that runs developers' code in response to events and automatically manages the compute resources for them, making it easy to build applications that respond quickly to new information. It runs the code within milliseconds of a trigger. You upload your code to AWS Lambda and the service runs it on your behalf using AWS infrastructure; the uploaded code, together with its configuration, is called an AWS Lambda function. The service also takes care of provisioning and managing the servers used to run the code.
The user needs to supply code in one of the languages supported by AWS Lambda, such as Node.js, Python, Java, and C#.
AWS Lambda executes the developer's code only when needed and scales automatically. A user can use Lambda to build data processing triggers for AWS services like Amazon S3 and Amazon DynamoDB, or can create their own backend that operates at AWS scale, performance, and security.
WHEN SHOULD A USER USE AWS LAMBDA?
AWS Lambda is an optimal compute platform for many application scenarios in which the user can write the application code in a language supported by AWS Lambda and run it within the standard runtime environment and resources provided by Lambda.
When using AWS Lambda, a user is responsible only for the code. Lambda manages and balances the memory, CPU, network, and other resources. These constraints enable AWS Lambda to perform operational and administrative activities on the user's behalf, including provisioning capacity, applying security patches, deploying the code, and monitoring and logging of Lambda functions.
HOW AWS LAMBDA WORKS
While building applications in AWS Lambda, the core components are Lambda functions and event sources. An event source is an AWS service or custom application that publishes an event, and a Lambda function is the custom code that processes the events. Consider the following scenarios.
File Processing: For instance, suppose you have a photo sharing application. People use your application to upload photos, and the application stores the photos in an Amazon S3 bucket. Your application then creates a thumbnail version of each photo and displays it on the user's profile page. In this scenario, you may choose to create a Lambda function that creates thumbnails automatically. Your Lambda function code can read the photo object from the S3 bucket, create a thumbnail version, and save it in another S3 bucket.
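The thumbnail trigger described above receives an S3 event whose records carry the bucket name and object key. The sketch below shows only the handler's event-parsing step in Python; the actual image resizing and the bucket names are omitted or hypothetical.

```python
# Sketch: extract (bucket, key) pairs from an S3 put event, the
# first step of the thumbnailing Lambda function described above.
# The resizing step is omitted; bucket/key values are examples.

def objects_from_s3_event(event):
    """Return (bucket, key) for every record in an S3 event."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                             "object": {"key": "uploads/cat.jpg"}}}]}
print(objects_from_s3_event(event))  # → [('photos', 'uploads/cat.jpg')]
```

Each extracted pair would then be fetched, resized, and written to the destination bucket.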
Data and Analytics: Suppose you are building an analytics application and storing raw data in a DynamoDB table. When you write, update, or delete items in the table, DynamoDB Streams can publish item update events to a stream associated with the table. The event data provides the item key, event name, and other details as needed. A user can write a Lambda function to generate custom metrics by accumulating the raw data.
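The metric-accumulation idea can be sketched as a simple fold over stream records. The record shape below mirrors only the eventName field that DynamoDB Streams records carry (INSERT, MODIFY, or REMOVE); the metric chosen, a count per event type, is an illustrative example.

```python
from collections import Counter

# Sketch: accumulate a custom metric (a count per event type) from
# DynamoDB Streams records. Each record carries an eventName of
# INSERT, MODIFY, or REMOVE; the metric chosen is illustrative.

def count_events(records):
    return Counter(r["eventName"] for r in records)

records = [{"eventName": "INSERT"}, {"eventName": "INSERT"},
           {"eventName": "REMOVE"}]
print(dict(count_events(records)))  # → {'INSERT': 2, 'REMOVE': 1}
```

A real function would publish such totals to a metrics store instead of printing them.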
Websites: Suppose you are creating a website and want to host the backend logic on Lambda. You can invoke your Lambda function over HTTP using Amazon API Gateway as the HTTP endpoint. Your web client can then call API Gateway, which routes the request to Lambda.
Mobile Applications: Suppose you have a custom mobile application that produces events. You can create a Lambda function to process the events published by your custom application.
COMPUTE REQUIREMENTS – LAMBDA FUNCTION CONFIGURATION
A Lambda function consists of code and associated dependencies. A Lambda function also has configuration information associated with it. You specify the configuration information when you create the Lambda function, and Lambda provides an API for updating some of the configuration data later. Lambda function configuration includes the following elements:
Compute resources that a user needs: A user can specify the amount of memory they want to allocate for the Lambda function. AWS Lambda allocates CPU power proportional to the memory, using the same ratio as an Amazon EC2 instance type.
Maximum execution time: The user pays for the AWS resources that are used to run their Lambda functions. To prevent a Lambda function from running indefinitely, the user specifies a timeout. When the specified timeout is reached, AWS Lambda terminates the function.
Execution role: This is the role that AWS Lambda assumes while executing the Lambda function on behalf of a user.
Handler name: The handler refers to the method in the user’s code where AWS Lambda begins execution.
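Taken together, the elements above map directly onto the parameters of Lambda's CreateFunction call. The sketch below only assembles that configuration as a dictionary; the runtime string, role ARN, handler name, and code location are hypothetical examples, and no AWS call is made.

```python
# Sketch: the configuration elements above, assembled the way
# Lambda's CreateFunction API expects them. The runtime, ARN,
# handler, and bucket names are hypothetical; no AWS call is made.

def lambda_config(name, memory_mb, timeout_s, role_arn, handler):
    assert memory_mb >= 128, "memory is allocated in MB, minimum 128"
    return {
        "FunctionName": name,
        "Runtime": "python3.12",   # hypothetical runtime choice
        "MemorySize": memory_mb,   # CPU is allocated proportionally
        "Timeout": timeout_s,      # hard stop, in seconds
        "Role": role_arn,          # execution role Lambda assumes
        "Handler": handler,        # module.function entry point
        "Code": {"S3Bucket": "my-deploy-bucket", "S3Key": "app.zip"},
    }

cfg = lambda_config("thumbnailer", 256, 30,
                    "arn:aws:iam::111122223333:role/lambda-exec",
                    "app.handler")
print(cfg["MemorySize"], cfg["Timeout"])  # → 256 30
```

Each key corresponds to one of the configuration elements listed above: compute resources, maximum execution time, execution role, and handler name.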
What are the main advantages of using the AWS Lambda Service?
No Servers to Manage: AWS Lambda automatically runs your code without requiring you to provision or manage servers. Just write the code and upload it to Lambda.
Continuous Scaling: AWS Lambda automatically scales your application by running code in response to each trigger. Your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload.
Subsecond Metering: With AWS Lambda, you are charged for every 100ms your code executes and the number of times your code is triggered. You don’t pay anything when your code isn’t running.
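The 100 ms metering above can be made concrete with a little arithmetic: duration is rounded up to the nearest 100 ms, and the charge scales with allocated memory. The per-GB-second price below is an illustrative assumption; check current AWS pricing before relying on it.

```python
import math

# Sketch: estimate the compute charge for a batch of invocations.
# Billed duration is rounded up to the nearest 100 ms and scaled by
# allocated memory. The price per GB-second is an illustrative
# assumption; check current AWS pricing.

PRICE_PER_GB_SECOND = 0.00001667  # assumed illustrative rate

def compute_charge(invocations, duration_ms, memory_mb):
    billed_ms = math.ceil(duration_ms / 100) * 100
    gb_seconds = invocations * (billed_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND

# One million invocations of a 128 MB function running 120 ms each
# are billed as 200 ms each:
print(compute_charge(1_000_000, 120, 128))
```

Note how the round-up matters: a 120 ms run is billed as 200 ms, so shaving a function under a 100 ms boundary halves its compute cost here.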