WHAT’S THE DIFFERENCE BETWEEN A POLYFILL AND A SHIM?

A Polyfill is a piece of code that checks whether a certain Browser API is available and, if it is not, implements it manually so that code behaves consistently across browsers.

A Shim is a piece of code that intercepts an already existing API (not necessarily a Browser API) and implements a different behavior. Typically, shims are used for backward compatibility.

For example, let us take two browsers – one running an ES5 engine and the other an ES3 engine. In ES5, we can write:

var currentDate = Date.now();
But in ES3, this would fail, and we would need a shim like:
if (!Date.now) {
  // Provide the missing method so ES5-style code keeps working on ES3
  Date.now = function shimmingFunc() {
    return new Date().getTime();
  };
}

WHAT ARE THE PROS OF USING THE HANDLEBARS TEMPLATE OVER UNDERSCORE.JS?

Picking one over the other is a subjective thing – it really comes down to your preferred programming style – but when compared, Handlebars edges out Underscore.js as follows:

Logic-less templates do a great job of forcing you to separate presentation from logic.

Clean syntax leads to templates that are easy to build, read, and maintain.

Compiled rather than interpreted templates.

Better support for paths than Mustache (i.e., reaching deep into a context object).

Better support for global helpers than Mustache.

Requires server-side JavaScript to render on the server.

WHAT HAPPENS TO THE TOTAL MEMORY WHEN PYTHON EXITS?

Whenever Python exits, memory is not always fully freed: in particular, Python modules that hold circular references to other objects, and objects that are referenced from the global namespace, are not always deallocated.

It is impossible to deallocate those portions of memory that are reserved by the C library.

On exit, because it has its own efficient cleanup mechanism, Python tries to deallocate or destroy every remaining object.

ON WHAT CONCEPT DOES THE HADOOP FRAMEWORK WORK?

The Hadoop framework works on the following two core components:

HDFS – Hadoop Distributed File System is the Java-based file system for scalable and reliable storage of large datasets. Data in HDFS is stored in the form of blocks, and it operates on a master-slave architecture.
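
As a quick sketch of interacting with HDFS from the shell (the paths below are placeholders), files copied into HDFS are split into blocks and replicated across the cluster:

# Copy a local file into HDFS, where it is stored as replicated blocks
hdfs dfs -put localdata.csv /user/hadoop/input/

# List the contents of the HDFS directory
hdfs dfs -ls /user/hadoop/input/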

Hadoop MapReduce – This is the Java-based programming paradigm of the Hadoop framework that provides scalability across various Hadoop clusters. MapReduce distributes the workload into various tasks that can run in parallel. A Hadoop job performs two separate tasks: the map job breaks down the data sets into key-value pairs or tuples, and the reduce job then takes the output of the map job and combines the data tuples into a smaller set of tuples. The reduce job is always performed after the map job.
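
To illustrate the map and reduce flow, here is a rough shell analogy of the classic word count (a sketch only, not an actual Hadoop job; input.txt is a placeholder):

# Map: split each line into words, emitting one key per line
# Shuffle/sort: bring identical keys together
# Reduce: collapse each run of identical keys into a (count, word) pair
tr -s '[:space:]' '\n' < input.txt | sort | uniq -c | sort -rn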

WHY IS PROMETHEUS BETTER THAN OTHER MONITORING METRIC SYSTEMS?

First of all, we don’t have to define a fixed metric system to start working with it; metrics can be added or changed in the future. This provides valuable flexibility when you don’t know all of the metrics you want to monitor yet.

Prometheus supports DNS for service discovery, and that support is built in – no separate discovery service has to be run.

There is no need to install any external services (unlike Sensu, for example, which needs a data-storage service like Redis and a message bus like RabbitMQ). This might not be a deal breaker, but it definitely makes Prometheus easier to test, deploy, and maintain.

Prometheus is quite easy to install, as you only need to download an executable Go binary. The Docker container also works well and is easy to start.
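
As an illustrative sketch, installation really is just download, unpack, and run (the version and URL below are assumptions based on the usual GitHub release layout; check the Prometheus releases page for the current ones):

# Download and unpack a Prometheus release (version is an assumption)
curl -LO https://github.com/prometheus/prometheus/releases/download/v2.0.0/prometheus-2.0.0.linux-amd64.tar.gz
tar xzf prometheus-2.0.0.linux-amd64.tar.gz
cd prometheus-2.0.0.linux-amd64

# Start the server with the bundled default configuration
./prometheus --config.file=prometheus.yml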

SED AND AWK IN LINUX

The Linux ecosystem has two other very useful and powerful tools for pattern searching: sed, which stands for stream editor, and awk, which is named after the initials of its creators – Aho, Weinberger, and Kernighan. In this article we are going to explain the major difference between them and the best use for each of the two. So, let’s dive in.

SED

sed is a fast stream editor, able to search for a pattern and apply the given changes and/or commands; it is still easy to combine into sophisticated filters, but it serves a different aim: modifying the text in the stream. Its key usage consists of editing a stream in memory according to the given pattern.
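
A minimal sketch of typical sed usage (input.txt is a placeholder file):

# Substitute every occurrence of "foo" with "bar" and print to stdout
sed 's/foo/bar/g' input.txt

# Delete blank lines, editing the file in place (GNU sed)
sed -i '/^$/d' input.txt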

AWK

awk is a loosely typed programming language for stream processing, where the basic unit is the string (intended as an array of characters), which can be (1) matched, (2) substituted, and (3) manipulated. Most of the time it does not need to be combined with other filters, since its reporting capabilities are very powerful (the printf built-in function allows formatting the output text as in C). Its main usage consists of fine-grained manipulation (variables can be defined and modified incrementally) and programmatic manipulation (flow-control statements) of the input stream.
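
A minimal sketch of typical awk usage (input.txt is a placeholder file):

# Print the first and third whitespace-separated fields of each line
awk '{ print $1, $3 }' input.txt

# Sum the second column and report the total with the printf built-in
awk '{ total += $2 } END { printf "total: %d\n", total }' input.txt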

According to the above definitions, the two tools serve different purposes, but they can be used in combination and, as said, both work by matching patterns. Still, the line between sed and awk may not yet be clear-cut, so let’s try to clarify with examples.

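For instance (input.txt is again a placeholder), sed shines at terse in-stream edits, awk at field-based, stateful processing, and the two can be chained:

# sed: an editing task, collapsing runs of whitespace to a single space
sed 's/[[:space:]]\+/ /g' input.txt

# awk: a programmatic task, printing lines whose second field exceeds 100
awk '$2 > 100 { print }' input.txt

# Combined: edit with sed first, then compute with awk
sed 's/,/ /g' input.txt | awk '{ sum += $NF } END { print sum }'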

MACHINE LEARNING IN WEB DEVELOPMENT

As a type of artificial intelligence (AI), machine learning uses algorithms to make computers learn without being explicitly programmed. It is a method of data analysis that automates analytical model building; the automated model building lets computers find hidden insights and lets computer programs change when exposed to new data. At present, machine learning is one of the newest trends in software development, and many experts believe it will wholly transform the development process of various software, including web applications. So, explore this article to learn more about the impact of machine learning on web development.

IMPACT OF MACHINE LEARNING ON WEB APPLICATION DEVELOPMENT
ALTERNATIVE TO CONVENTIONAL DATA MINING

Most organizations use data mining to produce new information from huge volumes of existing data, and various websites use specialized data-mining techniques such as web mining to discover patterns in large amounts of online data. Enterprises can use machine learning as an alternative to conventional data mining. Like data mining, machine learning can identify patterns in huge amounts of data; but unlike data mining, machine learning will change the program’s actions automatically based on the detected patterns.

DELIVER CUSTOMIZED CONTENT AND INFORMATION

Facebook is already using machine learning algorithms to customize the newsfeed of each user. The technology used by Facebook combines predictive analytics and statistical analysis to identify patterns in the user’s data, and it personalizes the user’s newsfeed based on the identified patterns. The machine learning technology identifies patterns from the content a user reads and the posts they like, and it then displays similar posts and content earlier in the feed. While developing web applications, programmers can embed similar machine learning technology to deliver personalized content and information to each user based on that user’s personal choices and preferences.

A RANGE OF MACHINE LEARNING APIS

Web application developers have the option to choose from several open-source and commercial machine learning APIs according to their specific needs. These APIs make it easier for developers to accomplish challenging tasks by implementing machine learning algorithms efficiently. Web stores can also use machine learning APIs to regulate the prices of products according to current demand; the API will increase the price of a product automatically as demand rises.

FAST PRODUCT DISCOVERY

Big organizations such as Apple, Google, and Microsoft are already using machine learning algorithms to deliver smart search results to each user. While developing ecommerce applications, programmers can use machine learning algorithms to help customers find products faster. Developers can use precise machine learning algorithms to deliver high-quality, relevant information to users, and they can use the technology to help customers select products based on their specific needs. An ecommerce portal can further use machine learning to let customers browse only relevant products.

THE VERDICT

Machine learning will change the way websites and web applications are developed. Developers will embed machine learning algorithms and APIs in web applications to make them deliver a customized and rich user experience. However, the impact of machine learning will differ from one web application to another, and web developers will have to combine various machine learning algorithms according to their precise needs.

AMAZON EKS

Amazon Elastic Container Service for Kubernetes (Amazon EKS) delivers Kubernetes as a managed service on AWS. Amazon launched Amazon EKS in November at its re:Invent conference, and the launch finally brings AWS up to speed with Google Cloud Platform and Microsoft Azure in terms of offering fully managed Kubernetes.

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it also provides automated version upgrades and patching.

ADVANTAGES

NO CONTROL PLANE TO MANAGE

Amazon EKS runs the Kubernetes management infrastructure across several AWS Availability Zones, automatically identifies and replaces unhealthy control plane nodes, and provides on-demand upgrades and patching. The user simply has to provision the worker nodes and connect them to the provided Amazon EKS endpoint.

SECURE

Secure and encrypted communication channels are automatically set up between the worker nodes and the managed control plane, making the user’s infrastructure running on Amazon EKS secure.

BUILT WITH THE COMMUNITY

AWS actively works with the Kubernetes community, making contributions to the Kubernetes code base that help Amazon EKS users take advantage of AWS services and features.

INSTALL AND CONFIGURE KUBECTL FOR AMAZON EKS

Amazon EKS clusters require the kubectl and kubelet binaries and the Heptio Authenticator to allow IAM authentication for the user’s Kubernetes cluster. Starting with Kubernetes version 1.10, the user can configure the stock kubectl client to work with Amazon EKS by installing the Heptio Authenticator and modifying the kubectl configuration file to use it for authentication.

If the user does not have a local kubectl version 1.10 client on the system, the following steps can be used to install one.

TO INSTALL KUBECTL FOR AMAZON EKS

Download and install kubectl. Amazon EKS vends kubectl binaries that the user can use, or the user can follow the commands in the Kubernetes documentation to install it.

To install the Amazon EKS-vended version of kubectl:

Download the Amazon EKS-vended kubectl binary from Amazon S3.

Use the command below to download the binary, replacing the URL with the correct one for the user’s platform. The example below is for macOS clients.

curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/darwin/amd64/kubectl
Apply execute permissions to the binary.
chmod +x ./kubectl

Copy the binary to a folder in $PATH. If the user has already installed a version of kubectl (from Homebrew or apt), then we recommend copying this binary to $HOME/bin/kubectl and ensuring that $HOME/bin comes first in $PATH.

cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

* After the installation of kubectl, the user can verify its version with the following command:
kubectl version --short --client

TO INSTALL HEPTIO-AUTHENTICATOR-AWS FOR AMAZON EKS

Download and install the heptio-authenticator-aws binary. Amazon EKS vends heptio-authenticator-aws binaries that the user can use, or the user can also use go get to fetch the binary from the Heptio Authenticator project on GitHub for other operating systems.

TO DOWNLOAD AND INSTALL THE AMAZON EKS-VENDED
HEPTIO-AUTHENTICATOR-AWS BINARY FOR LINUX

* Download the Amazon EKS-vended heptio-authenticator-aws binary from Amazon S3.

* Use the command below to download the binary, substituting the correct URL for the user’s platform. The example below is for Linux clients.

curl -o heptio-authenticator-aws https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/linux/amd64/heptio-authenticator-aws

Apply execute permissions to the binary.
chmod +x ./heptio-authenticator-aws

Copy the binary to a folder in the $PATH, for example $HOME/bin/heptio-authenticator-aws, ensuring that $HOME/bin comes first in the user’s $PATH.

cp ./heptio-authenticator-aws $HOME/bin/heptio-authenticator-aws && export PATH=$HOME/bin:$PATH

Add $HOME/bin to the PATH environment variable.
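
For example, to make the change persist across shell sessions (~/.bash_profile is an assumption here; use whichever profile file the user’s shell reads):

# Persist the PATH change for future shell sessions
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile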

ALL ABOUT ANSIBLE VAULT

This blog is about using Ansible Vault. Vault is a way to encrypt sensitive information in Ansible scripts.

A typical Ansible setup involves some sort of secret needed to fully set up a server or application. Common types of secrets include passwords, SSH keys, SSL certificates, API tokens, and anything else the user does not want the public to see.

Since it is common to store Ansible configurations in version control, we need a way to store confidential data securely.

Ansible Vault is the answer to this. Ansible Vault can encrypt anything inside a YAML file, using a password of the user’s choice.

USING ANSIBLE VAULT

A classic use of Ansible Vault is to encrypt variable files. Vault can encrypt any YAML file, but the most common files to encrypt are:

* Files within the group_vars directory
* A role’s defaults/main.yml file
* A role’s vars/main.yml file
* Any other file used to store variables.
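
Once variable files are encrypted, any playbook that uses them needs the vault password at run time. A minimal sketch (site.yml is a placeholder playbook name):

# Prompt for the vault password interactively
ansible-playbook site.yml --ask-vault-pass

# Or read the password from a protected file instead
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt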

ENCRYPTING AN EXISTING FILE

The typical use case is having a normal, plaintext variable file that we want to encrypt. Using ansible-vault, we can encrypt it and define the password that will be needed to decrypt it later:

# Encrypt a role’s defaults/main.yml file
ansible-vault encrypt defaults/main.yml
> New Vault password:
> Confirm New Vault password:
> Encryption successful

The ansible-vault command will prompt the user for a password twice. Once that is done, the file will be encrypted. If the user opens the file directly, they will just see encrypted text, something like this:

$ANSIBLE_VAULT;1.1;AES256
65326233363731663631646134306563353236653338646433343838373437373430376464616339
3333383233373465353131323237636538363361316431380a643336643862663739623631616530
35356361626434653066316661373863313362396162646365343166646231653165303431636139
6230366164363138340a356631633930323032653466626531383261613539633365366631623238
32396637623866633135363231346664303730353230623439633666386662346432363164393438

CREATING AN ENCRYPTED FILE

If the user wants to create a new file instead of encrypting an existing one, the user can use the create command:

ansible-vault create defaults/extra.yml
> New Vault password:
> Confirm New Vault password:

EDITING A FILE

Once the user encrypts a file, it can only be edited by using ansible-vault. Here is how to edit the file after it has been encrypted:

ansible-vault edit defaults/main.yml
> Vault password:
This will ask for the password used to encrypt the file.
You’ll lose your data if you lose your password!
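
Two related subcommands are worth knowing: view prints the decrypted contents without opening an editor, and rekey changes the password of an already encrypted file:

# Print the decrypted contents to stdout
ansible-vault view defaults/main.yml

# Change the vault password on an encrypted file
ansible-vault rekey defaults/main.yml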

ENCRYPTING SPECIFIC VARIABLES

The user does not have to encrypt a whole file! Encrypting only specific variables lets the user track changes in git without the entire file changing for just a small edit.

The most basic use case is to run ansible-vault encrypt_string interactively on the CLI to get the formatted YAML as output:

ansible-vault encrypt_string
> New Vault password:
> Confirm New Vault password:
> Reading plaintext input from stdin. (ctrl-d to end input)
> this is a plaintext string
> !vault |
>   $ANSIBLE_VAULT;1.1;AES256
>   39393766663761653337386436636466396531353261383237613531356531343930663133623839
>   3436613834303264613038623432303837393261663233640a363633343337623065613166306363
>   37336132363462386138343535346264333061656134636631326164643035313433393831616131
>   3635613565373939310a316132313764356432333366396533663965333162336538663432323334
>   33656365303733303664353961363563313236396262313739343461383036333561
> Encryption successful
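
encrypt_string can also take the plaintext and a variable name directly on the command line, producing a snippet ready to paste into a variable file (db_password is just an example name):

# Encrypt a single value and label it as the variable db_password
ansible-vault encrypt_string 'supersecret' --name 'db_password'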