WHAT HAPPENS TO THE TOTAL MEMORY WHEN PYTHON EXITS?

What happens to the total memory when Python exits?

Whenever Python exits, modules and objects that hold circular references to other objects, or objects that are referenced from the global namespace, are not always deallocated or freed.

It is impossible to deallocate those portions of memory that are reserved by the C library.

On exit, Python tries to deallocate and destroy every object using its own efficient cleanup mechanism.
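For illustration, here is a minimal Python sketch (the class and variable names are ours) of a reference cycle that reference counting alone cannot reclaim; the cyclic garbage collector has to step in, which is the same kind of cleanup Python attempts at exit:

import gc

class Node:
    def __init__(self):
        self.ref = None

# Build a cycle: a -> b -> a
a = Node()
b = Node()
a.ref = b
b.ref = a

# Drop the only external references; the cycle keeps both objects alive
del a, b

# Reference counting alone cannot free the cycle; the cyclic collector can
print("unreachable objects found:", gc.collect())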

ON WHAT CONCEPT DOES THE HADOOP FRAMEWORK WORK?

On what concept does the Hadoop framework work?

The Hadoop framework works on the following two core components:

HDFS – Hadoop Distributed File System is a Java-based file system for scalable and reliable storage of large datasets. Data in HDFS is stored in the form of blocks, and it operates on a master-slave architecture.

Hadoop MapReduce – This is a Java-based programming paradigm of the Hadoop framework that provides scalability across various Hadoop clusters. MapReduce distributes the workload into various tasks that can run in parallel. Hadoop jobs perform two separate tasks: the map job breaks down the data sets into key-value pairs or tuples, and the reduce job then takes the output of the map job and combines the data tuples into a smaller set of tuples. The reduce job is always performed after the map job. A small sketch of this flow follows.
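As an illustration only, here is a plain-Python sketch of the two phases (not the Hadoop Java API; the sample input is ours), counting words:

from itertools import groupby
from operator import itemgetter

# Map job: break the data set into (key, value) tuples
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)

# Shuffle: group the tuples by key (Hadoop does this between map and reduce)
def shuffle(pairs):
    return groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))

# Reduce job: combine each key's values into a smaller set of tuples
def reduce_phase(grouped):
    for word, group in grouped:
        yield (word, sum(count for _, count in group))

data = ["big data big clusters", "big jobs"]
print(list(reduce_phase(shuffle(map_phase(data)))))
# [('big', 3), ('clusters', 1), ('data', 1), ('jobs', 1)]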

WHY IS PROMETHEUS BETTER THAN OTHER MONITORING METRIC SYSTEMS?

Why is Prometheus better than other monitoring metric systems?

First of all, we don’t have to define a fixed metric system to start working with it; metrics can be added or changed in the future. This provides valuable flexibility when you don’t know all of the metrics you want to monitor yet.

Prometheus supports DNS for service discovery; it already has internal support for DNS-based discovery.

There is no need to install any external services (unlike Sensu, for example, which needs a data-storage service like Redis and a message bus like RabbitMQ). This might not be a deal breaker, but it definitely makes Prometheus easier to test, deploy, and maintain.

Prometheus is quite easy to install, as you only need to download an executable Go file. The Docker container also works well and it is easy to start.
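A minimal sketch of that install path (the version and URL below are illustrative; check the Prometheus releases page for current ones):

# Download, unpack, and run a Prometheus release binary
curl -LO https://github.com/prometheus/prometheus/releases/download/v2.45.0/prometheus-2.45.0.linux-amd64.tar.gz
tar xzf prometheus-2.45.0.linux-amd64.tar.gz
cd prometheus-2.45.0.linux-amd64
./prometheus --config.file=prometheus.yml   # web UI on http://localhost:9090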

SED AND AWK IN LINUX

SED and AWK in Linux

The Linux ecosystem has two other very useful and powerful tools for pattern searching: sed, which stands for stream editor, and awk, which is named after its creators, Aho, Weinberger, and Kernighan. In this article we are going to explain the major differences and the best use for each of the two. So, let's dive in.

SED

sed is a fast stream editor. It is able to search for a pattern and apply the given changes and/or commands, and it is still easy to combine into sophisticated filters, but it serves a different aim: modifying the text in the stream. Its key usage consists of editing a stream in memory according to the given pattern. A small example follows.
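For instance (an illustrative one-liner; the sample input is ours), substituting a pattern and printing only the lines that changed:

printf 'foo one\nbaz two\nfoo three\n' | sed -n 's/foo/bar/p'
# bar one
# bar three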

AWK

awk is a loosely typed programming language for stream processing, where the basic unit is the string (treated as an array of characters) that can be (1) matched, (2) substituted, and (3) worked around. Most of the time there is no need to combine awk with other filters, since its reporting capabilities are very powerful (the printf built-in function allows formatting the output text as in C). Its main usage consists of fine-grained manipulations (variables can be defined and modified incrementally) and programmatic manipulations (flow-control statements) of the input stream. A small example follows.
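For instance (again an illustrative one-liner with sample input of ours), accumulating a variable across the stream and printing a formatted report:

printf 'a 10\nb 20\nc 5\n' | awk '{ total += $2 } END { printf "total: %d\n", total }'
# total: 35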

According to the above definitions, the two tools serve different purposes but may be used in combination, and, as said, both work by matching patterns. Still, there is no clear-cut line between sed and awk, so let's try to clarify with a side-by-side example.

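A side-by-side sketch (the sample input and task are ours): print the second field of every line containing "error":

printf 'ok 1\nerror 2\nerror 3\n' | sed -n '/error/s/^[^ ]* //p'
printf 'ok 1\nerror 2\nerror 3\n' | awk '/error/ { print $2 }'
# both print:
# 2
# 3

The awk version reads more naturally because field handling is built in; the sed version has to erase everything up to the first space.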

MACHINE LEARNING IN WEB DEVELOPMENT

Machine Learning in Web Development


As a type of artificial intelligence (AI), machine learning uses algorithms to make computers learn without being explicitly programmed. It is a method of data analysis that automates analytical model building. The automated analytical model building lets computers find unseen insights. It also lets computer programs change when exposed to new data. At present, machine learning is one of the newest trends in software development, and many forecasters believe that machine learning will wholly transform the development process of various software, including web applications. So, explore this article to learn more about the impact of machine learning on web development.

IMPACT OF MACHINE LEARNING ON WEB APPLICATION DEVELOPMENT
ALTERNATIVE TO CONVENTIONAL DATA MINING

Most organizations use data mining to produce new information based on huge volumes of existing data. There are various websites that use specialized data mining techniques, like web mining, to discover patterns in large amounts of online data. Enterprises can use machine learning as an alternative to conventional data mining. Like data mining, machine learning can identify patterns based on huge amounts of data. But machine learning, unlike data mining, will change the program's actions automatically based on the detected patterns.

DELIVER CUSTOMIZED CONTENT AND INFORMATION

Facebook is already using machine learning algorithms to customize the newsfeed of each user. The technology used by Facebook combines predictive analytics and statistical analysis to identify patterns in the user's data, and it personalizes the user's newsfeed based on the identified patterns. The machine learning technology identifies patterns based on the content read and the posts liked by the user. Based on the identified pattern, it displays similar posts and content earlier in the feed. While developing web applications, programmers can embed similar machine learning technology to deliver personalized content and information to each user based on the user's personal choices and preferences.

A RANGE OF MACHINE LEARNING APIS

Web application developers have the option to choose from several open-source and commercial machine learning APIs according to their specific needs. These APIs make it easier for developers to accomplish demanding tasks by implementing machine learning algorithms efficiently. Web stores can also use machine learning APIs to regulate the prices of products according to current demand: the API will increase the price of a product automatically as demand rises. A sketch of that idea follows.
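A hedged sketch of that demand-based pricing idea in Python; the endpoint URL, payload fields, and the 25% cap are hypothetical, not any particular vendor's API:

import requests  # assumes the requests package is installed

def adjusted_price(base_price, product_id):
    # Ask an (illustrative) demand-forecasting service for a demand score
    resp = requests.post(
        "https://ml.example.com/v1/demand-forecast",  # hypothetical URL
        json={"product_id": product_id},
        timeout=5,
    )
    resp.raise_for_status()
    demand = resp.json()["demand_score"]  # assumed field, 0.0 (low) to 1.0 (high)
    # Illustrative rule: raise the price as demand rises, capped at +25%
    return round(base_price * (1.0 + 0.25 * demand), 2)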

FAST PRODUCT DISCOVERY

Big organizations such as Apple, Google, and Microsoft already use machine learning algorithms to deliver smart search results to each user. While developing ecommerce applications, programmers can use machine learning algorithms to help customers find products faster. Developers can use precise machine learning algorithms to deliver quality, relevant information to users. They can also use the technology to help customers select products based on their specific needs, and the ecommerce portal can further use machine learning to let customers browse through only relevant products.

THE VERDICT

Machine learning will change the way websites and web applications are developed. Developers will embed machine learning algorithms and APIs in web applications to make them deliver a customized and rich user experience. However, the impact of machine learning will differ from one web application to another, and web developers will have to combine various machine learning algorithms according to their precise needs.


AMAZON EKS


Amazon Elastic Container Service for Kubernetes (Amazon EKS)

Amazon Elastic Container Service for Kubernetes (Amazon EKS) delivers Kubernetes as a managed service on AWS. Amazon launched Amazon EKS at its re:Invent conference in November. It brings AWS up to speed with Google Cloud Platform and Microsoft Azure in terms of offering fully managed Kubernetes.

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it also provides automated version upgrades and patching.

ADVANTAGES

NO CONTROL PLANE TO MANAGE

Amazon EKS runs the Kubernetes management infrastructure across several AWS Availability Zones, automatically identifies and replaces unhealthy control plane nodes, and provides on-demand upgrades and patching. The user simply has to provision the worker nodes and connect them to the provided Amazon EKS endpoint.

SECURE

Secure and encrypted communication channels are automatically set up between the worker nodes and the managed control plane, making the user's infrastructure running on Amazon EKS secure.

BUILT WITH THE COMMUNITY

AWS works actively with the Kubernetes community, making contributions to the Kubernetes code base that help Amazon EKS users take advantage of AWS services and features.

INSTALL AND CONFIGURE KUBECTL FOR AMAZON EKS

Amazon EKS clusters require the kubectl and kubelet binaries and the Heptio Authenticator to allow IAM authentication for the user's Kubernetes cluster. Beginning with Kubernetes version 1.10, the user can configure the stock kubectl client to work with Amazon EKS by installing the Heptio Authenticator and modifying the kubectl configuration file to use it for authentication. A sketch of that configuration change follows.
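This is roughly what the kubeconfig user entry looks like, following the client-go credential-plugin pattern the authenticator uses (the cluster name is a placeholder, and field details may vary by version):

users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "<cluster-name>"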

If the user does not have a local kubectl version 1.10 client on the system, the user can use the following steps to install one.

TO INSTALL KUBECTL FOR AMAZON EKS

Download and install kubectl. Amazon EKS vends kubectl binaries that the user can use, or the user can follow the instructions in the Kubernetes documentation to install one.

To install the Amazon EKS-vended version of kubectl:

Download the Amazon EKS-vended kubectl binary from Amazon S3.

Use the command below to download the binary, substituting the correct URL for the user's platform. The example below is for macOS clients.

curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/darwin/amd64/kubectl
Apply execute permissions to the binary.
chmod +x ./kubectl

Copy the binary to a folder in the $PATH. If the user has already installed a version of kubectl (from Homebrew or apt), then we recommend creating $HOME/bin/kubectl and ensuring that $HOME/bin comes first in the $PATH.

cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

* After the installation of kubectl, the user can verify its version with the following command:
kubectl version --short --client

TO INSTALL HEPTIO-AUTHENTICATOR-AWS FOR AMAZON EKS

Download and install the heptio-authenticator-aws binary. Amazon EKS vends heptio-authenticator-aws binaries that the user can use, or the user can use go get to fetch the binary from the Heptio Authenticator project on GitHub for other operating systems.

TO DOWNLOAD AND INSTALL THE AMAZON EKS-VENDED
HEPTIO-AUTHENTICATOR-AWS BINARY FOR LINUX

* Download the Amazon EKS-vended heptio-authenticator-aws binary from Amazon S3.

* Use the command below to download the binary, substituting the correct URL for the user's platform. The example below is for Linux clients.

curl -o heptio-authenticator-aws https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/linux/amd64/heptio-authenticator-aws

Apply execute permissions to the binary.
chmod +x ./heptio-authenticator-aws

Copy the binary to a folder in the $PATH. Create $HOME/bin/heptio-authenticator-aws and ensure that $HOME/bin comes first in the user's $PATH.

cp ./heptio-authenticator-aws $HOME/bin/heptio-authenticator-aws && export PATH=$HOME/bin:$PATH

Add $HOME/bin to the PATH environment variable.
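As a quick sanity check (assuming the binary is now on the PATH), the authenticator should print its usage text:

heptio-authenticator-aws help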


ALL ABOUT ANSIBLE VAULT

All About Ansible Vault

This blog is about using Ansible Vault. Vault is a way to encrypt sensitive information in Ansible scripts.

A typical Ansible setup involves some sort of secret to fully set up a server or application. Common types of secret include passwords, SSH keys, SSL certificates, API tokens, and anything else the user does not want the public to see.

Since it is common to store Ansible configurations in version control, we need a way to store confidential data securely.

Ansible Vault is the answer to this. Ansible Vault can encrypt anything inside a YAML file, using a password of the user's choice.

USING ANSIBLE VAULT

A classic use of Ansible Vault is to encrypt variable files. Vault can encrypt any YAML file, but the most common files to encrypt are:

* Files within the group_vars directory
* A role’s defaults/main.yml file
* A role’s vars/main.yml file
* Any other file used to store variables.

ENCRYPTING AN EXISTING FILE

The typical use case is to have a normal, plaintext variable file that we want to encrypt. Using ansible-vault, we can encrypt it and define the password needed to decrypt it later:

# Encrypt a role's defaults/main.yml file
ansible-vault encrypt defaults/main.yml
> New Vault password:
> Confirm New Vault password:
> Encryption successful

The ansible-vault command will prompt the user for a password twice. Once that is done, the file will be encrypted. If the user opens the file directly, the user will just see encrypted text, something like this:

$ANSIBLE_VAULT;1.1;AES256
65326233363731663631646134306563353236653338646433343838373437373430376464616339
3333383233373465353131323237636538363361316431380a643336643862663739623631616530
35356361626434653066316661373863313362396162646365343166646231653165303431636139
6230366164363138340a356631633930323032653466626531383261613539633365366631623238
32396637623866633135363231346664303730353230623439633666386662346432363164393438
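To read the contents again, the standard ansible-vault subcommands view (display without modifying) and decrypt (permanently remove the encryption) can be used:

ansible-vault view defaults/main.yml
ansible-vault decrypt defaults/main.yml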

CREATING AN ENCRYPTED FILE

If the user wants to create a new file instead of encrypting an existing one, the user can use the create command:

ansible-vault create defaults/extra.yml
> New Vault password:
> Confirm New Vault password:

EDITING A FILE

Once the user encrypts a file, the user can only edit it by using ansible-vault. Here is how to edit the file after it has been encrypted:

ansible-vault edit defaults/main.yml
> Vault password:

This will ask for the password used to encrypt the file. You'll lose your data if you lose your password!

ENCRYPTING SPECIFIC VARIABLES

The user does not have to encrypt a whole file. Encrypting only specific variables also makes changes easier to track in git, since an entire encrypted file will no longer change for just one small edit.

The most basic use case is to run ansible-vault encrypt_string interactively on the CLI to get formatted YAML as output:

ansible-vault encrypt_string
> New Vault password:
> Confirm New Vault password:
> Reading plaintext input from stdin. (ctrl-d to end input)
this is a plaintext string
!vault |
          $ANSIBLE_VAULT;1.1;AES256
          39393766663761653337386436636466396531353261383237613531356531343930663133623839
          3436613834303264613038623432303837393261663233640a363633343337623065613166306363
          37336132363462386138343535346264333061656134636631326164643035313433393831616131
          3635613565373939310a316132313764356432333366396533663965333162336538663432323334
          33656365303733303664353961363563313236396262313739343461383036333561
Encryption successful
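The !vault block is meant to be pasted under a variable name in any vars file. The playbook is then run with the vault password supplied (site.yml is an illustrative name):

ansible-playbook site.yml --ask-vault-pass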

DEEP LEARNING IN MACHINE LEARNING

Deep Learning in Machine Learning

Deep learning is a subdivision of machine learning consisting of algorithms inspired by the structure and function of the brain, called artificial neural networks.

If the user is just starting out in the field of deep learning, or has had some experience with neural networks, the user might be confused.

The experts in the field have a clear idea of what deep learning is, and their exact and refined perspectives shed a lot of light on what deep learning is all about.

In this article, the user will discover exactly what deep learning is by hearing from a range of experts in the field.

DEEP LEARNING

Deep learning has evolved hand-in-hand with the digital era, which has brought about an eruption of data in all forms and from every region of the world. This data, known simply as big data, is drawn from sources such as social media, internet search engines, e-commerce platforms, online cinemas, and much more. This massive amount of data is readily accessible and can be shared through applications such as cloud computing. However, the data, which is usually unstructured, is so huge that it could take decades for humans to understand it and extract the relevant information. Organizations realize the incredible potential that can result from unravelling this wealth of information, and are increasingly adopting artificial intelligence (AI) systems for automated support.

One of the most common AI techniques used for processing big data is machine learning, a self-adaptive algorithm that gets progressively better at analysis and pattern recognition with experience or with newly added data. The computational algorithm built into a computer model processes all the transactions happening on the digital platform, finds patterns in the data set, and identifies anomalies detected by the pattern.

Deep learning, a subdivision of machine learning, utilizes a hierarchical level of artificial neural networks to carry out the process of machine learning. The artificial neural networks are built with neuron nodes connected together like a web. While traditional programs build analysis with data in a linear way, the hierarchical function of deep learning systems allows machines to process data with a nonlinear approach. A traditional approach to identifying fraud might depend only on the transaction amount, while a deep learning nonlinear technique would include time, geographic location, IP address, and other features that are likely to point to fraudulent activity. The first layer of the neural network processes a raw data input, such as the transaction amount, and passes it on to the next layer as output. The second layer processes the previous layer's information by including additional information, like the user's IP address, and passes on its result. The next layer takes the second layer's information, includes raw data like geographic location, and makes the machine's pattern even better. This continues across all levels of the neural network.
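As a hedged illustration of this layered, nonlinear idea, here is a generic Keras sketch; the feature count, layer sizes, and fraud framing are ours, not a real production fraud model:

from tensorflow import keras

# Four illustrative inputs: amount, time, geographic location, IP-derived feature
model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(4,)),  # first layer: raw inputs
    keras.layers.Dense(8, activation="relu"),    # deeper layer: recombines earlier outputs nonlinearly
    keras.layers.Dense(1, activation="sigmoid"), # final layer: probability of fraud
])
model.compile(optimizer="adam", loss="binary_crossentropy")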


Using the fraud detection system with machine learning, we can create a deep learning example. If the machine learning system creates a model with parameters built around the number of dollars a user sends or receives, the deep learning method can start building on the results offered by machine learning. Each layer of its neural network builds on its previous layer with added data such as retailer, sender, user, credit score, IP address, and a host of other features that may take years to connect together if processed by a human being. Deep learning algorithms are trained not just to create patterns from all transactions, but also to know when a pattern signals the need for a fraud investigation. The final layer transmits a signal to an analyst, who may freeze the user's account until all pending investigations are concluded.

Deep learning is used across all industries for a number of different tasks. Commercial apps that use image recognition, open-source platforms with consumer recommendation apps, and research tools that explore the possibility of reusing drugs for new ailments are a few examples of deep learning incorporation.

WHAT IS A NAMESPACE IN KUBERNETES?

What Is A Namespace In Kubernetes?

Namespaces are intended for use in environments with many users spread across multiple teams or projects. Namespaces are a way to divide cluster resources between multiple users (via resource quota). In future versions of Kubernetes, objects in the same namespace will have the same access control policies by default. A short sketch of creating and using one follows.
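For illustration (the namespace name is ours), creating a namespace and scoping a query to it:

# Create a namespace for one team and list only its pods
kubectl create namespace team-a
kubectl get pods --namespace team-a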