
WHAT IS DATADOG?

Datadog is a monitoring service for cloud-scale applications. It brings together data from servers, databases, tools, and services to present a unified view of an entire stack. These capabilities are provided on a SaaS-based data analytics platform.

Datadog uses a Python-based, open-source agent forked from the original agent created in 2009 by David Mytton for Server Density. Its backend is built on a number of open- and closed-source technologies such as D3, Apache Cassandra, Kafka, and PostgreSQL.

Datadog helps development and operations teams view their full infrastructure – cloud, servers, apps, services, metrics, and much more. This includes real-time interactive dashboards that can be customized to a team’s specific needs, full-text search capabilities for metrics and events, sharing and discussion tools so teams can collaborate on the insights they surface, targeted alerts for critical issues, and API access to accommodate unique infrastructures.

Datadog also integrates with various cloud, enterprise, and developer software tools out of the box, so established team workflows will be unchanged and uninterrupted when adopting Datadog’s service.

BENEFITS OF DATADOG

With Datadog, users can:

* Connect and compare metrics and other data from all apps and services, as well as information coming from Amazon EC2, web servers, StatsD, and SQL and NoSQL databases.

* Streamline information analysis so that related processes such as graphing and measuring can be done in a short span of time.

* Configure information filtering so that only the metrics that matter are gathered.

* Set up the system to send alerts or notifications only for issues that require the user’s immediate attention.

* Focus on the correct code configurations, significant updates, and scheduled operations.

* Use extensive collaboration features that enable the user and team to work hand in hand, providing comments and annotations for a productive session.

FEATURES OF DATADOG

* 80+ turn-key integrations for data aggregation.

* Clean graphs of StatsD and other integrations (see the StatsD sketch after this list).

* Slice and dice graphs and alerts by tags, roles, and much more.

* Easy-to-use search for hosts, metrics, and tags.

* Alert notifications through e-mail and PagerDuty.

* Full API access.

* Overlay metrics and events across disparate sources.

* Out-of-the-box and customizable monitoring dashboards.

* Easy way to compute rates, ratios, averages, and integrals.

* Can mute all alerts with a click during upgrades and maintenance.

* Tools for team collaboration.
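As an illustration of the StatsD integration mentioned above, a service can report a custom metric to a locally running Datadog Agent over the DogStatsD UDP protocol. The following is only a minimal sketch: the metric name and tag are hypothetical, and it assumes the Agent is listening on the default DogStatsD port 8125.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class DogStatsdSketch {
    public static void main(String[] args) throws Exception {
        // DogStatsD wire format: <metric.name>:<value>|<type>|#<tag>:<value>
        // "c" marks a counter; the Agent aggregates it and forwards it to Datadog.
        String payload = "checkout.requests:1|c|#env:staging"; // hypothetical metric and tag
        byte[] bytes = payload.getBytes(StandardCharsets.UTF_8);

        DatagramSocket socket = new DatagramSocket();
        socket.send(new DatagramPacket(bytes, bytes.length,
                InetAddress.getByName("127.0.0.1"), 8125));
        socket.close();
    }
}

In practice, most teams would use one of Datadog’s client libraries rather than raw UDP, but the sketch shows how little is involved in getting a custom metric onto a dashboard.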

BUILDING MICROSERVICES IN PYTHON

What are Microservices?

Microservices – also known as the microservice architecture – is an architectural style that structures an application as a collection of loosely coupled services that implement business capabilities. The microservice architecture allows the continuous delivery and deployment of large, complex applications. It also enables an organization to evolve its technology stack.

The microservice architecture pattern language is a collection of patterns for applying the microservice architecture. It has two goals:

* The pattern language enables the user to decide whether microservices are a good fit for their application or not.

* The pattern language enables the user to use the microservice architecture successfully.

Explore this article to learn how to build microservices in Python.

The microservices design helps ease the problems associated with the monolithic model. Implementing microservices is perhaps one of the greatest ways to improve the productivity of a software engineering team, for the following reasons:


* The user deploys only the component that was changed, which keeps deployments and tests manageable.

* In a monolith, if multiple team members are working on the application, they have to wait until everyone is done with development and testing before moving forward. With the microservices model, each piece of functionality can be deployed as soon as it is ready.

* Each microservice runs in its own process space, so if the user wants to scale for any reason, they can scale the particular microservice that needs more resources instead of scaling the entire application.

* When one microservice needs to communicate with another microservice, it uses a lightweight protocol such as HTTP.

MICROSERVICES DESIGN


Let’s examine how the configuration would look for a Python and Django application that runs on Nginx on a typical Linux server.

Since the application code is spread across multiple repos, each grouping logically independent code, a typical organization of the application directories on the server looks like this:

[ec2-user@ip-172-31-34-107 www]$ pwd
/opt/www
[ec2-user@ip-172-31-34-107 www]$ ls -lrt
total 8
drwxr-xr-x. 5 root root 4096 Oct 12 14:09 service1
drwxr-xr-x. 7 root root 4096 Oct 12 19:00 service2
drwxr-xr-x. 5 root root 4096 Oct 12 14:09 service3
drwxr-xr-x. 7 root root 4096 Oct 12 19:00 service4

Nginx, which is deployed as a front-end gateway or reverse proxy, will have the following configuration:

[ec2-user@ip-172-31-34-107 serviceA]$ cat /etc/nginx/conf.d/service.conf
upstream django1 {
    server unix:///opt/www/service1/uwsgi.sock; # for a file socket
}
upstream django2 {
    server unix:///opt/www/service2/uwsgi.sock; # for a file socket
}
upstream django3 {
    server unix:///opt/www/service3/uwsgi.sock; # for a file socket
}
upstream django4 {
    server unix:///opt/www/service4/uwsgi.sock; # for a file socket
}
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name localhost;
    charset utf-8;
    # max upload size
    client_max_body_size 75M; # adjust to taste

    location /api/service1/ {
        uwsgi_pass django1;
        include /etc/nginx/uwsgi_params;
    }
    location /api/service2/ {
        uwsgi_pass django2;
        include /etc/nginx/uwsgi_params;
    }
    location /api/service3/ {
        uwsgi_pass django3;
        include /etc/nginx/uwsgi_params;
    }
    location /api/service4/ {
        uwsgi_pass django4;
        include /etc/nginx/uwsgi_params;
    }
}

Multiple uWSGI processes have to be created to serve the requests for each microservice:

/usr/bin/uwsgi --socket=/opt/www/service1/uwsgi.sock --module=microservice-test.wsgi --master=true --chdir=/opt/www/service1
/usr/bin/uwsgi --socket=/opt/www/service2/uwsgi.sock --module=microservice-test.wsgi --master=true --chdir=/opt/www/service2
/usr/bin/uwsgi --socket=/opt/www/service3/uwsgi.sock --module=microservice-test.wsgi --master=true --chdir=/opt/www/service3
/usr/bin/uwsgi --socket=/opt/www/service4/uwsgi.sock --module=microservice-test.wsgi --master=true --chdir=/opt/www/service4

BLUE OCEAN 1.0

WHAT EXACTLY IS BLUE OCEAN?

Blue Ocean rethinks the user experience of Jenkins. Designed from the ground up for Jenkins Pipeline, Blue Ocean reduces clutter and increases clarity for every member of the team. Its key features include:

* Sophisticated visualizations of continuous delivery (CD) Pipelines, allowing for fast and intuitive comprehension of the pipeline’s status.

* A Pipeline editor that makes creating Pipelines approachable by guiding the user through an intuitive and visual process.

* Personalization to suit the role-based needs of each member of the team.

* Pinpoint precision when intervention is needed or issues arise: Blue Ocean shows where in the pipeline attention is required, facilitating exception handling and increasing productivity.

* Native integration for branches and pull requests, enabling maximum developer productivity when collaborating on code in GitHub and Bitbucket.

STARTING WITH BLUE OCEAN

Below we will show you how to start using Blue Ocean, including instructions for installing and configuring the Blue Ocean plugin and for switching into and out of the Blue Ocean UI.

INSTALLING
Blue Ocean can be installed in an existing Jenkins environment or can run with Docker.
To start using the Blue Ocean plugin in an existing Jenkins environment, it must be running Jenkins 2.7.x or later:

* Log in to your Jenkins server.

* Click Manage Jenkins in the sidebar, then Manage Plugins.

* Choose the Available tab and use the search bar to find Blue Ocean.

* Click the checkbox in the Install column.

* Click either Install without restart or Download now and install after restart.

The majority of Blue Ocean requires no additional configuration after installation. Existing Pipelines and Jobs will continue to work as usual. However, when a Pipeline is created for the first time, Blue Ocean will ask for permission to access user repositories in order to create Pipelines based on those repositories.

WITH DOCKER

The Jenkins project publishes a Docker container with Blue Ocean built-in every time a new release of Blue Ocean is published. The jenkinsci/blueocean image will be based on the current Jenkins Long-Term Support (LTS) release and is ready for production.

TO START A NEW JENKINS WITH BLUE OCEAN PRE-INSTALLED

* Ensure Docker is installed.

* Run docker run -p 8888:8080 jenkinsci/blueocean:latest (this maps the container’s web UI port 8080 to port 8888 on the host).

* Browse to localhost:8888/blue.

STARTING BLUE OCEAN

Once Blue Ocean is installed in a Jenkins environment, users can start using it by clicking Open Blue Ocean in the top navigation bar of the Jenkins web UI. Alternatively, users can navigate directly to Blue Ocean at the /blue URL of their Jenkins environment, for example http://JENKINS_URL/blue.

FINAL WORD

Blue Ocean is an entirely new, modern, and fun way for developers to use Jenkins, built from the ground up to help teams of any size adopt Continuous Delivery. It is installed as a Jenkins plugin and integrates with Jenkins Pipeline. Since the start of the beta at Jenkins World in September 2016, more than 7,400 installations have made use of Blue Ocean.

ALL YOU NEED TO KNOW ABOUT APACHE HADOOP YARN

YARN is one of the key features of Hadoop 2, the second generation of the Apache Software Foundation’s open-source distributed processing framework. Originally described by Apache as a redesigned resource manager, YARN is now characterized as a large-scale, distributed operating system for big data applications.

Back in 2012, YARN became a sub-project of the larger Apache Hadoop project. YARN is a software rewrite that decouples MapReduce’s resource management and scheduling capabilities from the data processing component, enabling Hadoop to support more varied processing approaches and a broader array of applications. The original incarnation of Hadoop closely paired the Hadoop Distributed File System (HDFS) with the batch-oriented MapReduce programming framework, which handled resource management and job scheduling on Hadoop systems and supported the parsing and condensing of data sets in parallel.

YARN combines a central ResourceManager, which reconciles the way applications use Hadoop system resources, with NodeManager agents that monitor the processing operations of individual cluster nodes. Separating HDFS from MapReduce with YARN makes the Hadoop environment more suitable for operational applications that can’t wait for batch jobs to finish.

DEVELOPING YARN APPLICATIONS

YARN provides the capability to build custom application frameworks on top of Hadoop, but users also take on new complexity. Building applications for YARN is notably more complex than building traditional MapReduce applications on pre-YARN Hadoop, because the user needs to develop an ApplicationMaster, which the ResourceManager launches when a client request arrives. The ApplicationMaster has many requirements, including implementing a number of protocols to communicate with the ResourceManager (for requesting resources) and the NodeManagers (for launching containers). For existing MapReduce users, a MapReduce ApplicationMaster minimizes any new work required, making the effort needed to deploy MapReduce jobs similar to pre-YARN Hadoop.
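To make the moving parts more concrete, the rough sketch below shows how a client typically asks the ResourceManager for a new application and submits an ApplicationMaster container using the org.apache.hadoop.yarn.client.api.YarnClient API. The application name, launch command, and resource sizes here are hypothetical placeholders.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.util.Records;

public class YarnSubmitSketch {
    public static void main(String[] args) throws Exception {
        // Connect to the ResourceManager using the cluster configuration
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new Configuration());
        yarnClient.start();

        // Ask the ResourceManager for a new application id
        YarnClientApplication app = yarnClient.createApplication();
        ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
        appContext.setApplicationName("my-yarn-app"); // hypothetical name
        appContext.setQueue("default");

        // Describe the container that will run the ApplicationMaster
        ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
        amContainer.setCommands(Collections.singletonList(
                "java -Xmx256m com.example.MyApplicationMaster")); // hypothetical AM class
        appContext.setAMContainerSpec(amContainer);
        appContext.setResource(Resource.newInstance(512, 1)); // 512 MB, 1 vcore for the AM

        // Submit; the ResourceManager launches the ApplicationMaster, which then
        // negotiates worker containers with the NodeManagers
        ApplicationId appId = yarnClient.submitApplication(appContext);
        System.out.println("Submitted application " + appId);
    }
}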

A YARN application allocates resources within a cluster, performs its processing, exposes touchpoints for monitoring the progress of the application, and finally releases its resources and does general clean-up when the application is complete. A boilerplate implementation of this life cycle is available under a project called Kitten. Kitten is a set of tools and code that simplifies the development of applications in YARN, allowing the user to focus on the logic of their application while initially ignoring the details of negotiating with, and working within the constraints of, the various entities in a YARN cluster.

FINAL WORD

Although Hadoop continues to grow in the big data market, it has begun an evolution to address yet-to-be-defined large-scale data workloads. YARN is still under active development and may not be suitable for production environments, but it provides significant advantages over traditional MapReduce. It permits the development of new distributed applications beyond MapReduce, allowing them to coexist with one another in the same cluster. YARN, with its new capabilities and new complexities, will soon be coming to a Hadoop cluster near you.

GOOGLE FUCHSIA

Google is developing a new operating system, but here is the thing: it’s unclear at the moment what this operating system is for or what devices it might power. Explore this article to learn about Google’s new project, Fuchsia.

GOOGLE FUCHSIA: WHAT IS IT?

Fuchsia is a developing pile of code. Users can find it in the search giant’s code repository and on GitHub. The code is purportedly the early beginnings of an entirely new operating system, though Google has yet to confirm those details. Curiously, it’s not based on the Linux kernel – the core underpinning of both Android (Google’s mobile OS) and Chrome OS (Google’s desktop and laptop OS).

GOOGLE FUCHSIA: WHAT DOES IT LOOK LIKE?

Fuchsia already has an early user interface with a card-based design, according to Ars Technica, which posted a video and images of the yet-to-be-announced software. The interface is reportedly called Armadillo. It was first discovered by Kyle Bradshaw at Hotfix.

Unlike Android and Chrome OS, both of which are based on Linux, Fuchsia is built on Magenta, a new kernel created by Google. Armadillo is built with Google’s Flutter SDK, which is used to create cross-platform code capable of running on multiple operating systems. With Armadillo, different cards can be dragged around for use in a split-screen or tabbed interface.

The current thought is that Fuchsia is a new OS that could unify Chrome OS and Android into a single operating system (something that has been heavily speculated about since 2015; reports have claimed that such an OS will be released in 2017).

Google’s own documentation describes the software as targeting “modern phones and modern personal computers” with “fast processors” and “non-trivial amounts of RAM.”

Fuchsia is built on Magenta, a “medium-sized microkernel” based on a project called LittleKernel. The two developers listed on Fuchsia’s GitHub page – a senior software engineer at Google and a former engineer on Android TV and the Nexus Q – are well-known experts in embedded systems.

Furthermore, Google’s documentation notes that Magenta supports user modes, graphics rendering, and a “capability-based security model.” Although all of this could point to Fuchsia being an OS for Wi-Fi-connected gadgets, Google already has an IoT platform called Android Things. Also, Ars Technica has compiled the Armadillo system UI, and it looks like Fuchsia is intended to be a smartphone or tablet OS.

GOOGLE FUCHSIA: IS IT GOING TO REPLACE ANDROID?

Android is riddled with problems that Google has yet to fix. First, there’s fragmentation, caused by hundreds of different devices from dozens of manufacturers running different, tweaked versions of Android rather than the latest one. Second, there’s the update problem. Google has an annual release schedule for Android updates, but it takes about four years for an update to fully flood the ecosystem.

Google cannot simply push Android updates directly to these devices because of the modifications and tinkering manufacturers have done. Another problem is that Android is based on Linux.

Linux is not only old but also carries legal issues, and the resulting licensing fees eat away at Android hardware OEMs’ profit margins. The Linux kernel was also not originally designed for smartphones and IoT devices, yet it has been heavily tweaked and loaded onto those devices, creating a prime environment for bugs and vulnerabilities to grow.

FUCHSIA’S CORE CODE IS DESIGNED TO BE LIGHTWEIGHT

The Magenta kernel can do a lot more than just power a router. Google’s own documentation says the software “targets modern phones and modern personal computers” that use “fast processors” and “non-trivial amounts of RAM.” It notes that Magenta supports a number of advanced features, including user modes and a “capability-based security model.”

This is just speculation for now, and the only real description we have of Fuchsia is what it says at the top of the GitHub page: “Pink + Purple == Fuchsia (a new Operating System).” Why the project would be revealed in this way is also puzzling, although when pressed on the subject, Google developer Brian Swetland reportedly said: “The decision was made to build it open source, so it might start from the beginning.”

Well, we’ve certainly got the beginning of Fuchsia, but where it goes next isn’t clear. From what we can see, it’s currently being tested on all sorts of systems. Swetland says it’s “booting reasonably well” on small-form-factor Intel PCs, while another Google developer involved in the project, Travis Geiselbrecht, says they’ll soon have support for the Raspberry Pi 3. At this rate, it looks like Fuchsia will be popping up all over the place.

WHAT ARE THE NEW FEATURES INTRODUCED IN JAVA 8?

Dozens of features were added in Java 8; the most significant ones are mentioned below, followed by a short example:

Lambda expression − Adds functional processing capability to Java.

Method references − Referencing functions by their names instead of invoking them directly; using functions as parameters.

Default methods − Interfaces can have default method implementations.

New tools − New compiler tools and utilities, such as ‘jdeps’, were added to figure out dependencies.

Stream API − New stream API to facilitate pipeline processing.

Date Time API − Improved date time API.

Optional − Emphasis on best practices to handle null values properly.

Nashorn, JavaScript Engine − A Java-based engine to execute JavaScript code.
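The small, self-contained sketch below touches a few of these features: a lambda expression, the Stream API, a method reference, and Optional.

import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class Java8Demo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ravi", "Anita", "Kiran");

        // Lambda expression + Stream API: filter and count in one pipeline
        long count = names.stream()
                          .filter(name -> name.startsWith("K"))
                          .count();
        System.out.println("Names starting with K: " + count);

        // Method reference: pass println by name instead of invoking it
        names.forEach(System.out::println);

        // Optional: handle a possibly missing value without explicit null checks
        Optional<String> first = names.stream().findFirst();
        System.out.println(first.orElse("no names"));
    }
}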

HOW TO ADD A BEAN IN SPRING APPLICATION?

Check the following example of a bean definition in the Spring XML configuration; a sketch of the corresponding Java classes follows it −

<?xml version = "1.0" encoding = "UTF-8"?>
<beans xmlns = "http://www.springframework.org/schema/beans"
   xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation = "http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans.xsd">

   <bean id = "helloWorld" class = "com.tutorialspoint.HelloWorld">
      <property name = "message" value = "Hello World!"/>
   </bean>

</beans>
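A minimal sketch of the HelloWorld class assumed by the configuration above, and of the code that loads it, might look like this (the configuration file name Beans.xml is an assumption):

// HelloWorld.java
package com.tutorialspoint;

public class HelloWorld {
    private String message;

    // Called by Spring when wiring the "message" property from the XML
    public void setMessage(String message) {
        this.message = message;
    }

    public void getMessage() {
        System.out.println("Your Message : " + message);
    }
}

// MainApp.java
package com.tutorialspoint;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MainApp {
    public static void main(String[] args) {
        // Load the bean definitions and look the bean up by its id
        ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");
        HelloWorld obj = (HelloWorld) context.getBean("helloWorld");
        obj.getMessage();
    }
}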

HOW CAN YOU INJECT JAVA COLLECTION IN SPRING?

Spring offers four types of collection configuration elements, which are as follows (a sketch of a matching bean class follows the list):

<list> − This helps in wiring, i.e. injecting, a list of values, allowing duplicates.

<set> − This helps in wiring a set of values, but without any duplicates.

<map> − This can be used to inject a collection of name-value pairs where the name and value can be of any type.

<props> − This can be used to inject a collection of name-value pairs where the name and value are both Strings.
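As a sketch, the hypothetical bean class below exposes one setter per collection type; in the XML configuration, each corresponding <property> element would contain a <list>, <set>, <map>, or <props> element to populate it:

package com.example;

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

// Hypothetical bean whose setters Spring calls with the configured collections
public class JavaCollection {
    private List<String> addressList;        // wired with <list>
    private Set<String> addressSet;          // wired with <set>
    private Map<String, String> addressMap;  // wired with <map>
    private Properties addressProp;          // wired with <props>

    public void setAddressList(List<String> addressList) {
        this.addressList = addressList;
    }

    public void setAddressSet(Set<String> addressSet) {
        this.addressSet = addressSet;
    }

    public void setAddressMap(Map<String, String> addressMap) {
        this.addressMap = addressMap;
    }

    public void setAddressProp(Properties addressProp) {
        this.addressProp = addressProp;
    }
}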

HOW WILL YOU SECURE JENKINS?

The ways to secure Jenkins are mentioned below:

* Ensure global security is on.

* Ensure that Jenkins is integrated with the company’s user directory using the appropriate plugin.

* Ensure that matrix-based or project-based matrix authorization is enabled to fine-tune access.

* Automate the process of setting rights and privileges in Jenkins with a custom version-controlled script.

* Limit physical access to Jenkins data and folders.

* Periodically run security audits on the same.