What is lambda in Python?

The lambda keyword in Python creates small anonymous functions: throw-away, unnamed functions that are defined right at the point where they are needed. A lambda is restricted to a single expression, whose value is returned implicitly.
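For instance, a lambda can be written inline, exactly where a short function is needed:

```python
# A lambda is an expression that evaluates to a function object.
square = lambda x: x * x
print(square(5))  # 25

# Typical use: a throw-away key function, defined right where it is needed.
# sorted() is stable, so "banana" keeps its place ahead of "cherry" (same length).
words = ["banana", "fig", "cherry"]
print(sorted(words, key=lambda w: len(w)))  # ['fig', 'banana', 'cherry']

# Lambdas also pair naturally with filter() and map().
evens = list(filter(lambda n: n % 2 == 0, range(10)))
print(evens)  # [0, 2, 4, 6, 8]
```

For anything longer than a single expression, a named function defined with def is the idiomatic choice.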



What is IoT?

IoT stands for Internet of Things. It is essentially a network in which physical devices ("things") communicate with each other using the Internet as the medium of communication. For this to be possible, the devices must be IP-enabled, and not one but multiple technologies have to come together to make IoT a success.



DDoS Mitigation

DDoS mitigation is a set of methods and tools for resisting or reducing the impact of distributed denial-of-service (DDoS) attacks on networks attached to the Internet by protecting the target and relay networks. DDoS attacks are a continuous threat to businesses: they can degrade service performance or shut a website down entirely, even if only for a short period. This article explores what DDoS mitigation involves.


The term DDoS mitigation refers to the process of successfully protecting a target from a distributed denial of service (DDoS) attack.

A typical mitigation process can be generally defined by these four stages:


Detection: the identification of traffic-flow aberrations that may signal the build-up of a DDoS attack. Efficiency is measured by the ability to recognize an attack as early as possible, with near-instant detection being the ultimate goal.

Diversion: Traffic is redirected away from its target, either to be filtered or completely discarded.

Filtering: DDoS traffic is weeded out, usually by identifying patterns that distinguish legitimate traffic from malicious visitors. Responsiveness is a function of being able to block an attack without interfering with the experience of legitimate users; the aim is for the solution to be completely transparent to site visitors.

Analysis: security logs are studied to gather information about the attack, both to identify the offenders and to improve future resilience. Effectiveness here relies on security logs that offer granular visibility into the attack traffic.
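As a toy illustration of the detection stage, the sketch below flags traffic intervals whose request rate far exceeds a moving baseline. This is a deliberately simplified, hypothetical detector, not how any particular vendor implements detection; real products use far richer signals (source-address entropy, protocol mix, behavioral fingerprints):

```python
# Toy sketch of rate-based DDoS detection: flag an interval whose request
# count exceeds the recent average by a fixed multiplier.
def detect_anomalies(requests_per_second, window=5, multiplier=3.0):
    """Return indices of intervals whose rate exceeds `multiplier` times
    the average of the preceding `window` intervals."""
    flagged = []
    for i in range(window, len(requests_per_second)):
        baseline = sum(requests_per_second[i - window:i]) / window
        if baseline > 0 and requests_per_second[i] > multiplier * baseline:
            flagged.append(i)
    return flagged

# Steady ~100 req/s, then a sudden spike at intervals 5 and 6.
traffic = [100, 110, 95, 105, 100, 4000, 9000, 100]
print(detect_anomalies(traffic))  # [5, 6]
```

The baseline update also shows why early detection matters: once attack traffic pollutes the window, the baseline inflates and later anomalies become harder to flag.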


Besides the method of traffic diversion, there are several other key aspects one must consider when choosing a mitigation provider. They are as follows:


Network capacity remains a good way of benchmarking a DDoS mitigation service. It is measured in Gbps (gigabits per second) or Tbps (terabits per second) and reflects the overall scalability available to the user during an attack.

Most cloud-based mitigation services offer multi-Tbps network capacity. On-premises DDoS mitigation appliances, on the other hand, are capped by default, both by the size of the company's network pipe and by the capacity of the internal hardware.


In addition to capacity, consideration should also be given to the processing capability of the mitigation solution, represented by its forwarding rate and measured in Mpps (millions of packets per second).
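To relate the two metrics, a quick back-of-the-envelope calculation shows how many packets per second a given line rate implies. The 64-byte minimum Ethernet frame size and the 20 bytes of preamble plus inter-frame gap are standard figures; the helper function itself is purely illustrative:

```python
# Back-of-the-envelope: packets per second implied by a given line rate.
# A minimum-size 64-byte Ethernet frame occupies 64 + 20 = 84 bytes on the
# wire (preamble + inter-frame gap), i.e. 672 bits per packet.
def line_rate_pps(gbps, frame_bytes=64, overhead_bytes=20):
    bits_per_packet = (frame_bytes + overhead_bytes) * 8
    return gbps * 1e9 / bits_per_packet

# A fully loaded 10 Gbps link with minimum-size packets:
print(f"{line_rate_pps(10) / 1e6:.2f} Mpps")  # 14.88 Mpps
```

In other words, a flood of small packets stresses a mitigation appliance's forwarding rate (Mpps) long before it saturates its bandwidth (Gbps), which is why both figures matter.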


Once an attack has been detected, time to mitigation is critical. Most attacks can take down a target in a matter of minutes and the recovery process can take hours. The undesirable impact of such downtime can potentially be felt by the company for weeks and months ahead.

By providing pre-emptive detection, always-on solutions have a distinct advantage: they offer near-instant mitigation, protecting the company from the first volley of any attack.

Not all solutions guarantee a response time. That is why inquiring about time to mitigation should be on the user's checklist when assessing a DDoS protection provider, and it is worth verifying during a service trial.


Groovy Programming Language

The only thing constant in this world is change! In the 1990s, Java shook up the computing world by bringing managed runtimes (virtual machines) to mainstream programming. Today, many new languages are emerging to challenge the popular incumbents.

Did you know that more than two hundred languages target the Java platform? Scala, Clojure, Groovy, and JRuby are a few of the important ones. Of these, Groovy is of particular interest to us, since it has been getting extensive attention and gaining steady popularity over the last few years. This article explores Groovy in more detail.


Groovy is an object-oriented programming language for the Java platform. It is a dynamic language with features similar to those of Python, Ruby, Perl, and Smalltalk, so if you are familiar with languages like Java, C/C++, Python, Ruby, or JavaScript, you will see many resemblances in Groovy. Groovy code is compiled to bytecode that is executed by the Java Virtual Machine (JVM), so the language combines its well-known simplicity and flexibility with the performance and stability of the JVM. Because it runs on the JVM, most Java code is also valid Groovy, and the standard Java libraries are available to Groovy programs.

The user can also write Groovy code online and execute it right away to play around with some snippets. Please note that the playground currently has some limitations; for example, System.out.println does not work there.


Groovy is nearly a superset of Java, which means most Java code is also valid Groovy code; Groovy just adds a lot of syntactic sugar on top of Java. Let's look at a short example.

System.out.println("Hello World");

This line is valid in both Java and Groovy, except that in Java the user would need at least a class with a main method around it to run it. In Groovy, the user can simply place this line in a file and execute it from the console. Moreover, in Groovy the line can be shortened to:

println "Hello World"


In contrast to Java, the user can also use dynamic typing in Groovy. To define a dynamically typed variable, use the keyword "def". Groovy will then determine the type of the variable at runtime, and the type may even change. For instance, the following snippet is valid Groovy code:

def x = 42

println x.getClass()

x = "Hello World"

println x.getClass()

This script produces the following output:

class java.lang.Integer

class java.lang.String

The variable x changed its type at runtime. The user is still free to declare x with a concrete type such as int.


Groovy has a String implementation called GString, which allows the user to interpolate variables into the string.

def x = "World"

println "Hello, $x"

This produces the output Hello, World: the content of the variable x is inserted into the string. If the user wants to embed more complex expressions in the string, they need to add curly braces after the dollar sign, e.g. "The sum is ${1 + 1}".


Like JavaScript, Groovy coerces every object to a Boolean value when required:

if("foobar") …

if(42) …

if(someObject) …


Groovy has some syntactic sugar around regular expressions. The user can create a Pattern object by placing ~ in front of a string. Besides the already mentioned single quotes and double quotes, there is a third way to define a string, meant primarily for regular expressions: the user can define a string between two slashes, /…/. The main difference (in contrast to ordinary quoted strings) is that the user does not need to escape the backslash character, which is often needed in regular-expression patterns.

def pattern = ~/a slash must be escaped \/ but backslash, like in a digit match \d does not/
println pattern.getClass()



Infrastructure as Code with Ansible

The software development industry has grown over the years, from simple software running on one machine to complex systems spread across multiple servers in the cloud. Provisioning and handling a complex server architecture across different environments can be a big challenge.

Traditionally, users would manually provision servers, install all the dependencies, and then launch the software. This approach has drawbacks: if the infrastructure gets corrupted or fails, the user has to go through the same painful process all over again just to spin up new servers. Isn't that frustrating?


Infrastructure as Code (IaC) is the practice of managing and provisioning computing and networking infrastructure, and its configuration, through machine-processable definition files rather than physical hardware configuration or interactive configuration tools. Such files can be kept in source control to allow auditability, reproducible builds, and the full discipline of continuous delivery.

There are many tools used to achieve this, such as Terraform, Puppet, Chef, and Ansible. In this article, we will be looking at Ansible.


Ansible is an open-source automation platform used for configuration management, application deployment, and task automation. It can also perform IT orchestration, running tasks in order and creating a sequence of events that must happen across several different servers or devices. In short, Ansible enables the user to define infrastructure as code in a simple, declarative manner.


When it comes to choosing a tool, there is always the question: why should anyone use it? What is the deal breaker? There are many reasons to choose Ansible as your configuration management tool. Here are some of them.


Compared to Chef or Puppet, Ansible does not use an agent on the remote host; instead it uses SSH to manage systems. This is good news, as the user does not need to configure anything on the host before using it, which makes Ansible simpler to set up and use.


Ansible's ad-hoc mode can run shell commands across many machines at once (e.g. ansible all -a "uptime"), which comes in handy when provisioning many servers. This also minimizes provisioning time, making it easier and quicker to replicate the infrastructure.


It is recommended to name all Ansible tasks eloquently in the provisioning script. When the script is executed, Ansible provides a descriptive report of whether each task succeeded, with or without changes. The messages are also coloured, making the reports easy to scan.


Ansible uses YAML as its configuration syntax, which is easier on the user than writing bash scripts. And since YAML itself is easy to learn, the learning curve is gentle.
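As a small illustration of that syntax, a hypothetical playbook to install and start nginx on an inventory group called "webservers" (the group name and package are made up for this example) might look like this:

```yaml
# hypothetical playbook: webservers.yml
- hosts: webservers
  become: true          # escalate privileges for package installation
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
        enabled: true
```

It would be run with ansible-playbook -i inventory webservers.yml; the descriptive name on each task is what produces the readable, coloured task reports.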


To get started, the user first needs to install Ansible. Tasks can then be run from any machine where Ansible is installed.

Because Ansible is "agentless", there is no central agent running; the user can run Ansible from their workstation or from any server.

Here is how to install Ansible on Ubuntu 14.04.

sudo apt-add-repository -y ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible


Ansible is an open-source automation engine that automates cloud provisioning, configuration management, and application deployment, giving users an easy way to manage their infrastructure through human-readable syntax. The concepts we have looked at in the sections above are a mere drop in the ocean, intended only as guidelines to get you started; it is up to the user to take the next step and discover what Ansible has to offer.


RabbitMQ Vs Kafka

Today we have dozens of messaging technologies, countless ESBs, and nearly 100 iPaaS vendors on the market. This leads to questions about how to choose the right messaging technology for your needs, particularly if you have already invested in a particular choice.

This post compares two of the most modern and popular choices of today: RabbitMQ and Apache Kafka. Explore this article to learn the major differences between these two technologies.


RabbitMQ is a "traditional" message broker that implements a variety of messaging protocols. It was one of the first open-source message brokers to attain a reasonable level of features, client libraries, dev tools, and quality documentation. RabbitMQ was originally developed to implement AMQP, an open wire protocol for messaging with powerful routing features. While Java has messaging standards like JMS, they are of no help to non-Java applications that need distributed messaging, which severely limits any integration scenario, microservice or monolithic. With the arrival of AMQP, cross-language flexibility became real for open-source message brokers.


Apache Kafka was developed in Scala, originally at LinkedIn, to connect various internal systems. Kafka is one of those systems that is very simple to describe at a high level but has an incredible depth of technical detail when you dig deeper. The Kafka documentation does an excellent job of explaining the many design and implementation subtleties of the system. Kafka is well adopted today within the Apache Software Foundation ecosystem and is useful in event-driven architectures.

As a taste of the Kafka side, here is a simple Kafka consumer written in Java:


import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("Enter topic name");
            return;
        }

        // Kafka consumer configuration settings
        String topicName = args[0];
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // the Kafka consumer subscribes to a list of topics here
        consumer.subscribe(Arrays.asList(topicName));

        // print the topic name
        System.out.println("Subscribed to topic " + topicName);

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                // print the offset, key and value of each consumer record
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
        }
    }
}
Message queuing is a broad and interesting topic; in this article we have only scratched the surface.

Kafka is good for “fast” and reliable consumers.

RabbitMQ is good for “slow” and unreliable consumers.

We recommend choosing based on your own analysis and project, because real systems are more complex and the conclusions above are not optimal for every situation.



What is MuleSoft

MuleSoft is a vendor that provides an integration platform to connect applications, data, and APIs across on-premises and cloud environments. To provide agility both on-premises and in the cloud, MuleSoft's Anypoint Platform connects SaaS applications and existing legacy applications through application programming interfaces (APIs). In addition, the platform integrates service-oriented architectures (SOA).

MuleSoft's Anypoint Platform offers a number of tools and services, such as:

API Designer. A web-based tool for designing APIs that includes a console and a JavaScript scripting notebook. It also allows users to share their API designs to receive feedback from other users.

API Manager. An API management tool that allows organizations to manage users, traffic, service-level agreements (SLAs), and API security. The API Manager also includes API Gateway.

Anypoint Studio. A graphical design environment for constructing, editing, and debugging integrations.

API Portal. A developer portal offering interactive documentation, tutorials, and code snippets. The API Portal also includes MuleSoft's Portal Designer, Developer Onramp, and Access Controller tools.

API Analytics. An analytics tool that allows users to track API metrics, such as performance and usage. The API Analytics service also includes visualization tools, such as API Dashboards and API Charts.

CloudHub. A multi-tenant integration platform as a service (iPaaS) that connects SaaS applications and on-premises applications. The iPaaS includes a hybrid deployment option, disaster recovery, and high availability.

What is Mule ESB

The key advantage of an ESB is that it allows different applications to communicate with each other, acting as a transit system that carries data between applications within the user's enterprise or across the Internet. Mule has powerful capabilities, such as:

Service creation and hosting: Represent and host reusable services, using the ESB as a lightweight service container.

Service mediation: Shield services from message formats and protocols, separate business logic from messaging, and enable location-independent service calls.

Message routing: Route, filter, aggregate, and re-sequence messages based on content and rules.

Data transformation: Exchange data across varying formats and transport protocols.

Why Mule?

Mule is lightweight but highly scalable, allowing the user to start small and connect more applications over time. The ESB manages all interactions between applications and components transparently, irrespective of whether they exist in the same virtual machine or communicate across the Internet.

There are currently numerous commercial ESB implementations in the market. However, many of these provide limited functionality or they are built on top of an existing application server or messaging server, locking the user into that specific vendor. You are never locked in to a specific vendor when you use Mule.

Advantages of Mule

– Mule and the ESB model allow significant component reuse. Unlike other frameworks, Mule lets the user plug in existing components without any changes. Components do not require any Mule-specific code to run in Mule, and no programmatic API is required; the business logic is kept completely separate from the messaging logic.

– Messages can be in any format from SOAP to binary image files.

– The user can deploy Mule in different topologies, not just as an ESB. Because it is lightweight and embeddable, Mule can dramatically reduce time to market and increase productivity, delivering secure, scalable applications that are adaptive to change and can scale up or down as needed.

– Mule's staged event-driven architecture (SEDA) makes it highly scalable. A major financial services company processes billions of transactions per day with Mule, across thousands of Mule servers in a highly distributed environment.