Monthly Archives: February 2017

DOCKER VS VMS

Every application has its own dependencies, which include both software and hardware resources. Docker containers bring plenty of advantages compared to the existing technologies in use. Docker is an open platform for developers; its mechanism helps isolate the dependencies of each application by packing them into containers. Containers are scalable and more secure to use and deploy than previous approaches.

Virtual machines are used broadly in cloud computing. Isolation and resource control have traditionally been achieved through the use of virtual machines. A virtual machine loads a full OS with its own memory management and enables applications to be more organized and secure while ensuring their high availability.

So let’s dive in and look at the major differences between Docker and VMs, and how each might be useful to your resources.

HOW IS DOCKER DIFFERENT FROM VMS?

Virtual machines contain a full OS with its own memory management, along with the associated overhead of virtual device drivers. In a virtual machine, valuable resources are duplicated for the guest OS and the hypervisor, which makes it possible to run many instances of one or more operating systems in parallel on a single machine. Every guest OS runs as an individual entity, separate from the host system.

On the other hand, Docker containers are executed by the Docker engine rather than a hypervisor. Containers are smaller than virtual machines and enable faster startup with better performance; they offer less isolation but greater compatibility because they share the host’s kernel.

THE DIFFERENCES BETWEEN DOCKER AND VIRTUAL MACHINE

When it comes to the comparison, Docker containers have much more potential than virtual machines. It is noticeable that Docker containers are able to share a single kernel as well as application libraries. Containers impose lower system overhead than virtual machines, and the performance of an application inside a container is generally the same as or better than that of the same application running in a virtual machine.

There is one key point where Docker containers are weaker than virtual machines, and that is isolation. Intel’s VT-d and VT-x technologies provide virtual machines with ring -1 hardware isolation, and VMs take full advantage of it. This keeps virtual machines from breaking down and interfering with each other, whereas Docker containers do not have any hardware isolation.

Compared to virtual machines, containers can be a bit faster as long as the user is willing to stick to a single platform to provide the shared operating system. A virtual machine takes more time to create and launch, whereas a container can be created and launched within a few seconds. Applications running in containers also offer comparable or better performance than the same applications running within a virtual machine.
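
To get a feel for how quickly a container can be launched, here is a minimal sketch using the Docker SDK for Python (the docker package), assuming Docker is installed and running locally; the image name and command are only illustrative:

import time
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to the local Docker engine

start = time.time()
# Run a throwaway Alpine container, capture its output, and remove it when done.
output = client.containers.run("alpine", "echo hello from a container", remove=True)
print(output.decode().strip())
print("container ran in %.2f seconds" % (time.time() - start))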

VMS AND CONTAINERS, WHEN COMBINED, ARE BETTER TOGETHER

Sometimes one can use a hybrid approach that makes use of both VMs and Docker. There are also workloads that are better suited to physical hardware. Combining the two in a hybrid approach can lead to a better-organized scenario.

Below are a few points that explain how they work together as a hybrid:

  • Docker containers and virtual machines by themselves are not sufficient to operate an application in production; the user must also consider how the Docker containers are going to run in an enterprise data center.
  • Containers provide application portability and enable consistent provisioning of the application across the infrastructure. However, other operational requirements such as security, performance, and management tools and integrations are still a big challenge for Docker containers.
  • Security isolation can be achieved by both Docker Containers and Virtual Machines.
  • Docker containers can run inside a virtual machine even though the two are positioned as separate technologies; this combination provides advantages such as proven isolation, security properties, mobility, software-defined storage, and a massive ecosystem.

THE VERDICT

Using Docker or any other container solution in combination with a virtual machine is an option. By combining the two technologies, one can get the benefits of both: the security of the virtual machine with the execution speed of containers.

Knowing the capabilities of the tools in your toolbox is the most important thing, and there are a number of factors to keep in mind when doing that. However, in the case of containers vs. virtual machines, there is no particular reason to choose just one; you can have the best of both worlds and choose both.

WHAT IS IOT?

What exactly is the Internet of Things? It is an ambiguous term, but it is becoming a tangible technology that can be applied to data centers to collect information about anything that IT wants to control.

In simple terms, the Internet of Things (IoT) is essentially a system of machines or objects outfitted with data-collecting technologies so that those objects can communicate with one another. The machine-to-machine data that is generated has a wide range of uses but is commonly seen as a way to determine the health and status of things.
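
As a rough illustration of machine-to-machine reporting, the sketch below has a "thing" publish a simple health reading over MQTT, a messaging protocol commonly used in IoT (not named in this article), using the paho-mqtt library; the broker address, topic, and device details are hypothetical:

import json
import time
import paho.mqtt.client as mqtt  # pip install paho-mqtt

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.connect("broker.example.com", 1883)  # hypothetical broker address

# A "thing" reporting its own health and status as machine-to-machine data.
reading = {"device_id": "pump-7", "temperature_c": 41.3, "status": "ok", "ts": time.time()}
client.publish("sensors/pump-7/status", json.dumps(reading))
client.disconnect()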

The Internet of Things is a new revolution of the Internet. Objects make themselves recognizable and obtain intelligence by making context-related decisions. They can access information that has been aggregated by other things, or they can be components of complex services. This transformation is associated with the emergence of cloud computing capabilities and the transition of the Internet towards IPv6, with its unlimited addressing capacity. The main goal of the Internet of Things is to enable things to be connected anytime, anyplace, with anything and anyone, using any path or network.

There is a diverse combination of communication technologies that need to be adapted in order to address the needs of IoT applications, such as efficiency, speed, security, and reliability. In this context, it is likely that the level of diversity will be scaled down to a manageable number of connectivity technologies that address the needs of IoT applications. Standard IoT examples include wired and wireless technologies like Ethernet, Wi-Fi, Bluetooth, ZigBee, GSM, and GPRS.

CHARACTERISTICS OF INTERNET OF THINGS

The principal characteristics of the IoT are as follows:

Interconnectivity: With regard to the IoT, everything can be interconnected with the global information and communication infrastructure.

Things-related services: The IoT is capable of providing things-related services within the constraints of things, such as privacy protection and semantic consistency between physical things and their associated virtual things. To provide these services within the constraints of things, the technologies in both the physical world and the information world will have to change.

Heterogeneity: The devices in the IoT are heterogeneous, based on different hardware platforms and networks. They can interact with other devices or service platforms through different types of networks.

Dynamic changes: The state of devices changes dynamically, for instance from sleeping to waking up or from connected to disconnected, as does the context of devices, including location and speed. Furthermore, the number of devices can change dynamically.

Enormous scale: The number of devices that need to be managed and that communicate with each other will be at least an order of magnitude larger than the number of devices connected to the current Internet. Even more critical will be the management of the data these devices generate and its interpretation for application purposes. This relates to the semantics of data, as well as efficient data handling.

Safety: As we all gain benefits from the IoT, we must not forget about safety. Both the creators and the recipients of the IoT must design for safety. This includes the safety of our personal data and the safety of our physical well-being. Securing the endpoints, the networks, and the data moving across all of them means creating a security paradigm that will scale.

WHY IS THE INTERNET OF THINGS IMPORTANT?

You might be surprised to learn how many things are connected to the Internet, and how much economic benefit we can derive from analyzing the resulting data streams. Below are some examples of the impact the IoT has on industries:

Intelligent transport solutions speed up traffic flows, reduce fuel consumption, and arrange vehicle repair schedules.

Smart electric grids connect more effectively with renewable resources and improve system reliability.

Machine monitoring sensors diagnose and forecast pending maintenance issues and near-term part stock-outs, and prioritize maintenance crew schedules for repairing equipment and meeting regional needs.

Data-driven systems are built into the infrastructure of “smart cities,” making it easier for municipalities to run waste management, law enforcement, and other programs more efficiently.

CONCLUSION

With the incessant boom of emerging IoT technologies, the Internet of Things will inevitably be deployed on a very large scale. This emerging paradigm of networking will influence every part of our lives, from automated houses to smart health and environment monitoring, by embedding intelligence into the objects around us.

PYTHON 2 VS PYTHON 3

Python is an extremely readable and adaptable programming language. The name was inspired by the British comedy group Monty Python; it was a major foundational goal of the Python development team to make the language fun and easy to use. Easy to set up, written in a relatively straightforward style, and offering immediate feedback on errors, Python is a great choice for beginners.

Before going into the potential opportunities, let’s look at the key programmatic differences between Python 2 and Python 3, starting with the background of the most recent major releases of Python.

PYTHON 2

Python 2 introduced a more transparent and inclusive language development process than earlier versions of Python with the implementation of PEP (Python Enhancement Proposal). Python 2 added many more programmatic features, including a cycle-detecting garbage collector to automate memory management, increased Unicode support to standardize characters, and list comprehensions to create a list based on existing lists. As Python 2 continued to develop, more features were added, including the unification of Python’s types and classes into one hierarchy in version 2.2.

PYTHON 3

Python 3 is regarded as the future of Python and is the version of the language that is currently in active development. Python 3 was released in late 2008 to address and amend intrinsic design flaws of previous versions of the language. The focus of Python 3 development was to clean up the codebase and remove redundancy. Major modifications in Python 3.0 include changing the print statement into a built-in function, improving the way integers are divided, and providing more Unicode support.

PYTHON 2.7

Following the 2008 release of Python 3.0, Python 2.7 was published on July 3, 2010 and planned as the last of the 2.x releases. The main intention behind Python 2.7 was to make it easier for Python 2.x users to port features to Python 3 by providing some measure of compatibility between the two. This compatibility support includes enhanced modules for version 2.7, such as unittest to support test automation, argparse for parsing command-line options, and more convenient classes in collections.
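
As a small illustration of one of those 2.7 additions, here is a minimal argparse sketch (it works the same way in Python 3); the option names are just examples:

import argparse

parser = argparse.ArgumentParser(description="Greet someone from the command line")
parser.add_argument("--name", default="world", help="who to greet")
parser.add_argument("--times", type=int, default=1, help="how many times to greet")
args = parser.parse_args()

for _ in range(args.times):
    print("Hello, %s!" % args.name)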

THE DIFFERENCES BETWEEN PYTHON 2 & PYTHON 3:

While Python 2.7 and Python 3 share many similar capabilities, they should not be thought of as interchangeable. Though a user can write code and useful programs in either version, it is worth understanding that there are some considerable differences in code syntax and handling.

PRINT

In Python 2, print is considered a statement rather than a function, which is a typical area of confusion, as many other actions in Python require arguments inside parentheses to execute. If the user wants the console to print out “The Shark is my favourite sea creature”, in Python 2 it can be done with the following print statement:

print "The Shark is my favourite sea creature"

In Python 3, print() is explicitly treated as a function, so to print out the same string, the user can do it with the simple syntax of a function call:

print("The Shark is my favourite sea creature")

This change made Python’s syntax more uniform and also made it easier to swap the built-in print for a different function.

DIVISION WITH INTEGERS

In Python 2, any number that the user types without decimals is treated as the programming type called an integer. While at first this seems like an easy way to handle types, the user may expect that dividing two integers will produce an answer with decimal places (called a float), as in:

5 / 2 = 2.5

However, in Python 2 integers were strongly typed and would not change to a float with decimal places, even in cases where that would make intuitive sense.

When the two numbers on either side of the division symbol “/” are integers, Python 2 performs floor division, so the quotient returned is the largest integer less than or equal to the true result. This means that when you write 5 / 2, Python 2.7 returns the largest integer less than or equal to 2.5, in this case 2:

a = 5 / 2

print a

OUTPUT
2
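
In Python 3, the same expression performs true division and returns a float; the floor-division operator // is available when integer division is actually wanted, as in this short sketch:

# Python 3
a = 5 / 2
print(a)       # 2.5 -- true division always returns a float
print(5 // 2)  # 2   -- explicit floor division, in both Python 2 and Python 3
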
UNICODE SUPPORT

Programming languages can handle the string type, i.e., a sequence of characters, in different ways so that computers can convert numbers to letters and other symbols.

Python 2 uses the ASCII alphabet by default, so when you type “Hello”, Python 2 handles the string as ASCII. Limited to a couple of hundred characters at best in its various extended forms, ASCII is not a very flexible method for encoding characters, especially non-English characters.

Python 3 uses Unicode by default, which saves programmers development time and lets them easily type and display many more characters directly in their programs, because Unicode supports a far wider range of linguistic characters.
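
The difference is easy to see from the interpreter: in Python 3 every str is Unicode, while in Python 2 a bare literal is a byte string and Unicode needs a u'' prefix. A short sketch (Python 3 syntax, with the Python 2 behaviour noted in comments):

# Python 3: every str is Unicode by default
greeting = "こんにちは"  # non-English characters can be typed directly
print(type(greeting))    # <class 'str'>
print(len(greeting))     # 5 characters

# In Python 2 the equivalent needed an explicit unicode literal:
#   greeting = u"こんにちは"
# while a bare "Hello" was a byte string handled as ASCII.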

CONCLUSION

Python is a flexible and well-documented programming language to learn. Whether you choose to work with Python 2 or Python 3, you will be able to work on exciting software projects.

Though there are several key differences, it is not difficult to move from Python 3 to Python 2 with a few tweaks, and you will often find that Python 2.7 can run Python 3 code without much trouble.

It is important to keep in mind that as most developers focus on Python 3, the language will become more refined and in line with the evolving needs of programmers, while less support will be given to Python 2.7.

GOOGLE CLOUD VS AWS

The main players in business cloud services each have an array of products covering users’ needs for their online operations. But there are some differences, not only in pricing but also in how they name and group their services, so let’s compare them and find out what they offer.

WHY CLOUD COMPUTING?

Many famous companies from both the public and the private sectors, such as Netflix, Airbnb, Spotify, Expedia, and PBS, rely on cloud services to support their online operations. This allows them to focus on doing what they’re known for, and lets many of the technicalities be taken care of by an infrastructure that already exists and is constantly being upgraded.

Cloud services are not limited to the big names. Today, we live in a world where both a huge business and an individual entrepreneur with no initial capital can access world-class infrastructure for storage, computing, and management to build the next massive online service.

LET’S DIFFERENTIATE BETWEEN AWS AND GOOGLE CLOUD

Amazon introduced “commoditized” cloud computing services through its first AWS service, launched in 2004, and ever since then it has kept innovating and adding features, which has allowed it to keep the upper hand in the business by building the most extensive array of services and solutions for the cloud. In many regards, it is also the most expensive one.

Google came into the game later and is quickly catching up, bringing its own infrastructure and ideas, offering deals, and pulling prices down.

STORAGE

Storage is a key pillar of cloud services. In the cloud, the user can store anything from a few gigabytes (GB) to several petabytes (PB) with the same ease. This is not regular hosting, for which you just need a username and password to upload files over FTP. Instead, the user needs to interact with APIs or third-party programs, and it may take some time before the user is ready to operate their storage entirely in the cloud. For storing objects, Amazon Simple Storage Service (S3) is the service that has been running for a long time; it has extensive documentation, including free webinars and tons of sample code and libraries. Google Cloud Storage is, of course, just as reliable and robust a service, but the resources you’ll find don’t come close to Amazon’s.
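
As an illustration of that “interact with APIs” point, here is a minimal sketch of uploading one file to each service with the official Python client libraries, boto3 for S3 and google-cloud-storage for Cloud Storage; the bucket names and file paths are hypothetical, and credentials are assumed to be already configured in the environment:

import boto3                      # pip install boto3
from google.cloud import storage  # pip install google-cloud-storage

# Amazon S3: upload a local file to a bucket.
s3 = boto3.client("s3")
s3.upload_file("report.csv", "my-example-bucket", "backups/report.csv")

# Google Cloud Storage: the same operation with Google's client library.
gcs = storage.Client()
bucket = gcs.bucket("my-example-bucket")
bucket.blob("backups/report.csv").upload_from_filename("report.csv")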

ANALYTICS

The challenges of big data are dealing with incredibly large data sets, making sense of them, using them to make predictions, and even helping to model completely new situations such as new products, services, and treatments. This requires specific technologies and programming models, one of which is MapReduce, which was developed by Google. Google Cloud offers a range of big data services, such as BigQuery (managed data warehouse for large-scale data analytics), Cloud Dataflow (real-time data processing), Cloud Dataproc (managed Spark and Hadoop), Cloud Datalab (large-scale data exploration, analysis, and visualization), and Cloud Pub/Sub (messaging and streaming data). Elastic MapReduce (EMR) and HDInsight are Amazon’s and Azure’s takes on big data, respectively.
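
To make the MapReduce programming model concrete, here is a toy word count in plain Python: the map step emits (word, 1) pairs and the reduce step sums the counts per word. Services such as Cloud Dataproc or EMR distribute the same idea across many machines; the sample documents are made up.

from collections import defaultdict

documents = ["big data is big", "data about data"]

# Map step: emit a (word, 1) pair for every word in every document.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle + reduce step: group the pairs by word and sum the counts.
counts = defaultdict(int)
for word, one in mapped:
    counts[word] += one

print(dict(counts))  # {'big': 2, 'data': 3, 'is': 1, 'about': 1}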

LOCATIONS

When deploying services, the user may want to choose a data center that is close to their primary target users. For instance, if you’re doing real estate or retail hosting on the West Coast of the United States, you’ll want to deploy your services right there to minimize latency and provide a better user experience (UX). Amazon has the most extensive coverage, whereas Google has solid coverage in the United States but is falling behind in Europe and South America.

NESSUS-3

The Nessus vulnerability scanner was created by the Nessus Development Team, led by Renaud Deraison. Nessus is one of the greatest tools designed to automate testing and discover known security problems. Nessus is designed to help identify and solve known problems before a hacker takes advantage of them. Nessus is a great tool with lots of capabilities. In this article, we shall endeavour to cover the basics of Nessus and Nessus 3 setup and configuration.

WHAT’S NEW IN NESSUS-3

Nessus 3 is the latest version of Nessus. Tenable Network Security, Inc. offers Nessus 3 as a free product for UNIX, Windows, and OS X operating systems. The following is a list of changes between Nessus 2 and Nessus 3:

  • NASL3 is 16 times faster than NASL2 and a full 256 times as fast as NASL1.
  • The IDS-evasion feature is no more.
  • Nessus 3 has more protocol APIs.
  • In Nessus 3, each host is tested in its own individual process and scripts share the same process space.
  • A NASL script can only use 80 MB of memory.
  • The NASL3 VM is more secure. A poorly written NASL script is not vulnerable to buffer or stack overflows or memory corruption, because the language itself prevents the problem from occurring.
  • There are two kinds of NASL functions:
      • “Harmless” functions, which cannot interact with the local system.
      • Functions which can interact with the local system. These are supported in Nessus 3, but the script must be signed by Tenable. In this way, tainted scripts cannot interact with the local system, and the risk of a script being copied or hacked from system to system is reduced.

SCANNING MODE AND NESSUS OS FINGERPRINTING

The ability to detect the operating system of a remote target is always critical. A vulnerability scanner must be able to adapt to different types of environments. One of the initial steps that Nessus takes is to attempt to identify the remote operating system. This is a highly critical step, as the other Nessus modules will often rely on this information to make intelligent decisions, such as whether to scan the target host or not.

DEPLOYING A NESSUS INFRASTRUCTURE

Before deploying a Nessus infrastructure, the user should understand the target network. For instance:

  • Where are the network bottlenecks?
  • Where are the firewalls?
  • Where are the RFC 1918 networks?
  • What routing protocols are used?
  • What network protocols are typically used?

Speed

Nessus 3 is all about speed. With Nessus 3, the network is the limiting factor. If more speed is required, the user should have multiple Nessus engines running in parallel. Besides speed, the user will also get other benefits from such a configuration. For example, when scanning a local broadcast domain, the user’s Nessus scanner will be able to pick up on things which typically would not be routed to the next-hop router. By having a scanner on each broadcast domain, the user can detect and use broadcast traffic, RFC 1918 addressing, and much more. Having separate scanners also ensures that the Nessus scanning traffic does not traverse WAN pipes. Nessus 3 runs on UNIX, Windows, and OS X operating systems; hence, an organization can, for example, deploy the Windows version of Nessus on its Backup Domain Controllers.

Location

It is very important to plan where a scan should begin. Do you want to simulate the “hackers” view and scan from outside the network? Do you want to scan from inside a network? Do you want to scan from a business partner network into your network? There are hundreds of permutations. The first question will be: “which vector of attack do I wish to test for?” Ideally, you want to test all the different permutations.

Time

How often can you scan? If active scanning is the only scanning being done, then the user should scan as often as possible. Most organizations utilize change control procedures, so try to scan after a change control window. Remember, if you are scanning every 30 days, a change made to the network 2 days after a scan will go undetected for 28 days. While outside the scope of this article, Tenable offers a 24×7 passive vulnerability scanner which detects these changes in real time. With respect to time, Tenable releases dozens of plugins per month. Be sure to have your Nessus scanner set up to automatically retrieve the latest direct feed from Tenable prior to a scan.

The Verdict

Nessus is an excellent tool that will greatly aid your ability to test for and discover security problems. The power that Nessus gives you should be used wisely, as some of its most dangerous plug-ins can render production systems unavailable. We hope this article has given you a brief overview of Nessus and how it tests for and discovers security problems.

HOW TCP IS DIFFERENT FROM UDP

Unlike TCP, UDP is connectionless and provides no reliability, no windowing, and no function to ensure data is received in the same order in which it was transmitted. However, UDP does provide some of the same functions as TCP, such as data transfer and multiplexing, and it has fewer bytes of overhead in each packet. This smaller overhead means the UDP protocol needs less time to process a packet and needs less memory. Also, the absence of an acknowledgement field makes it faster, as it does not have to wait for an ACK or hold data in memory until it is ACKed.
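
The difference shows up directly in the socket API: a UDP sender just fires datagrams at an address with no handshake or acknowledgement, while a TCP sender must first establish a connection. A minimal Python sketch, assuming the addresses and ports are illustrative and that something is listening on the TCP port:

import socket

# UDP: connectionless -- datagrams are sent with no handshake and no ACKs.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", ("127.0.0.1", 9999))
udp.close()

# TCP: a connection must be established first; delivery is acknowledged and ordered.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 9998))  # three-way handshake happens here (needs a listener)
tcp.sendall(b"hello over TCP")
tcp.close()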

WHAT IS A MAC ADDRESS?

MAC address stands for Medium Access Control address. A MAC address is also referred to as a physical address, hardware address, or Ethernet address.

A MAC address is unique to each network device that wants to use a TCP/IP network, LAN, or WLAN service. It is “burnt into” the device by the manufacturer of the device or card.

A MAC address is composed of 48 bits, usually written as six pairs of hexadecimal digits separated by colons or dashes.

Example – 00-14-2A-3F-47-D0

Remember, hexadecimal digits can be the numbers 0-9 and the letters A-F.

A MAC address identifies the manufacturer of the card and the device number. The first three pairs of digits represent the manufacturer (called the OUI, Organizationally Unique Identifier) and the last three pairs of digits represent a number specific to the device (called the NIC, Network Interface Controller, specific part). ARP, the Address Resolution Protocol, is used to resolve an IP address to a MAC address. The MAC address is essential for the IP layer to work; MAC is the foundation that lets IP deliver packets from one system to another. As with IP addresses, some MAC addresses are defined for special purposes; for example, FF:FF:FF:FF:FF:FF is reserved for broadcast.
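
As a small illustration, the sketch below splits the example address above into its OUI and device-specific halves, and uses Python’s uuid.getnode() to read the local machine’s own 48-bit hardware address:

import uuid

mac = "00-14-2A-3F-47-D0"
pairs = mac.split("-")
print("OUI (manufacturer):", "-".join(pairs[:3]))  # 00-14-2A
print("NIC (device part): ", "-".join(pairs[3:]))  # 3F-47-D0

# uuid.getnode() returns this machine's MAC address as a 48-bit integer.
node = uuid.getnode()
local_mac = "-".join("%02X" % ((node >> shift) & 0xFF) for shift in range(40, -1, -8))
print("This machine's MAC:", local_mac)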

WHAT IS THE DIFFERENCE BETWEEN NAT AND PAT?

NAT stands for Network Address Translation and PAT stands for Port Address Translation.

A network address translation device obscures all details of the computers connected to the local network. The NAT device acts as the gateway for all the computers. Behind the NAT device, the local network can use any network address space. The NAT device acts as a proxy for the local network on the Internet.

NAT

A NAT device helps increase security, as it can prevent an outside attacker from even finding the local network. This is because the local addressing scheme is not contiguous with the standard IP address space used worldwide.

PAT

PAT helps in the optimum utilization of IP address space by allocating one dedicated public IP address for the organization while internally using private IP addresses as needed. PAT is an extension of NAT.
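
A rough way to picture PAT is as a translation table on the gateway: many private (address, port) pairs are mapped onto different ports of a single public address. The toy sketch below models that table in Python; the inside addresses are from the private RFC 1918 ranges and the public IP is made up:

# Toy model of a PAT table on a gateway: many private (address, port) pairs
# are mapped onto different ports of the organization's single public address.
PUBLIC_IP = "203.0.113.10"  # made-up public address (documentation range)

pat_table = {}   # (private_ip, private_port) -> public_port
next_port = 40000

def translate(private_ip, private_port):
    """Return the public (ip, port) the Internet sees for this inside host."""
    global next_port
    key = (private_ip, private_port)
    if key not in pat_table:
        pat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, pat_table[key]

print(translate("192.168.1.20", 51000))  # ('203.0.113.10', 40000)
print(translate("192.168.1.21", 51000))  # ('203.0.113.10', 40001)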