GIT VS SVN

There are a number of Subversion vs. Git comparisons around the web, and most of them are based on myths rather than facts. In this article, we will explain the major differences between these version control systems to help you understand the actual state of affairs.

Distributed Nature

Git was designed from the ground up as a distributed version control system. Being distributed means that multiple redundant repositories and branching are first-class concepts of the tool.

In a distributed VCS like Git, every user has a copy of the repository data stored locally, which gives the user fast access to file history and allows full functionality when disconnected from the network.

In a centralized VCS like Subversion, only the central repository has the complete history. This means that users must communicate over the network with the central repository to obtain history about a file.

Branch Handling

Branches in Git are a core concept used every day by every user. In Subversion they are more cumbersome and often used sparingly.

In Git, a developer's working directory is itself a branch. If two developers are modifying two different, unrelated files at the same time, it is easy to view their working directories as different branches stemming from a common base revision of the project.

Speed of Operation

Git is extremely fast. All operations are local, so there is no network latency involved for the following (see the example after the list):

View file history
Commit changes
Merge branches
Switch branches
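
A minimal sketch of these operations, all of which run entirely against the local .git folder (branch and file names are just examples):

git log README.md               # view a file's history locally
git commit -m "Update docs"     # commit changes without touching the network
git merge feature-branch        # merge a local branch
git checkout feature-branch     # switch branches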

Line Ending Conversion

Subversion can easily be configured to automatically convert line endings to CRLF or LF, depending on the native line ending used by the client's operating system. Subversion also allows users to specify line-ending conversion on a file-by-file basis. If the user does not mark binary content as binary when adding it, the content might get corrupted by this conversion.
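
As a brief illustration (the file names are examples), the per-file setting in Subversion is the svn:eol-style property; Git offers a comparable per-file mechanism through .gitattributes:

# Subversion: convert this file to the client's native line endings
svn propset svn:eol-style native notes.txt

# Git, for comparison: an equivalent .gitattributes entry
# *.txt text eol=lf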

Single Repository

Subversion uses a single central repository. The user only needs to know one repository URL, and all the material and branches related to the project live there.

Since Git is distributed, not everything related to a project is necessarily stored in the same location; different parts or forks of the project may live in different repositories.

Data Storage

Every version control system stores file metadata in hidden folders such as .svn and .cvs, whereas Git stores the entire content in the .git folder. If you compare the size of a .git folder with a .svn folder, you will notice a huge difference. The .git folder is the cloned repository on your machine; it has everything that the central repository has, such as tags, branches, and version history.

Access Control

Because Subversion has a single central repository, it is possible to specify read and write access controls in a single location and enforce them across the entire project.
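
One common place for such rules, not named in the article and shown here only as an illustrative assumption, is Subversion's path-based authorization (authz) file; the usernames and paths are made up:

# Everyone may read, only alice may write to trunk
[/]
* = r
[/trunk]
alice = rw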

Change Tracking

In earlier versions of Git, even a minor change to a binary file, such as adjusting the brightness of an image, produced content that Git treated as a new file, causing the content history to split. Subversion tracks changes file by file, so the change history of each file is maintained.

DEPLOY A FULL DOCKER SWARM CLUSTER ON AZURE CONTAINER SERVICES

Microsoft has announced the availability of Azure Container Service, which makes it easy to deploy a cluster of virtual machines that can host containers.

Azure Container Service supports two orchestrators for these clusters:

Docker Swarm: It uses the native Docker stack so that a user can directly use Docker commands to deploy Docker containers.

DC/OS: A data center operating system that can run containers in different formats, including Docker images. DC/OS is also used to deploy and run well-known distributed systems such as HDFS, Spark, Kafka, and Cassandra, and is used at scale by organizations like Airbnb, Twitter, and Netflix.

In this article, we will explain how you can use Azure Container Service to deploy a Docker Swarm-based cluster in Azure.

GENERATE AN SSH RSA KEY

Microsoft is working on the implementation of container technology for the next version of Windows Server, but Azure Container Service currently supports only Linux workloads, so the user needs an SSH key to connect to the cluster once it is created. There are several ways to create a new key, depending on the system the user is running on.
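
On Linux, Mac, or Git Bash on Windows, one straightforward way is ssh-keygen (the key file name and comment below are just examples):

# Generate a 2048-bit RSA key pair; the public key goes into the cluster configuration
ssh-keygen -t rsa -b 2048 -f ~/.ssh/acs_rsa -C "acs-admin"
cat ~/.ssh/acs_rsa.pub   # this is the SSH public key to paste into the portal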

The bash scripts used here are open source and hosted on GitHub:

https://github.com/avranju/azure-swarm

The two main scripts are:

swarm-up.sh – This brings up the cluster for the user.

swarm-down.sh – This tears down the cluster created with swarm-up.sh.

Create a new Docker Swarm cluster on Azure Container Service

A user can create a new Azure Container Service instance using the Azure portal, Azure CLI or PowerShell. In this article, we will focus on Azure Portal.

Go to http://portal.azure.com, log in with your Azure account, click the “+ New” button, and search for “container”.

Click on the Azure Container Service in the result view.

Click on the Create button. An assistant will open to help you configure your new cluster.

First, the user has to enter the username that will be the administrator of the cluster and paste in the SSH public key. The user also has to choose the Azure subscription, a resource group, and the location where the cluster will be deployed.

  • Click on OK to continue to step 2. In this step, a user can choose between the two orchestrators: DC/OS or Swarm.
  • In the next step, the user has to set the Azure Container Service settings, such as the number of masters and agent nodes, the virtual machine size to use, and a DNS prefix that will be used in each resource that is created.
  • Click on OK and wait for the final validation.
  • In the last section, click on the Create button. Depending on the number of masters/agents you have asked for, the cluster creation may take some time.

Once the deployment is completed, a user can access their new Azure Container Service.

Connect to the Swarm master virtual machine:

Connecting to the Swarm master is really simple: use the SSH command shown in the output produced by the deployment.

If you are using Linux or Mac, open a terminal. If you are using Windows, you can use Git Bash, which provides an SSH client.

You will be asked to enter the passphrase of your SSH key, and then you are connected to the Swarm master.
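
The exact command comes from your deployment output; as a rough sketch only, it tends to look something like this (the key path, user name, DNS name, and port are placeholders, not values from the article):

# Placeholders only: copy the real command from the deployment output
ssh -i ~/.ssh/acs_rsa -p 2200 azureuser@mycluster-mgmt.westeurope.cloudapp.azure.com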

Once connected, you can use Docker commands to work with your Swarm cluster. Your cluster is now ready, and you can deploy your first Docker container.
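
For example, a quick sanity check followed by a first container (the nginx image is just an example):

docker info                               # confirm the Swarm cluster responds
docker run -d -p 80:80 --name web nginx   # run a first container exposing port 80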

You now have a fully functional Docker Swarm cluster based on Azure Container Service. As you can see from the steps above, it is really simple to create a cluster in Microsoft Azure.

TOP 10 COMMANDS OF GIT

Are you new to Git? Then you will notice some things that work differently compared to SVN- or CVS-based repositories. In this article, we will explain the top 10 important Git commands in a Git workflow that you need to know about.

Are you using Windows? Then set up Git on your local machine to follow the steps below.

The Git workflow is as follows:

Go to the directory that you want to put under version control and use git init to initialize it. This creates a new repository in the current location. The user can then make changes to files and use git add to stage them into the staging area. The user can also use git status and git diff to see exactly what has changed. When you want to upload the changes to a remote repository, use git push.

When the user wants to download changes from a remote repository into the local repository, the user needs to use git fetch and git merge.
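
Put together, a minimal sketch of this workflow looks like the following (the file, remote, and branch names are examples):

git init                        # put the current directory under version control
git add report.txt              # stage a changed file
git status                      # see what is staged and what is modified
git diff                        # inspect unstaged changes
git commit -m "First version"   # record the staged changes locally
git push origin master          # upload commits to a remote repository
git fetch origin                # download new changes from the remote
git merge origin/master         # merge them into the local branch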

1. Git auto complete:

The auto-complete feature can be enabled for git commands, and it works in the terminal.
To enable it, edit your ~/.bash_profile file:
nano ~/.bash_profile
and add the following lines:
if [ -f ~/.git-completion.bash ]; then
  . ~/.git-completion.bash
fi
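
The snippet above assumes that ~/.git-completion.bash already exists. If it does not, it can be downloaded from the contrib directory of the Git source tree, for example:

curl -o ~/.git-completion.bash \
  https://raw.githubusercontent.com/git/git/master/contrib/completion/git-completion.bash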

2. Track the file changes:

In order to keep things under control and to know who is responsible when something goes wrong, it is always useful to see all the changes made to a particular file. A user can do this easily with the git blame command, which shows the author of every line in a file.

git blame my_filename

3. Stash uncommitted changes:

The stash command saves time in situations where the user needs to show the application to a client or someone from the management team, but is in the middle of work and the application would not run without reverting the uncommitted changes. At that point, the user can run the stash command, which saves all the changes for later use.

git stash

A user can showcase the work without losing any progress. When the user wants to see the list of stashes, then run:

git stash list
To recover the uncommitted changes, execute the following command:
git stash apply
It is also possible to apply a particular stash only, using its identifier from the list above:
git stash apply stash@{1}

4. Staging parts of a changed file for a commit:

According to best practices in version control, each commit should represent either a single feature or a single bug fix. However, team members may add multiple features or fix multiple bugs without committing. In such situations, a user can stage parts of a changed file and commit them separately; the -p flag walks through each hunk and asks whether to include it.

git commit -p myfilename
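
A closely related approach, not shown in the original command, is to stage the hunks first and commit afterwards:

git add -p myfilename                       # interactively choose which hunks to stage
git commit -m "Commit only the selected hunks"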

5. Clone a specific remote branch:

In some cases, the user doesn’t want to clone the entire remote repository, but only one of its branches. A user can do that by running the following command.

git remote add -t Mybranchname -f origin <repository-url>
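
An alternative that may be simpler (this is a substitution, not the article's own command) is a single-branch clone:

git clone --branch Mybranchname --single-branch <repository-url>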

6. Delete a remote branch:

Deleting a remote branch is more confusing than deleting a local one. In newer versions of git, a user can use the following command.

git push origin --delete Mybranchname

However, in older versions, the user had to push “nothing” to the branch in order to delete it.

git push origin :Mybranchname

7. Merge the current branch with a commit from another branch:

When the team is working in parallel on multiple branches, a change made in one branch sometimes has to be applied to other branches. Using the cherry-pick command, a user can merge a single commit from another branch into the current branch without bringing over other files or commits. First switch to the branch that should receive the commit, and then run the cherry-pick command.

git checkout Mybranchname

git cherry-pick commit_hash

8. Pull with rebase instead of merge:

When team members are working on the same branch, the user has to pull the code and merge changes frequently. To keep merge commit messages from cluttering up the log, use rebase.

git pull --rebase

A user can also configure a certain branch to always rebase:

git config branch.mybranchname.rebase true

9. Use .gitignore and .gitkeep:

In Git, it is possible to exclude some files from version control. This is done by creating a file named .gitignore in the root of the project and adding the path of each file or pattern on a new line. Git skips empty folders by default; if the user wants to commit an empty folder, create a .gitkeep file and place it in that folder.
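
A small illustration (the paths are examples):

# Example .gitignore entries, one pattern per line
build/
*.log

# Keep an otherwise empty folder under version control
touch logs/.gitkeep
git add logs/.gitkeep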

10. Change the default message editor:

On UNIX operating systems, the default editor for commit messages is vi. If the user wants to change it, run the following command:

git config --global core.editor "path/to/my/editor"
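
For example, to use nano (any editor on your PATH works the same way):

git config --global core.editor "nano"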

CI Vs CT

“Continuous” is the word that you would hear again and again in any discussion around DevOps. Almost everything in DevOps is continuous, be it Continuous Integration or Continuous Testing. So let’s take a closer look at the idea of continuity and the difference between Continuous Integration and Continuous Testing.

Continuous Integration

Continuous Integration is a software development practice where members of a team integrate their work frequently. This is probably the most widely known DevOps practice, and it is all about compiling, building, and packaging your software on a continuous basis.

With every check-in, the system kicks off the compilation process, runs the unit tests, runs static analysis with the tools you use, and performs the other quality-related checks that you can automate. If you add automated deployment to one environment, then you also know that the system can be deployed. This assumes that all code has been merged into the mainline or trunk before the process is triggered.
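
As a rough sketch only, the per-check-in pipeline described above might boil down to a script like this on the CI server (the build targets are assumptions, not prescribed by the article):

# Hypothetical steps run by the CI server on every check-in
git checkout master && git pull        # get the latest mainline
make build                             # compile and package
make test                              # run the unit tests
make lint                              # static analysis / quality checks
make deploy-staging                    # optional: automated deployment to one environment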

Key practices of Continuous Integration

The effort required to integrate a system increases exponentially with time. Continuous Integration is a software development practice that promotes regular team integrations and automatic builds. By integrating the system more regularly, integration issues are identified earlier, when they are easier to fix, and the overall integration effort is reduced.

Continuous Integration provides the following benefits:

Improved feedback: Continuous Integration gives you constant and demonstrable progress.
Improved bug detection: Continuous Integration enables the user to detect and remove errors early.
Improved collaboration: Continuous Integration enables team members to work together safely. Users know that they can make changes to their code, integrate the system, and quickly determine whether their changes conflict with others.
Reduced number of parallel changes that need to be merged and tested.
Reduced technical risk: A user will always have an up-to-date system to test against.
Reduced number of errors found during system testing: all conflicts are resolved before new change sets are made.

Continuous Testing

The aptly named DevOps practice of Continuous Testing synchronizes testing with Dev and Ops processes and is optimized to achieve business and development goals. Continuous Testing in DevOps encourages a systematic approach towards process improvement.

Continuous Testing goes beyond automation and encompasses all practices that help to mitigate risk before progressing to subsequent software development lifecycle stages.

Continuous Testing and automation

The approach to Continuous Testing can vary and follow different pathways to ensure that the best user experience is delivered.

It is rarely viable to repeat all the tests every time a new feature is added, so a Continuous Testing strategy fosters a company-wide cultural change to achieve four capabilities: test early, test faster, test often, and automate.

Key elements of Continuous Testing

Risk assessment: It covers risk mitigation tasks, technical debt, quality assessment, and test coverage optimization to ensure the build is ready to progress to the next SDLC stage.

Policy analysis: It ensures that all processes align with the organization's evolving business and compliance demands. Its primary objectives are:

  • Identifying trends that introduce dangerous patterns into the code.
  • Augmenting defect prevention practices for high-risk areas.
  • Isolating the risk in targeted areas.

Advanced analysis: Using automation in areas such as static code analysis, change impact analysis, and scope assessment to prevent defects in the first place and to accomplish more within each iteration.

Test optimization: It ensures that tests give accurate outcomes and provide actionable findings. Aspects include test data management and test optimization and maintenance.

Making sense of the differences

Though the terms look similar, there are some essential differences between the two processes. Both are useful in minimizing risk and labor cost while allowing developers the greatest creative freedom, and understanding the subtle differences between the two is essential in choosing the process that will benefit your team and company.

WHAT IS DC/OS

Innovation and technology go hand in hand. Technology is all about the changing of ideas into something tangible. Innovation is not only for those organizations and individuals who are creative, but also requires the presence of scientific and technological talent. No doubt technology has transformed our lives into something much better.

DC/OS:

DC/OS is a distributed operating system based on the Apache Mesos distributed systems kernel. It enables the management of multiple machines as if they were a single computer. It automates resource management, schedules process placement, facilitates inter-process communication, and simplifies the installation and management of distributed services.

On a single computer, the operating system takes care of these things automatically. When was the last time you had to tell your laptop which processor core your application should run on?

On your laptop, the job of scheduling an application onto a specific core is handled by the kernel. The kernel manages all the resources on your computer, including memory, disk, inputs, and outputs.

Features of DC/OS
High resource utilization

Deciding where to place long-running services whose resource requirements change over time is hard. DC/OS manages this problem by separating resource management from task scheduling. Task placement is delegated to higher-level schedulers that are more aware of their tasks and their specific requirements. This model is known as two-level scheduling, and it enables multiple workloads to be colocated efficiently.

Mixed workload colocation

DC/OS makes it easy to run all your computing tasks on the same hardware. For scheduling long-running services, DC/OS integrates with Marathon to provide a solid platform from which to launch microservices, web applications, or other schedulers.

For complex custom workloads, a user can even write their own scheduler to optimize and control the scheduling logic for specific tasks.

Public and private package repositories

DC/OS makes it easy for the user to install both community and proprietary packaged services. The Mesosphere Universe package repository connects the user with a library of open source, industry-standard schedulers, services, and applications. DC/OS also supports installing from multiple package repositories: one can host private packages to be shared within their company or with their customers.
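
With the DC/OS CLI installed, installing a package from Universe is a short command sequence (the package name is just an example):

dcos package search kafka      # look the package up in the Universe repository
dcos package install kafka     # install it on the cluster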

Service discovery and load balancing

DC/OS also includes several options for automating service discovery and load balancing. Distributed services create distributed problems, but the user does not need to solve those problems alone. DC/OS includes automatic DNS endpoint generation, an API for service lookup, transport-layer load balancing, and load balancing for external-facing services.

Security:

At the highest level, a DC/OS deployment can be divided into three security zones: the admin, private, and public zones.

Admin zone

The admin zone is reachable through HTTP/HTTPS and SSH connections and provides access to the master nodes. It also provides reverse proxy access to the other nodes in the cluster via URL routing.

Private zone

The private zone is a non-routable network that can be accessed only from the admin zone. Deployed services run primarily in the private zone.

Public zone

Publicly accessible applications run in the public zone. Usually, only a small number of agent nodes run in this zone. An edge router forwards traffic to applications running in the private zone.

WHAT FUNCTIONS DOES BAMBOO OFFER?

If you are a solo developer, then using Bamboo gives you:

  • An automated, and therefore reliable, build and test process, leaving you free to code more.
  • A way to manage builds that have different requirements or targets.
  • Automatic deployment to a server, such as the App Store or Google Play.

If you work on a large, complex application, then, in addition to all the above advantages, using Bamboo means that:

  • You can optimize build performance through parallelism.
  • You can leverage elastic resources.
  • You can deploy continuously, for example to user acceptance testing (UAT).
  • You can implement release management.

Comparison between Jenkins, Bamboo, and Travis

[Comparison table: Jenkins vs. Bamboo vs. Travis]

WHAT IS DOCKER AND WHEN TO USE DOCKER?

Docker is a basic tool that you should start incorporating into your daily development and ops practices.

  • Use Docker when you want to distribute/collaborate on your app’s operating system with a team
  • Use Docker whenever your app needs to go through multiple phases of development (dev/test/qa/prod, try Drone or Shippable, both do Docker CI/CD)
  • Use Docker with your Chef Cookbooks and Puppet Manifests (remember, Docker doesn’t do configuration management)

WHAT’S THE DIFFERENCE BETWEEN UP, RUN, AND START?

Typically, you want docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml.

The docker-compose run command is for running “one-off” or “ad hoc” tasks. It requires the service name you want to run and only starts containers for the services that the running service depends on. Use run to run tests or to perform an administrative task such as removing or adding data to a data volume container.

The docker-compose start command is useful only to restart containers that were previously created, but were stopped. It never creates new containers.
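
As a short illustration, assuming a docker-compose.yml that defines a web service (the service name and the one-off command are assumptions):

docker-compose up -d           # create and start all services in the background
docker-compose run web env     # run a one-off command in a new "web" container
docker-compose stop            # stop the running containers
docker-compose start           # restart the previously created, stopped containers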

HOW IS DOCKER DIFFERENT FROM STANDARD VIRTUALIZATION?

Docker is operating system level virtualization. Unlike hypervisor virtualization, where virtual machines run on physical hardware via an intermediation layer (“the hypervisor”), containers instead run user space on top of an operating system’s kernel. That makes them very lightweight and very fast.
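
One way to see this in practice: because a container shares the host's kernel, it reports the host kernel version rather than a guest OS kernel (the alpine image is just an example):

docker run --rm alpine uname -r      # prints the host machine's kernel version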