SonarQube is an open source static code analysis tool that is gaining great popularity among software developers. It allows developers to measure code quality and fix code quality issues. The SonarQube community is active and regularly provides upgrades, plug-ins, and customization information. Further, it is a healthy practice to run SonarQube on the source code at regular intervals to fix code quality violations and reduce technical debt.
SonarQube lets developers track code quality, which helps them determine whether a project is ready to be deployed to production. It also supports continuous, automatic code reviews and analysis to find code quality issues. SonarQube provides many other features, including the ability to record metrics, progress graphs, and much more. It has built-in options for automated analysis and continuous integration using tools such as Jenkins and Hudson.
STEP BY STEP INSTALLATION OF SONARQUBE ON UBUNTU 16.04
Step 1: System update
Step 2: Install the JDK
Step 3: Install PostgreSQL
Step 4: Download and configure SonarQube
Step 5: Configure the systemd service
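The steps above can be sketched as shell commands. This is a sketch only: the SonarQube version shown is an example (check sonarqube.org for the current release), and paths, the service user, and the unit file should be adjusted to your setup.

```
# Step 1: update the system
sudo apt-get update && sudo apt-get -y upgrade

# Step 2: install a JDK (SonarQube runs on Java)
sudo apt-get install -y openjdk-8-jdk

# Step 3: install PostgreSQL
sudo apt-get install -y postgresql postgresql-contrib

# Step 4: download and unpack SonarQube
# (6.7.7 is an example release; use the current one from sonarqube.org)
wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-6.7.7.zip
sudo apt-get install -y unzip
unzip sonarqube-6.7.7.zip
sudo mv sonarqube-6.7.7 /opt/sonarqube

# Step 5: a minimal systemd unit, e.g. /etc/systemd/system/sonarqube.service:
#   [Unit]
#   Description=SonarQube
#   After=network.target postgresql.service
#   [Service]
#   Type=forking
#   ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
#   ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
#   User=sonar
#   [Install]
#   WantedBy=multi-user.target
# then: sudo systemctl daemon-reload && sudo systemctl enable --now sonarqube
```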
TO INTEGRATE JENKINS WITH SONARQUBE
First, install the SonarQube plugin on the Jenkins server.
1. Go to Manage Plugins and install the SonarQube Scanner plugin.
2. Go to Global Tool Configuration, add a SonarQube Scanner installation, and save.
SonarQube Scanner is used to start code analysis. Project configuration is read from a sonar-project.properties file or passed on the command line.
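A minimal sonar-project.properties might look like this; all values are placeholders to adapt to your project:

```
# sonar-project.properties (example values)
sonar.projectKey=my-project
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src
sonar.host.url=http://localhost:9000
```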
3. Configure the system.
A token can be generated from the SonarQube web interface:
go to Administration > Security > Users and generate a token.
Then create a new item in Jenkins and update its configuration.
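Putting it together, a Jenkins pipeline stage can run the scanner. This is a sketch: the server name 'MySonarServer' and the tool name 'SonarScanner' are assumptions that must match the names you configured under Manage Jenkins.

```groovy
// Jenkinsfile sketch — names are placeholders matching your Jenkins config
pipeline {
    agent any
    stages {
        stage('SonarQube analysis') {
            steps {
                script {
                    def scannerHome = tool 'SonarScanner'
                    withSonarQubeEnv('MySonarServer') {
                        sh "${scannerHome}/bin/sonar-scanner"
                    }
                }
            }
        }
    }
}
```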
A hardware security module (HSM) is a physical computing device that protects and manages digital keys for strong authentication and provides cryptographic processing. These modules usually come in the form of a plug-in card or an external device that attaches directly to a computer or network server. Azure Key Vault first became available as a public preview in January 2015 and became generally available in June of the same year. It is available in Standard and Premium service tiers. There is no set-up fee, and users are billed per operation and per key. So, let's dive in and learn more about AKV and the HSM benefits it offers.
To spare you this hardware burden, Microsoft offers Azure Key Vault (AKV) in the cloud. It gives you the benefits of an HSM without the work of managing one.
HOW SAFE IS STORING SENSITIVE DATA IN AKV?
Storing data in an HSM is not the same as storing it in a database. The data does not simply sit in a file on the user's server; it is stored in a hardware device that offers features such as auditing, tamper resistance, and encryption. What Microsoft provides in the form of AKV is an interface through which the user can access the HSM device safely.
If the user wants further assurance about the integrity of the key, the user can generate it right inside the HSM. Isn't it cool? Microsoft processes keys in FIPS 140-2 Level 2 validated HSMs, and even Microsoft cannot see or extract the user's keys. With logging enabled, you can pipe the logs to Azure HDInsight or SIEM solutions for threat detection.
PATCHING, PROVISIONING, AND OTHER INFRASTRUCTURE ISSUES
Microsoft handles these, just as it does for other Azure resources. It provides an SLA of 99.9% for Key Vault transactions to be processed within 5 seconds.
WHAT KIND OF DATA SHOULD BE STORED?
Using AKV, the user can store PFX files and manage SSL certificates. Database connection strings, social network client keys, subscription keys, license keys, and many more can be stored and managed easily with AKV.
CREATING AND MANAGING KEY VAULTS
Azure Key Vault offers a way to securely store credentials and other keys and secrets, but the user's code needs to authenticate to Key Vault to retrieve them. Managed Service Identity (MSI) makes solving this problem simpler by giving Azure services an automatically managed identity in Azure Active Directory (Azure AD). The user can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without keeping any credentials in the code.
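As an illustration, reading a secret with the Azure SDK for Python and a managed identity looks roughly like this. It is a sketch, not runnable outside Azure: it assumes the azure-identity and azure-keyvault-secrets packages, and the vault URL and secret name are placeholders.

```python
# Sketch: retrieving a Key Vault secret via a managed identity.
# Requires an Azure environment; names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # picks up the managed identity at runtime
client = SecretClient(vault_url="https://my-vault.vault.azure.net/",
                      credential=credential)
secret = client.get_secret("db-connection-string")
print(secret.value)
```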
Top 5 Reasons Why Every Programmer Should Learn Python
Python is one of the finest programming languages ever developed. Its syntax reads like simple English, which makes commands easy to remember and write. Python code is easy to read, yet the language can be used for very complex tasks such as AI development. In this article, we are going to cover 5 reasons why every programmer should learn Python.
SIMPLE AND EASY TO LEARN
AI DEVELOPMENT AND MACHINE LEARNING
Programmers are using Python to advance scientific programming. Researchers can now use numerical computation engines for Python and unravel complex calculations with a single import statement followed by a function call. The Python language is flexible, fast, and provides many functions for machine learning. Researchers increasingly prefer Python, and it may soon dominate the machine-learning landscape.
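As a tiny stand-in for what libraries such as NumPy and scikit-learn do at much larger scale, here is a pure-Python sketch of a basic machine-learning building block: fitting a line by ordinary least squares.

```python
# Fit y = a*x + b by ordinary least squares, in pure Python.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# The points below lie exactly on y = 2x + 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```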
DATA SCIENCE IN PYTHON
Data science and Python skills are increasingly in demand as businesses advance in the fields of artificial intelligence, automation, and advanced data analytics, whether it's forecasting budgets or teaching machines to recognize images using sophisticated algorithms.
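For a flavor of the budget-forecasting side, even Python's standard library covers basic descriptive statistics; the spending figures below are made up for illustration.

```python
import statistics

# Hypothetical monthly spend figures
monthly_spend = [1200, 1350, 1100, 1500, 1420]

mean = statistics.mean(monthly_spend)    # average monthly spend
stdev = statistics.stdev(monthly_spend)  # month-to-month variability
# A naive forecast for next month would simply reuse the mean.
```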
FINEST FOR WEB DEVELOPMENT
Python offers numerous choices for web programming. It has a line-up of frameworks for developing websites, including Pylons, Zope 2, web.py, Django, and TurboGears, though Django is the most popular framework for Python web development. What makes Python exceptional is development speed: tasks that can take hours in a language such as PHP are often completed much more quickly in Python.
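The frameworks named above build on Python's WSGI interface. As a sketch, here is a minimal WSGI application, exercised directly with a hand-built environ rather than a real server:

```python
# A minimal WSGI application: a callable taking (environ, start_response).
def app(environ, start_response):
    body = b"Hello from Python"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app without a server by passing a fake environ dict.
def call(app, path="/"):
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers
    environ = {"PATH_INFO": path, "REQUEST_METHOD": "GET"}
    body = b"".join(app(environ, start_response))
    return captured["status"], body

status, body = call(app)
```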
A PERFECT ROUTE FOR START-UPS
Python is the perfect choice for start-ups. With Python, you will be able to code quicker and build complex applications using minimal lines of code. In the start-up world, it is important to move fast from idea to implementation, so learning Python is a sound investment. In addition, it can be used for supporting services or as part of a core product.
The ELK Stack, or Elastic Stack 6 (Elasticsearch, Logstash, Kibana), is a powerful tool not only for driving search on big websites, but also for analyzing big data sets in a matter of milliseconds! It is an increasingly popular technology, and a valuable skill to have in today's job market.
Elastic Stack 6 was released last November, and now it is a good time to assess whether to upgrade. To help you folks make that call, we are going to take a look at some of the key changes included in the different components in the stack and review the main breaking changes.
The main changes to Elastic Stack are intended to boost performance, improve consistency, and make retrieval easier. The most significant changes in Logstash focus on allowing users to run multiple self-contained pipelines on the same JVM. There are no major changes to Kibana, but a comparatively large number of minor usability improvements were added.
What You Need to Know About Elasticsearch 6
Changes to Elasticsearch are mostly internal and should not require most organizations to change how they configure Elasticsearch, with the big exception being the change to mapping types.
Sparse Doc Values
A sparse-values situation results in the use of a huge amount of disk space and file-system cache. The upgrade to Lucene 7 lets Elasticsearch support sparse doc values, a new encoding format that minimizes disk space and improves query throughput.
Updating to the new Elasticsearch version is made easier by a series of upgrade improvements that aim to tackle some of the traditional pain points of upgrade procedures.
A new restart feature removes the need for a full cluster restart and thus reduces downtime. Elasticsearch 5.x indices can be searched using cross-cluster search: a new approach to cross-cluster operations that replaces the traditional tribe-node-based approach. Deprecation logs have been reinforced with important information on breaking changes.
Logstash was initially designed to handle one type of event per instance, and prior to this version, each Logstash instance supported only a single event pipeline. Users could work around this restriction using conditional statements in the configuration, which often led to a new set of problems.
Logstash now has native support for multiple pipelines. These pipelines are defined in a pipelines.yml file, which is loaded by default.
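A sketch of such a pipelines.yml; the pipeline IDs and config paths are examples:

```yaml
# pipelines.yml — two self-contained pipelines on one Logstash instance
- pipeline.id: apache-logs
  path.config: "/etc/logstash/conf.d/apache.conf"
  pipeline.workers: 2
- pipeline.id: app-metrics
  path.config: "/etc/logstash/conf.d/metrics.conf"
```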
A new pipeline viewer now lets users monitor and analyze Logstash pipelines in Kibana. Pipelines are displayed on a graph where each component is accompanied by relevant metrics. I explored the pipeline viewer in this article.
The major changes to the UI include a new CSV export option, new colors for better contrast, and enhanced screen reading and keyboard navigation.
Dashboarding in Kibana has two new features. A new full-screen mode was added for viewing dashboards. In addition, administrators can now share dashboards securely.
We are going to end this part with a summary of the breaking changes that need to be considered before upgrading. Keep in mind that this is only a partial list.
The main breaking change in this version is the gradual removal of mapping types from indices, so Elasticsearch 6.0 will only be able to read indices created in version 5.0 or above. Elasticsearch 6.0 requires re-indexing before full functionality can be achieved, because of a number of changes to the Elasticsearch indices. These are accompanied by changes to the CAT, Document, and Java APIs and the Search and Query DSL, as well as REST changes. The result is a number of key changes that affect Elasticsearch deeply, altering everything from the structure of queries and scripts to how internal components communicate.
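The re-indexing itself can be driven by Elasticsearch's _reindex API; a minimal request looks like this (the index names are placeholders):

```
POST _reindex
{
  "source": { "index": "old-index" },
  "dest":   { "index": "new-index" }
}
```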
Logstash 6 includes several breaking changes, among them a number of changes to input and output options. There are also several plugin changes and a change to the config.reload.interval configuration option, which now uses time value strings instead of millisecond values.
To transfer existing Kibana installations to Kibana 6.0, the Kibana index needs to be re-indexed.
There are quite a large number of changes in Elastic Stack 6.0. Many of them stem from the upgrade to the Lucene 7 search library, but just as many are part of a general push towards increased efficiency and performance.
Elastic Stack 6.0 also brings significant security changes and improvements. This follows the needs of users deploying Elastic Stack in production environments, where complex security requirements are increasingly the standard.
That a major version of the stack comes with a need to re-index, changes to the index structure, and a number of configuration modifications to various plugins should come as no surprise. Migration from earlier versions will need to be planned carefully and, above all, tested.
What is AWS Cloud HSM, how it works and what are the benefits?
AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries. CloudHSM is also standards-compliant and enables you to export all of your keys to most other commercially-available HSMs. It is a fully-managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high-availability, and backups. CloudHSM also enables you to scale quickly by adding and removing HSM capacity on-demand, with no up-front costs.
Why do we use WebSockets when AJAX can do the work?
There are a few reasons:
Bi-directional communication. If your service is real-time, you typically don't want to make a request and then wait for a response; you want the server to push a response whenever it is ready. This isn't possible with AJAX.
Less server overhead. The alternatives to WebSockets are HTTP polling techniques (short or long polling) or Server-Sent Events. Polling techniques are often inefficient and place a larger burden on the server than WebSockets. Server-Sent Events aren't consistently supported across browsers.
Smaller request frames. While this is less of a problem if you're using HTTP/2, HTTP requests carry header overhead that you likely don't want or need for every message. WebSockets use much smaller frames for communication, resulting in slightly faster communication.
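To make the frame-size point concrete, here is a sketch that builds a minimal client-to-server WebSocket text frame by hand, following the base framing of RFC 6455: for a short masked client frame the total overhead is just six bytes, far less than a typical set of HTTP headers.

```python
import os

def client_text_frame(payload: bytes) -> bytes:
    """Build a minimal masked client-to-server WebSocket text frame."""
    assert len(payload) < 126              # keep the short 7-bit length form
    header = bytes([0x81,                  # FIN=1, opcode=1 (text)
                    0x80 | len(payload)])  # MASK=1 + 7-bit payload length
    mask = os.urandom(4)                   # client frames must be masked
    masked = bytes(b ^ mask[i % 4] for i, b in enumerate(payload))
    return header + mask + masked

frame = client_text_frame(b"hello")
overhead = len(frame) - len(b"hello")      # 2 header bytes + 4 mask bytes
```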
A dynamic web page is a web page that changes according to input provided by the user or a computer program.
It displays varied content each time the page is viewed. The page may change over time or depending on the user who visits the site.
There are two types of dynamic web pages: client-side scripting, which generates content in the user's browser, and server-side scripting, where pages vary each time they are loaded or visited, as with shopping carts, submission forms, etc. A dynamic website makes it possible to update information frequently.
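A tiny sketch of the server-side idea: the same template produces a different page depending on the user and the request time. The user name and timestamp below are made up for illustration.

```python
from datetime import datetime
from string import Template

# One template, many different rendered pages.
page = Template("<h1>Hello, $user!</h1><p>Generated at $when.</p>")

def render(user, when=None):
    # Default to the current time, so the page changes on every request.
    when = when or datetime.now().isoformat(timespec="seconds")
    return page.substitute(user=user, when=when)

html_for_alice = render("Alice", when="2024-01-01T12:00:00")
```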
Why do we need a DOM? Why can’t we just manipulate the HTML? I would like to know the reasoning behind it, and why the concept of a DOM was born.
As for HTML, it’s a markup language, not meant for scripting. Just as styling is delegated to CSS, front end scripting is delegated to JS.
The use of HTML/CSS and JS is a matter of consensus: all major browsers implement them, following specifications (the W3C specifications for HTML/CSS and the ECMAScript specification for JS). (Note that while there are official sources describing how they must behave, there are minor implementation differences in all three between browsers.)
Simply put, HTML describes what is in a document, CSS describes how what HTML describes should look, and JS scripts anything on top of all that, in an event-driven fashion.
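A small illustration of why a DOM helps, using Python's standard-library DOM (xml.dom.minidom) rather than a browser: the document is manipulated as a tree of nodes instead of being edited as a raw HTML string.

```python
from xml.dom.minidom import parseString

# Parse a small document into a DOM tree.
doc = parseString("<ul><li>one</li><li>two</li></ul>")

# Append a new list item through the tree API — no string surgery needed.
li = doc.createElement("li")
li.appendChild(doc.createTextNode("three"))
doc.documentElement.appendChild(li)

# Query the tree rather than scanning the markup.
items = [n.firstChild.data for n in doc.getElementsByTagName("li")]
```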