Monthly Archives: July 2017

CI/CD CASE STUDY

INTRODUCTION

Continuous Delivery is closely associated with the DevOps movement and the practice of continuous deployment, and many case studies fall into this sweet spot. If you want to see more companies talk about their journeys, check out the videos from the DevOps Enterprise Summit. It's important to note that although continuous delivery has been widely adopted by web companies, the techniques described in this article can be used in all sorts of domains: essentially, anywhere your software development capability is considered a strategic asset.

CASE STUDY

Like many companies, "Company A" has used the cloud since day one, as a flexible way to spin up servers and store data. The targets set by company leadership were to improve developer productivity by a factor of 10, to get material off the critical path for product development, and to reduce expenses. The company set three high-level goals:

  • Creating a single platform to support all the devices.
  • Increasing quality and reducing the amount of stabilization required prior to release.
  • Reducing the amount of time spent on planning.

The key element in achieving these goals was implementing continuous delivery, with a particular focus on:

  • The practice of continuous integration.
  • Significant investment in test automation.
  • Creating a hardware simulator so that tests could be run on a virtual platform.

THE BENEFITS

Developers now have consistent environments in which to deploy code for the company's applications. By using the Amazon cloud, the team has saved money and improved the end-user experience. The company now has better access to data, is more agile, and can get feedback on product performance in days.

SECURITY: NEW METASPLOIT EXTENSION

Enterprise security teams and penetration testers now have a new tool for evaluating the risks posed to their networks by Internet of Things (IoT) devices operating on radio frequencies outside the standard 802.11 specification. This article looks at the new Metasploit extension for testing IoT device security.

Rapid7, the owner of the Metasploit Project, has released an extension to its recently introduced Hardware Bridge API for conducting penetration tests on network-connected hardware.

The new RFTransceiver extension for the Metasploit Hardware Bridge is designed to let companies detect and evaluate the security state of multi-frequency wireless devices operating on their networks more effectively than current tools permit.

The RFTransceiver gives security pros the ability to craft and monitor different RF packets for identifying and accessing an organization's wireless systems beyond Ethernet-accessible technologies. It also allows pen testers to create and direct "short bursts of interference" at devices to see how they respond from a security standpoint.

Many organizations already have devices and systems operating on radio frequencies outside 802.11 on their networks; examples include RFID readers, smart lighting systems using the Zigbee communication protocol, and network-enabled alarm, surveillance, and door control systems.

The RFTransceiver extension is designed to help organizations with such devices answer vital questions, such as the operating range of the devices, whether their transmissions are encrypted, how they respond to outside interference, and how they fail.

Many RF-enabled devices do not serialize their commands, which makes them vulnerable to issues such as replay attacks, where an attacker records a command sent out over RF and then plays it back. With organizations expected to connect a constantly growing range of wireless IoT devices to the network over the next few years, RF testing capabilities have become vital.
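
As a toy illustration of that failure mode (this is not real RF code; the receiver classes and sequence scheme are invented for this sketch), a receiver that accepts commands without any sequence tracking will happily honor a replayed capture, while one that requires a strictly increasing sequence number rejects it:

```typescript
// Toy model of an RF receiver: a command is just a payload plus a sequence number.
interface Command {
    seq: number;
    action: string;
}

class NaiveReceiver {
    // Accepts any well-formed command, so a recorded packet replays successfully.
    handle(cmd: Command): boolean {
        return cmd.action === "OPEN";
    }
}

class SerializedReceiver {
    private lastSeq = 0;
    // Only accepts commands with a strictly increasing sequence number,
    // so a replayed (old) packet is rejected.
    handle(cmd: Command): boolean {
        if (cmd.seq <= this.lastSeq) return false;
        this.lastSeq = cmd.seq;
        return cmd.action === "OPEN";
    }
}

const captured: Command = { seq: 1, action: "OPEN" }; // the attacker records this

const naive = new NaiveReceiver();
const naiveReplay = [naive.handle(captured), naive.handle(captured)];
// naive receiver: both the original and the replay succeed

const serialized = new SerializedReceiver();
const serializedReplay = [serialized.handle(captured), serialized.handle(captured)];
// serialized receiver: the original succeeds, the replay is rejected
```

Real devices typically use rolling codes rather than a plain counter, but the principle is the same: without per-command state, the receiver cannot tell a replay from a legitimate transmission.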

HOW TO USE RFTRANSCEIVER

Using the new RFTransceiver extension requires the purchase of an RfCat-compatible device such as the YARD Stick One. Download the latest RfCat drivers; included with those drivers is rfcat_msfrelay, the Metasploit Framework relay server for RfCat. Run it on the system with the RfCat-compatible device attached.

Then you can connect with the hardware bridge:

RFTransceiver usage:
$ ./msfconsole -q
msf > use auxiliary/client/hwbridge/connect
msf auxiliary(connect) > run
[*] Attempting to connect to 127.0.0.1...
[*] Hardware bridge interface session 1 opened (127.0.0.1 -> 127.0.0.1) at 2017-02-16 20:04:57 -0600
[+] HWBridge session established
[*] HW Specialty: {"rftransceiver"=>true} Capabilities: {"cc11xx"=>true}
[!] NOTICE: You are about to leave the matrix. All actions performed on this hardware bridge
[!] could have real world consequences. Use this module in a controlled testing
[!] environment and with equipment you are authorized to perform testing on.
[*] Auxiliary module execution completed
msf auxiliary(connect) > sessions

Active sessions

  Id  Type                   Information    Connection
  --  ----                   -----------    ----------
  1   hwbridge cmd/hardware  rftransceiver  127.0.0.1 -> 127.0.0.1 (127.0.0.1)

msf auxiliary(connect) > sessions -i 1
[*] Starting interaction with 1…

hwbridge > status
[*] Operational: Yes
[*] Device: YARDSTICKONE
[*] FW Version: 450
[*] HW Version: 0348

TYPESCRIPT 2.4 RC

Version 2.4 of TypeScript, the popular typed superset of JavaScript, offers improved load times with the addition of dynamic import expressions. A release candidate is now available via NuGet or npm. Explore this article and see what's new in TypeScript 2.4 RC.

 

The release candidate can be grabbed through NuGet, or installed with the following npm command:

npm install -g typescript@rc

Visual Studio 2015 users (who have Update 3) can install TypeScript from here, and Visual Studio 2017 users on Update 2 can get TypeScript by simply installing it from here.

Users of Visual Studio Code and the Sublime Text plugin can easily configure them to pick up whatever version they need.

DYNAMIC IMPORT EXPRESSIONS

Dynamic import expressions are a new feature, part of ECMAScript, that allows users to asynchronously load a module at any arbitrary point in their program.

For instance, imagine a webpage that allows the user to create and edit images. When working on one file, the page lets the user download that file immediately; when working on multiple images, the user can save all of them as a .zip file.

Users can conditionally import other modules and libraries. For example, here's an async function that only imports a utility library when it's needed:

async function getZipFile(name: string, files: File[]): Promise<File> {
    const zipUtil = await import('./utils/create-zip-file');
    const zipContents = await zipUtil.getContentAsBlob(files);
    return new File(zipContents, name);
}

Many bundlers support automatically splitting output bundles based on these import expressions, so consider using this new feature with the esnext module target.
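
That pairing might look like the following tsconfig.json fragment (a minimal sketch; the "module": "esnext" setting is what leaves import() expressions intact for the bundler to split on, and the target value is just an example):

```json
{
  "compilerOptions": {
    "module": "esnext",
    "target": "es2015"
  }
}
```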

STRING ENUMS
TypeScript 2.4 now allows enum members to contain string initializers.
enum Colors {
    Red = "RED",
    Green = "GREEN",
    Blue = "BLUE",
}

The limitation is that string-initialized enums cannot be reverse-mapped to get the original enum member name. In other words, you cannot write Colors["RED"] to get the string "Red".
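
A small sketch of the difference (the enum names here are illustrative): numeric enums do get a reverse mapping at runtime, while string enums compile to a plain name-to-value object with no reverse entries:

```typescript
enum Level { Low = 1, High = 2 }              // numeric enum
enum Colors { Red = "RED", Green = "GREEN" }  // string enum

const levelName = Level[1];                 // "Low" - reverse mapping exists
const colorName = (Colors as any)["RED"];   // undefined - no reverse entry is emitted
```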

IMPROVED INFERENCE FOR GENERICS

TypeScript 2.4 introduces a few welcome changes to the way generics are inferred.

RETURN TYPES AS INFERENCE TARGETS:

TypeScript can now make inferences for the return type of a call. This can improve user experience and catch errors.

function arrayMap<T, U>(f: (x: T) => U): (a: T[]) => U[] {
    return a => a.map(f);
}

const lengths: (a: string[]) => number[] = arrayMap(s => s.length);
As an example of new errors you might spot as a result:

let x: Promise<string> = new Promise(resolve => {
    resolve(10);
    //      ~~ Error! A number isn't assignable to 'string'.
});

STRICTER CHECKING FOR GENERIC FUNCTIONS

TypeScript now tries to unify type parameters when comparing two single-signature types. As a result, users get stricter checks when relating two generic signatures.

type A = <T, U>(x: T, y: U) => [T, U];
type B = <S>(x: S, y: S) => [S, S];

function f(a: A, b: B) {
    a = b;  // Error
    b = a;  // Ok
}

WEAK TYPE DETECTION

TypeScript 2.4 introduces the concept of “weak types”. Any type that contains nothing but a set of all-optional properties is considered to be weak. For example, this Options type is a weak type:

interface Options {
    data?: string;
    timeout?: number;
    maxRetries?: number;
}
In TypeScript 2.4, it's now an error to assign anything to a weak type when there's no overlap in properties. For example:

function sendMessage(options: Options) {
    // ...
}

const opts = {
    payload: "hello world!",
    retryOnFail: true,
};

// Error!
sendMessage(opts);
// No overlap between the type of 'opts' and 'Options' itself.
// Maybe we meant to use 'data'/'maxRetries' instead of 'payload'/'retryOnFail'.

You can think of this as TypeScript "toughening up" the weak guarantees of these types to catch silent bugs.
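
When the call really is intentional, two common ways around the weak-type error are using a property the type actually declares, or asserting the type explicitly (the Options and sendMessage names here mirror the example above; the return value is added only so the sketch does something observable):

```typescript
interface Options {
    data?: string;
    timeout?: number;
    maxRetries?: number;
}

function sendMessage(options: Options): string {
    return options.data || "<no data>";
}

// Option 1: use a property that Options actually declares.
const sent = sendMessage({ data: "hello world!" });

// Option 2: assert the type when the mismatch is deliberate.
const opts = { payload: "hello world!", retryOnFail: true };
const forced = sendMessage(opts as any as Options);
```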

DELAY IN JAVA 9

With only a few days left before the planned release of JDK 9, it is being delayed again. The new release date is July 2017, four months later than the previously postponed date.

PUSHING THE DATE BACK

On September 13, Mark Reinhold, the chief architect of the Java Platform Group at Oracle, posted a suggestion to Oracle's mailing list to postpone the release date of JDK 9.

Mark also noted that the number of open bugs that are new in JDK 9 is larger than it was at this point in the JDK 8 cycle, which is why he proposed a four-month delay. He put his proposal up for a vote, asking others on the mailing list what they thought of it.

Moving the release date of JDK 9 back four months affects the entire schedule; it also pushes back the "All tests run", "Zero bug bounce", and "Release candidate" milestones.

THE CURSE OF PROJECT JIGSAW

It is no surprise that JDK 9 has been pushed back again, and for the same reason: Project Jigsaw. This project has a long history of pushing Java versions back, first from Java 7 to Java 8 and now as part of Java 9.

Jigsaw's objective is to make Java modular and break the JRE into interoperable components. This means developers will be able to create a scaled-down runtime, customized to the components a project actually needs, instead of the monolithic runtime JAR (rt.jar).

The desire is to make Java scale down to small computing devices, improve security and performance, and, most importantly, make it easier for developers to construct and maintain libraries; consider that the JDK 8 rt.jar contains about 20,000 classes.

THE MAJOR REASONS WHY JDK 9 HAS BEEN POSTPONED

  • The new module system breaks all use cases that depend on reflection to access internals of other libraries.
  • It does not fix the issue of depending on two conflicting versions of a library.
  • It fails to strongly encapsulate access, because classes can still be loaded as resources, and used that way.

MARK ADDRESSED THE REASON FOR THE DELAY IN HIS ORIGINAL EMAIL, EXPLAINING THAT

“We recently received critical feedback that motivated a redesign of the module system’s package-export feature, without which we’d have failed to achieve one of our main goals. There are, beyond that, still many open design issues, which will take time to work through.”

The current pushback tells us loud and clear that Jigsaw needs more time; our only hope is that it will actually be part of JDK 9, and not be pushed back to JDK 10, or JDK 11. There's no doubt it's a critical and important project, and the community is willing to wait a little longer for it to be just right.

TOP 5 MISTAKES IN HADOOP AND HOW TO AVOID THEM

Hadoop has its strengths and its difficulties. Businesses need to factor specialized skills and data integration into planning and implementation, and even when they do, a large percentage of Hadoop implementations fail.

To help you avoid the common pitfalls, this article covers the top five mistakes made with Hadoop and how to avoid them.

MISTAKE 1: MIGRATE EVERYTHING BEFORE DEVISING A PLAN

As attractive as it can be to dive head first into Hadoop, never start without a plan. Migrating everything without a clear strategy will only create long-term issues resulting in expensive ongoing maintenance. With first-time Hadoop implementations, expect a lot of error messages and a steep learning curve.

Successful implementation starts with identifying a business use case. Consider every phase of the process, from data ingestion to data transformation to analytics consumption, and even beyond, to other applications and systems where analytics must be embedded. Clearly determine how Hadoop and big data will create value for the business.

Our advice: Maximize your learning in the least amount of time by taking a holistic approach and starting with smaller test cases.

MISTAKE 2: ASSUME RELATIONAL DATABASE SKILLSETS ARE TRANSFERABLE TO HADOOP

Hadoop is a distributed file system, not a traditional relational database (RDBMS). You can't migrate all your relational data and manage it in Hadoop, nor can you expect skillsets to transfer easily between the two.

If the team lacks Hadoop skills, it doesn't necessarily mean hiring all new people. Every situation is different, and there are several options to consider. It might work best to train existing developers. Skills gaps can sometimes be plugged with point solutions, but growing organizations tend to do better in the long run with an end-to-end data platform that serves a broad spectrum of users.

Our advice: Look for the right software, along with the right combination of people, agility, and functionality, to be successful. There are a lot of tools available that automate some of the repetitive aspects of data ingestion and preparation.

MISTAKE 3: FIGURING OUT SECURITY LATER

High-profile data breaches have motivated most enterprise IT teams to prioritize protecting sensitive data. If you are considering big data, keep in mind that you will be processing sensitive data about your customers and partners. You should never, ever, expose card and bank details or personally identifiable information about clients, customers, or employees. Protection starts with planning ahead.

Our advice: Address security before deploying a big data project. Once a business need for big data has been established, decide who will benefit from the investment and how it will impact the infrastructure.

MISTAKE 4: BRIDGING THE SKILLS GAP WITH TRADITIONAL ETL

Plugging the skills gap can be tricky for organizations considering Hadoop to solve big data's ETL challenges. Many developers are proficient in Java, Python, and HiveQL, but may lack the experience needed to optimize performance on relational databases. When Hadoop and MapReduce are used for large-scale traditional data management workloads such as ETL, this problem is compounded.

Some point solutions can help plug the skills gap, but these tend to work best for experienced developers. If you're dealing with smaller data sets, it might work to hire people who've had proper training on big data and traditional implementations, or to work with experts to train and guide staff through projects. But if you're dealing with hundreds of terabytes of data, you will need an enterprise-class ETL tool as part of a comprehensive business analytics platform.

Our advice: People, experience, and best practices are essential for successful Hadoop projects. When considering an expert or a team of experts, whether as permanent hires or consultants, weigh their experience with both traditional and big data integration, the size and complexity of the projects they've worked on, the organizations they've worked with, and the number of successful implementations they've done. When dealing with large volumes of data, it might be time to evaluate a comprehensive business analytics platform designed to operationalize and simplify Hadoop implementations.

MISTAKE 5: EXPECTING ENTERPRISE-LEVEL VALUE ON A SMALL BUDGET

The low-cost scalability of Hadoop is one of the reasons organizations decide to use it. But many organizations fail to factor in data replication and compression, skilled resources, and the overall management of integrating big data with the existing ecosystem.

Hadoop is built to process enormous data files that continue to grow, so it's essential to do proper sizing up front. This includes having the skills on hand to leverage SQL and BI against data in Hadoop and to compress data at the most granular levels. The compression of data also needs to be balanced with performance expectations for reading and writing data. And storing the data may cost three times more than initially planned.

Our advice: Understand how the storage, resources, growth rates, and management of big data will factor into your existing ecosystem before you implement.

WHAT IS THE DIFFERENCE BETWEEN A REST WEB SERVICE AND A SOAP WEB SERVICE?

Below are the main differences between REST and SOAP web services:

  • REST supports different formats like plain text, JSON, and XML; SOAP supports only XML.
  • REST works only over HTTP(S) at the transport layer; SOAP can use different transport-layer protocols.
  • REST works with resources; each unique URL is a representation of some resource. SOAP works with operations, which implement business logic through different interfaces.
  • SOAP-based reads cannot be cached; caching has to be implemented separately. REST-based reads can be cached.
  • SOAP supports SSL security and WS-Security (Web Services Security); REST supports only SSL security.
  • SOAP supports ACID (Atomicity, Consistency, Isolation, Durability) transactions; REST supports transactions, but they are neither ACID compliant nor capable of two-phase commit.

WHAT’S NEW IN ANGULAR 4? WHAT ARE THE IMPROVEMENTS IN ANGULAR 4?

Smaller & Faster Apps – Angular 4 applications are smaller and faster in comparison with Angular 2.

Reduced View Engine Size – Some changes under the hood to how AOT-generated code is compiled in Angular 4 have improved compilation time and reduced the generated code size by around 60% in most cases.

Animation Package – Animations now have their own package, @angular/platform-browser/animations.

Improvements – There are also some improvements to *ngIf and *ngFor.

NAME SOME OF THE HTTP METHODS COMMONLY USED IN REST-BASED ARCHITECTURE

The following well-known HTTP methods are commonly used in REST-based architecture −

– GET − Provides read-only access to a resource.

– PUT − Used to update or replace an existing resource (by convention, PUT targets a known URI and is idempotent).

– DELETE − Used to remove a resource.

– POST − Used to create a new resource (and sometimes to update an existing one).

– OPTIONS − Used to get the supported operations on a resource.
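
As a sketch of how these verbs map onto request options (the buildRequest helper, the RequestOptions shape, and the example URLs are hypothetical, for illustration only):

```typescript
type Verb = "GET" | "POST" | "PUT" | "DELETE" | "OPTIONS";

interface RequestOptions {
    method: Verb;
    headers?: Record<string, string>;
    body?: string;
}

// Builds the options object you would pass to an HTTP client such as fetch().
// Read-style verbs carry no body; create/update verbs serialize a JSON payload.
function buildRequest(method: Verb, payload?: object): RequestOptions {
    const options: RequestOptions = { method };
    if (payload !== undefined) {
        options.headers = { "Content-Type": "application/json" };
        options.body = JSON.stringify(payload);
    }
    return options;
}

// e.g. fetch("https://api.example.com/orders/42", buildRequest("GET"))
//      fetch("https://api.example.com/orders", buildRequest("POST", { item: "book" }))
```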

WHAT IS THE DIFFERENCE BETWEEN MONGODB AND MYSQL?

Although MongoDB and MySQL are both free and open-source databases, there is a lot of difference between them in terms of data representation, relationships, transactions, querying, schema design and definition, performance, normalization, and more. Comparing MySQL with MongoDB is like comparing a relational database with a non-relational one.