What is Refactoring?

Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code, yet improves its internal structure and makes the software easier to understand.

I think refactoring goes further than that, because it provides a technique for cleaning up code in a more efficient and controlled manner.

Refactoring does not mean:

  • rewriting code
  • fixing bugs
  • changing functionality
  • improving observable aspects of the software, such as its interface


Explain Java API for XML Processing

The Java API for XML Processing (JAXP), part of the Java SE platform, supports the processing of XML documents using the Document Object Model (DOM), the Simple API for XML (SAX), and Extensible Stylesheet Language Transformations (XSLT). JAXP enables applications to parse and transform XML documents independent of a particular XML-processing implementation.

JAXP also provides namespace support, which lets you work with schemas that might otherwise have naming conflicts. Designed to be flexible, JAXP lets you use any XML-compliant parser or XSL processor from within your application and supports the W3C schema.
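As a minimal sketch of what the paragraph describes, the following parses an XML string with whichever JAXP-compliant DOM parser is configured, staying independent of the underlying implementation. The XML content and the `bookTitles` helper name are illustrative, not part of JAXP itself.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class JaxpDomExample {

    // Parse an XML string with the configured JAXP DOM parser and
    // return the text content of every <book> element.
    static List<String> bookTitles(String xml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // JAXP namespace support
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        NodeList books = doc.getElementsByTagName("book");
        List<String> titles = new ArrayList<>();
        for (int i = 0; i < books.getLength(); i++) {
            titles.add(books.item(i).getTextContent());
        }
        return titles;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(bookTitles(
                "<books><book>Refactoring</book><book>Effective Java</book></books>"));
    }
}
```

Because the application only touches the `javax.xml.parsers` and `org.w3c.dom` interfaces, the parser implementation can be swapped without changing this code.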


What is Java Servlet API?

The Java Servlet API lets you define HTTP-specific classes. A servlet class extends the capabilities of servers that host applications accessed through a request-response programming model. Although servlets can respond to any type of request, they are commonly used to extend the applications hosted by web servers. For instance, you might use a servlet to get the text input from an online form and print it back to the screen in an HTML page, or you might use a different servlet to write the data to a file or database instead. A servlet runs on the server side — without an application GUI or HTML user interface (UI) of its own. Java Servlet extensions make many web applications possible.
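The "echo a form field back as HTML" case above can be sketched with a servlet like the one below. The class name, the `message` form field, and the `escape` helper are illustrative assumptions, not part of the Servlet API.

```java
import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A hypothetical servlet that echoes the "message" form field back as HTML.
public class EchoServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String message = req.getParameter("message"); // text from the online form
        resp.setContentType("text/html;charset=UTF-8");
        try (PrintWriter out = resp.getWriter()) {
            out.println("<html><body>");
            out.println("<p>You said: " + escape(message) + "</p>");
            out.println("</body></html>");
        }
    }

    // Minimal HTML escaping so user input cannot inject markup.
    static String escape(String s) {
        if (s == null) return "";
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }
}
```

A servlet container such as Tomcat or Jetty would map this class to a URL and invoke `doPost` for each matching request; the servlet itself has no UI of its own.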


What is the difference between traditional Waterfall model and Agile testing?

There are several ways to develop software, two of the most prominent being Waterfall and Agile.

Waterfall is a sequential model in which software development is segregated into a series of pre-defined phases — including feasibility, planning, design, build, test, production, and support. Agile, on the other hand, follows an iterative, incremental approach and provides flexibility for changing project requirements.

Agile is often better adapted because development progress can be tracked continuously and changes can be made whenever they are needed.


What is Product backlog & Sprint Backlog in Agile?

The product backlog is a prioritized list of everything that might be needed in the product; it is the single source of requirements for any changes to be made. The sprint backlog is the list of tasks identified by the Scrum team to be completed during the Scrum sprint. During the sprint planning meeting, the team selects some number of product backlog items, usually in the form of user stories, and identifies the tasks necessary to complete each user story.


Top 5 cyber security vulnerabilities


In today's world, all the major government organizations and financial firms place great emphasis on cyber security. Organizations holding sensitive data, as well as those holding largely public data, have been the targets of some of the most notorious hackers in the world. Manipulating data, stealing data, leaking company secrets, and shutting down services are just some of the many things attackers can do once they gain access to a system. So let's dive in and take a look at the five most dangerous cyber security vulnerabilities exploited by hackers.


Injection vulnerabilities

Injection vulnerabilities occur when an application sends untrusted data to an interpreter. Injection flaws are very common and affect a wide range of solutions; the best-known injection vulnerabilities affect SQL, LDAP, XPath, XML parsers, and program arguments. Injection flaws are quite easy to discover by analyzing the code, but hard to find during testing once systems are already deployed in production environments.
Possible consequences of a cyber attack that exploits an injection flaw include data loss and the consequent exposure of sensitive data, lack of accountability, or denial of access. An attacker could use an injection attack to completely compromise the target system and gain control of it.

The business impact of an injection attack can be severe, especially when the attacker compromises legacy systems and accesses internal data.
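As an illustration, the sketch below contrasts a SQL query built by string concatenation with a parameterized query, the standard defense against SQL injection. The `users` table, the column name, and the JDBC connection are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class InjectionDemo {

    // VULNERABLE: untrusted input is concatenated straight into the SQL text,
    // so an input such as "x' OR '1'='1" rewrites the query's meaning.
    static String unsafeQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    static ResultSet findUserUnsafe(Connection conn, String userInput)
            throws Exception {
        Statement st = conn.createStatement();
        return st.executeQuery(unsafeQuery(userInput));
    }

    // SAFE: a parameterized query sends the input strictly as data, never as SQL.
    static ResultSet findUserSafe(Connection conn, String userInput)
            throws Exception {
        PreparedStatement ps =
                conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        ps.setString(1, userInput);
        return ps.executeQuery();
    }
}
```

With the unsafe version, the crafted input turns the WHERE clause into a condition that matches every row; with the prepared statement, the same input is only ever compared as a literal string.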

A notorious injection-style example is the "Bash Bug" (Shellshock). The vulnerability existed for several decades and is related to the way Bash handles specially formatted environment variables, namely exported shell functions. To run arbitrary code on an affected system, an attacker only needs to assign a function to a variable: trailing code appended to the function definition will be executed. This critical vulnerability affects versions of GNU Bash from 1.14 through 4.3, and a threat actor could exploit it to execute shell commands remotely on a targeted machine using specially crafted variables.


Buffer overflow

A buffer overflow condition arises when an application attempts to put more data into a buffer than it can hold. Writing outside the space assigned to the buffer allows an attacker to overwrite the contents of adjacent memory blocks, corrupting data or crashing the program. Buffer overflow flaws are quite common but hard to discover and, compared to injection flaws, harder to exploit: the attacker needs to understand the memory management of the targeted application in order to alter its contents and carry out the attack.

In a typical attack scenario, the attacker sends data to an application that stores it in an undersized stack buffer, overwriting information on the call stack, including the function's return pointer. In this way the attacker is able to run their own malicious code once the legitimate function completes and control is transferred to the exploit code contained in the attacker's data. There are many types of buffer overflow, the most popular being the heap buffer overflow and the format string attack. Buffer overflow attacks are dangerous because they can target desktop applications, web servers, and web applications alike.


Sensitive data exposure

One of the most common and most damaging vulnerabilities is sensitive data exposure, and it can result in calamitous losses for an organization. Sensitive data exposure occurs whenever a threat actor gains access to sensitive user data. The data may be stored in the system or transmitted between two parties; in either case, a sensitive data exposure flaw occurs when sensitive data lacks sufficient protection. Attackers exploit this to inflict as much damage as possible: the targeted data can be stolen while at rest in the system, while in transit, or from a backup store. Attackers typically use malware to reach data at rest and attacks on weak cryptography to capture data in transit.
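As a hedged sketch of the standard mitigation, the following encrypts data before it is stored at rest, using AES-GCM from the JDK's javax.crypto API. The payload format (random IV prepended to the ciphertext) is just an illustrative convention, and real systems would manage the key in a KMS or keystore rather than generating it in place.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class DataAtRestDemo {

    // Encrypt bytes with AES-GCM; the random 12-byte IV is prepended
    // to the ciphertext so decryption can recover it.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        byte[] iv = Arrays.copyOfRange(blob, 0, 12);
        byte[] ct = Arrays.copyOfRange(blob, 12, blob.length);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(ct);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();
        byte[] secret = "card=4111-1111".getBytes(StandardCharsets.UTF_8);
        byte[] stored = encrypt(key, secret); // what would land on disk
        System.out.println(new String(decrypt(key, stored), StandardCharsets.UTF_8));
    }
}
```

Data in transit gets the analogous protection from TLS rather than hand-rolled encryption; the point of the sketch is that sensitive data should never be stored or sent in the clear.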


Broken authentication and session management

Exploitation of a broken authentication and session management flaw occurs when an attacker uses leaks or weaknesses in the authentication or session management procedures to impersonate other users.

This kind of attack is very common; many hacking groups have exploited these flaws to access victims' accounts for cyber surveillance or to steal information that could benefit their criminal activities.


Security misconfiguration

We consider this category of vulnerability among the most common and dangerous, and it is quite easy to discover. Below are some examples of security misconfiguration flaws:

  • Running outdated software.
  • Applications or products running in production in debug mode, or that still include debugging modules.
  • Running unnecessary services on the system.
  • Default keys and passwords.
  • Use of default accounts.

The exploitation of any of the above scenarios could allow an attacker to compromise a system. Security misconfiguration can occur at any level of an application stack, and an attacker can discover targets running outdated software or flawed database management systems. This kind of vulnerability could have a severe impact on the new paradigm of the Internet of Things.


Cyber security is an important issue. In this article, we tried to make you aware of some of the most common and dangerous vulnerabilities. Learning about them early is better than learning about them after an incident, and with this article we aim to help you take that first step.


AWS Lambda and Using Java with AWS Lambda


AWS Lambda is a compute service that lets the user run code without provisioning or managing servers. AWS Lambda executes the code only when needed and scales automatically. The user pays only for the compute time they consume; there is no charge when the code is not running. With AWS Lambda, one can run code for virtually any type of application or backend service. AWS Lambda runs the code on a high-availability compute infrastructure and performs all the administration of the compute resources, such as server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring, and logging. All the user needs to do is supply the code in one of the languages that AWS Lambda supports (such as Node.js, Java, C#, and Python).

The user can use AWS Lambda to run code in response to events, such as changes to data in an Amazon S3 bucket or an Amazon DynamoDB table, to run code in response to HTTP requests using Amazon API Gateway, or to invoke code via API calls made with the AWS SDKs. With these capabilities, the user can use Lambda to easily build data-processing triggers for AWS services like Amazon S3 and Amazon DynamoDB, process streaming data stored in Amazon Kinesis, or create a custom back end that operates at AWS scale, performance, and security.


Many AWS users are using AWS Lambda to build clean and straightforward applications that handle image and document uploads, process log files from AWS CloudTrail, handle data streamed from Amazon Kinesis, and so forth. With the newly launched synchronous invocation capability, Lambda is becoming a favorite choice for building mobile, web, and IoT backends.


AWS Lambda has become even more useful with the ability to write Lambda functions in Java.

The user’s code can make use of Java 8 features along with any desired Java libraries. The user can also use the AWS SDK for Java to make calls to the AWS APIs.

AWS provides two libraries specific to Lambda: aws-lambda-java-core, with interfaces for Lambda function handlers and the context object, and aws-lambda-java-events, containing type definitions for AWS event sources (Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), Amazon DynamoDB, Amazon Kinesis, and Amazon Cognito). Users can author their Lambda functions in one of two ways. First, they can use a high-level model based on input and output objects:

public OutputType lambdaHandler(InputType input, Context context) throws IOException;
public OutputType lambdaHandler(InputType input) throws IOException;

If the user does not want to use POJOs, or if Lambda's serialization model does not meet their needs, they can use the stream model, which is a bit lower-level:

public void lambdaHandler(InputStream input, OutputStream output, Context context)
throws IOException;

The class in which the Lambda function is defined should include a public zero-argument constructor, or the handler method should be defined as static. Alternatively, the user can implement one of the handler interfaces (RequestHandler::handleRequest or RequestStreamHandler::handleRequest) available in the Lambda core Java library.
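A minimal sketch of the interface-based approach, using RequestHandler from aws-lambda-java-core as described above; the class name and greeting logic are illustrative assumptions.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A minimal handler implementing the RequestHandler interface from
// aws-lambda-java-core. Lambda deserializes the incoming JSON into the
// input type parameter and serializes the return value back to JSON.
public class GreetingHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String name, Context context) {
        return "Hello, " + name + "!";
    }
}
```

Because the class implements a handler interface, it is registered with Lambda by class name rather than by a fully qualified method reference.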


Users can continue to use their existing development tools. To prepare compiled code for Lambda, the user must create a ZIP or JAR file that contains the compiled code (CLASS files) and any required JAR files. The handler functions should be stored in the usual Java directory structure, and the JAR files must be inside a lib subdirectory. To make this process easy, AWS has published build approaches using popular Java deployment tools such as Maven and Gradle.

Specify a runtime of "java8" when uploading the ZIP file. If one of the handler interfaces is implemented, provide the class name; otherwise, provide the fully qualified method reference (e.g. com.mypackage.LambdaHandler::functionHandler).



Setting Up a CI/CD Pipeline


A pipeline helps the user automate the steps in their software delivery process, such as starting automatic builds and then deploying to Amazon EC2 instances. Here we use AWS CodePipeline, a service that builds, tests, and deploys the code every time there is a code change, based on the release process models the user defines. CodePipeline orchestrates each step in the release process. In this article, we will show you how to create a simple pipeline that pulls code from a source repository and automatically deploys it to an Amazon EC2 instance.


AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release the user’s software. A developer can quickly model and configure the different stages of a software release process. AWS CodePipeline automates the steps required to release the user’s software changes continuously.



Continuous Delivery and Integration with AWS CodePipeline

AWS CodePipeline is a continuous delivery service that automates the building, testing, and deployment of the user’s software into production.

Continuous delivery is a software development methodology where the release process is automated. Every software change is automatically built, tested, and deployed to production. Before the final push to production, a developer, an automated test, or a business rule decides when the final push should occur. Though every successful software change can be immediately released to production with continuous delivery, not all changes need to be released immediately.

Continuous integration is a software development practice where team members use a version control system and integrate their work frequently to the same location, such as a master branch. Each change is built and verified by means of tests and other verifications in order to detect any integration errors as speedily as possible. Continuous integration is focused on automatically building and testing code, as compared to continuous delivery, which automates the entire software release process up to production.


AWS CodePipeline helps the user to create and manage their release process workflow with pipelines. A pipeline is a workflow construct which describes how software changes go through a release process. A user can create as many pipelines as they need within the limits of AWS and AWS CodePipeline, as described in Limits.

The following descriptions introduce some of the terms unique to AWS CodePipeline and how these concepts relate to each other:


  • When the user creates the first pipeline using the console, AWS CodePipeline creates a single Amazon S3 bucket in the same region as the pipeline to store artifacts for all pipelines in that region associated with the user's account. Every time the user creates another pipeline within that region in the console, AWS CodePipeline creates a folder in that bucket for that pipeline and uses the folder to store artifacts as the automated release process runs. The bucket is named codepipeline-region-1234567EXAMPLE, where region is the AWS region in which the user created the pipeline and 1234567EXAMPLE is a random number that ensures the bucket name is unique.
  • If the user creates a pipeline using the AWS CLI, the user can choose any Amazon S3 bucket to store the artifacts for that pipeline, as long as that bucket is in the same region as the pipeline.
  • AWS CodePipeline breaks up the workflow into a series of stages. For instance, there might be a build stage, where the code is built and tests are run. There are also deployment stages, where code updates are deployed to run-time environments. The user can configure multiple parallel deployments to different environments in the same deployment stage. Also, the user can label each stage in the release process for better tracking, control, and reporting.
  • Every stage contains at least one action, which is some kind of task performed on the artifact in that stage. Pipeline actions occur in a specified order, in series or in parallel, as determined in the configuration of the stage. For instance, a beta stage might contain a deploy action, which deploys code to one or more staging servers. You can configure a stage with a single action to start, and then add additional actions to that stage as needed.



A pipeline starts automatically as soon as it is created. The pipeline may pause while waiting for events, such as the start of the next action in a sequence, but it is still running. When the pipeline finishes processing a revision, it waits until the next revision occurs in the source location defined in the pipeline. As soon as a change is detected in that source location, the pipeline begins running it through its stages and actions.

A pipeline cannot be manually stopped after it has been created, but the user can disable transitions between stages to prevent stages and actions from running, or add an approval action to the pipeline to pause execution until the action is manually approved. To ensure a pipeline does not run, the user can also delete it or edit the actions in the source stage to point to a source location where no changes are being made. When deleting a pipeline for this purpose, make sure to keep a copy of its JSON structure first, so the pipeline can easily be restored later if it needs to be re-created.




What is CanJS?

CanJS is a lightweight, modern JavaScript MVVM (Model-View-ViewModel) framework that is quick and easy to use while remaining robust and extensible enough to power some of the most trafficked websites in the world. This article will walk you through CanJS and explain why you should use it.


CanJS is an evolving set of client-side JavaScript architectural libraries that balances innovation and stability. It targets experienced developers building complex applications with a long future ahead. CanJS's main aim is to reduce the cost of building and maintaining JavaScript applications by balancing innovation and stability, helping developers keep pace with a changing technology landscape.

Developers shouldn't have to rewrite an application to keep pace with technology. CanJS aims to provide a stable yet innovative platform, so developers can block out the noise and stay focused on their applications rather than on their tools.


CanJS's core building blocks are:

  • Observables
  • Models
  • ViewModels
  • Views
  • Custom elements
  • Routing with an AppViewModel


Observable objects provide a way for the developer to observe and react to changes in data. Observables such as can.List, can.Map, and can.compute provide the foundation for models, view-models, view bindings, and routing in the developer's app. can.compute can combine observable values into new observable values.

var info = can.compute(function(){
  return person.attr("first") + " " + person.attr("last") +
    " likes " + hobbies.join(", ") + ".";
});

Models let the developer retrieve and modify data from the server. They also hydrate raw, serialized service data into more useful (and observable) typed data on the client side. can.Model makes it easy to connect to RESTful services and perform Create, Retrieve, Update, and Delete operations.


ViewModels contain the state and model data used by views to create HTML. They also contain methods that views can call.


Views are passed a view-model and generate visual output that's meaningful to a user. Views are able to:

  • Listen for changes in view-models and update the HTML (one-way bindings).
  • Listen for HTML events, such as clicks, and call methods on the view-models and models (event bindings).
  • Listen for form elements changing and update view-model and model data (two-way bindings).


Custom HTML elements are how CanJS encapsulates and orchestrates different pieces of functionality within an application. Custom elements are built with can.Component, which combines a view-model and a view.


CanJS maintains a reciprocal relationship between the browser’s URL and a can.Map view-model. This view-model represents the state of the application as a whole and so is called the appViewModel. When the URL changes, CanJS will update the properties of the appViewModel. When the appViewModel changes, CanJS will update the URL.


CanJS is designed to be a very well-rounded JavaScript framework which is useful to any client-side JavaScript team.

It provides a wealth of JavaScript utilities that combine to make a testable and repeatable Model-View-ViewModel application with very little code.


CanJS is flexible. Unlike other frameworks, it's designed to work in any situation. The developer can readily use third-party plugins, modify the DOM with jQuery directly, or use alternate DOM libraries like Zepto and Dojo. CanJS supports all major browsers, including IE8.


CanJS is powerful. It creates custom elements with one- and two-way bindings, easily defines the behavior of observable objects and their derived values, and avoids memory leaks with smart model stores and smart event binding.


The code below shows a simple component-based template with events.


<script id="user-welcomer-template" type="text/stache">
  {{#with selectedUser}}
    <!-- wrapping tags in this template were lost in the original and are reconstructed -->
    <welcome-message><h2>Welcome {{name}}!</h2></welcome-message>
  {{/with}}
  {{#each users}}
    <label>
      <input name="user" type="radio" {{#isSelected .}}checked{{/isSelected}} {{data 'user'}}><span>{{name}}</span>
    </label>
  {{/each}}
</script>

<script id="welcome-message-template" type="text/stache">
  <span><content></content></span><i class="icon"></i>
</script>

label {
  display: block;
  margin: 1em;
  cursor: pointer;
}

var WelcomeMessage = can.Component.extend({
  tag: "welcome-message",
  template: can.stache($("#welcome-message-template").text())
});

var UserWelcomer = can.Component.extend({
  tag: "user-welcomer",
  template: can.stache($("#user-welcomer-template").text()),
  scope: {
    chooseUser: function(scope, ev, user) {
      console.log(scope, ev, user);
    }
  },
  events: {
    ":radio change": function(el, e, val) {
      this.scope.attr('selectedUser', el.data('user'));
    },
    "init": function() {
      var scope = this.scope;
      var users = scope.attr('users');
      var firstUser = users.attr(0);
      scope.attr('selectedUser', firstUser);
    }
  },
  helpers: {
    "isSelected": function(user, options) {
      if (this.attr('selectedUser') === user) {
        return options.fn(this);
      }
      return options.inverse(this);
    }
  }
});

var users = new can.List([
  {name: 'Tina Fey', id: 1},
  {name: 'Sarah Alexander', id: 2},
  {name: 'Anna Ferris', id: 3},
  {name: 'Leslie Mann', id: 4},
  {name: 'Kristen Wiig', id: 5}
]);

// The template string passed to can.stache was lost in the original;
// it presumably instantiates the custom element, e.g.:
var template = can.stache('<user-welcomer users="{users}"></user-welcomer>');
var frag = template({
  users: users
});
// (the original snippet ends here; typically frag would then be appended to the DOM)