Monthly Archives: July 2015



Ever since its inception, jQuery has been a tool to be reckoned with. It has become the philosopher’s stone for developers across the world, thanks to its unparalleled ability to maintain code compatibility with popular and legacy browsers alike. Conventional development practices have evolved over the past few years. Practices that were the norm some time ago, such as browser sniffing, are now considered wrong. Today, UI development practices have become more disciplined.


The jQuery development team, under the guidance of its lead Dave Methvin, president of the jQuery Foundation, is introducing jQuery 3.0.

The 1.x and 2.x versions, which supported old browsers like IE6, 7, and 8 and recent browsers respectively, were in fact compatible with each other in terms of the API. This led to confusion in the product’s semantic versioning (semver), which is why the team introduced jQuery 3.0. The latest version comes in two packages: “jQuery 3.0”, the successor to what was previously jQuery 2.1.1, and “jQuery Compat 3.0”, the successor to the erstwhile 1.11.1.

jQuery 3.0 offers support only for modern browsers. This package might also support additional browsers based on market share.

jQuery Compat 3.0, however, supports both old and new browsers. This wide browser support comes with the risk of lower performance and at the expense of a larger file size. Though the two packages may differ slightly in their internal implementations, splitting the same version into two packages emphasizes that both are API compatible. This lets developers switch between the two smoothly and assures compatibility with all third-party jQuery plugins. The need for a new version and the subsequent split arose because jQuery was using semver to indicate browser compatibility, whereas ideally semver indicates API compatibility.

Dave Methvin stated in his blog that if support is required for a wide range of browsers, including IE8, Opera 12, and Safari 5, the “jQuery Compat 3.0” version is the option to go for, as it offers compatibility for all website visitors. However, if support is required only for a select few popular browsers that are widely in use, or just for an HTML app inside a webview (for example, PhoneGap or Cordova) where the browser engines in use are known, the jQuery 3.0 package is the most feasible option.

Contrary to what one might anticipate, this version is not a major rework of jQuery’s API. The folks at the jQuery Foundation are quite content with the design and therefore are only interested in tweaking the API when necessary. Some significant changes have, however, been introduced, such as Promises/A+ compatibility for jQuery’s Deferred implementation and the return of requestAnimationFrame for animations to improve battery life and performance.

This version of jQuery has been made modular to allow developers chasing the smallest file sizes to exclude unnecessary parts. For instance, they now have the liberty of excluding jQuery’s animation methods if they exclusively use CSS animations.

The release date is not yet out; however, the subsequent releases of both packages shall be made available on npm and Bower. They shall also be available as single-file builds on the jQuery CDN. According to the jQuery team, both packages shall also be served by Google’s CDN.

With the current trend of newer, simpler, and easier versions of popular platforms, developers are spoilt for choice. So, all you jQuery enthusiasts out there, go ahead and exercise the luxury of choosing what you require and using only what you need, making your life simpler.


Micro Services Architecture Genesis and Benefits

Microservices are a suite of small services, each running in its own process and communicating through lightweight mechanisms, often an HTTP resource API. Micro service architecture (MSA) is quite similar to service-oriented architecture (SOA); both emphasize services as the cornerstone of the architecture. Let’s first explore what exactly an MSA is.

Micro service architecture is a distinctive method of developing software systems as a suite of independently deployable, small, modular services, in which each service runs as a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal. It has grown into a popular software architecture style over the past few years. In this model, large and complicated applications are broken down into small, manageable processes that communicate with each other using language-agnostic application programming interfaces (APIs). Micro services are small, highly decoupled services that each focus on a minute task.

So, a micro-services architecture would involve developing each meaningful functional feature as a single service, with each service having its own process and lightweight communication mechanism, deployed on one or more servers.

Example of how Micro Services are implemented:

Let’s assume that you are building an e-commerce application that takes orders from customers, verifies inventory and available credit, and ships them.

The application consists of several components including the StoreFrontUI, which implements the user interface, along with some backend services for checking credit, maintaining inventory and shipping orders. The application is deployed as a set of services.
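The order flow above can be sketched in plain JavaScript. This is a hypothetical illustration, not Facebook or production code: the service names and the `placeOrder` function are invented for the example, and Promise-based stubs stand in for real HTTP calls to separately deployed processes.

```javascript
// Each backend capability is an independent service behind a small API.
const inventoryService = {
  check: (item) => Promise.resolve({ item, inStock: true })
};
const creditService = {
  verify: (customer) => Promise.resolve({ customer, approved: true })
};
const shippingService = {
  ship: (order) => Promise.resolve({ orderId: order.id, status: "shipped" })
};

// The StoreFrontUI composes the services: verify inventory and credit, then ship.
async function placeOrder(order) {
  const [stock, credit] = await Promise.all([
    inventoryService.check(order.item),
    creditService.verify(order.customer)
  ]);
  if (!stock.inStock || !credit.approved) {
    return { orderId: order.id, status: "rejected" };
  }
  return shippingService.ship(order);
}
```

In a real deployment, each stub would be an HTTP call to a service running in its own process, possibly written in a different language.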

[Figure: Y-axis scaling at the application level – the application is decomposed into services, and X-axis cloning and/or Z-axis partitioning is then applied to each service]

This application would likely use several services, such as an Accounting service, an Inventory service, and a Shipping service. Each of these functions effectively runs as an independent service, and the services are composed together. Each service runs in its own process and communicates through lightweight mechanisms. These services have their own databases in order to stay decoupled from other services. When necessary, consistency between databases is maintained using either database replication mechanisms or application-level events. These services are independently deployable, and with this approach it is easier to deploy new versions of services frequently.
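The application-level events mentioned above can be sketched as follows. This is a minimal, hypothetical illustration: the in-memory event bus and the plain-object "databases" are stand-ins for real message brokers and per-service data stores.

```javascript
// A tiny in-process event bus standing in for a real message broker.
const listeners = {};
const bus = {
  on(event, fn) { (listeners[event] = listeners[event] || []).push(fn); },
  emit(event, payload) { (listeners[event] || []).forEach(fn => fn(payload)); }
};

// Each service owns its own data store (a plain object here).
const orderDb = {};
const inventoryDb = { book: 10 };

const orderService = {
  create(id, item) {
    orderDb[id] = { id, item };
    bus.emit("orderCreated", { id, item }); // publish an event, never call the other service directly
  }
};

// The inventory service subscribes and updates its own store.
bus.on("orderCreated", ({ item }) => { inventoryDb[item] -= 1; });

orderService.create(1, "book");
```

The key property is that neither service touches the other's database; consistency emerges from the event.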

Microservices: the name suggests that the services need to be very small, and probably very fine grained, but in reality no size restrictions apply. Ironically, it’s the size (and focus) of services that commonly cripples SOA implementations.

Benefits of Micro Services Architecture:

  • Simplicity – code is easier to understand
  • More productive, speeding up deployments
  • Each service can be deployed independently
  • Makes continuous deployment feasible
  • Develop, deploy, and scale each service independently
  • Improves fault isolation
  • Eliminates any long-term commitment to a technology stack
  • Easier to deploy new versions of services frequently

How MSA is useful to UI developers:

Micro services architecture helps UI developers build an entire portfolio of applications where services and resources are shared, allowing you to eliminate redundant workflows. So, instead of maintaining an array of monolithic applications across various teams, where architects and support resources end up duplicating each other’s effort, UI developers can now draft and design their applications from an existing service that has already been built and set up by others.

It is useful to note that this is not the same as reusing old code, because as a UI developer you’re embedding a live service that becomes part of your application and benefits all applications that use that service. Independent code bases, independent deployability, and having different teams work on different parts of the stack without having to coordinate with each other are what make it the preferred choice.


Micro services architecture describes a distinctive and effective method of designing applications as a package of independently deployable services. And as an innovative concept that has blossomed out of an old idea, micro services architecture has shown stellar potential in describing development patterns that web developers keep incorporating into their designs.



Explore your editor of choice that allows you to be ultra-productive.

So, you’re in the market for a reliable and functional text editor for your daily needs. There are hundreds of source code editors on the market; many offer a high level of customization but are complicated and can become tedious to work with. Eclipse, Notepad++, IntelliJ, Atom, Brackets, and Sublime Text are a few of them. Among the most popular and useful are Atom, Brackets, and Sublime Text.

Here’s a thorough comparison that measures these editors on six important parameters and bares the best and worst of each. Though we are comparing open-source and commercial IDEs, our intention is to look at some important features, keeping pricing aside. From the interface and customization flexibility to accessibility and stability, we consider everything and have a winner for you. Do not skip ahead; the ending should not be revealed until the time is right.

1. Interface

We all know that text editors aren’t what you’d call pretty, but as you use one over the course of the day, you’ll realize that a clean and efficient user interface is exactly what you need.

Text editors should be easy on the eyes and allow for hours-long coding sessions. Sublime Text offers the widest options because it lets developers customize its color-coding themes extensively. That is not to say it’s the best, because the other two editors also have a capable range of options that allow you to personalize your editing environment.

Overall, it’s not easy choosing a favourite because they all look alike and even have similar interfaces. All three editors offer the best of everything, but Brackets seems to be the obvious choice because it provides the most consistent cross-platform user experience and is also quite easy on the eyes. Even Brackets has its flaws, though: its font rendering doesn’t appear as smooth as the other editors’, an issue that can be solved with a little tweaking. Atom comes in a really close second thanks to its native-feeling aesthetics on Windows, which gives it a minor advantage over Sublime Text, which can look out of place there.

2. Customization

Since text editors are universally designed to be fully customizable, it becomes a bit difficult to judge them on this factor. Although Sublime Text and Atom have similar configuration files, Atom has more options integrated into its interface. Its documentation is also more organized, and gives you access to the editor’s source in case you need more comprehensive changes or want to work on plug-in development. Brackets is also equipped with an array of options, but although it’s open source too, it’s just not as customizable as Atom and Sublime Text.

3. Syntax Support

All three editors can edit any source file, no matter what syntax it uses. Sublime Text, however, clearly wins this category, since it offers color coding and support for numerous popular and esoteric languages. Atom and Brackets are in the same league, as they offer plug-in support even for less popular syntaxes.

4. Accessibility

The best text editor is one that allows you to be ultra-productive when you need to be, and lets you explore new features over time. Keeping that in perspective, Brackets is the best choice because most of its options are easily accessible, conveniently located in its general user interface. It is also configured so that most options are available via the menu without having to modify configuration files. Sublime Text is packed with clever features, but its interface makes locating, configuring, and implementing them troublesome, to say the least. Atom works better and more simply than Sublime Text in this regard, though its learning curve is steeper than Brackets’.

5. Stability

Sublime Text is, by far, the fastest of the three editors, and faces only the extremely rare crash or loss of progress or data. Brackets, on the other hand, has much higher hardware requirements, which can cause some very annoying lag and frequent crashes, especially with larger files. You’d be best off using it to edit HTML, CSS, and JavaScript files, and avoiding complicated SQL files.

6. Native Features and Plugins

How good is your editor out of the box? Again, Sublime Text wins this one, since it offers a comprehensive range of features. And even if that doesn’t satisfy you, it offers in excess of 2,000 extensions that can enhance its functionality. The other contenders follow a more minimalist approach, offering just a basic editor that needs to be supplemented by plugins for more advanced avenues. In Atom, for example, even autocomplete is an optional plug-in, and Brackets doesn’t even offer a split-view feature. Sublime Text manages to remain nimble and light despite the extra functionality, while Atom and Brackets offer easier plug-in management despite their limited standalone functionality. Atom (primarily maintained by GitHub) and Brackets (a product of Adobe) are open-source code editors that you can install and use interchangeably. Sublime Text, on the other hand, is not free, but offers an unlimited trial.

And the Winner is…

Now that we’ve thoroughly scrutinized the three text editors, we have enough evidence to establish that they all come in very close to each other. That being said, Sublime Text continues to remain the editor of choice despite its considerable price. This is mainly because it offers a quick, stable, and highly adaptable user interface that ensures a positive user experience.



The internet is abuzz with Facebook’s recently launched Flux, their new pattern for structuring client-side applications. Let’s take a look at how the new Flux pattern relates to the earlier-used MVC pattern and how it could end up being as useful as Facebook’s user interface library, React.


The story behind the model-view-controller (MVC) pattern, and how numerous companies and projects have used it in the past, is pretty interesting. It is recommended that you go through this brief history because it’s a great way to understand the specific domain that the Flux pattern operates in.

The MVC Pattern

Web developers have been using many variants of the model-view-controller (MVC) pattern, each doing things a bit differently from the others. But most MVC patterns typically assign the following core roles:

a. Model: maintains the behavior and data of an application domain.
b. View: represents the presentation of the model in the user interface.
c. Controller: takes user input, manipulates the model, and prompts the view to update itself.

The core concept of model-view-controller can be formulated by:

1. Separating the presentation from the model – This not only enables the implementation of varied UIs, but also ensures smoother testability.

2. Separating the controller from the view – This is most useful for older web interfaces; the separation is not commonly applied in most graphical user interface (GUI) frameworks.
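The three roles can be sketched in a few lines of plain JavaScript. This is a deliberately minimal, hypothetical illustration (the `Model`, `View`, and `Controller` names and the counter example are invented here), not any particular framework's implementation:

```javascript
// Model: holds data and notifies registered observers on change.
function Model(data) {
  this.data = data;
  this.observers = [];
}
Model.prototype.set = function (key, value) {
  this.data[key] = value;
  this.observers.forEach(fn => fn(this.data)); // notify views
};

// View: renders the model, and re-renders whenever the model changes.
function View(model) {
  this.output = "";
  model.observers.push(data => this.render(data));
}
View.prototype.render = function (data) {
  this.output = "Count: " + data.count;
};

// Controller: translates user input into model manipulation.
function Controller(model) {
  this.increment = () => model.set("count", model.data.count + 1);
}

const model = new Model({ count: 0 });
const view = new View(model);
const controller = new Controller(model);
controller.increment(); // a simulated user input
```

Note how the presentation (View) never touches the data directly; it only reacts to model notifications.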

Problems with MVC

Introduced with Smalltalk in the late 1970s, MVC is an application pattern that is a legend in its own right, having been employed in countless projects for decades. Even in 2015, it has firmly stood the test of time and is used in some of the biggest projects. So the question arises as to why it should even be replaced. The truth is that MVC didn’t scale well for Facebook’s enormous codebase. Major challenges arose from bidirectional communication, where one change would loop back and have ripple effects across the entire codebase. This made the system fairly complicated and debugging almost impossible.

The Flux Pattern

MVC’s shortcomings posed some serious challenges, and Facebook solved them by incorporating Flux, which forces unidirectional flow of data between a system and its components. Typically, the flow within an MVC pattern is not well defined. Flux however, is completely dedicated to controlling the flow within an application, making it as simple to comprehend as possible.

The Flux pattern has four core roles: actions, stores, the dispatcher, and views. Their functions are described below:

a. Actions – These are plain objects that consist of a type property and some data.
b. Stores – These contain the application’s state and logic. One can consider a store the manager of a particular domain within the application. While Flux stores can store virtually anything, they are not identical to MVC models, because those models typically attempt to model single objects.
c. The Dispatcher – This essentially acts as the application’s nerve center. The dispatcher processes actions, such as user interactions, and invokes the callbacks that the stores have registered with it. As with stores, the dispatcher is quite different from controllers in the model-view-controller (MVC) pattern; the difference is that the dispatcher does not contain much logic within it, allowing you to reuse the same dispatcher across a variety of new and complex projects.
d. Views – These are simply controller-views, also commonly found in most GUI MVC patterns. They listen for changes in the stores and re-render themselves accordingly. Views can also dispatch new actions, for example in response to user interactions. Views are normally coded with React, but the Flux pattern does not require React.

It is critical to note that every change you make goes through an action via the dispatcher.
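The four roles and the unidirectional flow can be sketched in plain JavaScript. This is a hypothetical, framework-free miniature (the todo example and all names are invented here), not Facebook's actual Dispatcher implementation:

```javascript
// Dispatcher: the single hub every action flows through.
const dispatcher = {
  callbacks: [],
  register(cb) { this.callbacks.push(cb); },
  dispatch(action) { this.callbacks.forEach(cb => cb(action)); }
};

// Store: owns the state and logic for one domain of the application.
const todoStore = {
  todos: [],
  listeners: [],
  emitChange() { this.listeners.forEach(fn => fn()); }
};
dispatcher.register(action => {
  if (action.type === "ADD_TODO") {   // actions are plain objects with a type
    todoStore.todos.push(action.text);
    todoStore.emitChange();
  }
});

// View: re-renders whenever the store changes, and creates new actions.
let rendered = "";
todoStore.listeners.push(() => { rendered = todoStore.todos.join(", "); });

// action → dispatcher → store → view: the one-way loop.
dispatcher.dispatch({ type: "ADD_TODO", text: "write docs" });
```

Notice that the view never mutates the store directly; the only way to change state is to dispatch an action.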

So how does Flux differ from MVC?

1. The Flow

When it comes to Flux, the flow of the application is vital, and it is governed by strict rules enforced by the dispatcher. In model-view-controller (MVC) patterns, however, the flow is either not strictly enforced or not enforced at all, which is why different MVC implementations handle their flows differently.

2. Unidirectional flow

Since all changes go through the dispatcher, stores cannot change other stores directly, and the same basic concept applies to views and other actions. Any change has to go through the dispatcher via actions. MVC patterns, by contrast, commonly have bidirectional flow.

3. Stores

Flux stores can store any application related state or data. But, MVC models try to model single objects.

So, Is Flux better than MVC?

The fact that Flux has only recently been launched means it’s too early to say, as its perceived benefits are yet to be vetted. That being said, Flux is new and innovative, and it’s refreshing that there is now a pattern that can challenge MVC and its traditional ways.

The best part remains that Flux is extremely easy to understand and requires minimal code, allowing you to structure your app more effectively. This augurs well for the future, particularly for the many web developers put off by the sprawling codebases and runtime complexity of traditional client-side frameworks.


How RequireJS Plays a Role in Angular?

It’s the best way of simplifying the task of loading dependencies using AngularJS and RequireJS

One of the basic steps when writing big JavaScript apps is dividing the code base into numerous files. While this helps enhance the maintainability of the code, it also increases the possibility of misplacing or missing a script tag on the primary HTML document.

With the increase in the number of files, keeping a clean record of all the dependencies becomes rather complicated, and this issue extends to large AngularJS apps as well. There are a few tools that can handle loading dependencies in the application.

It is important to know the best way of simplifying the task of loading dependencies by using RequireJS with AngularJS along with the way in which Grunt must be used for the purpose of generating combined files holding the RequireJS modules.

Understanding the Basics of RequireJS

RequireJS is a JavaScript library that helps load JavaScript dependencies lazily and asynchronously. Modules are basically JavaScript files with a little RequireJS syntactic sugar in them, and RequireJS implements the Asynchronous Module Definition (AMD) format that grew out of the CommonJS effort. You get the benefit of creating and referring to modules using the simple APIs offered by RequireJS.

RequireJS should always have a main file that holds basic configuration data such as shims and path mappings. The snippet below shows the basic outline of a main.js file:

require.config({
    map: {
        // maps
    },
    paths: {
        // module names and their paths
    },
    shim: {
        // modules, plus their dependent modules
    }
});

Not every module in the application needs to be specified in the paths section; others may be loaded via their relative paths. However, when defining a particular module, the define() block needs to be used.

define([
    // dependencies
], function (
    // objects corresponding to the dependencies
) {
    function yourModule() {
        // the dependency objects received above can be used here
    }
    return yourModule;
});

It is possible for a module to have several dependent modules, and in general an object is returned at the end of the module, though this is not mandatory.

Comparison of RequireJS Dependency Management with Angular’s Dependency Injection:

The difference between RequireJS dependency management and Angular’s dependency management is one of the most frequently asked questions by Angular developers.

When RequireJS loads a module, it checks for all of the module’s dependencies and loads them first. The objects exported by loaded modules are cached, and they are served from the cache when the same module is requested again. AngularJS, on the other hand, has an injector that holds a list of names and corresponding objects. When a component is created, an entry is added to the injector, and the object is served whenever it is referenced via the registered name.
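The two caching strategies just described can be sketched side by side. This is an illustrative simplification (the `requireModule` and `injector` names are invented here), not the real internals of either library:

```javascript
// RequireJS-style: a module cache keyed by name; a module's factory runs
// once, and the cached export is served on every later request.
const moduleCache = {};
function requireModule(name, factory) {
  if (!(name in moduleCache)) moduleCache[name] = factory();
  return moduleCache[name];
}

// Angular-style injector: a registry of names to objects; components are
// registered once and looked up by name wherever they are referenced.
const injector = {
  registry: {},
  register(name, obj) { this.registry[name] = obj; },
  get(name) { return this.registry[name]; }
};

const svcA = requireModule("svc", () => ({ created: Math.random() }));
const svcB = requireModule("svc", () => ({ created: Math.random() })); // cache hit
injector.register("dataSvc", { fetch: () => [1, 2, 3] });
```

The key similarity is that both hand back the same object on repeated lookups; the key difference is that RequireJS caches by file/module name at load time, while Angular resolves by registered component name at injection time.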

Using AngularJS and RequireJS Together:

Most codes for simple applications contain various external dependencies, including:

  1. RequireJS
  2. Angular Resource
  3. jQuery
  4. AngularJS
  5. Angular UI’s ngGrid (Or any component library)
  6. Angular Route

All of the above-mentioned files need to be loaded directly on to the page in a specific order.

How to Define AngularJS Components in Terms of RequireJS Modules
Almost every AngularJS component is made up of:

  • The Dependency injection
  • A function definition
  • Finally, registering with an Angular module

The first step involves defining a config block, which does not depend on other blocks and returns the config function in the end. However, before loading the config module within another module, it is necessary to load every component required by the config block. The code below would be contained in config.js:

define([], function () {
    function config($routeProvider) {
        // route definitions using $routeProvider
    }
    config.$inject = ['$routeProvider'];
    return config;
});
The integral point to note is the way dependency injection is carried out in the snippet above: the config function is a plain JavaScript function, and $inject is used to get the dependencies injected. Before closing the module, the config function must be returned so that it can be passed to dependent modules for later use.

The same approach may be followed for defining other kinds of Angular components too, since the files contain no component-specific registration code. The snippet below shows the definition of a controller:

define([], function () {
    function ideasHomeController($scope, ideasDataSvc) {
        $scope.ideaName = 'Todo List';
        $scope.gridOptions = {
            data: 'ideas',
            columnDefs: [
                { field: 'name', displayName: 'Name' },
                { field: 'technologies', displayName: 'Technologies' },
                { field: 'platform', displayName: 'Platforms' },
                { field: 'status', displayName: 'Status' },
                { field: 'devesNeeded', displayName: 'Devs Needed' },
                { field: 'id', displayName: 'View Details',
                  cellTemplate: '<a ng-href="#/details/{{row.getProperty(col.field)}}">View Details</a>' }
            ],
            enableColumnResize: true
        };
        ideasDataSvc.allIdeas().then(function (result) {
            $scope.ideas = result;
        });
    }
    ideasHomeController.$inject = ['$scope', 'ideasDataSvc'];
    return ideasHomeController;
});

The application’s Angular module depends on each of the modules defined up to this point. This file gains objects from all the other files and hooks them up with an AngularJS module. The file may or may not return something as its result, but the Angular module can be referenced from anywhere with the help of angular.module(). The following block of code defines an Angular module:


define(['app/config', 'app/ideasDataSvc', 'app/ideasHomeController', 'app/ideaDetailsController'],
    function (config, ideasDataSvc, ideasHomeController, ideaDetailsController) {
        var app = angular.module('ideasApp', ['ngRoute', 'ngResource']);
        app.config(config);
        app.controller('ideasHomeController', ideasHomeController);
        // the remaining components are registered similarly
    });

No Angular application loaded this way can be bootstrapped using the ng-app directive, since the necessary files are loaded asynchronously. The right approach involves manual bootstrapping, done in a special file named main.js. This file must first load the file defining the Angular module; the code is given below:

require(['app/ideasModule'], function () {
    angular.bootstrap(document, ['ideasApp']);
});


Managing dependencies tends to pose a challenge once an application grows beyond a certain number of files. Libraries such as RequireJS make it simpler to declare dependencies without having to worry about the loading order of the files. Dependency management is becoming an essential part of JavaScript applications, and AngularJS 2.0 is expected to get the benefit of built-in module support.


What is the difference between the compile and link functions in AngularJS?

In the compile phase, the Angular parser starts parsing the DOM, and whenever it encounters a directive it creates a function. These functions are termed template, or compiled, functions. In this phase we do not have access to the $scope data. In the link phase, the data (i.e., $scope) is attached to the template function, which is executed to get the final HTML output.

Compile – It works on the template. It’s like adding a class element into the DOM (i.e., manipulation of tElement = template element); hence it is for manipulations that apply to all DOM clones of the template associated with the directive.

Link – It works on instances. Usually used for registering DOM listeners (i.e., $watch expressions on the instance scope) as well as instance DOM manipulation. (i.e., manipulation of iElement = individual instance element).
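To make the distinction concrete, here is a framework-free sketch (not actual AngularJS code) that mimics the two phases: template-level work happens once in `compile`, which returns a `link` function that runs once per instance with that instance's scope. The `compile`/`link` names mirror Angular's, but the templating logic here is invented for illustration:

```javascript
let compileCount = 0;

function compile(template) {
  compileCount += 1;                      // template-level work runs once
  const binding = template.replace("{{", "").replace("}}", "");
  // return the link function, executed later for each instance
  return function link(scope) {
    return binding in scope ? "Hello " + scope[binding] : "";
  };
}

const link = compile("{{name}}");         // compile phase: no scope yet
const out1 = link({ name: "Alice" });     // link phase, instance one
const out2 = link({ name: "Bob" });       // link phase, instance two
```

However many instances are linked, the template is parsed only once; that is the performance point behind the split.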


How can I upload files asynchronously with jQuery?

The “enctype” has to be mentioned in the AJAX call.

For example:

$(document).ready(function () {
    $("#uploadbutton").click(function () {
        var filename = $("#file").val();
        $.ajax({
            type: "POST",
            url: "",
            enctype: 'multipart/form-data',
            data: {
                file: filename
            },
            success: function () {
                alert("Data Uploaded: ");
            }
        });
    });
});

Difference between Bower and NPM?

The main difference between NPM and Bower is the approach to installing package dependencies. NPM installs dependencies for each package separately, and as a result builds a big package dependency tree (node_modules/grunt/node_modules/glob/node_modules/…) where there can be several versions of the same package. For client-side JavaScript this is unacceptable: you can’t add two different versions of jQuery or any other library to a page.

With Bower, each package is installed once (jQuery will always be in the bower_components/jquery folder, regardless of how many packages depend on it), and in the case of a dependency conflict, Bower simply won’t install a package incompatible with one that’s already installed.


How to access parent scope from within a custom directive *with own scope* in AngularJS?

The way a directive accesses its parent ($parent) scope depends on the type of scope the directive creates:

default (scope: false) – The directive does not create a new scope, so there is no inheritance here. The directive’s scope is the same scope as the parent/container.

scope: true – The directive creates a new child scope that prototypically inherits from the parent scope.

scope: { x:’=foo’ } – The directive creates a new isolate/isolated scope. It does not prototypically inherit from the parent scope. You can still access the parent scope using $parent, but this is not normally recommended. Instead, you should specify which parent scope properties (and/or functions) the directive needs via additional attributes on the same element where the directive is used, using the =, @, and & notation.

transclude: true – The directive creates a new “transcluded” child scope, which prototypically inherits from the parent scope. The $parent property of each scope references the same parent scope.
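The inheritance behaviour described above can be sketched in plain JavaScript, outside Angular, using Object.create to stand in for prototypal scope inheritance. The scope objects and property names here are invented for illustration:

```javascript
const parentScope = { user: "alice", greet: () => "hi" };

// scope: false — the directive shares the very same scope object.
const sharedScope = parentScope;

// scope: true — a child scope that prototypally inherits from the parent:
// reads fall through to the parent, writes stay on the child.
const childScope = Object.create(parentScope);
childScope.local = 42;

// isolate scope — no prototypal link; selected parent properties are
// bound in explicitly (like x: '=foo'), and $parent is still reachable.
const isolateScope = { x: parentScope.user };
isolateScope.$parent = parentScope;
```

This is why writing to a primitive on a `scope: true` child never clobbers the parent, and why an isolate scope sees nothing from the parent except what you bind in.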