Monthly Archives: April 2016

IS IT BETTER TO MOVE TO SWIFT OR STAY WITH OBJECTIVE-C?

On June 2, 2014, Apple presented a new object-oriented programming language, Swift, as a replacement for Objective-C, the conventional language for OS X and iOS application development. It was a bold move to radically change the main development language of such a popular platform.

Since its release, developers have been asking whether they still need to learn Objective-C, and many remain unsure whether Swift fits the iOS development ecosystem. Every programming language has its own advantages and disadvantages. Here we clear up some of that confusion and help set your learning path.

Types:

Type inference is probably one of the major reasons why developers prefer Swift over Objective-C, and it is one of Swift's handiest features. In Swift, there is usually no need to write type annotations: the compiler infers the appropriate type from the value assigned to a variable. Thanks to type inference, the size of the code is reduced.

Swift is also a type-safe language. It performs type checks at compile time and makes sure there are no mismatched types. This is one of the major differences between Swift and Objective-C. Objective-C provides a dynamic runtime, which means it decides at runtime which method implementation to call. If the method has not been implemented, the caller gets a chance to redirect the call to another object.

In Swift, the compiler checks all the types. When you call a method, instead of doing a dynamic invocation, the compiler calls the implementation directly. Compared to Objective-C, Swift not only lets us find errors earlier but also brings a runtime speed improvement.

Closures:

Blocks are extremely useful in Objective-C. Developers use blocks to pass around self-contained pieces of functionality in code. However, the syntax of blocks in Objective-C is notoriously awkward; even experienced developers keep templates at hand when writing them.

In Swift, a developer can avoid this by using closures to pass functions around. Closures are similar to blocks, but the coding style is clearer, more elegant, and simpler.

Generics:

Generics are a powerful new feature that does not exist in Objective-C. With generics, you can use a placeholder type first and define the specific type later. Since Swift is a strongly typed language that is strict about types, generics provide some essential flexibility.

Fun Fact:

Swift lets beginners avoid getting tripped up by syntax. Writing in Swift flows nicely, and even a beginner can produce clean code with little effort. What is fun and interesting about Swift is that it allows nearly any character in both variable and constant names; you can even use emoji. If you are wondering how to type an emoji character on OS X: press Control-Command-Spacebar and an emoji picker will be displayed.

Existing Projects:

Are you planning to build a heavy-duty application, or do you work for an organization that already has an application in the App Store? Then you will most likely need to learn Objective-C, because Swift is barely a year old and almost all existing frameworks and applications are built in Objective-C. Even a developer who wants to modify existing applications, or convert them to Swift, still needs to understand the Objective-C code.

New Projects:

For new projects, the scenario is completely different. The developer can use Swift right from scratch. If you consider that Swift is the future of iOS development, it is a very good idea to use it for new projects. There are already many projects written in Swift, and the language is ready for real work. The only potential problem is that projects may need to be migrated to new versions of Swift.

The Predictions:

The future of Objective-C is unpredictable. Judging by the updates from Apple, there is a chance that Swift will become significantly more popular five years from now, and by then it may leave an indelible mark on the sphere of iOS applications.

Conclusion:

As a new programming language, Swift has a clean and clear style. It combines the advantages of Objective-C with the features of modern scripting languages. Most importantly, Swift's clean syntax and powerful debugging tools attract newcomers to iOS programming.

On the other side, almost all existing applications are still written in Objective-C, and it will take some years for Swift to gain its place and importance in iOS development.

It is a good idea to use Swift whenever a developer wants to. However, the effort to migrate existing Objective-C projects to Swift is still high.

LATEST APIs IN HTML5


The web is evolving at a fast pace. Every day new frameworks, tools, and libraries are released with the ambition of becoming the next jQuery. Here are some of the more interesting HTML5 APIs, including the Navigation Timing API and the Battery Status API. So, let's dive in and see what they are about.

What is Navigation Timing API?

The Navigation Timing API provides data that can be used to measure the performance of a website. This API exposes several properties which offer information about the time at which certain events happen.

The Navigation Timing API is a W3C Recommendation, which means its specification will not change unless a new version is released. Unlike other JavaScript-based mechanisms that have been used for the same purpose, the Navigation Timing API provides end-to-end latency data that is more useful and accurate.

Let us see an example of how you can measure perceived loading time:

function onLoad() {
  var now = new Date().getTime();
  var page_load_time = now - performance.timing.navigationStart;
  console.log("User-perceived page loading time: " + page_load_time);
}

There are many measured events, given in milliseconds, that can be accessed through the PerformanceTiming interface. The list of events in order of occurrence is given below.

navigationStart: The time immediately after the browser finishes prompting to unload the previous document. If there is no previous document, navigationStart equals fetchStart. This is the beginning of the page load time as perceived by the user.

unloadEventStart: The time immediately before the previous document's unload event is fired. If there is no previous document, or if the previous document is from a different origin, the value is zero.

unloadEventEnd: The time immediately after the previous document's unload event completes. If any redirects point to a different origin, unloadEventStart and unloadEventEnd are both zero.

redirectStart: The start time of the URL fetch that initiates a redirect.

redirectEnd: If there are redirects, the time immediately after the last byte of the last redirect response is received.

requestStart: The time immediately before the browser sends the request for the URL.

responseStart: The time immediately after the browser receives the first byte of the response.

responseEnd: The time immediately after the browser receives the last byte of the response.
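These timestamps can be combined to derive latency metrics. The following sketch (the helper name and the timestamp values are ours, purely for illustration) computes a few common ones from a PerformanceTiming-like object; in a browser you would pass window.performance.timing, while here a synthetic object stands in for it.

```javascript
// Computes common latency metrics from a PerformanceTiming-like object.
// In a browser you would pass window.performance.timing; here any object
// with the same millisecond-timestamp properties works.
function latencyMetrics(timing) {
  return {
    // Time from the start of navigation to the last byte of the response.
    networkLatency: timing.responseEnd - timing.navigationStart,
    // Time spent waiting for the server's first byte after the request was sent.
    timeToFirstByte: timing.responseStart - timing.requestStart,
    // Time spent downloading the response body.
    downloadTime: timing.responseEnd - timing.responseStart
  };
}

// Synthetic timestamps, made up for illustration only:
var metrics = latencyMetrics({
  navigationStart: 1000,
  requestStart: 1100,
  responseStart: 1250,
  responseEnd: 1400
});
console.log(metrics); // { networkLatency: 400, timeToFirstByte: 150, downloadTime: 150 }
```

Because the function only reads plain numeric properties, it can be unit-tested outside the browser with a fake timing object, as above.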

In addition to the above set of properties, the Navigation Timing API also defines another object that determines how a user landed on a particular page. The object is called navigation and it belongs to window.performance. It contains two properties: type and redirectCount.

type: This property indicates the method by which the user navigated to the current page. The following are the values that type can hold.

⦁ If the user navigates to a page by typing the URL, clicking a link, submitting a form, or through a script operation, then the value of “type” is zero.
⦁ If the page is reloaded/refreshed, then the “type” is equal to one.
⦁ If the user navigates to a page by clicking back or forward buttons, then “type” is equal to two.
⦁ For other circumstances, “type” is equal to 255.

redirectCount: The number of redirects taken to reach the current page. If no redirects occurred, or if any of the redirects came from a different origin, the value is zero.
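To make the numeric codes above concrete, here is a small sketch (the helper name and labels are our own, not part of the API) that turns a performance.navigation-like object into a readable description:

```javascript
// Maps the numeric "type" codes of the navigation object to readable labels.
// In a browser you would pass window.performance.navigation; here any
// object with "type" and "redirectCount" properties works.
function describeNavigation(nav) {
  var labels = {
    0: 'navigate',      // typed URL, link click, form submit, or script
    1: 'reload',        // page was reloaded or refreshed
    2: 'back_forward',  // back or forward button
    255: 'reserved'     // any other circumstance
  };
  var label = labels[nav.type] || 'unknown';
  return label + ' (' + nav.redirectCount + ' redirects)';
}

console.log(describeNavigation({ type: 1, redirectCount: 0 })); // "reload (0 redirects)"
```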

Browser Support: Browser support for this API is very good on both desktop and mobile. On desktop, the Navigation Timing API is supported in Chrome, Firefox, Internet Explorer, and Opera; the only desktop browser that currently does not support it is Safari. On mobile, the API is supported in iOS Safari, the UC Browser, and the BlackBerry browser, with Opera Mini the only browser that does not support it.

Battery Status API
What is Battery API?

The Battery Status API, also referred to as the Battery API, provides information about the system's battery charge level. It also defines events that are fired when the battery level or status changes. This can be used to adjust an application's resource usage to minimise battery drain when the battery is low, or to save changes before the battery runs out in order to prevent data loss.

The Battery Status API exposes four read-only properties on the window.navigator.battery object:

charging: A Boolean value that specifies whether the battery is charging. If the device doesn't have a battery, or the value cannot be determined, the property is set to true.

chargingTime: The number of seconds remaining until the battery is fully charged. If the battery is already fully charged, or the device doesn't have a battery, the property is set to 0. If the device is not charging, or the remaining time cannot be determined, the value is Infinity.

dischargingTime: The number of seconds remaining until the battery is completely discharged. If the discharge time cannot be determined, or the battery is currently charging, the value is set to Infinity. If the device doesn't have a battery, dischargingTime is also set to Infinity.

level: The current level of the battery, returned as a float ranging from 0 (discharged) to 1 (fully charged). If the battery level cannot be determined, the battery is fully charged, or the device does not have a battery, the value is 1.
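As a sketch of how these properties might be used together, the helper below (our own invention, with an arbitrary 20% threshold) decides whether an application should switch into a power-saving mode:

```javascript
// Decides whether an app should enter power-saving mode, given a
// BatteryManager-like object with the "charging" and "level" properties
// described above. The 20% threshold is an arbitrary choice for illustration.
function shouldSavePower(battery) {
  // Never throttle while the device is charging.
  if (battery.charging) {
    return false;
  }
  // Throttle once the remaining level drops to 20% or below.
  return battery.level <= 0.2;
}

console.log(shouldSavePower({ charging: false, level: 0.15 })); // true
console.log(shouldSavePower({ charging: true, level: 0.15 }));  // false
```

In a browser you would pass the actual battery object; since the function only reads two plain properties, it can be exercised with a stub anywhere.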

Battery Status Events:
There are four events that can be fired by the battery object.

chargingchange: The battery has switched from charging to discharging, or vice versa.

levelchange: The level of the battery has changed.

chargingtimechange: The time until the battery is fully charged has changed.

dischargingtimechange: The time until the battery is fully discharged has changed.
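A sketch of wiring these events up: the helper below attaches one listener per event to any object exposing addEventListener, so it can be exercised with a small stub outside the browser (FakeBattery is our stand-in, not part of the API).

```javascript
// Attaches the four battery events described above to a BatteryManager-like
// object and reports each one through a callback.
function watchBattery(battery, onChange) {
  ['chargingchange', 'levelchange', 'chargingtimechange', 'dischargingtimechange']
    .forEach(function (name) {
      battery.addEventListener(name, function () {
        onChange(name, battery);
      });
    });
}

// Minimal stub standing in for the browser's battery object:
function FakeBattery() {
  this.level = 1;
  this.listeners = {};
}
FakeBattery.prototype.addEventListener = function (name, fn) {
  this.listeners[name] = fn;
};
FakeBattery.prototype.fire = function (name) {
  if (this.listeners[name]) this.listeners[name]();
};

var battery = new FakeBattery();
var seen = [];
watchBattery(battery, function (name) { seen.push(name); });
battery.level = 0.5;
battery.fire('levelchange');
console.log(seen); // [ 'levelchange' ]
```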

Browser support

Browser support for the Battery Status API is pretty poor. Firefox is the only browser that supports the API without a vendor prefix.

ROLE OF DATA SCIENTISTS IN BIG DATA


Rising apace with the relatively new technology of big data is a new job title: "Data Scientist". While not tied exclusively to big data projects, the data scientist role complements them because of the increased breadth and depth of data being examined, compared to traditional roles.

What does a data scientist do?

The data scientist is responsible for designing and implementing processes and layouts for complex, large-scale data sets used for modelling, data mining, and research purposes. The data scientist is also responsible for business case development; planning, coordination, and collaboration with various internal and vendor teams; managing the analysis lifecycle of the project; and interfacing with business sponsors to provide periodic updates.

A data scientist would be responsible for:
⦁ Extracting data relevant for analysis (by coordinating with developers)
⦁ Developing new analytical methods and tools as required.
⦁ Contributing to data mining architectures, modelling standards, reporting, and data analysis methodologies.
⦁ Suggesting best practices for data mining and analysis services.
⦁ Creating data definitions for new databases or changes to existing ones as needed for analysis.

Big Data:

The term "Big Data", which has become a buzzword, refers to a massive volume of structured and unstructured data that cannot be processed or analysed using traditional processes or tools. There is no exact definition of how big a dataset should be in order to be considered Big Data.

Big Data is also defined by the three V's: Volume, Velocity, and Variety.

Volume: Big data implies an enormous volume of data. We currently see growth in data storage, as data is no longer only text but also video, music, and large images on social media channels. It is the granular nature of the data that is unique. Terabytes and even petabytes of storage are now common for organizations. As the database grows, the applications and architecture built to support the data need to be re-evaluated quite often. Sometimes the same data is evaluated from multiple angles, and even though the original data is the same, the newly found intelligence creates an explosion of data.

Velocity: Velocity deals with the fast rate at which data is received and perhaps acted upon. The growth of data and the social media explosion have changed how we look at data. The flow of data is massive and continuous. Nowadays people rely on social media to keep them updated on the latest happenings, so data movement is almost real-time and the update window has shrunk to fractions of a second.

Variety: Data can be stored in multiple formats. Big data variety refers to unstructured and semi-structured data types such as text, audio, and video. Unstructured data has many of the same requirements as structured data, such as summarization, auditability, and privacy. The real world holds data in many formats, and that is the major challenge we need to overcome with Big Data.

The future of Big Data:

The demand for big data talent and technology is exploding day by day. Over the last two years, investment in big data solutions has tripled. As our world continues to become more information-driven year over year, industry analysts predict that the big data market will easily expand by another ten times within the next decade. Big data is already proving its value by allowing companies to operate at a new level of intelligence and sophistication.


USING THE PROTRACTOR AUTOMATION TOOL TO TEST ANGULARJS APPLICATIONS


Are you developing an AngularJS application? Are you unsure whether to use Protractor to test it? If so, explore this article to learn what Protractor is and how to use it to test your AngularJS applications.

Talking about Protractor:

Google has released an end-to-end test framework for AngularJS applications called Protractor, which works as a solution integrator, combining powerful tools and technologies such as Node.js, Selenium, WebDriver, Jasmine, Cucumber, and Mocha. Protractor adds a bunch of customizations on top of Selenium to make it easy to create tests for AngularJS applications. With Protractor, a developer can write automated tests that run inside an actual browser, against an existing website, and can thus easily check whether the code works end to end as expected. The added benefit of using Protractor is that it understands AngularJS and is optimized for it.

“Unit tests are the first line of defense for catching bugs, but sometimes issues come up with integration between components which can’t be captured in a unit test. End-to-end tests are made to find these problems.” – Angular Team

Deep diving into Protractor:

Protractor is a framework for the automation of functional tests. That means its intention is not to be the only way to test an AngularJS application, but to cover the acceptance criteria required by the user, that is, end-to-end testing.

It runs on top of Selenium and provides all the benefits and advantages of Selenium. In addition, it provides customizable features for testing AngularJS applications. Since Protractor runs on top of Selenium, it is also possible to use drivers that implement the WebDriver wire protocol, such as ChromeDriver and GhostDriver. With ChromeDriver, a developer can run tests without the Selenium server; to use GhostDriver, one has to use PhantomJS, which runs tests in headless mode via GhostDriver.

In the past, Protractor's documentation was poor and unreliable due to the tool's constant evolution. In recent years, however, the community has collaborated a lot and Protractor's documentation has been brought up to date.

Salient Features of Protractor:

⦁ It is built on top of the Selenium server.
⦁ It introduces a new, simple syntax for writing tests.
⦁ Developers can take advantage of Selenium Grid to run tests in multiple browsers.
⦁ Developers can use Jasmine or Mocha to write test suites.

Protractor is built on top of Selenium WebDriver, so it contains every feature that is available in the Selenium WebDriver. Also, Protractor provides some new strategies and functions which are very useful to automate AngularJS applications.

Protractor Installation:

Download and install Node.js. After installation, make sure its path is configured correctly, so that command execution can find Node.

Open the command prompt and type the following command to install Protractor globally.

npm install -g protractor

Install Protractor Locally

A developer can install Protractor locally in the project directory. Go to the project directory and type the following command in the command prompt.

npm install protractor

Verify the Installation

To verify the installation, type the command

protractor --version

If Protractor is installed successfully then the system will display the installed version. Otherwise, you will have to recheck the installation.

Let's see Protractor's basic example program:

As we know, Protractor needs two files: a spec file (the test file) and a conf file (the configuration file).

Below is a sample test file named "testspec.js".

describe('angularjs homepage', function() {
  it('should have a title', function() {
    browser.get('http://angularjs.org/');

    expect(browser.getTitle()).toContain('AngularJS');
  });
});

The above simple test navigates to the AngularJS home page and checks its page title.
Below is a sample config file named "conf.js".

exports.config = {
  // The address of a running Selenium server.
  seleniumAddress: 'http://localhost:4444/wd/hub',
  // Here we specify the names of the spec files.
  specs: ['testspec.js']
};

How to run?

Go to the command prompt and type "protractor conf.js". It will start executing your test, in the Chrome browser by default.

Final word:

Protractor allows developers to test their AngularJS applications in a consistent and automated way. Because of it, we are able to make informed statements about the overall state and soundness of an AngularJS application.

PRIVATE CLOUD VS PUBLIC CLOUD


If you have been researching cloud computing, then you must be aware of the private vs. public cloud debate. Before you decide which side of the debate you are on, it is important to know the differences between the two technologies. Explore this article to understand the differences before you choose a path.

Private Cloud:

A private cloud is a distinct, secure cloud-based environment in which only a specified client or organization can operate. Like other cloud models, a private cloud provides computing power as a service within a virtualized environment, using an underlying pool of physical computing resources. However, the private cloud model is accessible only to a single organization, giving it superior control and privacy. A private cloud offers hosted services to a limited number of people behind a firewall, which minimizes security concerns for some organizations.

Private cloud computing, by definition, is a single-tenant environment where the hardware storage and network are dedicated to a single client or organization. The features and benefits of private cloud are therefore as follows:

Security and Privacy: Public cloud services can implement a certain level of security, but a private cloud uses distinct pools of resources, with access restricted to connections made from behind one organization's firewall, dedicated leased lines, or on-site internal hosting, ensuring that operations are kept away from prying eyes.

Control: Since a private cloud is accessible only to a single organization, that organization has the ability to configure and manage it in line with its needs to achieve a customized network solution.

Cost and Efficiency: Private clouds are not as cost-effective as public cloud services due to smaller economies of scale and increased management costs, but they make more efficient use of computing resources than traditional LANs, as they minimise investment in unused capacity.

Hybrid Deployments: If a dedicated server is required to run a high-speed database application, that hardware can be integrated into a private cloud, hybridizing the solution between virtual servers and dedicated servers. This cannot be achieved in a public cloud.

To reduce an organization’s on-premises IT footprint, cloud providers, such as Rackspace and VMware, can deploy private cloud infrastructures.

Public Cloud:

The most familiar model of cloud computing to many users is the public cloud model, under which cloud services are provided in a virtualized environment, constructed using pooled, shared physical resources, and accessible over a public network such as the internet.

Public clouds provide services and access to multiple users over the same shared infrastructure. Amazon (AWS), Microsoft (Azure), and VMware are some of the key players in this space. Public clouds are broadly used by individuals who are less likely to need the level of infrastructure and security offered by private clouds. However, users can still utilise public clouds to make their operations significantly more efficient. Even though it carries security risks, a public cloud is considered more useful than its counterparts for several reasons.

The following are the features offered by public cloud:

Cost Effective: The initial cost is minimal, but if data is stored for a very long period of time, it can prove expensive.

Reliability: A sheer number of servers and networks is involved in creating a public cloud. The major advantage of a public cloud is that if one physical component fails, the cloud still runs unaffected on the remaining components, so no single hardware failure makes a public cloud service vulnerable.

Flexibility: There are numerous services available on the market that follow the public cloud model and are ready to be accessed as a service from any internet-enabled device. These services can fulfil most computing requirements and can deliver their benefits to private and enterprise clients. Businesses can also integrate their public cloud services with private cloud services, where they need to perform sensitive business functions, to create hybrid clouds.

Location Independence: The availability of public cloud services through an internet connection ensures that the services are available wherever the client is located. This provides many opportunities to enterprises, such as remote access to IT infrastructure or online document collaboration from multiple locations.

The Debate: Despite the many differences between them, it is difficult to say which cloud service stands out; both have their advantages and disadvantages. Nevertheless, factors concerning security, access patterns, confidentiality, and the professional workforce in public and private cloud computing have yet to mature before the technology proves beneficial for both new and established businesses.

Cloud Bursting: Businesses may also use a combination of private and public cloud services in a hybrid cloud deployment. This allows users to scale computing requirements beyond the private cloud and into the public cloud, a capability called cloud bursting.

WHAT’S THE DIFFERENCE BETWEEN AWS BEANSTALK VS CLOUDFORMATION VS OPSWORKS?


AWS Elastic Beanstalk deploys and manages applications on the AWS cloud so that you do not have to worry about the environment needed to run your web application. There is no need to create and manage EC2 instances yourself for a single application.

AWS OpsWorks is an application management service that makes it easy for DevOps users to model and manage their applications. It is a tool on the AWS cloud, similar to Chef and Puppet, that can manage a number of servers.

AWS CloudFormation helps developers and system administrators by providing an easy way to create and manage a large number of AWS resources, provisioning and updating them in an orderly and predictable fashion. Using CloudFormation, you can build up a large-scale environment from a single template in a few hours.
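To make "a single template" concrete, here is a minimal, hypothetical CloudFormation template describing one EC2 instance. The ImageId is a placeholder AMI ID that would need to be replaced with a real one for your region, and the resource name "WebServer" is our own choice.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal sketch: a single EC2 instance",
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.micro",
        "ImageId": "ami-12345678"
      }
    }
  }
}
```

Feeding this template to CloudFormation (for example, as the template body of a create-stack call) would provision the instance, and deleting the stack would tear it down again in the same orderly fashion.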

WHAT’S THE DIFFERENCE BETWEEN HADOOP 1.X AND HADOOP 2.X?


HDFS federation brings important measures of scalability and reliability to Hadoop. YARN, the other major advance in Hadoop 2, brings significant performance improvements for some applications, supports additional processing models, and implements a more flexible execution engine.

YARN is a resource manager that was created by separating the processing engine and resource management capabilities of MapReduce as it was implemented in Hadoop 1. YARN is often called the operating system of Hadoop because it is responsible for managing and monitoring workloads, maintaining a multi-tenant environment, implementing security controls, and managing high availability features of Hadoop.

Like an operating system on a server, YARN is designed to allow multiple, diverse user applications to run on a multi-tenant platform. In Hadoop 1, users had the option of writing MapReduce programs in Java; in Python, Ruby, or other scripting languages using streaming; or in Pig, a data transformation language. Regardless of which method was used, all fundamentally relied on the MapReduce processing model to run.

IN AWS CLOUD, HOW TO LOGIN TO EC2 INSTANCE IF ONE LOSES .PPK FILE OR PASSWORD?


Create an AMI of the instance whose key was lost, and launch a new instance from that AMI. Create a new key pair for the new instance (download the .pem file and use PuTTYgen to create a new .ppk file). Start the new instance and confirm that the new key pair works, then delete the old instance and continue with the new one. Make sure the new instance is created in the same Availability Zone as the original one.