Monthly Archives: April 2018


What is GraphQL


GraphQL is an application-layer query language that interprets a query string sent to a server, which then returns the required data in a well-defined format. Open-sourced by Facebook in 2015, GraphQL was developed internally during the shift from Facebook’s HTML5-powered mobile apps to native apps. Today it is the principal graph data query language driving the majority of interactions within the Facebook iOS and Android applications. The makers of GraphQL plan to open-source instructions on how to implement the server, with the goal of extending GraphQL’s capabilities to a wide range of back ends. This will let coders use the technology in projects of any size to access their own data more efficiently, and product developers will be able to structure servers and provide powerful frameworks, abstractions, and tools.


GraphQL is a different way of structuring the contract between client and server. The server publishes a set of rules specific to an application, and GraphQL supplies the language to query the developer’s data for the correct answer, within the constraints of those rules. This way product developers can express data requirements in a natural form that is both declarative and hierarchical. To understand how to use GraphQL, it is best to first look at what it is trying to achieve:

Give client-side developers a well-organized way to query data they want to retrieve.
Give server-side developers a well-organized way to get their data out to their users.
Give everyone an easy and efficient way of retrieving data.


The GraphQL query language sits between client applications and the actual data sources. It works independently of data sources and other platforms. A GraphQL server can be written in PHP, Node or any other platform of the user’s choice. Users should be able to connect to a GraphQL server from web apps, mobile apps or any other type of app, and then query and alter the data they are looking for. The major difference is that GraphQL does not come with a transport layer; that end of the operation is handled by a higher-level framework such as Relay. However, GraphQL has an excellent type system. The system is essentially constructed on top of a graph layer with a set of rules defined by you. Users may not feel the need to model their data as a graph, but on reflection, most data schemas can be structured in the form of a graph.
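To make the core idea concrete, here is a minimal, library-free Python sketch (not a real GraphQL implementation): the client names exactly the fields it wants, and the server resolves and returns only those fields. The product data and values here are made-up placeholders.

```python
# A toy illustration of GraphQL's central idea: the client specifies
# which fields it wants, and the server returns only those fields.
# This is NOT a real GraphQL implementation, just a sketch.

PRODUCTS = {
    "react-native-for-android": {
        "name": "React Native for Android",
        "link": "https://example.com",  # placeholder value
        "votes": 1234,                  # placeholder value
    }
}

def resolve(product_id, fields):
    """Return only the requested fields of a product."""
    record = PRODUCTS[product_id]
    return {field: record[field] for field in fields}

result = resolve("react-native-for-android", ["name", "votes"])
print(result)  # {'name': 'React Native for Android', 'votes': 1234}
```

Adding a field to the response is then just a matter of adding its name to the request, which mirrors how a GraphQL query grows.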

To visualize what we are talking about, let’s look at some code. For this example, we will create a query for Product Hunt to retrieve the data for a typical page:

{ product(id: "react-native-for-android") { name, link, votes, comments { text } } }

The result would be:
{
  "product": {
    "name": "React Native for Android",
    "link": "…",
    "votes": …,
    "comments": [
      { "text": "Huuuuge for cross-platform progress." },
      { "text": "Exciting stuff." }
    ]
  }
}

Now suppose the user is working with an upgraded version of Product Hunt and wants to display the name of the author next to each comment. The user then simply changes the query like this:

{ product(id: "react-native-for-android") { name, link, votes, comments { text, author { avatar } } } }


GraphQL is a very liberating platform. With GraphQL, product developers no longer need to cope with ad hoc endpoints or worry about accessing data with multiple round-trip object retrievals. Instead, the developer can easily create a sophisticated, declarative, hierarchical query dispatched to one single endpoint. All the experimentation and newly created views are built exclusively in the client development environment, and the hassle of moving unstructured data from ad hoc endpoints into business objects is eliminated. Instead, the user gets a powerful, intuitive type system that can be used as a tool-building platform. The developer has the freedom to focus on client software and requirements without having to leave the development environment, and as the system changes, can support shipped clients with confidence while staying well within the limits of mobile apps.


Flask Vs Django


Flask and Django are two of the most popular web frameworks for Python. In this article, we will be discussing some of the points you should consider while choosing between Flask and Django.


Django is a sophisticated framework aimed at rapid development and deployment of web apps written in Python. The framework is distributed as open source. The framework itself is actually a code library which helps developers build reliable, scalable, and maintainable web apps. Django is one of the most popular of the wide variety of frameworks available to Python developers. There is one limitation though: some things are intended to be done in one and only one way.

You can replace certain modules, yet some core functionality should remain untouched. This is totally fine for 95% of projects, and it saves a ton of time, money, and effort during development, as users have all the solutions they need straight out of the box.


Flask is another widely used web framework. Unlike Django, which focuses on providing an out-of-the-box product with complete solutions for each task, Flask works more like a LEGO set, where the user can construct anything they wish using an enormous set of external libraries and add-ons. The Flask philosophy is “web development, one drop at a time”. Experienced Python developers say that Flask lets you add new modules when the time comes, instead of overwhelming you with details from the very beginning.


Some advantages of Django:

  • Object-Relational Mapping (ORM) allows working with several types of databases such as SQLite, PostgreSQL, Oracle and MySQL.
  • Celery allows doing asynchronous tasks and replacing unix crontab for cron jobs.
  • The user can use Gunicorn instead of Apache; it’s easy and fun (if the user has no trouble using NGINX).
  • If the developers are skilled enough, they can use MongoDB as a primary database; this solves quite a lot of problems later on.
  • Using named URLs, the reverse function, and the URL template tag allows creating a logically structured system, where one URL update will not cause confusion.
  • Using supervisor for process monitoring allows restarting the framework processes automatically; it is a true rescuer during the development stage.
  • Redis is a valued in-memory data structure store, which can be used for queuing Celery jobs, as a cache, as a session store, and even for auto-completion and much more.
  • Munin and StatsD are another great pair of tools, allowing control and monitoring of the user’s Django app processes.
  • As we can see from the list of websites using this framework, it is designed for creating apps with high scalability: websites that grow from thousands to millions of visitors quickly. The framework works straight out of the box and provides all the major functionality needed to build an app with Python.


Some advantages of Flask:

  • Flask is all about simplicity and ease. There are no limitations, and users can implement anything they want.
  • There is no built-in database access layer or ORM; libraries like SQLAlchemy or pure SQL queries do this job without any restrictions.
  • Routing with decorators is really simple; app structure is also totally adjustable.
  • Blueprints are like modules for the application. The user can have lots of them, each suited to a particular task, and assemble the app like a LEGO toy from the blueprints best suited to each task. Extensions are incredibly helpful and integrate into the framework easily.
  • Web server and debugging tools: Flask comes with a built-in web server and multiple debugging tools, including an in-browser debugger, so the user does not even need NGINX or Apache to test and debug the app.
  • Flask appeared as an alternative to Django, as developers wanted a micro-framework that would let them swap components, and earlier frameworks did not allow altering their modules to that extent. Flask is simple and straightforward, so working in it lets an experienced Python developer create projects within short timeframes.
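The decorator-based routing mentioned above can be illustrated with a pure-Python sketch. This is a toy route registry in the spirit of Flask’s @app.route, not Flask’s actual implementation:

```python
# A toy route registry showing how decorator-based routing works:
# the decorator records each handler under its URL path, and a
# dispatcher later looks the handler up by path.

class TinyApp:
    def __init__(self):
        self.routes = {}

    def route(self, path):
        def decorator(func):
            self.routes[path] = func  # register handler for this path
            return func
        return decorator

    def dispatch(self, path):
        handler = self.routes.get(path)
        if handler is None:
            return "404 Not Found"
        return handler()

app = TinyApp()

@app.route("/")
def index():
    return "Hello, world!"

print(app.dispatch("/"))         # Hello, world!
print(app.dispatch("/missing"))  # 404 Not Found
```

A real framework adds request parsing, URL parameters, and an HTTP server around this same registration idea.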


Flask works like a sandbox for developers, where they can improve their skills and quickly test solutions using different modules and libraries. We prefer using it for testing and working on less-structured objects, while using Django to deliver a solid product, meeting and exceeding customers’ expectations.


The Decentralized Web


The Decentralized Web pictures a future world where services such as communication, currency, publishing, social networking, search, archiving and much more are provided not by centralized services owned by single organizations, but by technologies which are powered by the people.

The main idea of decentralization is that the operation of a service is not blindly trusted to any single omnipotent company. Instead, responsibility for the service is shared by running it across multiple federated servers, or possibly across client-side apps in an entirely “distributed” peer-to-peer model.

The rules that describe the decentralized service’s behaviour are designed to force participants to act fairly in order to participate at all, relying heavily on cryptographic techniques such as Merkle trees and digital signatures to allow participants to hold each other accountable.
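The Merkle trees mentioned above can be sketched in a few lines of Python using the standard library. This is a minimal illustration of computing a Merkle root, the single hash that lets participants verify a whole data set without trusting a central party:

```python
import hashlib

# Minimal Merkle-root computation: hash every leaf, then repeatedly
# hash adjacent pairs together until a single root hash remains.
# Changing any leaf changes the root, so tampering is detectable.

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"tx1", b"tx2", b"tx3"])
print(root.hex())
```

Bitcoin and Git both rely on this kind of hash tree, which is why a single changed byte anywhere in the history is immediately visible to every participant.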

There are three major areas where the Decentralized Web wins: privacy, data portability, and security.


Decentralization forces an improved focus on data privacy. Data is distributed across the network, and end-to-end encryption technologies are critical for ensuring that only approved users can read and write. Access to the data is controlled entirely algorithmically by the network, as opposed to more centralized networks, where the owner of the network typically has full access to the data, simplifying customer profiling and ad targeting.




In a decentralized environment, customers own their data and choose with whom they share it. They retain control of it when they leave a given service provider. For instance, if a user wants to move from General Motors to BMW today, why should they not be able to take their driving records with them? The same applies to chat history or health records.


We are living in a world of increasing security threats. In a centralized environment, the bigger the storage tower, the bigger the honeypot attracting bad actors. Decentralized environments are inherently safer against being hacked, infiltrated, acquired, bankrupted or otherwise compromised, as they have been built to exist under public scrutiny from the outset.

Just as the internet itself once performed a grand re-levelling, taking disconnected local area networks and providing a new neutral common ground that linked them all, we now see the same pattern happening again as technology begins to provide a new neutral common ground for higher-level services. And much like Web 2.0, the first wave of this Web 3.0 movement has walked among us for several years already.

Git has succeeded as a fully decentralized version control system, almost completely replacing centralised systems like Subversion. Bitcoin demonstrates how a currency can exist without any central authority, in contrast with a centralised incumbent such as PayPal.

StatusNet provides a decentralized alternative to Twitter. XMPP was built to deliver a decentralized alternative to the messaging silos of AOL Instant Messenger, ICQ, MSN, and much more.

It is hard to forecast which final direction Web 3.0 will take us in. By unlocking the web from the hands of a few players, it will inevitably enable a surge in innovation and let services prioritise users’ interests.

As the Decentralized Web attracts the interest and desire of the mainstream developer community, we cannot predict what new economies will arise and what kinds of new technologies and services they will invent.


API Testing


An application programming interface, or API, links an application to the web and to other APIs. In order to discuss API and web services testing, we first need to understand what an API is and how it works. This article explores exactly that.

An application is made of three vital parts that ideally should be able to work and communicate in a segmented way, so one could be swapped out for another:

Data Tier: Where data is retrieved from, and stored in, the database and file system.

Logic Tier: Processes the data between the layers, coordinating the application, processing commands, and making logical decisions. This layer is made up of the API.

Presentation Tier: This top layer of the app is the user interface, which converts tasks into something the user understands.

APIs allow organizations to become more agile, for things to go mobile, and everything to work together in a streamlined, integrated way.

Therefore, API testing verifies that APIs and the integrations they enable work as intended. This form of testing focuses on using software to make API calls, receive an output, and observe and log the system’s response. It checks that the API returns a correct response or output under varying conditions.

However, there could also be no output at all, or something completely unexpected may occur. This makes the tester’s role crucial to the application development process. As APIs are the central hub of data for several applications, data-driven testing for APIs can help increase test coverage and accuracy.

In testing the API directly, specifying pass/fail conditions is slightly more challenging. However, comparing the API data in the response, or comparing the behaviour after the API call with that of another API, helps the tester set up solid validation scenarios.


For all forms of software, it is essential to catch bugs and discrepancies both when releasing a product and while it continues to operate out in the wild. It is very clear that the risk of putting an insecure product on the market is greater than the cost of testing it.

There are also common security attacks that an API could be vulnerable to, which security testing should cover.

The API is what gives value to the application. It is what makes our phones “smart” and streamlines business processes. If an API does not work well, it will never be adopted, regardless of whether it is a free and open API or one that charges per call or group of calls. And if an API breaks because errors were not spotted, it will break not just a single application but a whole chain of business processes tied to it.

What You Need to Know to Start API Testing

The first part of API testing involves setting up a testing environment with the required set of parameters around the API. This includes configuring the database and server for the application’s requirements. Once the API testing environment is set up, make an API call right away to make sure nothing is broken before starting more thorough testing.

The user can also start combining application data with the API tests to ensure that the API performs as expected against known input configurations.
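The data-driven approach described above can be sketched in plain Python. The endpoint, user IDs, and responses here are made up for illustration, and the fake_get_user function stands in for a real HTTP call to an API:

```python
# A sketch of data-driven API testing: the same check runs over a
# table of known inputs and expected outputs, so adding coverage is
# just adding a row, not writing a new test.

def fake_get_user(user_id):
    """Stand-in for a real API call; returns a status and body."""
    users = {1: {"id": 1, "name": "Alice"}, 2: {"id": 2, "name": "Bob"}}
    if user_id in users:
        return {"status": 200, "body": users[user_id]}
    return {"status": 404, "body": {"error": "not found"}}

# Each case pairs an input with the expected status and body.
CASES = [
    (1, 200, {"id": 1, "name": "Alice"}),
    (2, 200, {"id": 2, "name": "Bob"}),
    (99, 404, {"error": "not found"}),
]

def run_cases():
    for user_id, expected_status, expected_body in CASES:
        response = fake_get_user(user_id)
        assert response["status"] == expected_status, user_id
        assert response["body"] == expected_body, user_id
    return len(CASES)

print(run_cases(), "cases passed")
```

In a real suite the fake function would be replaced by an HTTP client call, and the case table would come from application data or fixtures.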


Relational Database Vs Non-Relational Database

Relational Database

Over the past few years, NoSQL or non-relational database tools have gained much popularity for storing vast amounts of data and scaling it easily. There are debates on whether non-relational databases will replace relational databases in the future. With the growing amount of social data and other unstructured data, the following are some of the questions raised about relational databases.

Are relational databases capable of handling big data?
Are relational databases able to scale out to enormous amounts of data?
Are relational databases suited to modern data?

Well, before answering those questions, let us dive in and cover some basics of both relational and non-relational databases.


The theory of relational databases was developed in the 1970s. The most important feature of all relational databases is their support for the ACID (Atomicity, Consistency, Isolation, and Durability) properties, which promise that all transactions are reliably processed.

Atomicity: Each transaction is all-or-nothing; if one logical part of a transaction fails, everything is rolled back so that the data is unchanged.

Consistency: All data written to the database is subject to the rules defined.

Isolation: Changes made in a transaction are not visible to other transactions until they are committed.

Durability: Changes committed in a transaction are stored and available in the database even if there is power failure or the database goes offline suddenly.

The objects in relational databases are strictly structured. The data in a table is stored as rows and columns, and each column has a datatype. The Structured Query Language (SQL) is used with relational databases to store and retrieve data in a structured way. There is always a fixed number of columns, although additional columns can be added later. Most tables are related to each other through primary and foreign keys, providing “referential integrity” among the objects. The key vendors are Oracle, SQL Server, MySQL, PostgreSQL, and more.
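A short example using Python’s standard-library sqlite3 module makes these points concrete: fixed columns with datatypes, a primary key, and a foreign key giving referential integrity between two tables. The table and column names are invented for illustration:

```python
import sqlite3

# Minimal relational example: two tables related by a foreign key,
# queried with a JOIN that follows that key.

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.execute("""CREATE TABLE authors (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL)""")
conn.execute("""CREATE TABLE books (
    id        INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,
    author_id INTEGER NOT NULL REFERENCES authors(id))""")

conn.execute("INSERT INTO authors (id, name) VALUES (1, 'Ada')")
conn.execute("INSERT INTO books (id, title, author_id) VALUES (1, 'SQL 101', 1)")

# The JOIN follows the foreign key back to the related author row.
row = conn.execute("""SELECT b.title, a.name
                      FROM books b
                      JOIN authors a ON b.author_id = a.id""").fetchone()
print(row)  # ('SQL 101', 'Ada')
```

With foreign keys enforced, inserting a book that points at a nonexistent author is rejected, which is exactly the referential integrity guarantee discussed above.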


The idea of non-relational databases emerged to handle the rapid growth of unstructured data and scale it out effortlessly. They offer a flexible schema, so there is no such thing as the “referential integrity” we saw in relational databases. The data is highly de-normalised and does not require JOINs between objects. This relaxes the ACID properties of relational databases in line with the trade-offs described by CAP (Consistency, Availability and Partition tolerance); instead of ACID, such databases typically support BASE (Basically Available, Soft state, Eventual consistency). Early databases built on these concepts include BigTable by Google, HBase, and Cassandra, which originated at Facebook.

Categories of non-relational databases: Non-relational databases can be categorized into four major types: key-value databases, column databases, document databases, and graph databases.

Key-value database: This is the simplest form of NoSQL database, where each value is associated with a unique key.

Column database: This database is capable of storing and processing large amounts of data using a pointer that points to many columns distributed over a cluster.

Document database: This database may contain many key-value documents with many nested levels, and efficient querying is possible. The documents are typically stored in JSON format.

Graph database: Instead of traditional rows and columns, this database uses nodes and edges to represent graph structures and store data.
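The key-value and document categories above can be illustrated together with a toy Python store, where each value is a JSON document and, unlike the relational example, documents under different keys can have completely different shapes. The keys and fields here are invented for illustration:

```python
import json

# A toy key-value store whose values are JSON documents.
# There is no fixed schema: each document can carry different,
# arbitrarily nested fields.

store = {}

def put(key, document):
    store[key] = json.dumps(document)   # serialize the document to JSON

def get(key):
    return json.loads(store[key])       # deserialize on read

put("user:1", {"name": "Alice", "tags": ["admin"]})
put("user:2", {"name": "Bob", "address": {"city": "Pune"}})  # different shape

print(get("user:1")["name"])              # Alice
print(get("user:2")["address"]["city"])   # Pune
```

Real systems in these categories add persistence, distribution, and indexing, but the flexible-schema idea is the same.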




Why is char[] preferred over String for passwords?

Strings are immutable. That means once you have created the String, if another process can dump memory, there is no way (aside from reflection) you can get rid of the data before garbage collection kicks in.

With an array, you can explicitly wipe the data after you are done with it. You can overwrite the array with anything you like, and the password won’t be present anywhere in the system, even before garbage collection.

So yes, this is a security concern – but even using char[] only reduces the window of opportunity for an attacker, and it’s only for this specific type of attack.

As noted in comments, it’s possible that arrays being moved by the garbage collector will leave stray copies of the data in memory. I believe this is implementation-specific – the garbage collector may clear all memory as it goes, to avoid this sort of thing. Even if it does, there is still time during which the char[] contains the actual characters as an attack window.
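The same reasoning applies beyond Java. In Python, for example, str is immutable while a bytearray can be overwritten in place. The sketch below shows the idea (it narrows the exposure window; it is not a complete security measure):

```python
# Analogous idea in Python: str is immutable, but a bytearray can be
# explicitly wiped once the secret is no longer needed, shrinking the
# window during which it sits readable in memory.

password = bytearray(b"s3cret")

# ... use the password here ...

# Explicitly wipe it: overwrite every byte with zeros.
for i in range(len(password)):
    password[i] = 0

print(password)  # bytearray(b'\x00\x00\x00\x00\x00\x00')
```

As with char[] in Java, copies made by the interpreter or garbage collector may still linger, so this reduces rather than eliminates the exposure.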


How do I find all files containing specific text on Linux?

grep -rnw '/path/to/somewhere/' -e 'pattern'
-r or -R is recursive,
-n prints the line number, and
-w matches the whole word.
-l (lower-case L) can be added to print just the names of matching files.

Along with these, the --exclude, --include, and --exclude-dir flags can be used for efficient searching:
This will only search through files that have .c or .h extensions:
grep --include=\*.{c,h} -rnw '/path/to/somewhere/' -e "pattern"
This will exclude searching all files ending with the .o extension:
grep --exclude=\*.o -rnw '/path/to/somewhere/' -e "pattern"
For directories, it’s possible to exclude one or more directories through the --exclude-dir parameter.
For example, this will exclude the dirs dir1/, dir2/ and all of those matching *.dst/:
grep --exclude-dir={dir1,dir2,*.dst} -rnw '/path/to/somewhere/' -e "pattern"


How to Copy files from host to Docker container?

The cp command can be used to copy files. One specific file can be copied like:
docker cp foo.txt mycontainer:/foo.txt
docker cp mycontainer:/foo.txt foo.txt

Multiple files contained by the folder src can be copied into the target folder using:
docker cp src/. mycontainer:/target
docker cp mycontainer:/src/. target

In order to copy a file from a container to the host, you can use the command:
docker cp <containerId>:/file/path/within/container /host/path/target


How to merge two dictionaries in a single expression?

For dictionaries x and y, z becomes a merged dictionary with values from y replacing those from x.
In Python 3.5 or greater:
z = {**x, **y}
In Python 2 (or 3.4 and lower), write a function:
def merge_two_dicts(x, y):
    z = x.copy()    # start with x's keys and values
    z.update(y)     # modify z with y's keys and values (update() returns None)
    return z

z = merge_two_dicts(x, y)