PARALLEL VS CONCURRENT PROGRAMMING

PARALLEL PROGRAMMING

When we talk about parallel programming, we are typically interested in decreasing execution time by taking advantage of the hardware's ability to do more than one thing at a time, whether by vectorization, instruction-level parallelism, or multiprocessing. Parallel programming does not imply a particular programming model, and many techniques and languages for parallel programming are at various stages of development or adoption.

One promising model is data parallelism, in which uniform operations over aggregate data are sped up by partitioning the data and computing on the partitions simultaneously. For instance, High Performance Fortran offers parallel loops, where an operation such as adding two arrays element-wise can be sped up by vectorizing the loop or splitting it between several processing units that handle different portions of the arrays at the same time. Data Parallel Haskell, a research language, extends this model to parallel operations on ragged arrays of arrays. The best thing about data parallelism is that it is usually deterministic and implicit: you do not write a program that says "do these four things at the same time," but rather "do this to that whole array," and the compiler or run-time system works out how to partition the work. The less admirable thing is that data parallelism usually only applies when you have a large array that you want to treat more or less uniformly.
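
As a rough sketch of the same idea in Java (the class and variable names below are purely illustrative), element-wise array addition can be written as a parallel stream over the index range; the run-time system, not the programmer, decides how to split the work across cores:

    import java.util.stream.IntStream;

    public class DataParallelSum {
        public static void main(String[] args) {
            double[] a = new double[1_000_000];
            double[] b = new double[1_000_000];
            double[] c = new double[a.length];

            // Each index is independent, so the runtime is free to split the
            // index range across cores; the result is the same either way.
            IntStream.range(0, a.length)
                     .parallel()
                     .forEach(i -> c[i] = a[i] + b[i]);
        }
    }

Removing the .parallel() call gives the same result sequentially, which is the deterministic, implicit quality described above.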

Another approach to parallelism is futures, or fork/join parallelism, in which a computation whose value is not needed right away is started in parallel with the "main" computation; when the main computation needs the result of the side computation, it may have to wait until that computation has finished. In a language with futures, an expression e can instead be written (FUTURE e); it means the same thing, but may run in parallel until its value is actually needed. Another language that provides a similar model is Cilk, a C-like language with a parallel procedure-call mechanism that looks a lot like futures. Futures work well when a computation can be broken into pieces whose values depend on each other in well-defined ways but are not uniform enough for data parallelism to apply.
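
A minimal Java sketch of the same pattern, using the standard CompletableFuture class (the slowComputation helper is invented for illustration): the side computation is started early, and the main computation only waits when it finally needs the value:

    import java.util.concurrent.CompletableFuture;

    public class FutureExample {
        static long slowComputation() {
            long sum = 0;
            for (long i = 0; i < 50_000_000L; i++) sum += i;
            return sum;
        }

        public static void main(String[] args) {
            // Start the side computation; it may run on another core while
            // the "main" computation continues below.
            CompletableFuture<Long> future =
                CompletableFuture.supplyAsync(FutureExample::slowComputation);

            System.out.println("main computation keeps going...");

            // Only when the value is actually needed do we (possibly) wait.
            long result = future.join();
            System.out.println("result = " + result);
        }
    }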

CONCURRENT PROGRAMMING

In concurrent programming, a program is typically factored into multiple threads of control with clear responsibilities and purposes, which may run simultaneously or take turns. This is often about dealing with concurrency and nondeterminism in the world, or about reducing latency. For instance, a web browser needs to deal with events from the UI and events from the network while also performing computation such as rendering a page. While it is possible to write this as a single thread of control that checks for the various events and performs the other computation, it is often easier, conceptually, to think of it as several different processes cooperating on different aspects of the whole.
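
A minimal Java sketch of this factoring (the thread names and printed messages are invented for illustration): each responsibility gets its own thread of control, and the scheduler decides how they interleave:

    public class BrowserSketch {
        public static void main(String[] args) throws InterruptedException {
            // One thread of control per responsibility; the interleaving of
            // the two is decided by the scheduler, not by the programmer.
            Thread network = new Thread(() -> {
                System.out.println("network: waiting for data...");
                // ... read from a socket here ...
            }, "network");

            Thread renderer = new Thread(() -> {
                System.out.println("renderer: laying out the page...");
                // ... compute the page layout here ...
            }, "renderer");

            network.start();
            renderer.start();
            network.join();
            renderer.join();
        }
    }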

Most programming models for concurrency differ along two main axes: how processes are scheduled and how they cooperate. At one end of the scheduling axis is pre-emptive multithreading, in which several independent instruction streams either run simultaneously on different execution units or are given turns by a scheduler. This is the scheduling model used by Java and pthreads. At the other end are co-routines, which yield control to one another at well-defined points of the programmer's choosing; this is the main idea behind Ruby's fibers. Other scheduling models include event loops with callbacks, as used by some GUI libraries, and stream transducers in dataflow programming. One advantage of pre-emptive multithreading is that it can reduce latency and requires no programmer effort to decide when to transfer control, but a big disadvantage is that it makes coordination more difficult.
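
To illustrate the other end of the spectrum, here is a hedged Java sketch of an event loop with callbacks (the class and queue names are invented for the example): callbacks run to completion one at a time on a single thread of control, so no locking is needed, but a slow callback delays every other one, which is the latency cost that pre-emptive threads avoid:

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class EventLoop {
        public static void main(String[] args) {
            // A single thread of control; each "callback" runs to completion
            // before the next one starts.
            Queue<Runnable> events = new ArrayDeque<>();

            events.add(() -> System.out.println("handle mouse click"));
            events.add(() -> System.out.println("handle network packet"));
            events.add(() -> System.out.println("repaint window"));

            while (!events.isEmpty()) {
                events.poll().run();
            }
        }
    }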

The second axis is coordination: how multiple threads of control cooperate to get something done. Probably the most common model, and the hardest to use, is shared memory with explicit synchronization. This is the model used by pthreads and, to some extent, Java. In this model, mutable objects may be shared by multiple threads running at the same time, which is a common source of bugs. Synchronization mechanisms such as locks can prevent interference, but they introduce problems of their own. Java improves on this model somewhat with monitors, which make some of the synchronization less explicit, but it is still difficult to get right.
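
A small Java sketch of shared memory with explicit synchronization (the SharedCounter class is invented for illustration): two threads increment the same counter, and the synchronized keyword, Java's monitor mechanism, is what keeps the increments from interfering:

    public class SharedCounter {
        private long count = 0;

        // Without this lock, two threads can read the same value of count,
        // both add one, and one increment is lost (a classic data race).
        public synchronized void increment() {
            count++;
        }

        public synchronized long get() {
            return count;
        }

        public static void main(String[] args) throws InterruptedException {
            SharedCounter counter = new SharedCounter();
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) counter.increment();
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(counter.get()); // always 200000 with the lock
        }
    }

Deleting the synchronized keyword usually makes the final count come out short, which is exactly the kind of interference the paragraph above describes.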

Because of the problems with shared-memory concurrency, two other models for concurrent communication have been gaining traction: transactional memory and message passing. In a transactional-memory system, memory appears to be shared between threads, but it provides atomic transactions, in which a thread may make several accesses to shared mutable variables with a guarantee of isolation. Transactional memory has been implemented in software, originally for Concurrent Haskell and later in other languages such as Clojure. In message-passing concurrency, threads no longer share mutable variables but communicate by explicitly sending messages. The poster language for message-passing concurrency is Erlang; another language with a slightly different approach is Concurrent ML.
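
Java has no built-in transactional memory, but the message-passing style can be sketched with a standard BlockingQueue serving as a mailbox (the producer/consumer roles and the "stop" sentinel below are invented for illustration); the queue is the only thing the two threads share, and all other state stays private to its own thread:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class MessagePassing {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(16);

            Thread producer = new Thread(() -> {
                try {
                    mailbox.put("hello");
                    mailbox.put("world");
                    mailbox.put("stop");   // sentinel message ends the conversation
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    String msg;
                    while (!(msg = mailbox.take()).equals("stop")) {
                        System.out.println("received: " + msg);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }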

CONCLUSION

As this article has explained, most modern programming languages support both concurrency and parallelism, through multiple paradigms. For instance, C#'s native model for concurrency is shared memory with synchronization, but there are libraries for both message passing and software transactional memory. In some sense, shared memory is the least common denominator on which other approaches to concurrency and parallelism can be implemented. Many programs can benefit from both parallelism and concurrency. For example, a web browser may be able to speed up particular tasks, such as rendering, by parallelizing them, at the same time as it uses concurrency to cope with the concurrency in the world around it.
