{"id":1754,"date":"2023-01-17T08:49:40","date_gmt":"2023-01-17T08:49:40","guid":{"rendered":"https:\/\/blog.amt.in\/?p=1754"},"modified":"2023-01-17T08:49:40","modified_gmt":"2023-01-17T08:49:40","slug":"insights-on-parallel-programming-model","status":"publish","type":"post","link":"https:\/\/blog.amt.in\/index.php\/2023\/01\/17\/insights-on-parallel-programming-model\/","title":{"rendered":"Insights on Parallel Programming Model"},"content":{"rendered":"<p>In\u00c2\u00a0computing, a\u00c2\u00a0parallel programming model\u00c2\u00a0is an\u00c2\u00a0abstraction\u00c2\u00a0of\u00c2\u00a0parallel computer\u00c2\u00a0architecture, with which it is convenient to express\u00c2\u00a0algorithms\u00c2\u00a0and their composition in\u00c2\u00a0programs. The value of a programming model can be judged on its\u00c2\u00a0generality: how well a range of different problems can be expressed for a variety of different architectures, and its\u00c2\u00a0performance: how efficiently the compiled programs can execute.\u00c2\u00a0The implementation of a parallel programming model can take the form of a\u00c2\u00a0library\u00c2\u00a0invoked from a\u00c2\u00a0sequential language, as an extension to an existing language, or as an entirely new language.<\/p>\n<p>Consensus around a particular programming model is important because it leads to different parallel computers being built with support for the model, thereby facilitating\u00c2\u00a0portability\u00c2\u00a0of software. In this sense, programming models are referred to as\u00c2\u00a0bridging\u00c2\u00a0between hardware and software.<\/p>\n<p>Classifications of parallel programming models can be divided broadly into two areas: process interaction and problem decomposition.<\/p>\n<h4><span id=\"Process_interaction\" class=\"mw-headline\">Process interaction:<\/span><\/h4>\n<p>Process interaction relates to the mechanisms by which parallel processes are able to communicate with each other. 
The most common forms of interaction are shared memory and message passing, but interaction can also be implicit (invisible to the programmer).<\/p>\n<h4><span id=\"Shared_memory\" class=\"mw-headline\">Shared memory:<\/span><\/h4>\n<p>Shared memory is an efficient means of passing data between processes. In a shared-memory model, parallel processes share a global address space that they read and write to asynchronously. Asynchronous concurrent access can lead to race conditions, and mechanisms such as locks, semaphores and monitors can be used to avoid these. Conventional multi-core processors directly support shared memory, which many parallel programming languages and libraries, such as Cilk, OpenMP and Threading Building Blocks, are designed to exploit.<\/p>\n<h4><span id=\"Message_passing\" class=\"mw-headline\">Message passing:<\/span><\/h4>\n<p>In a message-passing model, parallel processes exchange data by passing messages to one another. These communications can be asynchronous, where a message can be sent before the receiver is ready, or synchronous, where the receiver must be ready. The Communicating sequential processes (CSP) formalisation of message passing uses synchronous communication channels to connect processes, and led to important languages such as Occam, Limbo and Go. In contrast, the actor model uses asynchronous message passing and has been employed in the design of languages such as D, Scala and SALSA.<\/p>\n<h4><span id=\"Implicit_interaction\" class=\"mw-headline\">Implicit interaction:<\/span><\/h4>\n<p>In an implicit model, no process interaction is visible to the programmer and instead the compiler and\/or runtime is responsible for performing it. 
Two examples of implicit parallelism are domain-specific languages, where the concurrency within high-level operations is prescribed, and functional programming languages, where the absence of side-effects allows non-dependent functions to be executed in parallel. However, this kind of parallelism is difficult to manage, and functional languages such as Concurrent Haskell and Concurrent ML provide features to manage parallelism explicitly.<\/p>\n<h4><span id=\"Problem_decomposition\" class=\"mw-headline\">Problem decomposition:<\/span><\/h4>\n<p>A parallel program is composed of simultaneously executing processes. Problem decomposition relates to the way in which the constituent processes are formulated.<\/p>\n<h4><span id=\"Task_parallelism\" class=\"mw-headline\">Task parallelism:<\/span><\/h4>\n<p>A task-parallel model focuses on processes, or threads of execution. These processes will often be behaviourally distinct, which emphasises the need for communication. Task parallelism is a natural way to express message-passing communication. In Flynn&#8217;s taxonomy, task parallelism is usually classified as MIMD\/MPMD or MISD.<\/p>\n<h4><span id=\"Data_parallelism\" class=\"mw-headline\">Data parallelism:<\/span><\/h4>\n<p>A data-parallel model focuses on performing operations on a data set, typically a regularly structured array. A set of tasks will operate on this data, but independently on disjoint partitions. 
In Flynn&#8217;s taxonomy, data parallelism is usually classified as MIMD\/SPMD or SIMD.<\/p>\n<h4><span id=\"Implicit_parallelism\" class=\"mw-headline\">Implicit parallelism:<\/span><\/h4>\n<p>As with implicit process interaction, an implicit model of parallelism reveals nothing to the programmer; the compiler, the runtime or the hardware is responsible instead. For example, in compilers, automatic parallelization is the process of converting sequential code into parallel code, and in computer architecture, superscalar execution is a mechanism whereby instruction-level parallelism is exploited to perform operations in parallel.<\/p>\n<p>Parallel programming models are closely related to models of computation. A model of parallel computation is an abstraction used to analyze the cost of computational processes, but it does not necessarily need to be practical, in that it need not be implementable efficiently in hardware and\/or software. A programming model, in contrast, does specifically imply the practical considerations of hardware and software implementation.<\/p>\n<p>A parallel programming language may be based on one or a combination of programming models. For example, High Performance Fortran is based on shared-memory interactions and data-parallel problem decomposition, and Go provides mechanisms for both shared-memory and message-passing interaction.<\/p>\n<p>The above is a brief overview of parallel programming models. 
Watch this space for more updates on the latest trends in technology.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In computing, a parallel programming model is an abstraction of parallel<\/p>\n","protected":false},"author":1,"featured_media":1756,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[687,686,7],"tags":[688,689,939],"class_list":["post-1754","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-computer-architecture","category-parallel-programming-model","category-techtrends","tag-computer-architecture","tag-parallel-programming-model","tag-technoology"],"_links":{"self":[{"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/posts\/1754","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/comments?post=1754"}],"version-history":[{"count":1,"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/posts\/1754\/revisions"}],"predecessor-version":[{"id":1755,"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/posts\/1754\/revisions\/1755"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/media\/1756"}],"wp:attachment":[{"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/media?parent=1754"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/categories?post=1754"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.amt.in\/index.php\/wp-json\/wp\/v2\/tags?post=1754"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}