Parallel Programming with MPI

Parallel Programming with MPI is a free PDF ebook written by Jim Giuliani on April 13, 2005 and consisting of 148 pages. The PDF file is provided by www.osc.edu and has been available on pdfpedia since May 01, 2012.


Parallel Programming with MPI - page 1
Parallel Programming with MPI
Science & Technology Support
High Performance Computing
Ohio Supercomputer Center
1224 Kinnear Road
Columbus, OH 43212-1163
Parallel Programming with MPI - page 2
Table of Contents
• Setting the Stage
• Brief History of MPI
• MPI Program Structure
• Message Passing
• Point-to-Point Communications
• Non-Blocking Communications
• Derived Datatypes
• Collective Communication
• Virtual Topologies
Parallel Programming with MPI - page 3
Setting the Stage
• Overview of parallel computing
• Parallel architectures
• Parallel programming models
• Hardware
• Software
Parallel Programming with MPI - page 4
Overview of Parallel Computing
• Parallel computing is when a program uses concurrency to either
  – decrease the runtime needed to solve a problem
  – increase the size of problem that can be solved
• The direction in which high-performance computing is headed!
• Mainly this is a price/performance issue
  – Vector machines (e.g., Cray X1) very expensive to engineer and run
  – Commodity hardware/software - Clusters!
Parallel Programming with MPI - page 5
Writing a Parallel Application
• Decompose the problem into tasks (see the sketch below)
  – Ideally, these tasks can be worked on independently of the others
• Map tasks onto “threads of execution” (processors)
• Threads have shared and local data
  – Shared: used by more than one thread
  – Local: Private to each thread
• Write source code using some parallel programming environment
• Choices may depend on (among many things)
  – the hardware platform to be run on
  – the level of performance needed
  – the nature of the problem
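The decomposition idea above can be made concrete with a small sketch (not part of the original slides; the problem size N and the use of MPI_Reduce are illustrative assumptions). Each MPI process sums its own slice of an index range using only local data, and the independent partial results are then combined on one process:

    #include <stdio.h>
    #include <mpi.h>

    #define N 1000000                /* illustrative problem size */

    int main(int argc, char *argv[])
    {
        int rank, nprocs, i;
        double local_sum = 0.0;      /* local data: private to each process */
        double global_sum = 0.0;     /* combined result, collected on rank 0 */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Task decomposition: each process handles every nprocs-th index */
        for (i = rank; i < N; i += nprocs)
            local_sum += (double) i;

        /* Combine the independent partial sums into one result on process 0 */
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %f\n", global_sum);

        MPI_Finalize();
        return 0;
    }

Here the loop indices are the "tasks", the mapping of tasks onto processes is the cyclic distribution in the for loop, and the only sharing happens through the explicit MPI_Reduce call.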
Parallel Programming with MPI - page 6
Parallel Architectures
• Distributed memory (Pentium 4 and Itanium 2 clusters)
  – Each processor has local memory
  – Cannot directly access the memory of other processors
• Shared memory (Cray X1, SGI Altix, Sun COE)
  – Processors can directly reference memory attached to other processors
  – Shared memory may be physically distributed
    • The cost to access remote memory may be high!
  – Several processors may sit on one memory bus (SMP)
• Combinations are very common, e.g. Itanium 2 Cluster:
  – 258 compute nodes, each with 2 CPUs sharing 4GB of memory
  – High-speed Myrinet interconnect between nodes
Parallel Programming with MPI - page 7
Parallel Programming Models
• Distributed memory systems
  – For processors to share data, the programmer must explicitly arrange for communication - “Message Passing” (see the sketch below)
  – Message passing libraries:
    • MPI (“Message Passing Interface”)
    • PVM (“Parallel Virtual Machine”)
    • Shmem (Cray only)
• Shared memory systems
  – “Thread” based programming
  – Compiler directives (OpenMP; various proprietary systems)
  – Can also do explicit message passing, of course
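To show what "explicitly arrange for communication" looks like in practice, here is a minimal point-to-point sketch (an illustrative example, not taken from the course; the value sent and the tag 0 are arbitrary). Process 0 sends one integer to process 1, which posts a matching receive:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;          /* arbitrary data to communicate */
            /* explicit send: buffer, count, type, destination, tag, communicator */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* matching receive from rank 0 with the same tag */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Run with at least two processes (for example, mpiexec -n 2 ./a.out); nothing is shared implicitly, so every byte that moves between the two address spaces does so through the explicit send/receive pair.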
Parallel Programming with MPI - page 8
Parallel Computing: Hardware
• In very good shape!
• Processors are cheap and powerful
  – Intel, AMD, IBM PowerPC, …
  – Theoretical performance approaching 10 GFLOP/sec or more
• SMP nodes with 8-32 CPUs are common
• Clusters with tens or hundreds of nodes are common
• Affordable, high-performance interconnect technology is available - clusters!
• Systems with a few hundred processors and good inter-processor communication are not hard to build
Parallel Programming with MPI - page 9
Parallel Computing: Software
• Not as mature as the hardware
• The main obstacle to making use of all this power
  – Perceived difficulties with writing parallel codes outweigh the benefits
• Emergence of standards is helping enormously
  – MPI
  – OpenMP (see the example below)
• Programming in a shared memory environment generally easier
• Often better performance using message passing
  – Much like assembly language vs. C/Fortran
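For contrast with the message-passing sketches above, the following fragment (again illustrative, not from the slides; the problem size is an assumption) parallelizes the same kind of summation with a single OpenMP compiler directive. The serial loop body is untouched, which is one reason shared memory programming is often seen as easier, even though carefully written message passing code frequently performs better:

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000                /* illustrative problem size */

    int main(void)
    {
        double sum = 0.0;
        int i;

        /* One directive splits the loop across threads and combines the partial sums */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < N; i++)
            sum += (double) i;

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }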
Parallel Programming with MPI - page 10
Brief History of MPI
• What is MPI
• MPI Forum
• Goals and Scope of MPI
• MPI on OSC Parallel Platforms