Advanced Distributed Memory Parallel Programming: MPI-2.2, MPI 3.0 and PGAS Tutorial at CSCS

Host

CSCS (Swiss National Supercomputing Centre)

Overview

The goal of this training workshop is to introduce performance-critical MPI-2.2 topics and to provide an overview of MPI 3.0, MPI for hybrid computing, and the Partitioned Global Address Space (PGAS) languages Coarray Fortran and Unified Parallel C (UPC). The lab sessions target two systems: a Cray XK6, a massively parallel processing (MPP) platform with GPUs, and a QDR InfiniBand cluster with Intel processors and GPUs.

The advanced MPI part (the first two days, with hands-on sessions) is presented by Torsten Hoefler (UIUC/ETH), and the PGAS part (the last day, with hands-on sessions) is presented by Roberto Ansaloni (Cray). This page focuses on the MPI part.

Agenda

First Day (May 23, 2012)

Time   Section Title
09.30  Welcome
09.40  Introduction to Advanced MPI Usage
10.00  MPI data types (details and potential for productivity and performance, with several examples)
10.30  Break
11.00  Contd. MPI data types (details and potential for productivity and performance, with several examples)
11.30  Nonblocking and collective communication (including nonblocking collectives, software pipelining, tradeoffs, and parametrization)
12.15  Lunch
13.30  User talks and discussion
14.30  Lab (MPI data types, nonblocking and collective communication; see the sketch below)
15.00  Break
15.30  Contd. Lab
17.00  Wrap up
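
As a warm-up for the first-day lab topics, here is a minimal sketch (not taken from the tutorial slides) that combines both: a strided column of a row-major 2D array is described with a derived datatype (MPI_Type_vector) and broadcast with the MPI 3.0 nonblocking collective MPI_Ibcast, leaving room to overlap the transfer with independent local work.

    /* Minimal illustrative sketch (not from the tutorial materials):
     * derived datatype + nonblocking collective (MPI 3.0). */
    #include <mpi.h>
    #include <stdio.h>

    #define ROWS 4
    #define COLS 8

    int main(int argc, char **argv) {
        int rank;
        double grid[ROWS][COLS];   /* row-major 2D array */
        MPI_Datatype column;       /* one column: ROWS doubles, COLS apart */
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < ROWS; i++)
            for (int j = 0; j < COLS; j++)
                grid[i][j] = (rank == 0) ? i * COLS + j : -1.0;

        /* ROWS blocks of 1 double each, a stride of COLS doubles apart */
        MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        /* Nonblocking broadcast of column 0 from rank 0 */
        MPI_Ibcast(&grid[0][0], 1, column, 0, MPI_COMM_WORLD, &req);

        /* ... independent computation could overlap with the transfer ... */

        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank %d: grid[2][0] = %.1f\n", rank, grid[2][0]);

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }

Built with, e.g., mpicc and run on a few ranks, every rank should print 16.0 for grid[2][0] after the wait, since column 0 of rank 0's array has been scattered into the strided locations on all other ranks.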

Second Day (May 24, 2012)

Time   Section Title
09.00  Topology mapping and neighborhood collective communication
09.45  One-sided communication (MPI-2 and MPI 3.0)
10.30  Break
11.00  Contd. one-sided communication (MPI-2 and MPI 3.0)
11.30  MPI and hybrid programming primer (OpenMP, GPUs, accelerators, MPI 3.0 proposals)
12.00  Lunch
13.30  User talks and discussion
14.30  Lab (topology mapping, collective communication, one-sided communication; see the sketch below)
15.00  Break
15.30  Lab and feedback on MPI 3.0 proposals
17.00  Wrap up
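
For the second-day lab, here is a similarly minimal sketch (again not part of the tutorial materials) of fence-synchronized one-sided communication as defined in MPI-2: each rank exposes one integer in a window and puts its own rank into the window of its right neighbor.

    /* Minimal illustrative sketch (not from the tutorial materials):
     * fence-synchronized one-sided communication (MPI-2). MPI 3.0 adds
     * MPI_Win_allocate and further synchronization options. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size, buf, recv = -1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Expose 'recv' as the local window memory */
        MPI_Win_create(&recv, sizeof(int), sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);

        buf = rank;
        int right = (rank + 1) % size;

        MPI_Win_fence(0, win);    /* open access/exposure epoch */
        MPI_Put(&buf, 1, MPI_INT, right, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);    /* close epoch; data is now visible */

        printf("rank %d received %d from its left neighbor\n", rank, recv);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Fence is the simplest of the MPI RMA synchronization modes; post/start/complete/wait and passive-target lock/unlock offer finer-grained alternatives.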

Slides

The full slides are available for download (size: 11,537.58 KB).

Slidecast

To be added soon.
