
Advanced Distributed Memory Parallel Programming: MPI-2.2, MPI 3.0 and PGAS - 23-25 May 2012

 

The goal of this training workshop is to introduce performance-critical MPI-2.2 topics and to provide an overview of MPI 3.0, MPI for hybrid computing, and the Partitioned Global Address Space (PGAS) languages Coarray Fortran and Unified Parallel C (UPC). The lab sessions will target a Cray XK6, a massively parallel processing (MPP) platform with GPUs, and a QDR InfiniBand cluster with Intel processors and GPUs.

Attendees are encouraged to bring their own applications and codes for the hands-on sessions. Representatives from the MPI 3.0 Forum (http://meetings.mpi-forum.org/MPI_3.0_main_page.php) and from Cray PE will be present at the meeting for discussions and feedback. Invited talks will give presenters an opportunity to share their experiences and discuss issues in using MPI and PGAS on the CSCS systems.

Registration deadline: May 18, 2012.

Please contact sadaf.alam(at)cscs.ch for further technical information.

Instructors

Torsten Hoefler, UIUC; Roberto Ansaloni, Cray

Invited Speakers

Romain Teyssier, University of Zurich; Stefan Goedecker, University of Basel; Paolo Angelino, EPFL; Roger Käppeli, ETHZ; Will Sawyer, CSCS

Venue
CSCS, Via Trevano 131, Lugano (www.cscs.ch/about_us/visitor_information/index.html)
Time
Day 1: 9.30 - 17.00; Day 2: 9.00 - 17.00; Day 3: 9.00 - 15.00
Prerequisites
Participants are expected to bring a laptop for hands-on training

Maximum number of participants
28
Accommodation
Participants are kindly requested to make their own arrangements for accommodation

***

Tentative agenda: 

First Day (May 23, 2012)

09.30 Welcome

09.40 Introduction to Advanced MPI Usage

10.00 MPI data types (details and potential for productivity and performance with several examples)

10.30 Break

11.00 MPI data types (contd.)

11.30 Nonblocking and Collective communication (including nonblocking collectives, software pipelining, tradeoffs and parametrization)

12.15 Lunch

13.30 User talks and discussion

14.30 Lab (MPI data types, nonblocking and collective communication; see the sketch after this day's agenda)

15.00 Break

15.30 Lab (contd.)

17.00 Wrap up
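
To give a flavour of the Day 1 lab, the following is a minimal sketch (not official course material) that combines an MPI derived datatype with nonblocking point-to-point communication and an MPI 3.0 nonblocking collective; the matrix size, tag and ring exchange pattern are arbitrary choices for illustration:

    /* Send a strided matrix column with a derived datatype, then
     * overlap a global reduction using MPI 3.0's MPI_Iallreduce. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 4                      /* matrix is N x N, row-major */

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double a[N][N];
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                a[i][j] = rank * 100.0 + i * N + j;

        /* One column of a row-major N x N matrix: N blocks of one
         * double, spaced N doubles apart. */
        MPI_Datatype column;
        MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        /* Ring exchange of column 0 with nonblocking point-to-point
         * calls; the receiver stores it as N contiguous doubles. */
        double recv_col[N];
        int right = (rank + 1) % size;
        int left  = (rank + size - 1) % size;
        MPI_Request reqs[2];
        MPI_Irecv(recv_col, N, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&a[0][0], 1, column, right, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        /* MPI 3.0 nonblocking collective: start a global sum and
         * overlap it with independent work before waiting. */
        double local_sum = 0.0, global_sum = 0.0;
        for (int i = 0; i < N; ++i)
            local_sum += recv_col[i];
        MPI_Request req;
        MPI_Iallreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);
        /* ... independent computation could overlap here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        if (rank == 0)
            printf("global sum of exchanged columns: %f\n", global_sum);

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }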

 

Second Day (May 24, 2012)

09.00 Topology mapping and Neighborhood Collective Communication

09.45 One-sided communication (MPI-2 and MPI 3.0)

10.30 Break

11.00 One-sided communication (contd.)

11.30 MPI and hybrid programming primer (OpenMP, GPU, accelerators, MPI 3.0 proposals)

12.00 Lunch

13.30 User talks and discussion

14.30 Lab (Topology mapping, collective communication, one-sided communication; see the sketch after this day's agenda)

15.00 Break

15.30 Lab and feedback on MPI 3.0 proposal

17.00 Wrap up
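
Similarly, the Day 2 lab topics can be illustrated with a minimal sketch (again not official course material) that builds a periodic 1-D Cartesian topology and uses MPI-2 style one-sided communication with active-target (fence) synchronization; the single-integer window is an arbitrary illustrative choice:

    /* Each rank exposes one integer as an RMA window and puts its
     * rank id into its right neighbour's window inside a fence epoch. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int size;
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Periodic 1-D Cartesian communicator; let MPI reorder ranks
         * to map the ring onto the machine topology. */
        int dims[1] = { size }, periods[1] = { 1 };
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 1, &cart);

        int rank, left, right;
        MPI_Comm_rank(cart, &rank);
        MPI_Cart_shift(cart, 0, 1, &left, &right);

        /* Expose one integer per rank as the RMA window. */
        int recv_val = -1;
        MPI_Win win;
        MPI_Win_create(&recv_val, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, cart, &win);

        /* Active-target synchronization: fence, put, fence. */
        MPI_Win_fence(0, win);
        MPI_Put(&rank, 1, MPI_INT, right, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);

        printf("rank %d received %d from its left neighbour %d\n",
               rank, recv_val, left);

        MPI_Win_free(&win);
        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }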

 

Third Day (May 25, 2012)

09.00 CAF and UPC introduction (portability and performance)

10.00 Cray programming environment and PGAS compilers

10.30 Break

11.00 Cray performance tools for MPI and PGAS code development and tuning

11.30 User talk

12.00 Lunch

13.30 Lab

15.00 Wrap up


