
CSCS-USI Summer School 2014: 30 June - 10 July 2014

[Image: View from the conference hotel]

CSCS and USI are organizing a 10-day Summer School on parallel programming with MPI, OpenMP, CUDA and OpenACC. It is aimed at graduate students who are new to high-performance computing and hybrid systems and who wish to learn the basic skills required to write, develop and maintain parallel applications in scientific computing. The purpose of the Summer School is to teach programming skills, so a large proportion of the course is dedicated to practical exercises.

The Summer School 2014 will take place at the Hotel Serpiano in Tessin, Switzerland.

Potential participants are advised to apply by filling in the application form and sending it in PDF format to themis.athanassiadou(at)cscs.ch.

Application deadline: April 30th, 2014

Acceptance notification: May 9th, 2014

Registration will open on May 10th, 2014.

Registration fees (including the conference fee and accommodation):

  • CHF 1000.00 for graduate students
  • CHF 2000.00 for participants from industry 

Tentative Agenda 

Please note that the program is still under revision and subject to changes and additions.

Week 1 (5 days)

General:
- Parallel Architectures and Programming Models
- HPC Computing Keynote (CSCS)

MPI - 1:
- MPI Overview
- MPI Process Model
- Point-to-Point Communication
- Non-Blocking Communication
- Derived Datatypes
- Virtual Topologies
- Collective Communication

OpenMP:
- Overview and execution model
- Variables and scope
- Work-sharing directives
- Reductions
- Race conditions
- Pitfalls and caveats

MPI - 2, -3:
- One-sided communication
- MPI I/O

Week 2 (4 days)

General:
- Overview of GPU architecture
- Overview of GPU programming languages
- Keynote

CUDA:
- Program structure and kernels
- Memory and data transfer
- Threads and warps
- Tiling
- Performance considerations
- MPI + CUDA

OpenACC:
- Overview and execution model
- Directives
- Reveal tutorial
- Clauses
- Gangs and workers
- CUDA interoperability
- Performance considerations

OpenCL:
- Execution model
- Comparison with CUDA and OpenACC
