CODE 90535
ACADEMIC YEAR 2025/2026
CREDITS
SCIENTIFIC DISCIPLINARY SECTOR INF/01
LANGUAGE English
TEACHING LOCATION
  • GENOVA
SEMESTER 1° Semester
TEACHING MATERIALS AULAWEB

OVERVIEW

Parallel programming was once a niche field reserved for government labs, research universities, and a few forward-looking industries, but today it is a requirement for most applications.

Until about 2006, CPU designers achieved performance gains by improving clock speed, execution optimization, and cache size. Today, however, the performance improvement in new chips is fueled by hyper-threading, multiple cores, and larger caches. Hyper-threading and multicore CPUs have almost no positive impact on most current software, because it has been designed in a sequential fashion.

Therefore, the free performance lunch is over. Now is the time to analyse applications to identify the CPU-intensive operations that could benefit from parallel computing.

The aim of this course is to provide an introduction to the architecture of parallel processing systems, along with the programming paradigms (OpenMP, MPI and CUDA) essential for exploiting them.

 

AIMS AND CONTENT

LEARNING OUTCOMES

Learning the main aspects of modern, heterogeneous high-performance computing systems (e.g. pipeline/superscalar processors, accelerators such as GPUs, shared-memory systems, clusters, supercomputers) and basic programming skills for high-performance computing, i.e. the proper use of the cache and vectorization, OpenMP, MPI, and CUDA.

AIMS AND LEARNING OUTCOMES

At the end of the course the student will be able to:

  • understand why and how the compiler is one of the most important tools for HPC;
  • identify program hotspots and possible strategies to improve execution time;
  • modify the code to vectorise it, where possible, by interacting with the compiler and profiling tools;
  • parallelise simple programs using OpenMP, MPI and CUDA;
  • evaluate and discuss the performance of parallel programs;
  • start thinking in parallel.

 

PREREQUISITES

Basic knowledge of computer architecture, fair programming skills in C/C++ or Fortran.

TEACHING METHODS

Lessons, practicals and a final project.

SYLLABUS/CONTENT

  1. The processor architecture: 
    Performance of a pipeline system and its analytical evaluation. Overall structure of a pipeline processor. Pipeline hazards (structural, data, control) and their impact on performance. Reducing hazards and/or their impact: hardware techniques. Instruction-level parallelism in sequential programs. The importance of cache-aware software.
  2. Multiprocessor computers:
    The purpose of a parallel computer. Limits of parallel computers: Amdahl's law, communication delays. MIMD computers: shared memory, distributed memory with shared address space, distributed memory with message-passing.
  3. High-level parallel programming on shared address space: the OpenMP directives. Practicals with OpenMP.
  4. Message-passing MIMD computers:
    Overall organization. Cooperation among processes: message-passing communication and synchronization. Blocking vs. non-blocking, point-to-point vs. collectives, implementation aspects. Non-blocking communication and instruction reordering. 
  5. High-level parallel programming with message-passing:
    The SPMD paradigm, the Message Passing Interface (MPI) standard. Practicals with MPI.
  6. SIMD parallel computers: vector processors. Modern descendants of SIMD computers: vector extensions, GPUs.
  7. GPU programming with CUDA.
  8. Architecture of large-scale computing platforms.

Topics 4, 5 and 8 are for students attending the 9-credit course.

RECOMMENDED READING/BIBLIOGRAPHY

Slides, tutorials, and code samples provided during the course.

TEACHERS AND EXAM BOARD

LESSONS

LESSONS START

According to the calendar approved by the Degree Program Board: https://corsi.unige.it/corsi/10852/studenti-orario

Class schedule

The timetable for this course is available on the Portale EasyAcademy.

EXAMS

EXAM DESCRIPTION

The exam consists of a written test on the key theoretical topics presented in the course, plus the discussion of a project carried out individually or in a small group (2-3 students).

For the 6-credit course, the project consists of the parallelization of a sequential algorithm using OpenMP and CUDA.

For the 9-credit course, the project consists of the parallelization of a sequential algorithm using OpenMP, MPI+OpenMP and CUDA.

ASSESSMENT METHODS

The written exam will assess the effective acquisition of the theoretical concepts presented in the first part of the course. It will consist of a quiz with open-ended and closed-ended questions, and it accounts for 40% of the final mark.

The project will be evaluated not only on the achievable performance of the parallel code, but also on how the code analysis, the adopted parallelization strategies, and the achieved performance are presented and discussed. This means, for example, that parallel concepts must be used properly, results presented in a suitable and meaningful way, and overheads and issues correctly identified and discussed. The project accounts for 60% of the final mark.

FURTHER INFORMATION

Contact the instructor for any additional information not included in the course description.

Agenda 2030 - Sustainable Development Goals

  • Quality education
  • Decent work and economic growth
  • Industry, innovation and infrastructure