
HIGH PERFORMANCE COMPUTING

CODE 90535
ACADEMIC YEAR 2022/2023
CREDITS
  • 9 cfu during the 2nd year of 10852 COMPUTER SCIENCE (LM-18) - GENOVA
  • 6 cfu during the 2nd year of 10852 COMPUTER SCIENCE (LM-18) - GENOVA
  • SCIENTIFIC DISCIPLINARY SECTOR INF/01
    LANGUAGE English
    TEACHING LOCATION
  • GENOVA
  • SEMESTER 1° Semester
    TEACHING MATERIALS AULAWEB

    OVERVIEW

    The aim of this course is to provide basic knowledge of the architecture of parallel processing systems, along with the minimum programming skills needed to use such systems. As far as the programming part is concerned, the emphasis is on parallel message-passing programming with MPI, but the OpenMP shared-memory paradigm is also practiced. GPU architecture is presented and GPU programming with OpenCL is practiced. Architectural aspects of large-scale computing platforms are also covered.

    AIMS AND CONTENT

    LEARNING OUTCOMES

    Learning the main aspects of modern high-performance computing systems (pipeline/superscalar processors, shared-memory/message-passing multiprocessors, vector processors, GPUs) and basic programming skills for high-performance computing (cache optimization, OpenMP, MPI, OpenCL).

    PREREQUISITES

    Basic knowledge of computer architecture, fair programming skills.

    TEACHING METHODS

    Lectures, practicals, and individual study with textbooks.

    SYLLABUS/CONTENT

    1. Performance of a computer:
      Direct and inverse, absolute and relative performance indices. Benchmarks.
    2. Pipeline processor architecture:
      Performance of a pipeline system and its analytical evaluation. Overall structure of a pipeline processor. Pipeline hazards (structural, data, control) and their impact on performance. Reducing hazards and/or their impact: hardware techniques. Instruction-level parallelism in sequential programs. How to make better use of instruction-level parallelism on pipeline processors: loop unrolling, instruction reordering (see the loop-unrolling sketch after this list).
    3. Advanced pipeline processors:
      Dynamic instruction scheduling with the Tomasulo architecture.
      Branch prediction.
      Speculative execution of instructions.
    4. Superscalar processors:
      Multiple-issue processors. Scheduling instructions on multiple-issue processors. VLIW processors.
    5. Cache Memory and Computer Performance:
      Hardware techniques to reduce cache miss penalties. Hardware and software techniques to reduce cache miss frequency. Hardware techniques to hide cache miss overheads. Practicals with matrix multiplication (see the blocked matrix-multiplication sketch after this list).
    6. Multiprocessor computers:
      The purpose of a parallel computer. Limits of parallel computers: Amdahl's law (a worked example follows this list), communication delays. MIMD computers: shared memory, distributed memory with shared address space, distributed memory with message-passing.
    7. MIMD shared-memory computers:
      Overall organization. Reducing memory access contention. Cache coherency: snooping-based protocols.
    8. MIMD computers with distributed memory and shared address space:
      Overall organization. Directory-based cache coherence protocols.
    9. Cooperation among processes on shared address space:
      Communication and synchronization. Synchronization algorithms: lock/unlock and barrier synchronization on shared address space and their performance.
    10. Consistency in main memory. Weak consistency models and their performance benefits.
    11. High-level parallel programming on shared address space: the OpenMP directives. Practicals with OpenMP (see the OpenMP sketch after this list).
    12. Message-passing MIMD computers:
      Overall organization. Cooperation among processes: message-passing communication and synchronization. Blocking vs. non-blocking, point-to-point vs. collectives, implementation aspects. Non-blocking communication and instruction reordering.
    13. High-level parallel programming with message-passing:
      the SPMD paradigm, the Message Passing Interface (MPI) standard. Practicals with MPI (see the MPI sketch after this list).
    14. The concept of load balancing and its impact on performance.
      Dynamic load balancing: the "farm" parallel structure and its performance analysis (see the farm sketch after this list).
    15. SIMD parallel computers: vector processors. Modern descendants of SIMD computers: vector extensions, GPUs.
      GPU programming with OpenCL. Practicals with OpenCL on a GPU (see the OpenCL sketch after this list).
    16. Parallel I/O and MPI-I/O (see the MPI-I/O sketch after this list).
    17. Architecture of large-scale computing platforms.
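
    The short sketches that follow illustrate some of the programming topics listed above. They are minimal, hedged examples written for this summary: all function names, sizes, and parameters are illustrative and are not taken from the course material.

    Loop unrolling: a minimal sketch, assuming a simple dot product unrolled by a factor of 4. The unrolled body exposes independent multiply-add operations that a pipelined or superscalar processor can overlap, and the separate partial sums shorten the dependence chain.

```c
/* Dot product unrolled by 4 (illustrative). Independent partial sums
 * expose instruction-level parallelism to a pipelined/superscalar CPU. */
double dot_unrolled(const double *a, const double *b, int n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i;
    for (i = 0; i + 3 < n; i += 4) {         /* main unrolled loop */
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; i++)                        /* leftover iterations */
        s0 += a[i] * b[i];
    return s0 + s1 + s2 + s3;
}
```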
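
    Cache blocking for the matrix-multiplication practicals: a minimal sketch of blocked (tiled) multiplication, C += A*B, on n-by-n row-major matrices. The tile size BS is an assumption and must be tuned so that three BS-by-BS tiles fit in the cache; compared to the naive triple loop this reduces capacity misses.

```c
#include <stddef.h>

#define BS 64   /* assumed tile size, to be tuned to the actual cache */

/* Blocked matrix multiplication: C += A * B, row-major n-by-n matrices. */
void matmul_blocked(size_t n, const double *A, const double *B, double *C)
{
    for (size_t ii = 0; ii < n; ii += BS)
        for (size_t kk = 0; kk < n; kk += BS)
            for (size_t jj = 0; jj < n; jj += BS)
                /* multiply one pair of tiles while they stay in cache */
                for (size_t i = ii; i < ii + BS && i < n; i++)
                    for (size_t k = kk; k < kk + BS && k < n; k++) {
                        double aik = A[i * n + k];
                        for (size_t j = jj; j < jj + BS && j < n; j++)
                            C[i * n + j] += aik * B[k * n + j];
                    }
}
```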
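
    Amdahl's law: a worked example. If a fraction f of the execution time can be parallelized over p processors, the speedup is bounded by S(p) = 1 / ((1 - f) + f/p); the numbers below are illustrative.

```c
#include <stdio.h>

/* Amdahl's law: upper bound on speedup with parallel fraction f on p CPUs. */
double amdahl_speedup(double f, int p)
{
    return 1.0 / ((1.0 - f) + f / (double)p);
}

int main(void)
{
    /* Example: 90% parallel code on 16 processors gives only
     * 1 / (0.1 + 0.9/16) = 6.4, far below the ideal speedup of 16. */
    printf("speedup = %.2f\n", amdahl_speedup(0.9, 16));
    return 0;
}
```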
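
    OpenMP: a minimal sketch of a parallel loop with a reduction, approximating pi by the midpoint rule. The directive and clauses are standard OpenMP; the problem and its size are illustrative. Compile with an OpenMP-enabled compiler, e.g. `gcc -fopenmp`.

```c
#include <stdio.h>
#include <omp.h>

int main(void)
{
    const long N = 100000000;                /* illustrative problem size */
    const double step = 1.0 / (double)N;
    double sum = 0.0;

    /* loop iterations are split among threads; the partial sums are
     * combined by the reduction clause */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++) {
        double x = (i + 0.5) * step;         /* midpoint of interval i */
        sum += 4.0 / (1.0 + x * x);          /* integrand whose integral is pi */
    }

    printf("pi ~= %.12f (max threads: %d)\n", sum * step, omp_get_max_threads());
    return 0;
}
```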
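
    MPI and the SPMD paradigm: a minimal sketch in which every process runs the same program, branches on its rank, and the partial values are combined with a collective reduction. Run with e.g. `mpirun -np 4 ./a.out`.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* who am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many of us? */

    /* each process contributes its own rank; the sum arrives at rank 0 */
    int local = rank, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}
```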
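
    The farm structure: a minimal master/worker sketch of dynamic load balancing, assuming MPI and a toy task (squaring an integer). The emitter hands a new task to whichever worker returns a result first, so faster workers automatically receive more tasks; the tags, the task count, and the task itself are illustrative.

```c
#include <stdio.h>
#include <mpi.h>

#define NTASKS   100    /* illustrative number of independent tasks */
#define TAG_TASK 1
#define TAG_STOP 2

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                         /* emitter/collector */
        int next = 0, active = 0, result;
        MPI_Status st;
        /* prime every worker with one task (or stop it right away) */
        for (int w = 1; w < size; w++) {
            if (next < NTASKS) {
                MPI_Send(&next, 1, MPI_INT, w, TAG_TASK, MPI_COMM_WORLD);
                next++; active++;
            } else {
                MPI_Send(&next, 0, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
            }
        }
        /* on each result, reassign a new task or send a stop message */
        while (active > 0) {
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, TAG_TASK,
                     MPI_COMM_WORLD, &st);
            if (next < NTASKS) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_TASK,
                         MPI_COMM_WORLD);
                next++;
            } else {
                MPI_Send(&next, 0, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
                active--;
            }
        }
        printf("all %d tasks completed\n", NTASKS);
    } else {                                 /* worker */
        int task, result;
        MPI_Status st;
        for (;;) {
            MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP)
                break;
            result = task * task;            /* the toy "work" */
            MPI_Send(&result, 1, MPI_INT, 0, TAG_TASK, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}
```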
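
    OpenCL on a GPU: a minimal sketch of vector addition. Error checking is omitted for brevity, the kernel is compiled at run time from the source string, and all names and sizes are illustrative; `clCreateCommandQueue` is the pre-OpenCL-2.0 call, used here for simplicity.

```c
#include <stdio.h>
#include <CL/cl.h>

/* Kernel: one work-item adds one pair of elements. */
static const char *src =
    "__kernel void vadd(__global const float *a,"
    "                   __global const float *b,"
    "                   __global float *c) {"
    "    int i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* pick a GPU device, create a context and a command queue */
    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* build the kernel from source at run time */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    /* device buffers; the inputs are initialized from the host arrays */
    cl_mem dA = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem dB = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dC = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    /* launch one work-item per vector element */
    clSetKernelArg(k, 0, sizeof dA, &dA);
    clSetKernelArg(k, 1, sizeof dB, &dB);
    clSetKernelArg(k, 2, sizeof dC, &dC);
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* blocking read copies the result back to the host */
    clEnqueueReadBuffer(q, dC, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
    printf("c[10] = %.1f (expected %.1f)\n", c[10], a[10] + b[10]);
    return 0;
}
```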
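
    MPI-I/O: a minimal sketch in which each process writes its own block of integers into a single shared file at an offset derived from its rank, so all processes write in parallel without funnelling data through one writer process. The file name and block size are illustrative.

```c
#include <mpi.h>

#define BLOCK 4    /* illustrative number of integers per process */

int main(int argc, char **argv)
{
    int rank, buf[BLOCK];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < BLOCK; i++)
        buf[i] = rank * BLOCK + i;           /* this process's data */

    /* every process opens the same file collectively */
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* explicit-offset write: rank r starts at byte r*BLOCK*sizeof(int) */
    MPI_File_write_at(fh, (MPI_Offset)rank * BLOCK * sizeof(int),
                      buf, BLOCK, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}
```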

    RECOMMENDED READING/BIBLIOGRAPHY

    John L. Hennessy, David A. Patterson: Computer Architecture: A Quantitative Approach (5th edition), Morgan Kaufmann.

    Ian Foster: Designing and Building Parallel Programs, Addison Wesley. An online version of the book is freely available.

    Slides, tutorials, and code samples available on Aulaweb complement, but do not replace, the textbooks.

    TEACHERS AND EXAM BOARD

    Exam Board

    DANIELE D'AGOSTINO (President)

    ANNALISA BARLA

    GIORGIO DELZANNO (President Substitute)

    NICOLETTA NOCETI (Substitute)

    LESSONS

    Class schedule

    All class schedules are posted on the EasyAcademy portal.

    EXAMS

    EXAM DESCRIPTION

    The exam consists of a written essay on course topics proposed by the instructor. The evaluation also takes into account the work done during the practicals. Students who do not take part in the practicals must pass an additional practical test on parallel programming.

    Exam schedule

    Date        Time   Location  Type                 Notes
    13/01/2023  09:00  GENOVA    Exam by appointment
    13/07/2023  09:00  GENOVA    Exam by appointment
    13/09/2023  09:00  GENOVA    Exam by appointment

    FURTHER INFORMATION

    More info at http://www.disi.unige.it/person/CiaccioG/hpc.html