Parallel Numerics at TU München | Flashcards & Summaries

Study materials for Parallel Numerics at TU München

Access free flashcards, summaries, practice exercises, and past exams for your Parallel Numerics course at TU München.

Q:

Name three synchronization mechanisms

A:

Barrier, lock, and semaphore
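
As a concrete reference, here is a minimal sketch of all three mechanisms using Python's threading module (Python is chosen here purely to illustrate the shared-memory primitives; the course material itself is not tied to it):

```python
import threading

counter = 0
lock = threading.Lock()          # mutual exclusion around a critical section
sem = threading.Semaphore(2)     # at most 2 threads in the guarded region at once
barrier = threading.Barrier(4)   # all 4 threads must arrive before any proceeds

def worker():
    global counter
    with sem:                    # semaphore: limits concurrency to 2
        with lock:               # lock: protects the shared counter
            counter += 1
    barrier.wait()               # barrier: synchronize all workers here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 4: every worker incremented exactly once
```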

Q:

What does MPI_ANY_SOURCE do? 

A:

Passed as the source argument of a receive, it lets a process accept a message from any sender; the order in which messages are then received becomes arbitrary.

Q:

What is not synchronized in MPI terms?

A:

Standard-mode MPI sends are not synchronized: there is no handshake, so the return of MPI_Send gives no guarantee that the message has been received yet.

Q:

Pros/cons: cyclic vs. block partitioning

A:

Block: faster access to data thanks to data locality

Cyclic: better load balancing
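
The two distributions can be written down as index-to-processor maps; a sketch, assuming n is divisible by p in the block case:

```python
def block_owner(i, n, p):
    """Block partition: indices 0..n/p-1 go to proc 0, the next n/p to proc 1, ..."""
    return i // (n // p)

def cyclic_owner(i, p):
    """Cyclic partition: index i goes to proc i mod p (round-robin)."""
    return i % p

n, p = 8, 2
print([block_owner(i, n, p) for i in range(n)])   # [0, 0, 0, 0, 1, 1, 1, 1]
print([cyclic_owner(i, p) for i in range(n)])     # [0, 1, 0, 1, 0, 1, 0, 1]
```

With the block map each processor owns a contiguous range (good cache behavior); with the cyclic map, work whose cost varies with the index is spread evenly across processors.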

Q:

What are the three types of MIMD systems?

A:

Shared memory: The processors share a common address space. They communicate by reading and writing to it.

Distributed memory: Each processor has a private address space. They communicate by messaging.

Hybrid: Each node has a shared address space internally, but the nodes communicate among themselves by message passing.

Q:

What does MPI_Ssend() do?

A:

It's a send with guaranteed synchronous semantics: it does not complete until the matching receive has been posted, so its completion tells the sender that the receiver has started receiving.

Q:

What does MPI_Rsend() do?

A:

It's a "ready" send: it may be called only if the matching receive has already been posted; otherwise the call is erroneous. It returns as soon as the send buffer can be reused.

Q:

What's an embarrassingly parallel problem?

A:

A problem that can be decomposed into parallel tasks with virtually no need to share data between them.
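
As a sketch, summing squares over independent chunks is embarrassingly parallel: each task touches only its own chunk and no data is shared (the thread pool here is just one convenient way to run the pieces concurrently):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    # each task touches only its own chunk: no communication needed
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]   # 4 independent pieces
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(chunk_sum, chunks))
print(total == sum(x * x for x in data))  # True: matches the serial result
```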

Q:

What's load balancing?

A:

Distribution of the work with the goal of keeping the processors busy at all times

Q:

What's a deadlock?

A:

A situation in which two (or more) processes wait indefinitely for each other's results, so neither can ever proceed.
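
A minimal shared-memory analogue of the hazard, sketched with two Python locks: if each thread grabbed "its own" lock first, both could wait forever on the other; acquiring the locks in one global order avoids the deadlock:

```python
import threading

a, b = threading.Lock(), threading.Lock()
shared = []

def worker(name):
    # DEADLOCK RISK pattern: thread 1 takes a then b, thread 2 takes b then a,
    # and each waits forever for the lock the other already holds.
    # Fix used here: both threads acquire in the same global order (a before b).
    with a:
        with b:
            shared.append(name)

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(shared))    # ['t1', 't2']: both finished, no deadlock
```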

Q:

What's a race condition?

A:

A non-deterministic program result that depends on the relative timing of tasks accessing shared memory.
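
A sketch with Python threads (note that CPython's GIL masks many races, so only the locked counter comes with a guarantee; the unsynchronized variant shows the pattern that causes the race):

```python
import threading

N, T = 10_000, 4
unsafe, safe = 0, 0
lock = threading.Lock()

def unsafe_inc():
    global unsafe
    for _ in range(N):
        unsafe += 1         # RACE: unsynchronized read-modify-write

def safe_inc():
    global safe
    for _ in range(N):
        with lock:          # critical section: one thread at a time
            safe += 1

for target in (unsafe_inc, safe_inc):
    threads = [threading.Thread(target=target) for _ in range(T)]
    for t in threads: t.start()
    for t in threads: t.join()

print(safe)                 # always N*T = 40000; `unsafe` may fall short of that
```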

Q:

What's a Barrier? 

A:

All tasks of a communicator work until they reach the barrier; each then waits until the last task of the communicator has arrived, after which all move on.
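
The same semantics can be sketched with threading.Barrier, threads here playing the role of a communicator's tasks: no thread passes the barrier until all have arrived, so afterwards every task is guaranteed to see everyone's pre-barrier work:

```python
import threading

P = 4
results = [None] * P                 # each task's pre-barrier contribution
barrier = threading.Barrier(P)
ok = []

def task(rank):
    results[rank] = rank * rank      # "work" done before the barrier
    barrier.wait()                   # block until the last task arrives
    # after the barrier, every task sees all contributions
    ok.append(all(r is not None for r in results))

threads = [threading.Thread(target=task, args=(r,)) for r in range(P)]
for t in threads: t.start()
for t in threads: t.join()
print(all(ok))                       # True
```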

  • 338303 flashcards
  • 7654 students
  • 331 study materials

