Programming Shared Address Space Platforms

Summary

These are lecture notes on programming shared address space platforms. They cover thread basics, the POSIX thread (Pthreads) API, synchronization primitives, composite synchronization constructs, and OpenMP, with worked examples of multithreaded programming.

Full Transcript


7. Programming Shared Address Space Platforms
Parallel & Distributed Processing - 0403412
Dr. Ali El-Moursy

Topic Overview

- Thread Basics
- The POSIX Thread API
- Synchronization Primitives in Pthreads
- Composite Synchronization Constructs
- OpenMP: a Standard for Directive Based Parallel Programming

Overview of Programming Models

Programming models provide support for expressing concurrency and synchronization.

- Process based models assume that all data associated with a process is private by default, unless otherwise specified.
- Lightweight processes and threads assume that all memory is global.
- Directive based programming models extend the threaded model by facilitating the creation and synchronization of threads.

A thread is a single stream of control in the flow of a program. A program like:

    for (row = 0; row < n; row++)
        for (column = 0; column < n; column++)
            c[row][column] = dot_product(get_row(a, row), get_col(b, column));

can be transformed to:

    for (row = 0; row < n; row++)
        for (column = 0; column < n; column++)
            c[row][column] = create_thread(dot_product(get_row(a, row), get_col(b, column)));

In this case, one may think of each thread as an instance of a function that returns before the function has finished executing.

Thread Basics

All memory in the logical machine model of a thread is globally accessible to every thread. The stack corresponding to a function call is generally treated as being local to the thread, for liveness reasons. This implies a logical machine model with both global memory (the default) and local memory (the stacks). It is important to note that such a flat model may result in very poor performance, since memory is physically distributed in typical machines.

[Figure: the logical machine model of a thread-based programming paradigm.]

Threads offer:

- Software portability.
- Inherent support for latency hiding.
- Scheduling and load balancing.
- Ease of programming and widespread use.

The POSIX Thread API

Commonly referred to as Pthreads, POSIX has emerged as the standard threads API, supported by most vendors. The concepts discussed here are largely independent of the API and can be used for programming with other thread APIs (NT threads, Solaris threads, Java threads, etc.) as well.

Thread Basics: Creation and Termination

Pthreads provides two basic functions for specifying concurrency in a program:

    #include <pthread.h>

    int pthread_create(
        pthread_t *thread_handle,
        const pthread_attr_t *attribute,
        void * (*thread_function)(void *),
        void *arg);

    int pthread_join(
        pthread_t thread,
        void **ptr);

The function pthread_create invokes thread_function as a thread; pthread_join waits for that thread to terminate.
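The example below is not from the slides; it is a minimal sketch of this creation/termination API in use. The function name worker, the count NUM_THREADS, and the printed messages are illustrative choices:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_THREADS 4

    /* Each thread receives a pointer to its integer id and prints it. */
    void *worker(void *arg) {
        int id = *(int *) arg;
        printf("hello from thread %d\n", id);
        return NULL;
    }

    int main() {
        pthread_t handles[NUM_THREADS];
        int ids[NUM_THREADS];
        int i;
        for (i = 0; i < NUM_THREADS; i++) {
            ids[i] = i;                       /* each thread gets its own argument */
            if (pthread_create(&handles[i], NULL, worker, &ids[i]) != 0) {
                perror("pthread_create");
                exit(1);
            }
        }
        for (i = 0; i < NUM_THREADS; i++)
            pthread_join(handles[i], NULL);   /* wait for each thread to finish */
        return 0;
    }

Passing the address of a distinct ids[i] to each thread, rather than the address of the loop variable itself, avoids a race on that variable.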
Multithreaded Calculate PI

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    #define MAX_THREADS 512

    void *compute_pi(void *);

    int total_hits, total_misses, hits[MAX_THREADS],
        sample_points, sample_points_per_thread, num_threads;

    int main() {
        int i;
        pthread_t p_threads[MAX_THREADS];
        pthread_attr_t attr;
        double computed_pi;
        double time_start, time_end;
        struct timeval tv;
        struct timezone tz;

        pthread_attr_init(&attr);
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
        printf("Enter number of sample points: ");
        scanf("%d", &sample_points);
        printf("Enter number of threads: ");
        scanf("%d", &num_threads);

        gettimeofday(&tv, &tz);
        time_start = (double)tv.tv_sec + (double)tv.tv_usec / 1000000.0;

        total_hits = 0;
        sample_points_per_thread = sample_points / num_threads;
        for (i = 0; i < num_threads; i++) {
            hits[i] = i;   /* doubles as the per-thread random seed on entry */
            pthread_create(&p_threads[i], &attr, compute_pi, (void *) &hits[i]);
        }
        for (i = 0; i < num_threads; i++) {
            pthread_join(p_threads[i], NULL);
            total_hits += hits[i];   /* each thread wrote its hit count back */
        }
        computed_pi = 4.0 * (double) total_hits / ((double)(sample_points));

        gettimeofday(&tv, &tz);
        time_end = (double)tv.tv_sec + (double)tv.tv_usec / 1000000.0;
        printf("Computed PI = %lf\n", computed_pi);
        printf("%lf\n", time_end - time_start);
    }

[The extracted text breaks off inside the worker function; it is reconstructed below following the standard version of this example.]

    void *compute_pi(void *s) {
        int seed, i, *hit_pointer;
        double rand_no_x, rand_no_y;
        int local_hits;

        hit_pointer = (int *) s;
        seed = *hit_pointer;
        local_hits = 0;
        for (i = 0; i < sample_points_per_thread; i++) {
            rand_no_x = (double)(rand_r(&seed)) / (double)((2 << 14) - 1);
            rand_no_y = (double)(rand_r(&seed)) / (double)((2 << 14) - 1);
            if (((rand_no_x - 0.5) * (rand_no_x - 0.5) +
                 (rand_no_y - 0.5) * (rand_no_y - 0.5)) < 0.25)
                local_hits++;
            seed *= i;
        }
        *hit_pointer = local_hits;
        pthread_exit(0);
    }

[Transcript gap: the slides covering Synchronization Primitives in Pthreads (mutex locks and condition variables) and the start of the Composite Synchronization Constructs section are missing from the extracted text. The transcript resumes at the read-write lock slides.]

Read-Write Locks

A read-write lock admits multiple concurrent readers but at most one writer. The fields used by the code below imply a structure with reader/writer counts, a mutex, and two condition variables:

    typedef struct {
        int readers;
        int writer;
        pthread_cond_t readers_proceed;
        pthread_cond_t writer_proceed;
        int pending_readers;
        int pending_writers;
        pthread_mutex_t read_write_lock;
    } mylib_rwlock_t;

    void mylib_rwlock_init(mylib_rwlock_t *l) {
        l->readers = l->writer = 0;
        l->pending_readers = l->pending_writers = 0;
        pthread_mutex_init(&(l->read_write_lock), NULL);
        pthread_cond_init(&(l->readers_proceed), NULL);
        pthread_cond_init(&(l->writer_proceed), NULL);
    }

    void mylib_rwlock_rlock(mylib_rwlock_t *l) {
        /* if there is a write lock or pending writers, wait;
           otherwise increment the reader count and grant the read lock */
        pthread_mutex_lock(&(l->read_write_lock));
        while ((l->pending_writers > 0) || (l->writer > 0)) {
            l->pending_readers = 1;
            pthread_cond_wait(&(l->readers_proceed), &(l->read_write_lock));
        }
        l->pending_readers = 0;
        l->readers++;
        pthread_mutex_unlock(&(l->read_write_lock));
    }

    void mylib_rwlock_wlock(mylib_rwlock_t *l) {
        /* if there is a writer or any readers, register as pending
           and wait; on waking, acquire the write lock */
        pthread_mutex_lock(&(l->read_write_lock));
        while ((l->writer > 0) || (l->readers > 0)) {
            l->pending_writers++;
            pthread_cond_wait(&(l->writer_proceed), &(l->read_write_lock));
            l->pending_writers--;
        }
        l->writer++;
        pthread_mutex_unlock(&(l->read_write_lock));
    }

    void mylib_rwlock_unlock(mylib_rwlock_t *l) {
        /* release a write lock or one read lock; then, if no readers
           remain and writers are pending, wake one writer, otherwise
           wake all waiting readers */
        pthread_mutex_lock(&(l->read_write_lock));
        if (l->writer > 0)
            l->writer = 0;
        else if (l->readers > 0)
            l->readers--;
        pthread_mutex_unlock(&(l->read_write_lock));
        if ((l->readers == 0) && (l->pending_writers > 0))
            pthread_cond_signal(&(l->writer_proceed));
        else if (l->pending_readers > 0)
            pthread_cond_broadcast(&(l->readers_proceed));
    }
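The following usage sketch is not from the slides: hypothetical reader and writer threads use the mylib_rwlock_t API above (assumed to be in scope) to protect a shared counter.

    #include <pthread.h>
    #include <stdio.h>

    mylib_rwlock_t lock;
    int shared_value = 0;

    void *reader(void *arg) {
        int v;
        mylib_rwlock_rlock(&lock);       /* many readers may hold this at once */
        v = shared_value;
        mylib_rwlock_unlock(&lock);
        printf("read %d\n", v);
        return NULL;
    }

    void *writer_thread(void *arg) {
        mylib_rwlock_wlock(&lock);       /* exclusive access */
        shared_value++;
        mylib_rwlock_unlock(&lock);
        return NULL;
    }

    int main() {
        pthread_t r[4], w;
        int i;
        mylib_rwlock_init(&lock);
        pthread_create(&w, NULL, writer_thread, NULL);
        for (i = 0; i < 4; i++)
            pthread_create(&r[i], NULL, reader, NULL);
        pthread_join(w, NULL);
        for (i = 0; i < 4; i++)
            pthread_join(r[i], NULL);
        return 0;
    }

Because pending writers block new readers in mylib_rwlock_rlock, this lock gives writers priority, while a read-heavy workload still proceeds mostly in parallel.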
Barriers

As in MPI, a barrier holds a thread until all threads participating in the barrier have reached it. Barriers can be implemented using a counter, a mutex, and a condition variable. A single integer keeps track of the number of threads that have reached the barrier. If the count is less than the total number of threads, the threads execute a condition wait. The last thread entering (and setting the count to the number of threads) wakes up all the waiting threads using a condition broadcast.

    typedef struct {
        pthread_mutex_t count_lock;
        pthread_cond_t ok_to_proceed;
        int count;
    } mylib_barrier_t;

    void mylib_init_barrier(mylib_barrier_t *b) {
        b->count = 0;
        pthread_mutex_init(&(b->count_lock), NULL);
        pthread_cond_init(&(b->ok_to_proceed), NULL);
    }

    void mylib_barrier(mylib_barrier_t *b, int num_threads) {
        pthread_mutex_lock(&(b->count_lock));
        b->count++;
        if (b->count == num_threads) {
            b->count = 0;                               /* reset for reuse */
            pthread_cond_broadcast(&(b->ok_to_proceed));
        } else
            while (pthread_cond_wait(&(b->ok_to_proceed),
                                     &(b->count_lock)) != 0);
        pthread_mutex_unlock(&(b->count_lock));
    }

The barrier described above is called a linear barrier: all threads serialize on a single mutex, so the trivial lower bound on the execution time of this function is O(n) for n threads. This implementation can be sped up using multiple barrier variables organized in a tree. We use n/2 condition variable-mutex pairs to implement a barrier for n threads. At the lowest level, threads are paired up and each pair shares a single condition variable-mutex pair. Once both threads of a pair arrive, one of the two moves on while the other waits. The process repeats up the tree. This is also called a log barrier, and its runtime grows as O(log p).

[Figure: execution time of 1000 sequential (linear) and logarithmic barriers as a function of the number of threads on a 32-processor SGI Origin 2000.]
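As a sketch (not slide material) of the typical usage pattern, each thread below finishes phase 1, waits at the barrier, and only then starts phase 2; phased_worker and NUM_THREADS are illustrative names, and the mylib_barrier definitions above are assumed to be in scope.

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4

    mylib_barrier_t barrier;

    void *phased_worker(void *arg) {
        int id = *(int *) arg;
        printf("thread %d: phase 1 done\n", id);
        mylib_barrier(&barrier, NUM_THREADS);   /* wait for all threads */
        printf("thread %d: phase 2 starts\n", id);
        return NULL;
    }

    int main() {
        pthread_t t[NUM_THREADS];
        int ids[NUM_THREADS], i;
        mylib_init_barrier(&barrier);
        for (i = 0; i < NUM_THREADS; i++) {
            ids[i] = i;
            pthread_create(&t[i], NULL, phased_worker, &ids[i]);
        }
        for (i = 0; i < NUM_THREADS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

No thread prints its "phase 2" line until every thread has printed its "phase 1" line.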
Tips for Designing Asynchronous Programs

- Never rely on scheduling assumptions when exchanging data.
- Never rely on liveness of data resulting from assumptions on scheduling.
- Do not rely on scheduling as a means of synchronization.
- Where possible, define and use group synchronizations and data replication.

OpenMP: a Standard for Directive Based Parallel Programming

OpenMP is a directive-based API that can be used with FORTRAN, C, and C++ for programming shared address space machines. OpenMP directives provide support for concurrency, synchronization, and data handling, while obviating the need to explicitly set up mutexes, condition variables, data scope, and initialization.

OpenMP Programming Model

OpenMP directives in C and C++ are based on #pragma compiler directives. A directive consists of a directive name followed by clauses:

    #pragma omp directive [clause list]

OpenMP programs execute serially until they encounter the parallel directive, which creates a group of threads:

    #pragma omp parallel [clause list]

The main thread that encounters the parallel directive becomes the master of this group of threads and is assigned thread id 0 within the group. The clause list is used to specify conditional parallelization, the degree of concurrency, and data handling:

– Conditional Parallelization: the clause if (scalar expression) determines whether the parallel construct results in the creation of threads.
– Degree of Concurrency: the clause num_threads(integer expression) specifies the number of threads that are created.
– Data Handling: the clause private (variable list) indicates variables that are local to each thread. The clause firstprivate (variable list) is similar to private, except that each copy is initialized to the variable's value before the parallel directive. The clause shared (variable list) indicates variables that are shared across all the threads.

[Figure: a sample OpenMP program along with its Pthreads translation that might be performed by an OpenMP compiler.]

    #pragma omp parallel if (is_parallel == 1) num_threads(8) \
        private (a) shared (b) firstprivate(c)
    {
        /* structured block */
    }

If the value of the variable is_parallel equals one, eight threads are created. Each of these threads gets private copies of variables a and c, and shares a single value of variable b. The value of each copy of c is initialized to the value of c before the parallel directive. The default state of a variable is specified by the clause default (shared) or default (none).

Reduction Clause in OpenMP

The reduction clause specifies how multiple local copies of a variable at different threads are combined into a single copy at the master when the threads exit. Its usage is reduction (operator: variable list). The variables in the list are implicitly specified as private to the threads. The operator can be one of +, *, -, &, |, ^, &&, and ||.

    #pragma omp parallel reduction(+: sum) num_threads(8)
    {
        /* each thread accumulates into its local copy of sum;
           the copies are combined with + on exit */
    }

OpenMP Programming: Example

    #pragma omp parallel default(private) shared (npoints) \
        reduction(+: sum) num_threads(8)
    {
        num_threads = omp_get_num_threads();
        sample_points_per_thread = npoints / num_threads;
        sum = 0;
        for (i = 0; i < sample_points_per_thread; i++) {
            rand_no_x = (double)(rand_r(&seed)) / (double)((2 << 14) - 1);
            rand_no_y = (double)(rand_r(&seed)) / (double)((2 << 14) - 1);
            if (((rand_no_x - 0.5) * (rand_no_x - 0.5) +
                 (rand_no_y - 0.5) * (rand_no_y - 0.5)) < 0.25)
                sum++;
        }
    }

[The extracted text breaks off inside this fragment; the loop body is reconstructed to match the Pthreads compute_pi function above.]
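The fragment above is not by itself compilable. Below is a sketch (not from the slides) of how it might sit in a complete program; the declarations, the per-thread seed, and the final PI computation are assumptions modeled on the earlier Pthreads example, and the per-thread variables are declared inside the parallel region instead of using default(private):

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main() {
        int npoints = 1000000;
        double sum = 0.0;           /* combined across threads by reduction(+) */
        double computed_pi;

        #pragma omp parallel num_threads(8) reduction(+: sum)
        {
            int i, num_threads, sample_points_per_thread;
            unsigned int seed = omp_get_thread_num();   /* distinct per-thread seed */
            double rand_no_x, rand_no_y;

            num_threads = omp_get_num_threads();
            sample_points_per_thread = npoints / num_threads;
            for (i = 0; i < sample_points_per_thread; i++) {
                rand_no_x = (double)(rand_r(&seed)) / (double)((2 << 14) - 1);
                rand_no_y = (double)(rand_r(&seed)) / (double)((2 << 14) - 1);
                if (((rand_no_x - 0.5) * (rand_no_x - 0.5) +
                     (rand_no_y - 0.5) * (rand_no_y - 0.5)) < 0.25)
                    sum += 1.0;     /* count a hit inside the circle */
            }
        }
        computed_pi = 4.0 * sum / (double) npoints;
        printf("Computed PI = %lf\n", computed_pi);
        return 0;
    }

OpenMP programs typically need a compiler flag such as -fopenmp (gcc/clang) to enable the directives.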
