Lecture 3: Thread Creation, Manipulation and Synchronization

Operating Systems Lecture Notes
Lecture 3: Thread Creation, Manipulation and Synchronization
Martin C. Rinard

• We first must postulate a thread creation and manipulation interface. Will use the one in Nachos:

    class Thread {
      public:
        Thread(char* debugName);
        ~Thread();
        void Fork(void (*func)(int), int arg);
        void Yield();
        void Finish();
    };

• The Thread constructor creates a new thread. It allocates a data structure with space for the TCB.
• To actually start the thread running, must tell it what function to start running when it runs. The Fork method gives it the function and a parameter to the function.
• What does Fork do? It first allocates a stack for the thread. It then sets up the TCB so that when the thread starts running, it will invoke the function and pass it the correct parameter. It then puts the thread on a run queue someplace. Fork then returns, and the thread that called Fork continues.
• How does the OS set up the TCB so that the thread starts running at the function? First, it sets the stack pointer in the TCB to the stack. Then, it sets the PC in the TCB to be the first instruction in the function. Then, it sets the register in the TCB holding the first parameter to the parameter. When the thread system restores the state from the TCB, the function will magically start to run.
• The system maintains a queue of runnable threads. Whenever a processor becomes idle, the thread scheduler grabs a thread off of the run queue and runs the thread.
• Conceptually, threads execute concurrently. This is the best way to reason about the behavior of threads. But in practice, the OS only has a finite number of processors, and it can't run all of the runnable threads at once. So, must multiplex the runnable threads on the finite number of processors.
• Let's do a few thread examples. First example: two threads that increment a variable.
    int a = 0;
    void sum(int p) {
      a++;
      printf("%d : a = %d\n", p, a);
    }
    void main() {
      Thread *t = new Thread("child");
      t->Fork(sum, 1);
      sum(0);
    }

• The two calls to sum run concurrently. What are the possible results of the program? To understand this fully, we must break the sum subroutine up into its primitive components.
• sum first reads the value of a into a register. It then increments the register, then stores the contents of the register back into a. It then reads the values of the control string, p and a into the registers that it uses to pass arguments to the printf routine. It then calls printf, which prints out the data.
• The best way to understand the instruction sequence is to look at the generated assembly language (cleaned up just a bit). You can have the compiler generate assembly code instead of object code by giving it the -S flag. It will put the generated assembly in the same file name as the .c or .cc file, but with a .s suffix.

      la a, %r0
      ld [%r0], %r1
      add %r1, 1, %r1
      st %r1, [%r0]

      ld [%r0], %o3 ! parameters are passed starting with %o0
      mov %o0, %o1
      la .L17, %o0
      call printf

• So when the two threads execute concurrently, the result depends on how the instructions interleave. What are possible results?

      0 : 1        0 : 1
      1 : 2        1 : 1

      1 : 2        1 : 1
      0 : 1        0 : 1

      1 : 1        0 : 2
      0 : 2        1 : 2

      0 : 2        1 : 2
      1 : 1        0 : 2

• So the results are nondeterministic - you may get different results when you run the program more than once. So, it can be very difficult to reproduce bugs. Nondeterministic execution is one of the things that makes writing parallel programs much more difficult than writing serial programs.
• Chances are, the programmer is not happy with all of the possible results listed above. Probably wanted the value of a to be 2 after both threads finish. To achieve this, must make the increment operation atomic.
• That is, must prevent the instructions from interleaving in a way that would interfere with the additions.
• Concept of atomic operation. An atomic operation is one that executes without any interference from other operations - in other words, it executes as one unit. Typically build complex atomic operations up out of sequences of primitive operations. In our case the primitive operations are the individual machine instructions.
• More formally, if several atomic operations execute, the final result is guaranteed to be the same as if the operations executed in some serial order.
• In our case above, build an increment operation up out of load, store and add machine instructions. Want the increment operation to be atomic.
• Use synchronization operations to make code sequences atomic. First synchronization abstraction: semaphores. A semaphore is, conceptually, a counter that supports two atomic operations, P and V. Here is the Semaphore interface from Nachos:

    class Semaphore {
      public:
        Semaphore(char* debugName, int initialValue);
        ~Semaphore();
        void P();
        void V();
    };

• Here is what the operations do:
  o Semaphore(name, count): creates a semaphore and initializes the counter to count.
  o P(): Atomically waits until the counter is greater than 0, then decrements the counter and returns.
  o V(): Atomically increments the counter.
• Here is how we can use the semaphore to make the sum example work:

    int a = 0;
    Semaphore *s;
    void sum(int p) {
      int t;
      s->P();
      a++;
      t = a;
      s->V();
      printf("%d : a = %d\n", p, t);
    }
    void main() {
      Thread *t = new Thread("child");
      s = new Semaphore("s", 1);
      t->Fork(sum, 1);
      sum(0);
    }

• We are using semaphores here to implement a mutual exclusion mechanism. The idea behind mutual exclusion is that only one thread at a time should be allowed to do something. In this case, only one thread should access a. Use mutual exclusion to make operations atomic.
• The code that performs the atomic operation is called a critical section.
• Semaphores do much more than mutual exclusion. They can also be used to synchronize producer/consumer programs. The idea is that the producer is generating data and the consumer is consuming data. So a Unix pipe has a producer and a consumer. You can also think of a person typing at a keyboard as a producer and the shell program reading the characters as a consumer.
• Here is the synchronization problem: make sure that the consumer does not get ahead of the producer. But, we would like the producer to be able to produce without waiting for the consumer to consume. Can use semaphores to do this. Here is how it works:

    Semaphore *s;
    void consumer(int dummy) {
      while (1) {
        s->P();
        consume the next unit of data
      }
    }
    void producer(int dummy) {
      while (1) {
        produce the next unit of data
        s->V();
      }
    }
    void main() {
      s = new Semaphore("s", 0);
      Thread *t = new Thread("consumer");
      t->Fork(consumer, 1);
      t = new Thread("producer");
      t->Fork(producer, 1);
    }

• In some sense the semaphore is an abstraction of the collection of data.
• In the real world, pragmatics intrude. If we let the producer run forever and never run the consumer, we have to store all of the produced data somewhere. But no machine has an infinite amount of memory.