CIS 400 LECTURE 24
Concurrency
Co-routines
are NOT concurrent (they take turns executing)
- producer/consumer: both use the same buffer on a "time sharing" basis
- "dining philosophers" concept: with only one pair of chopsticks, only one philosopher can dine at a time; he/she eats, then wanders off thinking happy philosopher thoughts until it's feeding time again. Meanwhile, another hungry philosopher bellies up to the feed trough
Process communication:
Passing data between tasks
Synchronization:
Controlling the order in which processes are executed
Race condition:
Outcome depends on relative process speeds
Time critical:
Parts of a computation affected by a race condition
At issue: trying to avoid "deadlock" (where each process is waiting for a message from another, so none can proceed)
Fair scheduling:
Every process eventually gets to execute to completion
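The classic "lost update" race condition can be sketched deterministically. This is a simulation of one unlucky interleaving, not real threads, so the bad schedule is assumed rather than observed:

```python
# Simulate two processes each doing counter = counter + 1, interleaved
# in the worst order: both read the shared value before either writes.
counter = 0

a_local = counter          # process A reads 0
b_local = counter          # process B reads 0 (A has not written yet)

counter = a_local + 1      # A writes 1
counter = b_local + 1      # B also writes 1 -- A's update is lost

print(counter)             # 1, not the expected 2
```

With a different interleaving (A finishes before B starts) the result would be 2, which is exactly what "outcome depends on relative process speeds" means.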
PL/1
First language to have concurrency (parallel execution)
Mutual exclusion: two processes may not enter a time-critical section at the same time, preventing them from interfering with shared data
"wait" suspends a process until an event completes, another way to control the interleaving
Programming language support for concurrency
1) Need a way to identify program sections to execute in parallel
2) Need a way to determine when a process may execute
3) Need a way of guaranteeing mutual exclusion:
a) Protect shared data with semaphores or monitors
b) Insist on messages for shared data
4) Means to synchronize tasks
5) A way of assigning priorities to processes
6) Mechanism for delaying process execution for some time
Semaphore:
wait(s):   if s = 1 then s = 0
           else place the process into the process queue
signal(s): if queue ≠ empty then remove a process
           from the queue and schedule it for execution
           else s = 1
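A minimal sketch of these semantics using Python's built-in threading.Semaphore: acquire plays the role of wait and release plays the role of signal, guarding a time-critical increment:

```python
import threading

# A binary semaphore guarding a shared counter: s = 1 means the
# time-critical section is free.
s = threading.Semaphore(1)
counter = 0

def increment(times):
    global counter
    for _ in range(times):
        s.acquire()          # wait(s): block until s = 1, then set s = 0
        counter += 1         # time-critical section
        s.release()          # signal(s): wake a queued process or set s = 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)               # 40000 -- no updates lost
```

Without the acquire/release pair this program would be exactly the race condition described above.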
Disadvantages:
The program still compiles and runs even if the programmer misuses wait/signal (e.g., forgets a signal), so mistakes surface as subtle bugs or deadlock rather than as errors
Monitors
E.g.
Monitor name
begin
    declare local data
    declare procedures
    initialization of local identifiers
end
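The skeleton above maps naturally onto a class whose local data is private and whose public methods all run under one lock. A hedged Python sketch (the class and method names are illustrative, not from the lecture), using threading.Condition for the suspend/continue mechanism:

```python
import threading

class BoundedCounter:
    """Monitor sketch: private local data, mutual exclusion on every
    entry, and a condition variable to suspend/continue callers."""

    def __init__(self, limit):
        self._lock = threading.Lock()            # one lock guards the monitor
        self._full = threading.Condition(self._lock)
        self._value = 0                          # local data
        self._limit = limit                      # initialization

    def increment(self):                         # monitor procedure
        with self._lock:                         # mutual exclusion on entry
            self._value += 1
            if self._value >= self._limit:
                self._full.notify_all()          # continue suspended callers

    def wait_until_full(self):                   # monitor procedure
        with self._lock:
            while self._value < self._limit:
                self._full.wait()                # suspend inside the monitor
            return self._value

c = BoundedCounter(3)
results = []
waiter = threading.Thread(target=lambda: results.append(c.wait_until_full()))
waiter.start()                                   # suspends until value hits 3
for _ in range(3):
    c.increment()
waiter.join()
print(results[0])                                # 3
```

Note how the caller never touches `_value` directly: all access is scheduled through the monitor's procedures.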
Advantages:
1) Means to invoke monitor procedures
2) Provides a means of scheduling calls made by outside procedures
3) Mechanism inside the monitor to suspend and continue a procedure
Disadvantages:
Doesn't work well for distributed processing
Message passing:
Rendezvous: two processes synchronize at a common point to exchange data
DP - Distributed Processes
CSP - Communicating Sequential Processes
In CSP:
E.g.
In DP:
Procedures used for communication
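The CSP idea, processes share nothing and communicate only by messages over channels, can be approximated in Python with a small queue. This is a hedged sketch (the names are illustrative); a Queue(maxsize=1) only approximates a true CSP rendezvous, which would block the sender until the receiver is ready:

```python
import queue
import threading

# Channel with capacity 1: an approximation of a synchronous CSP channel.
channel = queue.Queue(maxsize=1)
results = []

def producer():
    for item in range(5):
        channel.put(item)      # send: blocks while the channel is full
    channel.put(-1)            # sentinel message: end of stream

def consumer():
    while True:
        item = channel.get()   # receive: blocks until a message arrives
        if item == -1:
            break
        results.append(item * 10)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)                 # [0, 10, 20, 30, 40]
```

Because no memory is shared, there is nothing to protect with semaphores or monitors, which is why message passing suits distributed processing.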
SEE DR. MAXIM'S CIS 350 PAGE FOR MORE DETAILS OF THE FOLLOWING:
Issues:
1) Granularity: power of each processor
2) Topology: shape of the processor network (ring, star, completely connected, nearest neighbor, tree, hypercube)
3) Processor communication: shared memory (EREW, CREW, CRCW), interconnection network (distributed memory), systolic device ("data pump"), circuit (acyclic graph)
4) Synchronization
5) Data access
6) Fault tolerance
7) Efficiency (complexity)
E.g.
Running time = T(n, p)
n = # of data items
p = # of processors
Speedup:    S(p) = T(n,1) / T(n,p)
Efficiency: E(p) = S(p) / p
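A quick numeric sketch of these two formulas, using the card example's numbers (15 comparisons sequentially, 4 comparison steps with 8 processors) as sample running times:

```python
def speedup(t_seq: float, t_par: float) -> float:
    """S(p) = T(n,1) / T(n,p)."""
    return t_seq / t_par

def efficiency(t_seq: float, t_par: float, p: int) -> float:
    """E(p) = S(p) / p."""
    return speedup(t_seq, t_par) / p

# Finding the min of n = 16 values: T(16,1) = 15, T(16,8) = 4.
print(speedup(15, 4))          # 3.75
print(efficiency(15, 4, 8))    # 0.46875
```

Efficiency below 1 reflects processors sitting idle in the later rounds of the computation.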
8) Scalability - extendibility
9) Correctness
Examples of algorithms:
Choosing the smallest of 16 cards:
1) Pick up 2 cards
2) Place the larger in a separate stack
3) Return the smaller to the original stack
4) Repeat until one card remains
# of processors | Comparisons
1               | 15
2               | 7 + 1 = 8
4               | 3 + 1 + 1 = 5
8               | 1 + 1 + 1 + 1 = 4
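The pattern in the table is that each processor scans its share of n/p cards sequentially (n/p - 1 comparisons, all in parallel), then the p partial minima are combined pairwise in log2(p) rounds. A small sketch, assuming n and p are powers of two with p ≤ n:

```python
from math import log2

def min_comparison_steps(n: int, p: int) -> int:
    """Comparison steps to find the min of n items with p processors:
    each processor scans n/p items, then log2(p) pairwise combine
    rounds. Assumes n and p are powers of two with p <= n."""
    return (n // p - 1) + int(log2(p))

for p in (1, 2, 4, 8):
    print(p, min_comparison_steps(16, p))   # reproduces the table: 15, 8, 5, 4
```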
Doing the same with only ONE comparison (remember, input and output are "free"):
1) In the grid below, look at the row communicator (the bold value in a given card's row)
2) Record the value
3) Look at the column communicator
4) Record the value
5) If row > column, raise your hand
6) If any row member has a hand raised, that communicator sits
 1  2  3  4
 5  6  7  8
 9 10 11 12
13 14 15 16
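This classroom exercise mirrors the standard constant-time minimum on a CRCW PRAM: with one processor per pair of cards, every comparison happens in a single round, any card that loses a comparison is marked ("sits"), and the one unmarked card is the minimum. A sequential simulation of that single parallel round, assuming distinct card values:

```python
def one_round_min(cards):
    """Simulate the CRCW one-comparison-round minimum: all pairs are
    compared 'simultaneously'; a card that is larger in any pair is
    marked out; exactly one card survives unmarked (distinct values)."""
    loses = [False] * len(cards)
    for i in range(len(cards)):            # every pair: one parallel round
        for j in range(len(cards)):
            if i != j and cards[i] > cards[j]:
                loses[i] = True            # "raise hand": i is not the min
    survivors = [c for c, lost in zip(cards, loses) if not lost]
    return survivors[0]

print(one_round_min(list(range(1, 17))))   # 1
```

The price of one-round speed is n(n-1)/2 = 120 processors for 16 cards, which is why efficiency, not just speedup, is listed among the issues above.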
Selection sort complexity with multiple processors:

# of processors | Comparisons
1               | 15 + 14 + … + 2 + 1 = 120
2               | (7 + 6 + … + 2 + 1) + 15 = 43
4               | 6 + 7 + 15 = 28
8               | 1 + 3 + 7 + 15 = 26
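The table's pattern: each processor selection-sorts its n/p share (all shares in parallel), then sorted runs are merged pairwise, where merging two runs of length k costs at most 2k - 1 comparisons. A sketch that reproduces the table, assuming n and p are powers of two with p ≤ n:

```python
def parallel_sort_comparisons(n: int, p: int) -> int:
    """Worst-case comparison steps: selection-sort the local n/p share,
    then log2(p) pairwise merge rounds, doubling run length each round.
    Assumes n and p are powers of two with p <= n."""
    share = n // p
    total = share * (share - 1) // 2       # selection sort of the local share
    while share < n:                       # merge rounds
        total += 2 * share - 1             # merging two runs of length `share`
        share *= 2
    return total

for p in (1, 2, 4, 8):
    print(p, parallel_sort_comparisons(16, p))   # 120, 43, 28, 26
```

Note the diminishing returns: doubling from 4 to 8 processors saves only 2 comparison steps, because the final 15-comparison merge dominates regardless of p.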