Chapter XI
Concurrency
Concurrency in software execution can occur at four
different levels: instruction level (executing two or more machine instructions
simultaneously), statement level (executing two or more high-level language
statements simultaneously), unit level (executing two or more subprogram units
simultaneously), and program level (executing two or more programs
simultaneously). Because no language design issues are involved with them,
instruction-level and program-level concurrency are not discussed in this
chapter. Concurrency at both the subprogram and the statement levels is discussed,
with most of the focus on the subprogram level.
There are two distinct categories of concurrent unit
control. The most natural category of concurrency is that in which, assuming
that more than one processor is available, several program units from the same
program literally execute simultaneously. This is physical concurrency. A
slight relaxation of this concept of concurrency allows the programmer and the
application software to assume that there are multiple processors providing
actual concurrency, when in fact the actual execution of programs is taking
place in interleaved fashion on a single processor. This is logical
concurrency. From the programmer’s and language designer’s points of view,
logical concurrency is the same as physical concurrency. It is the language
implementor’s task, using the capabilities of the underlying operating system,
to map the logical concurrency to the host hardware. Both logical and physical
concurrency allow the concept of concurrency to be used as a program design
methodology.
Concurrency can occur at four levels:
· Machine instruction level
· High-level language statement level
· Unit level
· Program level
Because there are no language issues in instruction- and program-level concurrency, they are not addressed here.
Introduction to Subprogram-Level Concurrency
· A task or process or thread is a program unit that can be in concurrent execution with other program units
· Tasks differ from ordinary subprograms in that:
o A task may be implicitly started
o When a program unit starts the execution of a task, it is not necessarily suspended
o When a task’s execution is completed, control may not return to the caller
· Tasks usually work together
· Two general categories of tasks
o Heavyweight tasks execute in their own address space
o Lightweight tasks all run in the same address space – more efficient
· A task is disjoint if it does not communicate with or affect the execution of any other task in the program in any way
· Task synchronization
o A mechanism that controls the order in which tasks execute
o Two kinds of synchronization
§ Cooperation synchronization
§ Competition synchronization
o Task communication is necessary for synchronization, provided by:
§ Shared nonlocal variables
§ Parameters
§ Message passing
· Kinds of synchronization
o Cooperation: Task A must wait for task B to complete some specific activity before task A can continue its execution, e.g., the producer-consumer problem
o Competition: Two or more tasks must use some resource that cannot be simultaneously used, e.g., a shared counter (see the sketch below)
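To make the competition problem concrete, here is a minimal Java sketch (illustrative, not from the text) in which two tasks increment a shared counter with no synchronization; because counter++ is a read-add-write sequence, interleaved updates are lost.

// Illustrative sketch (not from the text): two tasks race on a shared counter.
// counter++ is three machine steps (read, add, write), so increments can be lost.
public class RaceDemo {
    static int counter = 0;  // shared, unprotected

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;  // not atomic: interleavings lose updates
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join();  b.join();
        // Expected 200000, but typically prints less without synchronization.
        System.out.println(counter);
    }
}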
· Scheduler
o Providing synchronization requires a mechanism for delaying task execution
o Task execution control is maintained by a program called the scheduler, which maps task execution onto available processors
· Task Execution States (a rough Java mapping follows this list)
o New - created but not yet started
o Ready - ready to run but not currently running (no available processor)
o Running
o Blocked - has been running, but cannot now continue (usually waiting for some event to occur)
o Dead - no longer active in any sense
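These five states map roughly onto Java’s built-in Thread.State values (an illustrative mapping; Java folds Ready and Running into a single RUNNABLE state):

// Rough mapping of the five task states onto java.lang.Thread.State:
//   New     -> NEW
//   Ready   -> RUNNABLE (Java does not distinguish ready from running)
//   Running -> RUNNABLE
//   Blocked -> BLOCKED / WAITING / TIMED_WAITING
//   Dead    -> TERMINATED
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {});
        System.out.println(t.getState()); // NEW
        t.start();
        t.join();
        System.out.println(t.getState()); // TERMINATED
    }
}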
· Liveness and Deadlock
o Liveness is a characteristic that a program unit may or may not have
§ In sequential code, it means the unit will eventually complete its execution
o In a concurrent environment, a task can easily lose its liveness
o If all tasks in a concurrent environment lose their liveness, it is called deadlock (see the sketch below)
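A minimal Java sketch of deadlock (illustrative, not from the text): each task acquires one lock and then waits forever for the lock the other task holds, so both lose their liveness.

// Illustrative deadlock: each thread acquires one lock, then blocks
// forever waiting for the lock the other thread holds.
public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                pause();                 // give the other thread time to take lockB
                synchronized (lockB) { } // never acquired: task 2 holds lockB
            }
        }).start();
        new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) { } // never acquired: task 1 holds lockA
            }
        }).start();
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }
}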
· Design issues for concurrency
o Competition and cooperation synchronization (the most important issue)
o Controlling task scheduling
o How can an application influence task scheduling
o How and when tasks start and end execution
o How and when are tasks created
· Methods of providing synchronization
o Semaphores
o Monitors
o Message passing
Semaphores
· Dijkstra – 1965
o In an effort to provide competition synchronization through mutually exclusive access to shared data structures, Edsger Dijkstra devised semaphores in 1965 (Dijkstra, 1968b). Semaphores can also be used to provide cooperation synchronization.
· A semaphore is a data structure consisting of a counter and a queue for storing task descriptors
o A task descriptor is a data structure that stores all of the relevant information about the execution state of the task
· Semaphores can be used to implement guards on the code that accesses shared data structures
· Semaphores have only two operations, wait and release (originally called P and V by Dijkstra)
· Semaphores can be used to provide both competition and cooperation synchronization
· Cooperation Synchronization with Semaphores
o Example: A shared buffer
o The buffer is implemented as an ADT with the operations DEPOSIT and FETCH as the only ways to access the buffer
o Use two semaphores for cooperation: emptyspots and fullspots
o The semaphore counters are used to store the numbers of empty spots and full spots in the buffer
o DEPOSIT must first check emptyspots to see if there is room in the buffer
§ If there is room, the counter of emptyspots is decremented and the value is inserted
§ If there is no room, the caller is stored in the queue of emptyspots
§ When DEPOSIT is finished, it must increment the counter of fullspots
o FETCH must first check fullspots to see if there is a value
§ If there is a full spot, the counter of fullspots is decremented and the value is removed
§ If there are no values in the buffer, the caller must be placed in the queue of fullspots
§ When FETCH is finished, it increments the counter of emptyspots
o The operations of FETCH and DEPOSIT on the semaphores are accomplished through two semaphore operations named wait and release
· Semaphores: Wait and Release Operations

wait(aSemaphore)
  if aSemaphore’s counter > 0 then
    decrement aSemaphore’s counter
  else
    put the caller in aSemaphore’s queue
    attempt to transfer control to a ready task
    -- if the task ready queue is empty, deadlock occurs
  end

release(aSemaphore)
  if aSemaphore’s queue is empty then
    increment aSemaphore’s counter
  else
    put the calling task in the task ready queue
    transfer control to a task from aSemaphore’s queue
  end
· Producer and consumer tasks

semaphore fullspots, emptyspots;
fullspots.count = 0;
emptyspots.count = BUFLEN;

task producer;
  loop
    -- produce VALUE --
    wait(emptyspots);    {wait for space}
    DEPOSIT(VALUE);
    release(fullspots);  {increase filled}
  end loop;
end producer;

task consumer;
  loop
    wait(fullspots);     {wait till not empty}
    FETCH(VALUE);
    release(emptyspots); {increase empty}
    -- consume VALUE --
  end loop;
end consumer;
· Competition Synchronization with Semaphores
o A third semaphore, named access, is used to control access (competition synchronization)
§ The counter of access will only have the values 0 and 1
§ Such a semaphore is called a binary semaphore
o Note that wait and release must be atomic! (A Java sketch of the full solution follows.)
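The wait/release pseudocode above maps directly onto java.util.concurrent.Semaphore, whose acquire and release operations play the roles of wait and release. Below is a hedged Java sketch of the complete solution, combining emptyspots and fullspots for cooperation synchronization with the binary access semaphore for competition synchronization; the class and method names are illustrative, and BUFLEN is assumed to be the buffer capacity.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;

// Sketch of the producer-consumer solution using java.util.concurrent.Semaphore.
// acquire() plays the role of wait, release() of release.
public class BoundedBuffer {
    static final int BUFLEN = 100;                              // assumed capacity
    static final Queue<Integer> buffer = new ArrayDeque<>();
    static final Semaphore emptyspots = new Semaphore(BUFLEN);  // counts free slots
    static final Semaphore fullspots  = new Semaphore(0);       // counts filled slots
    static final Semaphore access     = new Semaphore(1);       // binary: guards buffer

    static void deposit(int value) throws InterruptedException {
        emptyspots.acquire();   // cooperation: wait for space
        access.acquire();       // competition: exclusive access to buffer
        buffer.add(value);
        access.release();
        fullspots.release();    // cooperation: signal a filled slot
    }

    static int fetch() throws InterruptedException {
        fullspots.acquire();    // cooperation: wait until not empty
        access.acquire();       // competition: exclusive access to buffer
        int value = buffer.remove();
        access.release();
        emptyspots.release();   // cooperation: signal an empty slot
        return value;
    }
}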
· Evaluation of Semaphores
o Misuse of semaphores can cause failures in cooperation synchronization, e.g., the buffer will overflow if the wait of emptyspots is left out
o Misuse of semaphores can cause failures in competition synchronization, e.g., the program will deadlock if the release of access is left out
Monitors
· Ada, Java, C#
· The idea: encapsulate the shared data and its operations to restrict access
· A monitor is an abstract data type for shared data
· Competition Synchronization
o Shared data is resident in the monitor (rather than in the client units)
o All access is resident in the monitor
§ The monitor implementation guarantees synchronized access by allowing only one access at a time
§ Calls to monitor procedures are implicitly queued if the monitor is busy at the time of the call
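In Java, an object with synchronized methods is effectively a monitor: the shared data is resident in the object, only one thread can be inside any synchronized method at a time, and competing callers are implicitly queued on the object’s lock. A minimal sketch (illustrative, not from the text):

// A monitor in Java: shared data lives inside the object, and synchronized
// methods guarantee that only one thread accesses it at a time. Competing
// callers are implicitly queued on the object's lock.
public class CounterMonitor {
    private int count = 0;  // shared data, resident in the monitor

    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}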
Message Passing
· Message passing is a general model for concurrency
o It can model both semaphores and monitors
o It is not just for competition synchronization
· Central idea: task communication is like seeing a doctor – most of the time she waits for you or you wait for her, but when you are both ready, you get together, or rendezvous
· Rendezvous Time Lines
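In Java, a java.util.concurrent.SynchronousQueue gives a rendezvous-like handoff: put blocks until another task calls take, and vice versa, so whichever party arrives first waits for the other, just as in the doctor analogy. A hedged sketch:

import java.util.concurrent.SynchronousQueue;

// Sketch of a rendezvous using SynchronousQueue: put() blocks until another
// task calls take(), and vice versa, so sender and receiver meet in time.
public class RendezvousDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> channel = new SynchronousQueue<>();

        new Thread(() -> {
            try {
                channel.put("message"); // waits here until the receiver is ready
            } catch (InterruptedException e) { }
        }).start();

        Thread.sleep(500);              // the sender waits for us meanwhile
        String msg = channel.take();    // rendezvous: the transfer happens now
        System.out.println(msg);
    }
}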