- Scheduling's a lot of work. What's the payoff?
- The virtual machine abstraction.
- Is that all? Hardware's cheap.
- Got a program to run? Buy a machine.
- The Internet appliance.
- This works well for programs doing one thing.
- Embedded systems in your car.
- Many programs can't get by doing just one thing.
- On a network? Here come some packets!
- Writing one program to do everything isn't practical.
- It makes the program much more complex.
- It duplicates efforts among programs.
- Better to factor tasks into separate programs.
- Well, I can factor the extra tasks into libraries.
- And load only what I need.
- True, but you still have to multiplex the CPU among the tasks.
- And here comes scheduling.
- Serial algorithms structure computations as a single program.
- The result is a serial (or sequential) computation.
- Concurrent algorithms structure computations as multiple programs.
- The result is a concurrent (or parallel or distributed) computation.
The coordinated execution of multiple processes allows for
- Easier design, with large grain and narrow interfaces.
- Better performance, with no waiting and overlapping execution.
- Increased reliability; failure isn't final; easy replication.
- Concurrent computations allow for easier design, with large grain and narrow interfaces.
- Consider your basic read-process-write loop.
- Each stage can be designed separately.
- Concurrent computations can have better performance than sequential ones.
- Concurrency eliminates waiting and overlaps execution.
- Concurrency provides increased reliability.
- Easy replication reduces the effects of failure.
- If one process fails, replace it with a copy.
- Concurrent computations happen "at the same time".
- Multiprocessing provides simulated concurrency.
- Computations seem to be occurring at the same time.
- Multiprocessors provide true concurrency.
- Computations are occurring at the same time.
- How are concurrent computations organized?
- Symmetric coupling.
- Loose coupling.
- The boss-worker architecture is straightforward.
- Coordination is simple but possibly inefficient.
- Reliability is fair to poor.
- Performance is dependent on the task.
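The boss-worker architecture can be sketched in a few lines; the function name `boss_worker`, the squaring "work", and the worker count below are illustrative, not from these notes.

```python
import threading
import queue

def boss_worker(tasks, n_workers=3):
    """Boss puts tasks on a shared queue; workers pull and process them.
    The squaring is a stand-in for real work."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            item = q.get()
            if item is None:          # sentinel: no more work
                return
            with lock:
                results.append(item * item)

    workers = [threading.Thread(target=worker) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for t in tasks:                   # the boss hands out work
        q.put(t)
    for _ in workers:                 # one sentinel per worker
        q.put(None)
    for w in workers:
        w.join()
    return sorted(results)
```

The single shared queue is what makes coordination simple, and also why it can be inefficient: every worker contends for the same queue.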
- Symmetric-coupled systems are fully connected.
- Coordination is similar to loosely-coupled systems; bandwidth is higher.
- Reliability is good, and is easier to achieve.
- Performance is less dependent on the task.
- Loosely-coupled systems are also known as client-server systems.
- Coordination is somewhat complicated but more efficient.
- Reliability is good, but may be hard to achieve.
- Performance depends on task.
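A minimal client-server exchange can be sketched over a local socket; the echo "service" and the name `serve_once` are invented for illustration.

```python
import socket
import threading

def serve_once():
    """One loosely-coupled request-reply exchange: a client sends a
    request over a socket and a server thread replies."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))            # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]

    def server():
        conn, _ = srv.accept()
        request = conn.recv(64)
        conn.sendall(b"echo:" + request)  # the service: echo the request
        conn.close()
        srv.close()

    t = threading.Thread(target=server)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(b"hi")
    chunks = []
    while (part := cli.recv(64)):         # read until the server closes
        chunks.append(part)
    cli.close()
    t.join()
    return b"".join(chunks)
```

The narrow interface (a request-reply protocol) is the source of both the extra coordination cost and the good modularity.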
- What's the problem?
- Consider the editor example.
- Unmediated access to shared information.
- What's the solution?
- Control access to shared information.
- Mutual exclusion - at most one process accessing shared data at a time.
- Critical regions - an implementation of mutual exclusion.
- Simple-minded boolean flags.
- Test and set.
- A hardware solution - almost universally implemented.
- Inefficient due to spin waiting.
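A test-and-set spinlock can be sketched as follows; since Python exposes no hardware test-and-set, an internal lock stands in for the atomic exchange, and the names (`SpinLock`, `spin_demo`) are made up here.

```python
import threading

class SpinLock:
    """Spinlock built on test-and-set. The atomic exchange is emulated
    with a small internal lock; real hardware does it in one instruction."""
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def _test_and_set(self):
        with self._atomic:                # atomically: old = flag; flag = True
            old, self._flag = self._flag, True
            return old

    def acquire(self):
        while self._test_and_set():       # spin waiting: burns CPU while held
            pass

    def release(self):
        self._flag = False

def spin_demo(n_threads=2, iters=1000):
    """Threads bump a shared counter inside the critical region."""
    lock, count = SpinLock(), [0]

    def bump():
        for _ in range(iters):
            lock.acquire()
            count[0] += 1                 # the critical region
            lock.release()

    threads = [threading.Thread(target=bump) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count[0]
```

The `while ... pass` loop is exactly the spin waiting the notes call inefficient: a waiting thread keeps the CPU busy doing nothing useful.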
- Wait and signal.
- Signals aren't sticky - like phones without answering machines.
Patterns of Process Cooperation
- One-to-one, n-to-m.
- One-way flow of information.
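One-way, one-to-one flow is the classic producer-consumer pipeline; a sketch with a bounded buffer follows (the `+ 1` "processing" and the name `pipeline` are stand-ins).

```python
import threading
import queue

def pipeline(items, maxsize=2):
    """One-way flow: producer -> bounded buffer -> consumer."""
    buf = queue.Queue(maxsize=maxsize)   # bounded buffer between the two
    out = []

    def producer():
        for x in items:
            buf.put(x)                   # blocks when the buffer is full
        buf.put(None)                    # end-of-stream marker

    def consumer():
        while (x := buf.get()) is not None:
            out.append(x + 1)            # stand-in processing

    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer)
    p.start(); c.start()
    p.join(); c.join()
    return out
```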
- A standard database problem.
- Readers read, writers write.
- Writers need exclusive access; readers can share.
- No unnecessary delay, timely updates, no starvation.
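A read-write lock can be sketched with a condition variable; this minimal version gives readers shared access and writers exclusive access, but it does not add the starvation protection the full problem statement asks for.

```python
import threading

class RWLock:
    """Readers share; a writer needs exclusive access."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writing = False

    def acquire_read(self):
        with self._cond:
            while self._writing:          # readers wait only for a writer
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writing or self._readers:  # exclusive access
                self._cond.wait()
            self._writing = True

    def release_write(self):
        with self._cond:
            self._writing = False
            self._cond.notify_all()
```

As written, a steady stream of readers can starve a writer; fixing that is the interesting part of the exercise.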
- Barrier synchronization.
- Multi-phase tasks; each phase must be finished before the next starts.
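A barrier-synchronized multi-phase task can be sketched with Python's `threading.Barrier`; the thread and phase counts, and the name `run_phases`, are illustrative.

```python
import threading

def run_phases(n_threads=3, n_phases=2):
    """Each phase must finish in every thread before any thread
    starts the next phase."""
    barrier = threading.Barrier(n_threads)
    log = []
    lock = threading.Lock()

    def worker(tid):
        for phase in range(n_phases):
            with lock:
                log.append((phase, tid))  # this phase's "work"
            barrier.wait()                # wait for everyone to finish it

    ts = [threading.Thread(target=worker, args=(i,))
          for i in range(n_threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return log
```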
- Pure synchronization - the sleeping barber.
- Essentially one-bit communication - stop-go.
- No data are transferred, just state (which is data in some sense).
- General sharing - the dining philosophers.
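One standard way to keep the dining philosophers deadlock-free is a global ordering on the forks: every philosopher picks up the lower-numbered fork first. A sketch (the name `dine` and the counts are invented):

```python
import threading

def dine(n=5, rounds=3):
    """Dining philosophers; deadlock is avoided because all threads
    acquire the two forks in the same global order."""
    forks = [threading.Lock() for _ in range(n)]
    meals = [0] * n

    def philosopher(i):
        left, right = i, (i + 1) % n
        first, second = min(left, right), max(left, right)  # global order
        for _ in range(rounds):
            with forks[first]:
                with forks[second]:
                    meals[i] += 1         # eat

    ts = [threading.Thread(target=philosopher, args=(i,)) for i in range(n)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return meals
```

Without the ordering, all five philosophers can grab their left fork simultaneously and wait forever for the right one.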
Concurrency Mechanisms in Programming Languages
- Abstracting operating-system features in a language, usually at a higher level.
- Threads and monitors (sorta).
This page last modified on 14 November 2004.