- An early design decision, expensive to redo once made.
- Influences design, implementation, and performance.
- Depends on expected load, software, hardware, and networking
technologies - all only temporarily stable.
- Concurrency can improve availability, performance, and security.
- Delay decisions as long as possible.
- No decision is final.
- How many computations should there be?
- A constant or a variable number of computations?  Long-term and extreme
behavior is important.
- Unbounded concurrency is still bounded by other resources - connections,
processes, CPU cycles (see the resource-limit sketch below).
- Banging up against resource limits can be rude and dangerous, or it
can be a design feature.
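
A minimal sketch, assuming a POSIX system, of inspecting two of the limits
that bound "unbounded" concurrency in practice; RLIMIT_NPROC is not defined
everywhere, so it is guarded:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit fds;

        /* Descriptors bound the number of simultaneous connections. */
        getrlimit(RLIMIT_NOFILE, &fds);
        printf("descriptors: soft %llu, hard %llu\n",
               (unsigned long long)fds.rlim_cur,
               (unsigned long long)fds.rlim_max);

    #ifdef RLIMIT_NPROC
        /* Processes bound the number of worker computations. */
        {
            struct rlimit procs;
            getrlimit(RLIMIT_NPROC, &procs);
            printf("processes:   soft %llu, hard %llu\n",
                   (unsigned long long)procs.rlim_cur,
                   (unsigned long long)procs.rlim_max);
        }
    #endif
        return 0;
    }
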
- Consider the longest possible delay a request might see.
- Creating and deleting new computations has a cost (see the
fork-per-connection sketch below).
- Computation management should not take longer than processing the request
itself.
- Periods of high demand magnify these costs.
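
A minimal sketch of the variable-number approach, assuming an arbitrary port
and a one-block echo standing in for real request processing: the server
creates one computation per request by forking on each accepted connection,
and absorbs fork failure at a resource limit by serving the request inline
rather than refusing it.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define PORT 9000                     /* assumption: any free port */

    static void serve(int fd)             /* placeholder for real request code */
    {
        char buf[512];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            write(fd, buf, (size_t)n);    /* echo one block and quit */
    }

    static void reap(int sig)             /* delete finished computations */
    {
        (void)sig;
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;
    }

    int main(void)
    {
        struct sockaddr_in addr;
        int lfd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(PORT);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 16);
        signal(SIGCHLD, reap);

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0)
                continue;                 /* e.g. interrupted by SIGCHLD */
            switch (fork()) {
            case -1:                      /* at a resource limit: serve inline */
                serve(cfd);
                break;
            case 0:                       /* child: one computation per request */
                close(lfd);
                serve(cfd);
                _exit(0);
            default:                      /* parent: straight back to accept() */
                break;
            }
            close(cfd);                   /* parent's (or inline) copy */
        }
    }

The SIGCHLD handler is what deletes the computations; without it, finished
children accumulate as zombies.
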
- Fixed number of computations.
- Small, constant overhead amortized over many requests.
- Some coordination overhead between listener and workers.
- Long-lived computations are less forgiving than short-lived ones; storage
leaks, for example, accumulate over time.
- Listener-worker coordination.
- The listener forks workers that share the listen socket; accepting on the
shared socket needs mutual exclusion; the listener itself doesn't do much
(see the preforking sketch below).
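
A minimal sketch of preallocation with a shared listen socket, assuming an
arbitrary port, a pool of four workers, and the same placeholder echo
service: the listener pays the creation cost once, up front, and afterwards
does little; each worker loops in accept().  This version assumes the system
serializes simultaneous accept() calls on one socket; the next sketch handles
the case where it does not.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define PORT     9000                 /* assumption: any free port */
    #define NWORKERS 4                    /* assumption: fixed pool size */

    static void serve(int fd)             /* placeholder for real request code */
    {
        char buf[512];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            write(fd, buf, (size_t)n);
    }

    static void worker(int lfd)           /* long-lived; creation cost paid once */
    {
        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0)
                continue;
            serve(cfd);
            close(cfd);
        }
    }

    int main(void)
    {
        struct sockaddr_in addr;
        int i, lfd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(PORT);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 16);

        for (i = 0; i < NWORKERS; i++)    /* the whole pool, created up front */
            if (fork() == 0) {
                worker(lfd);              /* never returns */
                _exit(0);
            }
        for (;;)                          /* the listener doesn't do much */
            wait(NULL);
    }
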
- Trickier without listen-socket mutex semantics: implement the mutual
exclusion explicitly (see the locking sketch below), or have the listener
accept connections and broker them to the workers.
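
A minimal sketch of the explicit mutual exclusion, assuming an agreed-upon
lock file and the worker structure of the previous sketch: an fcntl() record
lock brackets each worker's accept(), so only one worker is in accept() at a
time.  The listener would call accept_lock_init() once before forking the
pool.  The brokering alternative, where the listener accepts and passes
descriptors to the workers, is not shown.

    #include <fcntl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int lockfd;                    /* opened once, shared by the workers */

    static void accept_lock_init(const char *path)  /* e.g. a file under /tmp */
    {
        lockfd = open(path, O_CREAT | O_WRONLY, 0600);
    }

    static void accept_lock(void)         /* block until we own the lock */
    {
        struct flock fl = { 0 };
        fl.l_type = F_WRLCK;              /* exclusive */
        fl.l_whence = SEEK_SET;           /* l_start = l_len = 0: whole file */
        fcntl(lockfd, F_SETLKW, &fl);
    }

    static void accept_unlock(void)
    {
        struct flock fl = { 0 };
        fl.l_type = F_UNLCK;
        fl.l_whence = SEEK_SET;
        fcntl(lockfd, F_SETLK, &fl);
    }

    /* Worker loop: the lock brackets only accept(), never the service itself,
     * so workers still serve requests in parallel. */
    static void worker(int lfd)
    {
        for (;;) {
            int cfd;

            accept_lock();
            cfd = accept(lfd, NULL, NULL);
            accept_unlock();
            if (cfd < 0)
                continue;
            /* serve(cfd); as in the preforking sketch */
            close(cfd);
        }
    }
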
- Connectionless preallocation is similar to the connection-oriented case; it
is useful for improving best-effort datagram services (see the datagram
sketch below).
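
A minimal sketch of connectionless preallocation, assuming an arbitrary port,
a pool of four workers, and echo as the best-effort service: the parent binds
one datagram socket and forks the pool, each worker blocks in recvfrom() on
the shared socket, and each datagram is delivered to exactly one of them.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define PORT     9001                 /* assumption: any free port */
    #define NWORKERS 4                    /* assumption: fixed pool size */

    static void worker(int fd)            /* all workers share one bound socket */
    {
        char buf[1024];
        struct sockaddr_in peer;
        socklen_t plen;

        for (;;) {
            plen = sizeof peer;
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                                 (struct sockaddr *)&peer, &plen);
            if (n < 0)
                continue;
            /* best-effort service: echo the datagram back to its sender */
            sendto(fd, buf, (size_t)n, 0, (struct sockaddr *)&peer, plen);
        }
    }

    int main(void)
    {
        struct sockaddr_in addr;
        int i, fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(PORT);
        bind(fd, (struct sockaddr *)&addr, sizeof addr);

        for (i = 0; i < NWORKERS; i++)
            if (fork() == 0) {
                worker(fd);               /* never returns */
                _exit(0);
            }
        for (;;)
            wait(NULL);
    }
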
- Pre-allocation and true concurrency eliminate the scheduling overhead.
- Handle variable requests.
- Service time may be non-constant and difficult to determine in advance.
- Iteration needs short service times, concurrency needs long ones - any
particular request may be either.
- Start iteratively, then go concurrent after some period of time (see the
hybrid sketch below).
- This can be expensive, and it can be tricky.
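
A minimal sketch of starting iteratively and going concurrent later, assuming
a 50 ms inline budget and a chunked placeholder handler: each connection is
served inline by the listener, but if the work outlives the budget the
listener forks, the child finishes the long request, and the parent goes
straight back to accepting.  It would be called from an accept loop like the
one in the fork-per-connection sketch, as serve_hybrid(lfd, cfd) followed by
close(cfd) in the caller.

    #include <stdbool.h>
    #include <time.h>
    #include <unistd.h>

    #define BUDGET_MS 50L                 /* assumption: inline time budget */

    static long now_ms(void)              /* monotonic clock in milliseconds */
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
    }

    /* Placeholder for real processing: do one bounded piece of work and report
     * whether the request is finished (assumption - here, copy one block). */
    static bool serve_one_chunk(int cfd)
    {
        char buf[256];
        ssize_t n = read(cfd, buf, sizeof buf);
        if (n > 0)
            write(cfd, buf, (size_t)n);
        return n <= 0;                    /* done at end of input (or error) */
    }

    /* Serve inline; fork if the request outlives the budget.  The parent
     * returns once the request is done or handed off; the child exits when it
     * finishes.  The caller closes cfd after this returns. */
    static void serve_hybrid(int lfd, int cfd)
    {
        long start = now_ms();
        bool forked = false;

        while (!serve_one_chunk(cfd)) {
            if (!forked && now_ms() - start > BUDGET_MS) {
                switch (fork()) {
                case 0:                   /* child: continue, now concurrently */
                    close(lfd);
                    forked = true;
                    break;
                case -1:                  /* cannot fork now: stay iterative */
                    break;
                default:                  /* parent: request handed off */
                    return;
                }
            }
        }
        if (forked)
            _exit(0);                     /* child finished the long request */
    }
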
- Recover resources; refresh workers (see the recycling sketch below).
- Do so as a function of idle time or time of day.
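
A minimal sketch of refreshing workers, assuming a per-worker request cap, an
idle timeout, and the serve() and socket setup of the preforking sketch: a
worker exits after serving a fixed number of requests or after sitting idle
too long, which hands any leaked storage back to the system, and the listener
forks a replacement for each worker it reaps.  A time-of-day policy would
hang off the same tests.

    #include <fcntl.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define MAX_REQUESTS 500              /* assumption: recycle after this many */
    #define IDLE_SECONDS 300              /* assumption: or after 5 idle minutes */

    static void worker(int lfd)           /* serve() as in the preforking sketch */
    {
        int served = 0;

        while (served < MAX_REQUESTS) {
            fd_set rfds;
            struct timeval tv = { IDLE_SECONDS, 0 };
            int cfd;

            FD_ZERO(&rfds);
            FD_SET(lfd, &rfds);
            if (select(lfd + 1, &rfds, NULL, NULL, &tv) <= 0)
                break;                    /* idle too long (or error): recycle */

            cfd = accept(lfd, NULL, NULL);
            if (cfd < 0)                  /* another worker won the race */
                continue;
            /* serve(cfd); */
            close(cfd);
            served++;
        }
        _exit(0);                         /* exiting returns any leaked storage */
    }

    static void listener(int lfd, int nworkers)
    {
        int i;

        /* Non-blocking accept lets a worker that loses the race retry instead
         * of blocking past its idle timeout. */
        fcntl(lfd, F_SETFL, O_NONBLOCK);

        for (i = 0; i < nworkers; i++)    /* initial pool */
            if (fork() == 0)
                worker(lfd);

        for (;;)                          /* replace each worker that recycles */
            if (wait(NULL) > 0 && fork() == 0)
                worker(lfd);
    }
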