Lecture Notes for Client-Server Interfaces

22 January 2002 - Distributed Computing


  1. distributed computing - independent computations coordinated to solve a problem

    1. independent - no resource sharing; limited communications

    2. computation - processes, threads

    3. coordinated - communications, either data or control or both

    4. graph representations - nodes as computation; arcs as communication; n-to-m patterns
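The graph view above can be made concrete as an adjacency list: nodes are computations, arcs are communication links, and n-to-m patterns fall out as node degrees. A minimal sketch; the node names ("master", "worker1", ...) and helper functions are illustrative, not from the lecture.

```python
# Graph representation of a distributed computation:
# nodes = computations (processes/threads), arcs = communications.
graph = {
    "master":  ["worker1", "worker2", "worker3"],  # one-to-many pattern
    "worker1": ["master"],
    "worker2": ["master"],
    "worker3": ["master"],
}

def arcs(g):
    """Enumerate communication arcs as (sender, receiver) pairs."""
    return [(src, dst) for src, dsts in g.items() for dst in dsts]

def fan_out(g, node):
    """Out-degree of a node; n-to-m patterns show up here."""
    return len(g.get(node, []))
```

For example, `fan_out(graph, "master")` is 3, reflecting the one-to-many master pattern.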

  2. why do it

    1. because we can - cheap hardware and widespread communication at low cost

    2. increased reliability - replication, redundancy

    3. increased autonomy - local control over local resources

    4. increased performance - unlimited resources, parallelism

    5. high modularity - upgrades and maintenance

  3. why not do it

    1. it's hard - most of the features above are wishful thinking

    2. failure modes - many and complex; can easily drive programs into unexpected failures

    3. variable and unpredictable environment

      1. detecting problems - down or slow

      2. compensating for problems - partitioning

      3. recovering from problems - merging independent executions
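Point 1 above (detecting whether a remote node is down or merely slow) is commonly attempted with connection timeouts. A hedged sketch, assuming TCP and a hypothetical `probe` helper: a refused connection suggests the service is down, while a timeout is ambiguous between slow and partitioned, which is exactly the difficulty the notes describe.

```python
import socket

def probe(host, port, timeout=1.0):
    """Hypothetical health check: try to connect within `timeout`
    seconds.  Refused => down; timed out => slow or partitioned
    (the caller cannot distinguish which)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "up"
    except socket.timeout:
        return "slow or unreachable"
    except ConnectionRefusedError:
        return "down"
    finally:
        s.close()
```

Note the fundamental ambiguity: a timeout never proves the peer failed, only that no answer arrived in time.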

    4. conceptually undernourished

      1. rpc is an early 80s idea

      2. language support is minimal - type safety, versioning

      3. os support is minimal - coordination toolkits

    5. performance - latency and load balancing

    6. unhelpful standards

      1. true standards are low level - tcp/ip

      2. higher level standards are quasi-standards, pragmatic rather than principled - corba, xml-soap

  4. how to do it

    1. master-slave - simple communications patterns; one-to-many

    2. peer-peer - highly distributed; many-to-many

    3. client-server - conceptually simple; one-to-one

    4. publish-subscribe - simple and dynamic; many-to-many
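The client-server pattern (point 3) can be sketched in a few lines of socket code: one server handles one client's request and replies, a one-to-one exchange. A minimal illustrative sketch, assuming TCP on localhost and an echo protocol; the names `echo_server` and `request` are mine, not the lecture's.

```python
import socket
import threading

def echo_server(listener):
    """Server side: accept one client, echo one request (one-to-one)."""
    conn, _ = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

def request(port, payload):
    """Client side: send a request, block until the reply arrives."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(payload)
        return c.recv(1024)

# Listen on an OS-assigned port so the example is self-contained.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=echo_server, args=(listener,))
t.start()
reply = request(port, b"hello")
t.join()
listener.close()
```

Master-slave and publish-subscribe differ mainly in fan-out: the same accept/recv/send machinery, but one node talking to many, or many to many through a broker.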


This page last modified on 18 January 2002.