A stateful server keeps per-connection, and perhaps per-client, information as part of its state. The general approach to turning a stateful server into a stateless one is to off-load the per-connection information to some other location.
The most obvious location to use is the client: the state associated with a client can be packaged up and sent back to the client as a handle or ticket for the service. There are other possibilities, such as storing state in a file or database at the server's host, but the need to identify a client with some file or record may lead to subtle reappearances of state in the server.
XDR's opaque data type would help convert a stateful server to a stateless one by allowing the state to be packaged up as a handle or ticket and sent back to the client. Only the server needs to be aware of the handle's structure, so treating it as an opaque data type lets the server change the handle unilaterally, as well as providing simpler and faster presentation processing and a limited amount of security.
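To make the idea concrete, here's a rough C++ sketch of such a handle; the Handle type, pack_state(), unpack_state(), and the choice of a single read offset as the per-client state are all invented for the example. Because the same server both packs and unpacks the bytes, no byte-order adjustment is needed.

    // A rough sketch of the handle idea; Handle, pack_state(), and
    // unpack_state() are made-up names, and the per-client state is
    // reduced to a single read offset for the example.
    #include <cstring>
    #include <stdint.h>

    struct Handle { unsigned char bytes[8]; };  // opaque to the client

    // Server side: package the per-client state into a handle that is
    // returned to the client with each reply.
    Handle pack_state(uint64_t offset) {
      Handle h;
      std::memcpy(h.bytes, &offset, sizeof offset);  // layout known only to the server
      return h;
    }

    // Server side: recover the state from the handle the client sent
    // back with its request.
    uint64_t unpack_state(const Handle & h) {
      uint64_t offset;
      std::memcpy(&offset, h.bytes, sizeof offset);
      return offset;
    }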
select() or poll(). You need not go into system-call details; just give the general principles.
To start, you can either use concurrency or not. Using concurrency is trivial: just create a computation (a process or thread) per input socket.
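As a rough sketch of the concurrent approach (listen_fd is assumed to be a bound, listening TCP socket, and the echo loop stands in for real per-client work):

    // One thread per connection: the accept loop hands each new socket
    // to its own computation and goes right back to accepting.
    #include <sys/socket.h>
    #include <unistd.h>
    #include <thread>

    void serve_client(int fd) {
      char buf[512];
      ssize_t n;
      while ((n = read(fd, buf, sizeof buf)) > 0)
        write(fd, buf, n);                       // echo the input back
      close(fd);
    }

    void accept_loop(int listen_fd) {
      for (;;) {
        int fd = accept(listen_fd, 0, 0);
        if (fd < 0)
          continue;
        std::thread(serve_client, fd).detach();  // one computation per socket
      }
    }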
For non-concurrent servers, polling and interrupts are the two general
approaches that allow a system to become aware of outside activity. To
implement polling you would need a non-blocking mechanism to test for the
presence of input at a socket. One such mechanism usually provided by operating systems is a predicate, such as io-ready(f), that returns true if the file descriptor f has data ready to read and false otherwise.
Another mechanism is asynchronous (non-blocking) io; a file descriptor opened for asynchronous io would, when read, return data if there was any to read and an error indication otherwise.
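On POSIX systems, both the predicate and the non-blocking read can be approximated with the O_NONBLOCK flag, which makes read() fail with EAGAIN instead of blocking. A rough sketch (make_nonblocking() and try_read() are made-up helpers):

    // O_NONBLOCK turns a blocking read() into a test-and-read: the call
    // returns -1 with errno EAGAIN when no input is pending.
    #include <fcntl.h>
    #include <unistd.h>
    #include <errno.h>

    void make_nonblocking(int fd) {
      fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    }

    // Returns the byte count read, 0 on end-of-file, or -1 if no data
    // is ready yet (the polling loop just moves on to the next socket).
    ssize_t try_read(int fd, char * buf, size_t len) {
      ssize_t n = read(fd, buf, len);
      if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return -1;  // no input pending; not an error
      return n;
    }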
Operating systems also usually support some form of io interrupts, although generally with respect to multi-threaded computations. An alternative approach, cheaper in resource use but not in complexity, is to implement a quasi-polling, quasi-interrupt scheme based on regular clock signals: the single-threaded computation would either commit to a synchronous read on a file descriptor, to be interrupted some time later by an alarm signal, or respond to an alarm signal by asynchronously reading a file descriptor.
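Here's a rough sketch of the first variant under POSIX signals; timed_read() is a made-up name, and the scheme relies on a blocking read() failing with EINTR when the alarm signal arrives (note that SA_RESTART is deliberately not set):

    // A blocking read() cut short by SIGALRM, so one quiet descriptor
    // can't stall the whole single-threaded server.
    #include <unistd.h>
    #include <signal.h>
    #include <errno.h>

    static void on_alarm(int) { }  // the handler's existence alone interrupts the read

    ssize_t timed_read(int fd, char * buf, size_t len, unsigned seconds) {
      struct sigaction sa = {};
      sa.sa_handler = on_alarm;    // no SA_RESTART, so read() fails
      sigaction(SIGALRM, &sa, 0);  // with EINTR when the alarm fires
      alarm(seconds);
      ssize_t n = read(fd, buf, len);
      alarm(0);                    // cancel the alarm if the read finished
      if (n < 0 && errno == EINTR)
        return -1;                 // timed out: no data within the deadline
      return n;
    }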
True, for the most part. An attack on a concurrent server turns into an attack on an individual worker thread. As long as the listener thread remains in the clear (which is likely given that the listener thread doesn't deal directly with clients), it is available to initiate responses to further requests. It can also monitor worker threads' progress and deal with those worker threads that appear to be stuck or otherwise disabled. By using throttling or high- and low-water marks, a listener thread can also deal with flooding attacks, although a flooding attack of any significant size will quickly overwhelm not only the server but also the host on which it runs.
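As a rough sketch of the water-mark idea (the mark values are arbitrary, and should_accept() is a made-up name), the listener might consult something like this before each accept(), with the worker threads maintaining the count:

    // High- and low-water-mark throttling with hysteresis: stop
    // accepting when flooded, resume only after the load drains.
    #include <atomic>

    const int kHighWater = 200;   // stop accepting above this load
    const int kLowWater  = 150;   // resume once back down to here

    std::atomic<int> workers(0);  // maintained by the worker threads
    bool accepting = true;        // listener-private flag

    bool should_accept() {
      int n = workers.load();
      if (accepting && n >= kHighWater)
        accepting = false;        // flooded: shed new connections
      else if (!accepting && n <= kLowWater)
        accepting = true;         // drained: take connections again
      return accepting;
    }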
Big-endian machines store the most-significant byte in the left-most (lowest address) byte of a multi-byte location; little-endian machines store the least-significant byte in the left-most byte.
Endian-ness-testing code would store a value with known most- and least-significant byte values into a multi-byte location and then examine the most- or least-significant byte of the location to determine its value. The tricky part is avoiding the automatic endian adjustment performed by most multi-byte value operations. The easiest way to avoid adjustment is to treat the multi-byte location as an array of bytes; then each value is one byte and no endian adjustment is necessary (or possible).
Here's some C++ code that compiles and runs the test on a Sparc architecture (the byte values represent the powers of 256 when the entire value is interpreted as a number in the base-256 number system):

    $ cat t.cc
    #include <stdlib.h>
    #include <stdio.h>

    static void Short(void) {
      short i = 0x0100;
      char * bytes = reinterpret_cast<char *>(&i);
      printf("i = %02x %02x.\n", bytes[0], bytes[1]);
      if (bytes[0] == 1)
        printf("It's a big-endian machine.\n");
      else
        printf("It's a little-endian machine.\n");
    }

    static void Int(void) {
      int i = 0x03020100;
      char * bytes = reinterpret_cast<char *>(&i);
      printf("i = %02x %02x %02x %02x.\n",
             bytes[0], bytes[1], bytes[2], bytes[3]);
      if (bytes[0] == 3)
        printf("It's a big-endian machine.\n");
      else
        printf("It's a little-endian machine.\n");
    }

    static void Long(void) {
      long long i = 0x0706050403020100LL;
      char * bytes = reinterpret_cast<char *>(&i);
      printf("i = %02x %02x %02x %02x %02x %02x %02x %02x.\n",
             bytes[0], bytes[1], bytes[2], bytes[3],
             bytes[4], bytes[5], bytes[6], bytes[7]);
      if (bytes[0] == 7)
        printf("It's a big-endian machine.\n");
      else
        printf("It's a little-endian machine.\n");
    }

    int main(void) {
      Short();
      Int();
      Long();
      return EXIT_SUCCESS;
    }

    $ CC -o t t.cc
    $ ./t
    i = 01 00.
    It's a big-endian machine.
    i = 03 02 01 00.
    It's a big-endian machine.
    i = 07 06 05 04 03 02 01 00.
    It's a big-endian machine.
    $ uname -p
    sparc
    $

Here's the same test compiled and run on an Intel architecture:

    $ g++ -o t t.cc
    $ ./t
    i = 00 01.
    It's a little-endian machine.
    i = 00 01 02 03.
    It's a little-endian machine.
    i = 00 01 02 03 04 05 06 07.
    It's a little-endian machine.
    $ uname -m
    i686
    $

A couple of people remembered that TCP/IP defines network-byte order as big-endian and used htonl() to figure the endian-ness:

    #include <iostream>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <netinet/in.h>
    #include <inttypes.h>

    int main(void) {
      std::cout << "It's a "
                << (1 == htonl(1) ? "big" : "little")
                << "-endian machine.\n";
      return EXIT_SUCCESS;
    }

This question makes no sense for Java because the Java Virtual Machine is defined to be big-endian no matter where it runs (although it would be interesting to see what happens when Java is compiled to run native). It also makes no sense for PowerPCs, because they can run as either big- or little-endian architectures (that's right: PowerPCs are bi-endian).
This page last modified on 11 March 2001.