R. Clayton (rclayton@monmouth.edu)
(no date)
What did you do to get the simulator to spit out the extra information, like
"noop should be this, was actually this", on the sheets we picked up in
class?
The output on the assignment evaluations is produced by my testing scripts;
these are gnarled tcl-shell programs, which are not easily made available in
a convenient-to-run form. I'll see if I can cook up simpler versions. In
the meantime, here are the instructions on how to generate the evaluations
by hand.
Here's an example of an output from an assignment test:
Testing pa1-tree.dsk.
25 total time is 5% of 455.
22 idle time is 42% of 52.
sub is 0, should be 6
The output has two parts: the time part and the instruction-count part. Of
the two, the instruction-count part is the more important, although - as in
this case - you should also pay some attention to the timing part.
The instruction-count part compares the number of times processes on a batch
disk executed various assembly instructions with the expected number of
times those instructions should have been executed. In the example
above, the programs on the pa1-tree disk didn't execute any subtraction
instructions, when the sub instruction should have been executed a total of
six times. This clearly indicates something is going wrong somewhere
(probably in the os).
You can get the instruction counts by giving the ins-cnt flag to the
simulator's -D command line option; see the Running the Simulator section of
the architecture-simulator page.
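At bottom, the testing scripts just compare the per-instruction counts
reported by the simulator against a table of expected counts for the disk.
Here's a minimal sketch of that comparison in Python; the expected-count
table and the parsed-counts dictionary are made-up illustrations, not the
real known values for any disk:

```python
# Sketch of an instruction-count check like the one the grading scripts
# perform.  The expected counts below are hypothetical; the real known
# counts vary from disk to disk.

EXPECTED = {"sub": 6, "noop": 12}  # hypothetical known counts for a disk

def check_counts(observed):
    """Compare observed instruction counts (a dict mapping instruction
    name to count) against the expected table; return a list of
    mismatch messages in the evaluation-output style."""
    problems = []
    for ins, want in EXPECTED.items():
        got = observed.get(ins, 0)
        if got != want:
            problems.append(f"{ins} is {got}, should be {want}")
    return problems

if __name__ == "__main__":
    # Counts as they might be parsed from the simulator's ins-cnt
    # output for a buggy os: no subtractions were executed.
    observed = {"sub": 0, "noop": 12, "add": 400}
    for msg in check_counts(observed):
        print(msg)
```

Run against the hypothetical counts above, this prints the same kind of
"sub is 0, should be 6" line shown in the example evaluation.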
You should be careful when interpreting the instruction counts for a batch
disk run; only a few instructions in each batch disk have constant, known
counts, and the instructions and counts vary from disk to disk. For
example, the idle process uses both the add and branch instructions, so the
counts for those two instructions tell you nothing because different os
implementations may legitimately run the idle process for different amounts
of time (of course, if the add and branch instructions were executed zero
times, then you can safely conclude that something's wrong somewhere because
the idle process has to run for any disk).
Those assignments with incorrect instruction counts will have a list of the
known instructions and counts for each test that failed. This list may not
be the complete set of known instructions and counts for the disk because
only the incorrect instruction counts are shown (on the other hand, it's most
likely that a mistake that causes a problem in one instruction count for a
disk will cause problems for all known instruction counts for that disk).
Those with correct assignments who want to worry excessively should seek out
their colleagues with incorrect assignments and ask to see the counts. Be
careful when comparing the instruction counts between your assignment and
mine; unless you know the known instruction-count values for each disk, you
could be led down a path that will waste huge amounts of your time.
The time part is a comparison between your assignment's times and mine. The
total and idle time message is automatically produced by the simulator just
before it exits. To compare, you'll have to run my assignment (I wouldn't
trust the timing values given in the assignment pages).
The same warnings that applied to the add and branch instructions apply to
the time part: different operating systems may legitimately run for different
times, and so differences in the timings may not indicate problems. However,
as always, significant deviations may indicate significant problems,
particularly if the deviations are towards zero. In the example above,
running for only 25 ticks in contrast to the 455 ticks used by my assignment
is a definite indicator of problems. The issue is a bit trickier for 200
vs. 455 or 900 vs. 455. In these cases I would rely on the instruction
counts to evaluate the assignment, although running for twice as long as
necessary may suggest a re-evaluation of your os algorithms (on the other
hand, if it works...).
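For reference, the percentages in the timing lines appear to be just your
time as a fraction of mine, rounded; a quick sketch, with the line format
inferred from the example output above (the function name is mine, not the
simulator's):

```python
def timing_line(label, yours, mine):
    """Format a timing-comparison line in the style of the example
    evaluation output, e.g. '25 total time is 5% of 455' --
    your time, then the rounded percentage of my time it represents."""
    pct = round(100 * yours / mine)
    return f"{yours} {label} time is {pct}% of {mine}"

print(timing_line("total", 25, 455))
print(timing_line("idle", 22, 52))
```

These reproduce the two timing lines from the example: 25/455 rounds to 5%
and 22/52 rounds to 42%.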
This archive was generated by hypermail 2.0b3 on Fri Dec 03 2004 - 12:00:06 EST