Given the structs

    struct red          { void f(void) { ... } void g(void) { ... } };
    struct white : red  { void f(void) { ... } void g(void) { ... } };
    struct blue : white { void f(void) { ... } void g(void) { ... } };

    blue b; white & w = b; red & r = b;
describe the smallest set of member functions that have to be declared virtual to produce the following execution-time behavior:

    r.f(); // calls blue::f()
    r.g(); // calls red::g()
    w.f(); // calls blue::f()
    w.g(); // calls blue::g()
    b.f(); // calls blue::f()
    b.g(); // calls blue::g()
b's member functions aren't being called through a reference or a pointer, so polymorphism doesn't enter into it. (In addition, b's static and dynamic types are the same, which means we can't distinguish virtual from non-virtual member function calls with b.) w's static (white) and dynamic (blue) types are different. From the comments, both member-function calls map to the dynamic type, and so both member functions must be virtual in white. Finally, r's static (red) and dynamic (blue) types are different too, but only one member-function call (r.f()) maps to the dynamic type; the other (r.g()) maps to the static type.
From the previous analysis, red::f() is virtual, red::g() is non-virtual, and white::g() is virtual.
Many people got this wrong by forgetting the virtual flip-flop and making blue's member functions virtual, which isn't helpful because it's a variable's static type that determines whether or not a member-function call is virtual.
    for (i = intvec.begin(); i != intvec.end(); i++)
        for (j = i; j != intvec.end(); j++)
            if (*i + *j > intvec.back())
                saved.push_back(*i + *j)
You may not assume anything about the order of the integers stored in intvec; otherwise, make sure you clearly state the assumptions you used in your analysis.
We can start by replacing all the atomic executions, which are all constant:

    for (O(1); O(1); O(1))
        for (O(1); O(1); O(1))
            if (O(1))
                saved.push_back(O(1))
Now we've got a problem. The asymptotic estimate for saved.push_back() depends on the data structure represented by saved, which is unspecified by the problem. This gives us a degree of freedom, which we can exploit by assuming saved is a list. This assumption is reasonable because lists support push_back(), and nothing else in the problem contradicts the assumption that saved is a list. A push-back on a list takes constant time (imagine that!), which, when used in the if rule, reduces the inner-most loop body to a constant estimate:
    for (O(1); O(1); O(1))
        for (O(1); O(1); O(1))
            O(1) + O(1) = O(1)
The inner-most for iterates O(n) times, giving

    for (O(1); O(1); O(1))
        O(n)*(O(1) + O(1) + O(1) + O(1)) = O(n)*O(1) = O(n)
One more turn of the for-rule crank with an O(n) iteration bound gives

    O(n)*(O(n) + O(1) + O(1) + O(1)) = O(n)*O(n) = O(n^2)

The given code has O(n^2) run-time behavior, where n is the number of elements in intvec.
Many people didn't get a 100 on this because they forgot to justify their (constant) asymptotic estimate for saved.push_back().
mau() returned additive upsequences that were approximately the right size; that is, the returned additive upsequences contained neither too many nor too few numbers.
Your testing code does not have to determine if the size of the returned additive upsequence is correct; it just has to determine if the size of the returned additive upsequence is either obviously too small or obviously too big.
Your testing code may be asymptotic in the sense that it only has to work reliably when the input parameter n is "big enough"; make sure you specify in your answer what "big enough" means. Your testing code must run in constant time; this bound does not include the cost of calling mau() to generate the additive upsequence.
From the problem statement, each number in an additive upsequence (except the first) is the sum of two previous numbers, which means the fastest the numbers can grow is through doubling (when the summands are the same). When numbers grow by doubling, it takes no fewer than floor(log2(n)) numbers to reach n. Because the log function starts counting from 0 (2^0 = 1), this is an underestimate, and is valid for any n > 1.
In the other direction, you can construct an additive upsequence for n by repeatedly halving n. If n is even, halving it adds one new number to the upsequence. If n is odd, it adds two new numbers to the upsequence, one of which (n/2 + 1) can be formed from the other (n/2) by adding 1. In the worst case, every halving of n is odd (n = 15, for example), which means that, for a given n, the largest possible upsequence formed from n will have 2*(ceiling(log2(n)) + 1) numbers. This is an over-estimate because it assumes that every halving is odd, which is not true for the last one, and so is valid for any n > 1.
Using these two bounds, the test follows:
    const unsigned mau_size = mau(n).size();
    min_size = floor(log2(n));
    max_size = 2*(ceiling(log2(n)) + 1);
    if ((mau_size < min_size) or (max_size < mau_size))
        // bad size
With the exception of finding the minimal additive upsequence for n, all these operations take constant time.
Most people offered lower and upper bounds that were too big, usually n/2 and n - 1, respectively.
    struct shape {
        virtual void draw(void) const { std::cerr << "shape can't draw!\n"; }
        // whatever
    };

    struct square : shape {
        void draw(void) const { /* draw a square. */ }
        // whatever.
    };
Cleverly exploiting subtype polymorphism, your colleague represents a drawing as a list of figures
    typedef std::list<shape> Drawing;
    typedef Drawing::iterator drawing_iter;
and draws a figure by iterating down the list, drawing each element:
    void show_drawing(const Drawing & drawing) {
        for (drawing_iter i = drawing.begin(); i != drawing.end(); i++)
            i->draw();
    }
Unfortunately, the only thing that appears when your colleague's code shows a drawing is "shape can't draw!" on std::cerr. What is wrong with your colleague's code, and how would you recommend your colleague properly use subtype polymorphism?
See Section 13.6.1 (page 249) in Koenig and Moo.
Very few people got this correct, with many people getting hung up on iterators, which had nothing to do with the answer.
This page last modified on 18 December 2002.