Assertion failed: x_slope || y_slope
Be sure you explain how your test case works.
A few facts help us answer this question:
Under the not-unreasonable assumption that both x_slope and y_slope are some kind of numbers, both will be false when zero. Both components of the slope between two points are zero only when the two points are coincident. An acceptable test case is therefore

  x y
  x y

that is, the same point given twice, where x and y are any legal integers.
Most people who got this wrong forgot when an or expression is false: only when both of its operands are false.
Briefly describe how inheritance could be used to provide missing member functions for a class being used as a template actual parameter.
Let class A be missing some member functions. We need to provide another class, complete-A, that supplies all the member functions that A does and adds the required member functions that A doesn't support. The easiest way to do this is to make complete-A a child of A. complete-A then inherits (almost) all the member functions provided by A, and need only implement the uninherited and missing functions (uninherited functions include the constructors and destructor, if any).

A could also be made a child of complete-A, with complete-A using virtual functions to defer to A's implementation, but this is essentially equivalent to wrapping A in complete-A (though it may be safer if complete-A is defined with pure virtual member functions).
Most people got this mostly right, although the descriptions were more cryptic than they needed to be.
The following typedefs are in effect for both algorithms:

  typedef pair<int, int>      int_pair
  typedef vec<int_pair>       ip_vec
  typedef map<double, ip_vec> slope_map
You should not worry about whether or not each algorithm is correct; just determine an asymptotic estimate for each.
a)

  colinear(ip_vec pts)
    ip_vec max_line
    max_line.push_back(pts[0])
    max_line.push_back(pts[1])
    for i = 1 to n - 1
      for j = i + 1 to n
        slope_x = pts[i].x - pts[j].x
        slope_y = pts[i].y - pts[j].y
        ip_vec line
        for k = j + 1 to n
          delta_x = pts[i].x - pts[k].x
          delta_y = pts[i].y - pts[k].y
          if delta_x/delta_y == slope_x/slope_y
            line.push_back(pts[k])
        if line.size() > max_line.size()
          max_line = line
    return max_line

b)

  colinear(ip_vec pts)
    ip_vec max_line
    max_line.push_back(pts[0])
    max_line.push_back(pts[1])
    for i = 1 to n - 1
      slope_map slopes
      for j = i + 1 to n
        slopes[(pts[i].x - pts[j].x)/(pts[i].y - pts[j].y)].push_back(pts[j])
      max = slopes.begin()
      for i = slopes.begin() + 1 to slopes.end()
        if i->second.size() > max->second.size()
          max = i
      if max->second.size() > max_line.size()
        max_line = max->second
    return max_line
For algorithm a, the only tricky bits are the estimates for the point-vector operations. Point-vector initialization takes O(1) if the vector is unsized, or perhaps O(n) if it is presized. However, from Chapter 11 of Koenig and Moo, it is possible to initialize a vector in constant time even when it is presized, so an estimate of O(1) seems reasonable.
As for vector push-back, if the vector is presized, it takes constant time because there's no copying (assuming the presize is big enough). Otherwise, it can take worst case O(n) time to create a new, bigger vector and copy the elements from old to new. Because presizing costs the same as not presizing, it's reasonable to assume the vector is presized and that push-back takes constant time.
Vector assignment is O(n); there's no way around that (actually, there is, by using sharing semantics, but it's unreasonable to assume the STL uses that for assignments). A return can be treated the same as an assignment.
  O(1)
  O(1)
  O(1)
  for i = 1 to n - 1
    for j = i + 1 to n
      O(1)
      O(1)
      O(1)
      for k = j + 1 to n
        O(1)
        O(1)
        if O(1)
          O(1)
      if O(1)
        O(n)
  O(n)
Applying the sequence and if rules to collapse adjacent estimates gives
  O(1)
  for i = 1 to n - 1
    for j = i + 1 to n
      O(1)
      for k = j + 1 to n
        O(1)
      O(n)
  O(n)
The guard for the inner-most loop executes in constant time and has an iteration count bounded above by n.
  O(1)
  for i = 1 to n - 1
    for j = i + 1 to n
      O(1)
      O(n)*(O(1) + O(1)) = O(n)
      O(n)
  O(n)
The current inner-most loop has the same estimates as the previous inner-most loop, and the sequence and loop rules can work their magic. Note that our agonizing over the estimate for vector initialization wasn't necessary, because either O(1) or O(n) produces the same result.
  O(1)
  for i = 1 to n - 1
    O(n)*(O(1) + (O(1) + O(n) + O(n))) = O(n)*(O(1) + O(n)) = O(n)*O(n) = O(n^2)
  O(n)
The remaining loop has the same estimates as the other two, and the loop and sequence rules work as before, giving us a final estimate of O(n^3) for algorithm a.
Most of the analysis used for algorithm a also applies to algorithm b, but we have to deal with the slope map. The first-cut estimates are
  O(1)
  O(1)
  O(1)
  for i = 1 to n - 1
    O(1)
    for j = i + 1 to n
      slopes[O(1)].push_back(O(1))
    O(1)
    for i = slopes.begin() + 1 to slopes.end()
      if O(1)
        O(1)
    if O(1)
      O(n)
  O(n)
We can apply the pre-allocation argument to assign an O(1) estimate to the push-back. Map access is O(log n); this, along with the sequence and if rules, gives
  O(1)
  for i = 1 to n - 1
    O(1)
    for j = i + 1 to n
      O(log n) + O(1) = O(log n)
    O(1)
    for i = slopes.begin() + 1 to slopes.end()
      O(1)
    O(n)
  O(n)
Both inner loops have guards that execute in constant time. The upper loop has an iteration count bounded by n. The lower loop has the same bound, for roughly the same reason: the upper loop can generate at most n different slopes, which gives the map at most n different elements. The loop rule then gives
  O(1)
  for i = 1 to n - 1
    O(1)
    O(n)*(O(1) + O(log n)) = O(n)*O(log n) = O(n log n)
    O(1)
    O(n)*(O(1) + O(1)) = O(n)*O(1) = O(n)
    O(n)
  O(n)
The remaining loop has constant guards and O(n) iterations. The sequence and loop rules combine to give a final O(n^2 log n) estimate for algorithm b.
Because O(n^2 log n) is asymptotically smaller than O(n^3), algorithm b is the faster of the two algorithms.
Apart from some ill-considered assumptions about execution times, most people did fairly well on this problem.
  struct A {
    // definition of f().
  };

  struct B : public A {
    // definition of f().
  };

  struct C : public B {
    // definition of f().
  };

each define a member function f(). Given the specified behavior of the following code

  C c;
  B & b = c;
  A & a = b;

  a.f();  // Prints "in A::f()".
  b.f();  // Prints "in B::f()".
  c.f();  // Prints "in C::f()".

which of the member functions f() in the classes A, B, and C can be virtual?
C::f() can be virtual or not; the given behavior occurs in either case. B::f() cannot be virtual; if it were, the look-up for f() in the call b.f() would start and end with the dynamic type of b, which is C, and the call would print "in C::f()". For a similar reason, A::f() cannot be virtual.
I messed up this problem on the in-class test; instead of the declarations above I had

  C & c;
  C & b;
  A & a = b;

One person pointed out that these declarations are illegal (the references c and b are never initialized). Otherwise, I took both sets of declarations into account when grading the answers to this question, but most people got it wrong by claiming that A::f() is virtual, which isn't true in either case.
This page last modified on 3 May 2002.