The list container implements the sort() member function because the sort() generic algorithm requires random-access iterators and the list iterators are only bidirectional.
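A minimal sketch of the distinction (the container contents are arbitrary):

  #include <algorithm>
  #include <list>

  int main()
  {
      std::list<int> l;
      l.push_back(3); l.push_back(1); l.push_back(2);

      l.sort();   // fine: the member function is written for bidirectional iterators

      // std::sort(l.begin(), l.end());   // would not compile: std::sort()
      //                                  // requires random-access iterators

      return 0;
  }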
The map container implements the find() member function, which has the same behavior as the find() generic algorithm. However, the find() generic algorithm requires input iterators, while the map iterators are bidirectional. Why does map implement find() as a member function?
Map re-implements find() because it can do a more efficient job of looking for values than can the generic std::find(). std::find() requires input iterators, which means it must use a linear search to look for values because input iterators can only go forward one position at a time (++). Further, the sequence of values defined by the iterator range is unconstrained. The map value sequence is stored in sorted order, which means that something more efficient than a linear search can be used to look for specific values. By using a tree traversal, the member function find() can locate a value (or determine there is none) in O(log n) time, rather than the O(n) time a linear search takes.
Also, map elements are (key, value) pairs, so std::find() needs to look for a pair value, while the find() member function looks for key values alone. This part of the answer is undercut by std::find_if(), but the find() member function is still more convenient than std::find_if().
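A minimal sketch contrasting the three lookups (the map's contents here are arbitrary):

  #include <algorithm>
  #include <map>
  #include <string>

  typedef std::map<int, std::string> map_t;

  // Predicate for find_if(): match on the key alone.
  bool key_is_two(const map_t::value_type & p)
  {
      return p.first == 2;
  }

  int main()
  {
      map_t m;
      m[1] = "one"; m[2] = "two"; m[3] = "three";

      // Member find(): searches by key in O(log n) via a tree traversal.
      map_t::iterator i1 = m.find(2);

      // Generic std::find(): O(n) linear search, and the target must be a
      // whole (key, value) pair.
      map_t::value_type target(2, "two");
      map_t::iterator i2 = std::find(m.begin(), m.end(), target);

      // std::find_if(): O(n) linear search matching on the key alone.
      map_t::iterator i3 = std::find_if(m.begin(), m.end(), key_is_two);

      return 0;
  }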
Most people either got this one or they didn't. The people who didn't tended to get distracted by the bidirectional iterators, which had something to do with the answer, but not everything.
Describe a test case that requires moves equal to 3 in the min_3sort() return value; that is, a test case in which the wrong answer is any answer where moves is not equal to 3.
The simplest test case is

  iv1: [ 0, 1 ]
  iv2: [ 1, 2 ]
  iv3: [ 2, 0 ]

A cyclic shift of either the first or the last elements of the three vectors gives (for the last elements)

  iv1: [ 0, 0 ]
  iv2: [ 1, 1 ]
  iv3: [ 2, 2 ]

which takes the minimum three moves to do.
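In code, that test case might look something like the following sketch; min_3sort()'s actual interface isn't given here, so the declaration below is an assumption made purely for illustration:

  #include <cassert>
  #include <vector>

  // Assumed declaration, for illustration only; the real min_3sort()
  // signature may differ.
  int min_3sort(std::vector<int> & iv1, std::vector<int> & iv2,
                std::vector<int> & iv3);

  int main()
  {
      std::vector<int> iv1, iv2, iv3;

      iv1.push_back(0); iv1.push_back(1);
      iv2.push_back(1); iv2.push_back(2);
      iv3.push_back(2); iv3.push_back(0);

      assert(min_3sort(iv1, iv2, iv3) == 3);

      return 0;
  }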
Most people got this one right. Those who didn't used too complicated a test case, one that turned out to have the wrong property. A few people flat-out answered the wrong question, giving test cases in which moves was not equal to 3.
Suppose the find_first(b, e, v) generic algorithm searches the iterator range (b, e) and returns an iterator to either the first (or leftmost, or closest to *b) value equal to v, or e if there's no such value in the range.
Let us also suppose the find_last(b, e, v) generic algorithm searches the iterator range (b, e) and returns an iterator to either the last (or rightmost, or furthest from *b) value equal to v, or e if there's no such value in the range.
Assuming that both generic algorithms are implemented using the weakest possible iterator kinds, can both algorithms be implemented with the same kind of iterator? Explain. You may not use reverse iterators to answer this question.
The easiest way to answer this question is to look at the implementation of each of the algorithms:
  find_first(b, e, v)
    while b != e
      if *b == v
        return b
      b++
    return e

  find_last(b, e, v)
    l = e
    while b != e
      if *b == v
        l = b
      b++
    return l
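Rendered as C++ (a sketch; the template-parameter names anticipate the iterator-kind analysis below):

  // find_first(): needs only the input-iterator operations *, != and ++.
  template <class In, class T>
  In find_first(In b, In e, const T & v)
  {
      while (b != e) {
          if (*b == v)
              return b;
          ++b;
      }
      return e;
  }

  // find_last(): saves a copy of the iterator and hands it back after b
  // has advanced past it, which needs the forward-iterator multi-pass
  // guarantee.
  template <class Fwd, class T>
  Fwd find_last(Fwd b, Fwd e, const T & v)
  {
      Fwd l = e;
      while (b != e) {
          if (*b == v)
              l = b;
          ++b;
      }
      return l;
  }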
find_first() uses r-value *, !=, and ++ on the iterators, all of which input iterators support. find_last() uses all these operators and iterator assignment too: it saves a copy of the iterator in l and returns that copy after b has moved on, which requires the multi-pass guarantee that only forward (or stronger) iterators make. Because the weakest iterators required by the two algorithms differ, they can't be implemented using the same weakest iterator kind.
Only a few people answered this by working from implementation sketches, and they did pretty well. Everybody else sort of guessed at implementations, and they got the answer sort of right. A few people pointed out that find_last() could be implemented with a bidirectional iterator starting from end() and using -- to work backwards. That's true, but a bidirectional iterator is not the weakest iterator possible for find_last(), as the above implementation shows.
Bubble sort:

  for i = n - 1; i > 0; i--
    for j = 0; j < i; j++
      if a[j] > a[j + 1]
        t = a[j + 1]
        a[j + 1] = a[j]
        a[j] = t

Selection sort:

  for i = n - 1; i > 0; i--
    j = i
    for k = i - 1; k >= 0; k--
      if a[k] > a[j]
        j = k
    t = a[i]
    a[i] = a[j]
    a[j] = t
Hint: assume that an element of a is a very large object occupying kilobytes of storage.
Both algorithms are nested loops, which explains the O(n²) behavior. Looking more closely at the inner loops, bubble sort, in the worst case, swaps two array elements on every iteration. Selection sort, on the other hand, just assigns an index value in the inner loop and swaps array elements once per outer-loop iteration.
The hint gives us everything else we need: because array elements are large, moving them is expensive, and the asymptotic analysis should count the number of element moves:
Bubble sort:

  for i = n - 1; i > 0; i--
    for j = 0; j < i; j++
      if a[j] > a[j + 1]
        O(1)
        O(1)
        O(1)

Selection sort:

  for i = n - 1; i > 0; i--
    for k = i - 1; k >= 0; k--
      if a[k] > a[j]
    O(1)
    O(1)
    O(1)
The analysis has to make some assumptions about the cost of comparing two array elements. One reasonable but arguable assumption is that comparisons don't involve moving anything, so they don't count at all. Another reasonable but arguable assumption is that comparisons may, in the worst case, involve looking at all the bytes in an element, and so do cost something, even though nothing's moving. Because moving is always more expensive than comparing, the analysis will go with the first assumption: compares don't count. When combined with the sequence rule, this gives
Bubble sort:

  for i = n - 1; i > 0; i--
    for j = 0; j < i; j++
      O(1)

Selection sort:

  for i = n - 1; i > 0; i--
    for k = i - 1; k >= 0; k--
    O(1)
as the next analysis step. The inner-loop iteration count is bounded by O(n) and the inner-loop guards involve no array elements, which leaves
Bubble sort:

  for i = n - 1; i > 0; i--
    O(n)

Selection sort:

  for i = n - 1; i > 0; i--
    O(1)
as the next step. One more turn of the analysis crank for the outer loops produces the final results
Bubble sort:

  O(n²)

Selection sort:

  O(n)
and selection sort has asymptotically better behavior than does bubble sort with respect to array-element movement.
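Concretely, counting each swap as three element moves, bubble sort's worst case performs 3((n - 1) + (n - 2) + ... + 1) = 3n(n - 1)/2 moves, while selection sort always performs 3(n - 1) moves.

The counts can be checked by instrumenting both algorithms. The sketch below is illustrative only: an int stands in for the hint's very large element type, and the moves counter is assumed machinery, not part of the original question:

  #include <cstddef>
  #include <iostream>
  #include <vector>

  // Global count of array-element moves; each assignment of an element
  // counts as one move, so a swap counts as three.
  static std::size_t moves = 0;

  void bubble_sort(std::vector<int> & a)
  {
      const int n = static_cast<int>(a.size());
      for (int i = n - 1; i > 0; i--)
          for (int j = 0; j < i; j++)
              if (a[j] > a[j + 1]) {
                  int t = a[j + 1];  moves++;
                  a[j + 1] = a[j];   moves++;
                  a[j] = t;          moves++;
              }
  }

  void selection_sort(std::vector<int> & a)
  {
      const int n = static_cast<int>(a.size());
      for (int i = n - 1; i > 0; i--) {
          int j = i;
          for (int k = i - 1; k >= 0; k--)
              if (a[k] > a[j])
                  j = k;
          int t = a[i];  moves++;
          a[i] = a[j];   moves++;
          a[j] = t;      moves++;
      }
  }

  int main()
  {
      std::vector<int> v1, v2;
      for (int i = 0; i < 100; i++) {
          v1.push_back(100 - i);   // reverse order: bubble sort's worst case
          v2.push_back(100 - i);
      }

      moves = 0;  bubble_sort(v1);
      std::cout << "bubble sort:    " << moves << " moves\n";  // 3n(n - 1)/2 = 14850

      moves = 0;  selection_sort(v2);
      std::cout << "selection sort: " << moves << " moves\n";  // 3(n - 1) = 297

      return 0;
  }

On a reverse-ordered 100-element array this prints 14850 bubble-sort moves against 297 selection-sort moves.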
Some people answered this without presenting an asymptotic analysis; they didn't do well. Almost everybody else did an analysis showing why both algorithms are O(n²), which is not what the question asked for. A few people then went on to describe why selection sort is more efficient, but didn't back their discussion with an asymptotic analysis.
This page last modified on 5 November 2002.