push() and pop()), and that you cannot access the stack implementation. Do not worry about errors.
dequeue(): A dequeue() operation removes the value at the queue head, and so does pop(), which means a dequeue() operation can be implemented with an O(1) number of pop() operations.
enqueue(): An enqueue() operation adds a value to the queue tail, which is inconveniently placed at the bottom of a stack. However, if the stack elements could be reversed, the tail would sit at the top of the stack, and adding the new value would require only one push. Fortunately, stacks are made for reversing order, and enqueue can be implemented as
    simulatedEnqueue(V v)
        Stack auxiliaryStack = new Stack()
        while not stack.empty()
            auxiliaryStack.push(stack.pop())
        stack.push(v)
        while not auxiliaryStack.empty()
            stack.push(auxiliaryStack.pop())
Each of the while loops has a body that does a constant number of stack operations (two) per iteration, and evaluating the guard also performs a constant number of stack operations (one). For each loop the number of iterations is bounded by the number of elements in the stack (n). The estimate for each loop is
O(n)*(O(1) + O(1)) = O(n)*O(max(1, 1)) = O(n)*O(1) = O(n*1) = O(n)

The remaining basic statements execute a constant number of stack operations, and the final estimate for a simulated enqueue operation is
O(1) + O(n) + O(1) + O(n) = O(max(1, n, 1, n)) = O(n)
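As a concrete illustration of both operations, here is a minimal runnable sketch in Java; it assumes java.util.ArrayDeque used purely as a stack (push, pop, isEmpty), and the class and method names are illustrative rather than part of the question.

    import java.util.ArrayDeque;
    import java.util.Deque;

    class SimulatedQueue<V> {
        // The backing stack; by convention its top holds the queue head.
        private final Deque<V> stack = new ArrayDeque<>();

        // O(1) stack operations: the queue head is the stack top.
        V dequeue() {
            return stack.pop();
        }

        // O(n) stack operations: reverse the stack into an auxiliary
        // stack, push the new value onto the emptied stack (making it
        // the new bottom, i.e. the queue tail), and reverse back.
        void enqueue(V v) {
            Deque<V> auxiliaryStack = new ArrayDeque<>();
            while (!stack.isEmpty())
                auxiliaryStack.push(stack.pop());
            stack.push(v);
            while (!auxiliaryStack.isEmpty())
                stack.push(auxiliaryStack.pop());
        }
    }

After q.enqueue(1); q.enqueue(2); a call to q.dequeue() returns 1, the least-recently added value, using only push and pop on the backing stack.

The answers, in no particular order and approximately verbatim: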
1: These two sentences don't have anything to do with the question, or the answer; you could have just as well left them out.
2: On which queue operations?
    for i = 0 to q.length
        if (i == q.length)
            dequeue i;
3: What circumstances would require that you go through the queue?
4: How would you quantify the worst case in terms of the queue size?
5: Worst case estimates, I'm guessing.
6: Is that for both enqueue and dequeue operations?
push(). Push does the same thing so it would also be constant. Dequeue just removes the last node reference so it would be constant; however, for a stack we would need to pop() the first node (not the last), and this would require us to go through every node to get the first, so dequeue for a stack would be O(n).
7: True, but the important point is where does enque() create this node?
8: True again, but the question's about simulating queue operations with stack operations, so an estimate for enque() isn't necessary.
9: That's correct. Now the question is what code do the new procedures contain, and what is the worst-case upper bound on the number of stack operations performed when the procedures are called?
The main difference between a stack and a queue is that a queue allows[10] removal from the bottom of a stack instead of the top. This causes FIFO instead of LIFO. So the queue operation would add an element to the top of the queue just like push would add to the top of a stack. So queue in a stack would be the same as just one push statement.[11] For enqueue it is also the same as just one push statement. However, with dequeue it is taking the element from the end of the queue, which is not possible in a stack since it is the element on the bottom.[12] In order to implement a dequeue in a stack the stack must do a pop() for every element in the stack. This is O(n) since it has to do a stack operation for every element, where the number of elements is n.

Queue and enque are O(1) because they are adding to the queue or stack at the top, so it is independent of the number of elements already in the stack/queue.
10: You can be a bit stronger than “allows”: a queue requires that elements be added to the tail.
11: And the worst-case upper bound estimate for that is...?
12: Right, so a stack would need to use more than one stack operation when simulating enqueue(). Now the question is how many more?
pop() and would require a case statement to determine where the value taken into pop() would be after it has been passed through enqueue.[14] Compared to the worst-case estimate of a normal queue, the worst-case estimate would be less reliable.[15] The same applies to the dequeue operation implemented by push(). It would be required to contain a similar loop, thus pushing it towards an estimate that would be worse than the worst-case estimate of a proper stack's dequeue.
13: That's true, but trivially so: you're estimating the number of stack operations used, and a queue would implement its operations without using any stack operations.
14: What? Why a case statement? And what does it mean to pass through enqueue?
15: What's making the worst-case estimate unreliable? What does it mean for a worst-case estimate to be unreliable?
push() operations to bring the needed element to the top. This would be O(n) more times than a proper queue. To re-establish the stack, another loop would need to push() the elements back in. This is another O(n), resulting in an O(n²) estimate.[18] The simulated queue is much slower than a proper queue.
16: But ordering is important, so this sentence is irrelevant.
17: A queue removes the least-recently added element.
18: These loops occur in succession; they're not nested.
19: The point is to simulate queue operations using a stack. Why does it matter how the queue is implemented?
20: Running through every element in an array is linear, not O(n²). The point is to count the number of stack operations required, not the number of internal operations performed by each stack operation.
21: What? Where is traversal even mentioned in the question?
22: The question doesn't have anything to do with best cases.
23: Would anything ever require such access?
Since a stack has FILO behavior and queue exhibits FIFO behavior, the worst case scenario of the dequeue operation is O(n) when the item being dequeued was the first element pushed into the stack.
The worst case scenario for the enqueue operation occurs when the stack is full, and a new stack needs to be created to fix up the values (assume dynamic implementation).[24] This operation would become O(n + 1)[25] because you are accessing all the elements,[26] and then adding another.
Therefore, in the worst case enqueue() is O(n + 1) and dequeue() is O(n).
24: How the stack is implemented doesn't matter.
25: Which is equivalent to O(n).
26: But that's not what you should be counting; you should be counting the number of push and pop operations required to implement enqueue().
    total = 0
    for (i = 0; i < a.size(); i++)
        total += a[i]
executes in time proportional to O(n²), n the number of elements in the array. What would you estimate the execution time for the statement

    avg = total/a.size()

to be? Justify your answer.
The problem is figuring out how a singly-nested loop could have O(n²) execution time. It is unreasonable to assume that the expressions total = 0, i = 0, and i++ execute in anything other than constant time. Reasonably assuming a correct array implementation, the number of loop iterations is bounded by n, leaving i < a.size() and total += a[i]. Assuming Java, total += a[i] executes in constant time, leaving i < a.size(), which must execute in O(n) time to get the specified O(n²) estimate. The comparison < executes in constant time, leaving a.size() as the O(n) operation.

With a.size() an O(n) operation, avg = total/a.size() is an O(n) operation too.
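To see how a.size() could plausibly be O(n), here is a minimal sketch in Java of a hypothetical list-backed container whose size() counts nodes on every call instead of caching a count; the names are illustrative assumptions, not taken from the question.

    class CountingList {
        // Hypothetical singly-linked node type.
        private static class Node {
            int value;
            Node next;
        }

        private Node head;

        // O(n): walks the entire list on every call.
        int size() {
            int n = 0;
            for (Node p = head; p != null; p = p.next)
                n++;
            return n;
        }
    }

As comment 5 below observes, size() can be made arbitrarily worse than this by nesting further counting loops; a linear size() is merely the mildest implementation consistent with the specified O(n²) loop estimate.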
1: Why twice?
2: Why three times? Why do you have to go through the elements at all?
a.size() operation must take O(n) time,[3] coupled with the O(n) for-loop, to give O(n²). If a.size() takes O(n) time, then the avg = total/a.size() operation will take O(n) time as well.
3: Why is that?
a, the time it takes to execute the code does not, making the statement execute in constant time.[4]
4: This answer ignores two-thirds of the question. Even if you think the question asker is incompetent and habitually asks questions containing mostly irrelevancies, you should at least go through the mental exercise of convincing yourself that the parts you didn't use really are irrelevant.
size() has to go through the list n times to count the size.[5]
5: size()'s worst case behavior can be made arbitrarily bad by using more and more nested loops to repeatedly count the available elements.
6: The average computation is not part of the loop. But even if it was, avg is not part of the loop guard, so its value has no control over how many iterations occur. And even if the loop iterated O(log n) times, the question asks for an estimate of the execution time for the statement itself, not the number of times it's executed.
a.size() by itself is O(1). The division is[8] also O(1).
7: Is that interpretation consistent with the rest of the question?
8: ?
9: That's true; avg does depend on the array size, but the question asks about execution time.
avg would be proportional to O(n²) as well.
10: O(n²) is an upper bound to n, but then a lot of functions are upper bounds to n. Why is O(n²) in particular important?
11: Case statement? Where?
12: The problem states that execution time is being counted, although that's equivalent to counting array accesses.
13: Why is that so?
14: It's not obviously a loop, but that doesn't mean a loop can't be hidden somewhere in the statement itself.
15: Only if the value being divided were controlling the loop iterations.
16: No, the statement's separate from the loop. But even if it weren't, the question wants to know how long it takes to execute the statement, not how many times the statement is executed.
No, it's not. Suppose A is an algorithm that three-sorts an array in worst-case O(log n) time. Because it executes in sub-linear time, A cannot touch every element in the array to be sorted, otherwise it would run in linear time. If A sorts an array in which every element is out of place (the array [ 3, 3, 1, 1, 2, 2 ] for example), it will fail to sort correctly because some of the elements will be untouched and so be out of place.
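For contrast, touching every element once is already enough: here is a minimal sketch in Java of an O(n) three-sort (assuming, as in the example above, that the array holds only the values 1, 2, and 3), showing that linear time is both necessary and sufficient.

    // Counting sort specialized to three values: one O(n) pass to count,
    // one O(n) pass to rewrite the array in sorted order.
    static void threeSort(int[] a) {
        int[] count = new int[4]; // counts for the values 1..3
        for (int v : a)
            count[v]++;           // every element is examined once
        int i = 0;
        for (int v = 1; v <= 3; v++)
            for (int c = 0; c < count[v]; c++)
                a[i++] = v;
    }

Applied to [ 3, 3, 1, 1, 2, 2 ], threeSort rearranges it into [ 1, 1, 2, 2, 3, 3 ]; by the argument above, no sub-linear algorithm can guarantee the same.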
The answers, in no particular order and approximately verbatim:

1: Repeated division is one way to get logarithmic behavior, but not the only one.
2: But how many array parts are there? Are there no more than O(log n)?
3: A binary search is only interested in one particular element in an array (assuming it's in the array). A three-sort has to be concerned with all parts of the array.
4: The question's asking about sorting, not searching.
5: How many times would the algorithm have to move on to the next pair?
6: This is a good way to develop an O(n log n) algorithm, but the question asked about an O(log n) algorithm.
7: Each binary-search iteration throws away half the array.
8: Does this result in a three-sort that always works?
9: Do you mean better than O(log n)? If you mean the assignment, are O(n) and O(n²) better than O(log n)?
10: Yes, easily. But the question's asking about an algorithm that runs faster.
11: The algorithm should be a sort, not a search.
12: But would the result be a correct three-sort?
13: But would that algorithm implement a three-sort?
14: Good.
15: True, but why is that so?
You should point out that the plan is unfair because it's biased in favor of the philanthropist buying the canvas. As the artists make larger pieces, the cost of buying canvas increases as O(n²), while the cost of buying stone increases as O(n³): the philanthropist buying stone has to buy more, and so pay more, than the philanthropist buying canvas. (Alternatively, you could assume artists are making smaller pieces and reverse the bias. Or you could assume no change in size, which is inconsistent with asymptotic analysis, and observe that an n-inch painting costs less than an n-inch sculpture.)
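A minimal sketch of the bias in Java, with a made-up unit cost (an assumption for illustration only): canvas is bought by area and stone by volume, so the stone philanthropist's bill outpaces the canvas philanthropist's as the piece size n grows.

    // Both philanthropists pay the same unit cost, as the question
    // stipulates; only the amount of material differs.
    static void compareCosts(double unitCost) {
        for (int n = 1; n <= 8; n *= 2) {
            double canvasCost = unitCost * n * n;     // area: O(n^2)
            double stoneCost = unitCost * n * n * n;  // volume: O(n^3)
            System.out.printf("n=%d canvas=%.0f stone=%.0f%n",
                              n, canvasCost, stoneCost);
        }
    }

With a unit cost of 1, an 8-inch painting costs 64 while an 8-inch sculpture costs 512; the ratio between the two bills is n itself, so it grows without bound.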
The answers, in no particular order and approximately verbatim:

1: What does n represent, and what does f(n) represent?
2: You could measure the time, or make an estimate.
3: How did you go from O(1) to O(n)?
4: You can assume they produce the same amount of work over time, so why does producing them slower or faster have any bearing?
5: And what might that algorithm be?
6: Why can we assume this? Is it even true?
    for (int i = 0; i < size of painters array; ++i) {   // O(n)
        int cost += array[i];
    }
    for (int i = 0; i < size of sculptors array; ++i) {  // O(n)
        int cost += array[i];
    }
Both algorithms give the same cost estimate; therefore, the cost is evenly distributed between the two philanthropists.
7: Is that an accurate representation of the costs?
8: Why not? The only cost mentioned is the material costs.
9: That's the question, though. Is that the correct interpretation of costs?
10: How did you reach that conclusion?
11: Wouldn't that be factored into the unit costs?
12: Careful: they have the same unit costs, but does that translate to the same overall costs (or the same costs per work, if you like)?
13: Standard deviation of what?
14: s and p are both ratios (units per work), and multiplying them together results in the square of ratios.
15: But the question is interested in fairness. Is this fair?
cost × # of items = total cost

says that if cost & amt. of material is the same, the total cost for each philanthropist should be the same.
16: True, if you've covered all the relevant factors contributing to cost.
17: Which is it?
18: How much of what are you actually getting?
19: That's true, but that only raises the per-work cost of materials. You'd have to consider if sculptors and painters have different error rates to determine if the overall costs are different.
20: That's the question: are the material costs the same? They have the same unit costs, but does that mean they have the same overall costs?
21: XY is the total materials costs; if each philanthropist is paying XY, they're paying twice as much as they need.
This page last modified on 25 March 2010.