No. O(n^n) is a worst-case upper bound for f(n) = O(n^3), but it is not tight, because there are other worst-case upper bounds for f(n) for which O(n^n) is also a worst-case upper bound. For example, O(n^4) is a worst-case upper bound for f(n), and O(n^n) is a worst-case upper bound for n^4.
The answers, approximately verbatim, and in no particular order:
1: What's wrong when n > 3?
2: That's true, but the question's asking about O(n^n) and whether or not it's tight.
3: It's not important by how much the upper-bound is larger as long as it is larger.
4: It looks like “suit”.
5: Just a technical note: the two occurrences of c are different, and should be distinguished, as in c1 and c2, or c and c'.
6: Not at all; it's super-linear, and even super-exponential.
7: What “it”?
8: What does this mean?
9: What does it mean for an O(n) estimate to run in constant time?
10: The difference size doesn't matter, as long as it's positive.
11: There are many worst-case upper bounds; which one do you mean?
12: Higher than what?
13: That is the trivial upper bound, but a tight upper-bound is non-trivial.
14: What is a binary nature?
15: True, but how is it different, and why is it important?
16: How is it binary in nature?
17: It is the trivial upper bound, which can't be tight.
18: That's so, but what does it mean?
19: I don't know what it means for O(n^n) to be true. Do you mean it's a worst-case upper bound?
while (p())
  while (p())
    // code

has a worst-case upper bound of O(n^4). Can you conclude that p() has a worst-case upper bound of O(n^2)? Justify your answer.
No. The overall estimate for the loops is

  O(Io)·(O(g) + O(Ii)·(O(g) + O(bi)))

where O(Io) and O(Ii) are the worst-case upper-bound estimates for the number of outer- and inner-loop iterations, respectively, O(g) is the worst-case upper-bound estimate for the guard p(), and O(bi) is the worst-case upper-bound estimate for the inner-loop body. Any assignment of estimates such that the above formula is O(n^4) will do, and there are assignments in which O(g) is not O(n^2). For example, if O(Ii) and O(Io) are both O(n^2) and O(g) and O(bi) are both O(1), then the above formula is O(n^4).
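The counterexample assignment can be sketched concretely. This is a minimal sketch, not the exercise's actual code: the guard p() is stood in for by a hypothetical countdown test, chosen so that each loop iterates n^2 times while the guard and body both do O(1) work.

```python
# Nested while loops whose overall cost is O(n^4) even though the guard
# is O(1): O(Io) = O(Ii) = O(n^2), O(g) = O(bi) = O(1).
def run(n):
    steps = 0
    outer = n * n                 # outer loop iterates n^2 times
    while outer > 0:              # guard: O(1) work
        outer -= 1
        inner = n * n             # inner loop iterates n^2 times
        while inner > 0:          # guard: O(1) work
            inner -= 1
            steps += 1            # body: O(1) work
    return steps

# run(n) performs n^2 * n^2 = n^4 body steps, so the loops are O(n^4)
# although nothing about the guard itself is O(n^2).
```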
The answers, approximately verbatim, and in no particular order:
p(). O(n^4) can be coded in different ways1 since it has two while loops.
1: What does it mean for O(n^4) to be coded in different ways?
(p()) has worst-case upper bound as O(n^2).
p() is a function & you cannot determine its complexity.2
p() = O(n^2).
2: But what prevents you from assuming a particular upper bound?
3: Why is being a nested loop important?
4: How?
while (p()) has a worst-case upper bound of O(n^2). As given the two while loops have an upper bound of O(n^4), it is clear5 because each loop has O(n^2)6 as the algorithm in this case would be O(n)·O(n) = O(n^2). As the loop will go n times i.e., O(n)7 and the loop is O(n), that makes p() an upper bound of O(n^2).
5: What is clear?
6: What does it mean for each loop to have O(n^2), and why is it necessary?
7: Why is that?
(p()) has worst-case upper bound of O(n^2). This is true because there are two statements that we need to estimate here: “while (p())” and “while (p())” again. Their estimates have to be O(n^2) and O(n^2) because the estimate of the algorithm is O(n^4).8 So the product of O(n^2)·O(n^2) is equal to O(n^4).
8: Why does that need to be the case?
(p()) should have a worst-case upper bound of O(n^2).

while (p())     O(n) · O(n^3)
  while (p())   O(n) · O(n^2)
    // code     O(n^2)
9: Why?
10: To what do O(n) and O(n^2) correspond?
p() must be O(n^2).
Inner loop: the cost of the while loop is O(n) and the cost of p() is O(n).11
Outer loop costs the same as inner loop,12 so O(n) (= p()) * O(n) (= p()) = O(n^2).
11: But wait - you just wrote that p() has to be O(n^2).
12: How can that be, given that the outer loop has a bigger loop body than the inner loop?
“code” runs O(1),13 and p() runs O(n^2).

while (p())     ⇒ O(n)·(O(n^2) + O(n^3)) = O(n^4)
  while (p())   ⇒ O(n)·(O(n^2) + O(1))
13: Why is it reasonable to assume this? Is it reasonable to assume some other estimate?
p() has a worst-case upper bound of O(n^2).
14: What are stack iterations? And what's the relation between the number of stack iterations and n?
“//code” is O(1) then p() would have to be O(n^2), since the product of estimations is the estimation of the product;15 i.e., O(n^4) = O(n^2·n^2) = O(n^2)·O(n^2).
However, if the hidden code is not O(1), we would not be able to estimate the loops without first analyzing the body of the loops.16
15: What does this formula have to do with the estimate for p()?
16: You could assume an estimate, as you did in the previous paragraph.
while (p())     #3 O(n^2) · O(n^2) = O(n^4)
  while (p())   #2 O(1) · O(n^2) = O(n^2)17
    // code     #1 O(1) // assumed
The summation of the statements18 is max of the upper bounds for those statements.
17: What is this formula computing? If it's computing the cost of the inner loop, why have you assumed it iterates a constant number of times?
18: Why is the summation of statements involved?
No. Structural analysis produces an estimate for any algorithm, and any estimate is its own, admittedly trivial, worst-case upper bound.
The answers, approximately verbatim, and in no particular order:
1: Why is that true?
2: Yes, but how many such statements are there in an algorithm?
3: Why not?
4: That's true, but that's only one algorithm. What about the others?
5: Wouldn't an algorithm that does absolutely nothing have a constant-time estimate for any way you want to look at it?
6: I'm not sure what this means. How does it apply to binary search?
7: That may be so, but isn't the point to know what happens when n increases without bound?
8: True, but why does it matter?
9: What are a statement's individual bounds? Do you mean upper-bound estimate? If so, why is that true?
10: But is every algorithm a loop?
11: But doesn't the loop while (false) { } do a constant number of iterations?
int find(Q a[], Q n, Q x)
  for (i = 0; i < n - 1; i++)
    if (a[i] == x)
      return i
  return -1
12: The question asked about not having a worst-case upper bound.
13: Is that so?
14: Are loops the only determinant, or the most important one among others?
15: That's true, but what does it matter?
A(T a[]) is structured with two loops. Assuming an asymptotic analysis on n, the number of elements in a, that counts the number of times elements of a are accessed (read or written), is it possible to conclude that O(n) is an upper bound for A? Justify your answer.
Yes, if the loops are sequential rather than nested, and each has an O(n) upper-bound estimate.
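The sequential case can be sketched as follows. This is a minimal illustration, not the exercise's actual algorithm (the loop bodies are assumptions): each loop accesses every element of a once, so the total number of accesses is 2n, which is O(n).

```python
# Two sequential (not nested) loops over a, each with an O(n) upper-bound
# estimate; the whole of A is then O(n), the max of the two bounds.
def A(a):
    accesses = 0
    for x in a:            # loop 1: reads each element once -> n accesses
        accesses += 1
    for i in range(len(a)):
        a[i] = 0           # loop 2: writes each element once -> n accesses
        accesses += 1
    return accesses

# On a 10-element array, elements are accessed 20 times: linear growth,
# in contrast to the n^2 accesses that nesting the loops would produce.
```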
The answers, approximately verbatim, and in no particular order:
A(T a[]) = max(O(n), O(n)) = O(n)

(# 2) for (int n = 0; n < a.length; n++)   body O(1) · O(n) = O(n)
(# 1)   //code of O(1)
(# 2) for (int n = 0; n < a.length; n++)   body O(1) · O(n) = O(n)
(# 1)   //code of O(1)
1: You mean accessing an array element?
2: How can you conclude that?
3: Why is this formula appropriate?
for (i = 1; i < 100; i++)
  a[0] = 1

only runs 100 times, not worst case n times.

for (i = 0; i < n; i++)
  for (i = 1; i < 100; i++)
    a[0] = 1
// O(n)·(O(100)) = O(n)
A. Assuming
loop 1 — iterates through ‘n’ elements for reading ∴ O(n).
loop 2 — iterates through ‘n’ elements for writing ∴ T(n) = O(n).
either case one is accessed at a time. ∴ the loops are not nested loops. The loops are sequential. Estimate O(f1) + O(f2) = max(O(g1), O(g2)). ∴ The worst case upper bound is O(n).
4: Could be.
A. Since there are two loops in the algorithm there are two possible iterations5 which results that O(n) is an upper bound for A.6
5: Do you mean that each loop iterates twice each, or both together iterate a total of two times?
6: How? Particularly if the loops are doing a constant number of iterations.
a then the cost will be O(n). The second loop counts elements in a, O(n).

for (int i = 0; i < n; i++)          ⇒ O(n)
  a[i] = y;                          ⇒ O(1)
for (int j = 0; j < a.length; j++)   ⇒ O(n)
  count++                            ⇒ O(1)

max(n, n) = O(n)
A. Though if the algorithm has two loops, but it does a linear search. It goes through the loop and counts the number of times elements in ‘a’ are accessed. According to O(n), it can go through the loop constant times, and it would still be linear.
7: O(n) could be
A has two loops, and counts the number of times elements of a are accessed.8 So the body of the loop should be O(n), and the first loop should be O(n)·O(n) = O(n^2).
8: No, the asymptotic analysis counts element access, not the algorithm.
A() is structured with two loops, it O(n) can't be upper bound. This is because it O(n) runs on constant time.9
9: What?
A would be O(n^2).
A1 = for (i = 0; i < n; i = i + 2)
       f(i)

and

A2 = for (i = 0; i < n; i++)
       f(i)

If the growth of A2 has worst-case upper bound O(n), is it possible that A1 is not O(n)? Justify your answer.
No. Even though A1 performs half as many iterations as A2, the number of iterations for both loops grows linearly as n gets large.
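This can be made concrete with a small counting harness (a hypothetical sketch; f(i) is assumed to be O(1) and is represented only by the counter increment):

```python
# Count the iterations of the two loop headers: A1 steps i by 2, A2 by 1.
def iterations_A1(n):
    count, i = 0, 0
    while i < n:
        count += 1     # f(i) would run here
        i += 2
    return count

def iterations_A2(n):
    count, i = 0, 0
    while i < n:
        count += 1     # f(i) would run here
        i += 1
    return count

# For n = 1000, A2 iterates 1000 times and A1 iterates 500: the ratio
# stays at the constant 1/2 however large n gets, so both counts grow
# linearly and both loops are O(n).
```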
The answers, approximately verbatim, and in no particular order:
1: Why is that? Where could the looping behavior possibly occur?
i by 2.
2: Which code? A2?
i + 2 = i is still an upper bound at O(n). Using this addition does not reduce the upper bound on the statement analysis for A1.
A1 = for (i = 0; i < n; i = i + 2)   # 2 O(1) + O(n) = max(O(1), O(n)) = O(n)
       f(i)                          # 1 O(1)
A2 = for (i = 0; i < n; i++)         # 2 O(1) + O(n) = max(O(1), O(n)) = O(n)
       f(i)                          # 1 O(n)
If you were to use a constant multiplier, example: i = i*2, that would drive the upper bound to O(n log n) or below O(n).3
3: Which is it? And for what reason would it happen?
4: True, but that's only part of the for loop.
i = 0 to n, it increases by 2 (i = i + 2). Therefore, the upper bound of A1 will be O(n/2).5
5: Putting constant factors (1/2 in this case) in an estimate is bad form; just use O(n).
i + 2’, but still, it's running through the loop constant number of times. Therefore, because of its linear nature A1 has a worst-case upper bound of O(n).
6: How does it go through the loop constant times; that is, an amount independent of n?
7: No, the algorithm touches the elements; the worst-case upper bound describes, in a loose way, the number of elements touched
8: Dividing by two once is a constant reduction, not a logarithmic one.
n times, f(i) must be O(1). O(n)·O(1) = O(n·1) = O(n).
Knowing that and seeing that A1 runs n/2 times then you can conclude that A1 is O(n/2), which after removing the constant coefficient is still O(n).
f(i) is O(1);9 i = 0 has O(1), i++ has O(1), leaving i < n. In order to know how many times i compares to n we have to understand how i increases.
In A2, i++ ⇒ i = i + 1. 1 + 1 + … = n = O(n)
In A1, i = i + 2. 1 + 2 + … = 1 + 2m ⇒ O(n)
So, it is not possible.
9: Why is this true?
A1 = for (i = 0; i < n; i = i + 2)   O(n)·O(1)
       f(i)                          : O(n)
A2 = for (i = 0; i < n; i++)         O(n)·O(1)
       f(i)                          : O(n)
10: Or some other linear-bounded algorithm.