Data Structures & Algorithms
CS 305 & 503

Quiz 2, 10 November 2010


  1. If f(n) = n³, is O(nⁿ) a tight worst-case upper bound for f? Justify your answer.


    No. O(nⁿ) is a worst-case upper bound for f(n) = n³, but it is not tight because there are other worst-case upper bounds for f(n) for which O(nⁿ) is itself a worst-case upper bound. For example, O(n⁴) is a worst-case upper bound for f(n), and O(nⁿ) is a worst-case upper bound for n⁴.
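
    One way to see this concretely (a worked illustration, not part of the original answer): for every n ≥ 4,

      n³ ≤ n⁴ ≤ nⁿ

    so O(n⁴) is itself a worst-case upper bound for f, and O(nⁿ) is in turn a worst-case upper bound for n⁴; that gap is exactly what keeps O(nⁿ) from being tight.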

    The answers, approximately verbatim, and in no particular order:

    1. No, because it is wrong when n > 3.¹ If n = 4, then O(nⁿ) = O(4⁴) = 256, but f(n) = 4³ = 64. ⇒ when f(n) = n³, the upper bound for f should be O(n³).²

      1: What's wrong when n > 3?

      2: That's true, but the question's asking about O(nⁿ) and whether or not it's tight.

    2. No, n³ is much smaller than nⁿ when n > 3, even n⁴ is much smaller³ than nⁿ when n > 4, which means nⁿ > n⁴ > n³. So O(nⁿ) is not tight worst-case upper bound. O(n⁴) is tighter than O(nⁿ).

      3: It's not important by how much the upper-bound is larger as long as it is larger.

    3. No. Assume g(n) = nⁿ. There is a constant number c which ???⁴ c·nⁿ f(n) = n³. That means O(nⁿ) is an upper bound for f, but still have c·nⁿ⁵ T(n) = n⁴ or n⁵ and so on. n⁴ is also an upper bound for f. That means nⁿ > n⁴ > n³. So O(nⁿ) is not a tight worst-case upper bound for f.

      4: It looks like “suit”.

      5: Just a technical note: the two occurrences of c are different, and should be distinguished, as in c₁ and c₂, or c and c'.

    4. Yes, O(nⁿ) is a tight worst-case upper bound for f when f(n) = n³. O(nⁿ) goes for sub-linear time.⁶

      6: Not at all; it's super-linear, and even super-exponential.

    5. Yes. This is a tight worst-case upper bound for f because it⁷ will not be able to return single element⁸ if O(n) run on a constant time⁹ (ONnth) will increase the time needed to run the code.

      7: What “it”?

      8: What does this mean?

      9: What does it mean for an O(n) estimate run on a constant time?

    6. No, O(nⁿ) is a worst-case upper bound of f(n) = n³ but not tight worst-case upper bound. Tight worst-case upper bound is slightly greater¹⁰ than the actual worst-case upper bound.¹¹ O(nⁿ) is growing exponentially when n grows.

      10: The difference size doesn't matter, as long as it's positive.

      11: There are many worst-case upper bounds; which one do you mean?

    7. No. f(n) = n³. O(nⁿ) is a higher upper bound¹² than O(n³). The tight upper bound for f(n) = n³ = O(n³),¹³ the lower of the two bounds.

      12: Higher than what?

      13: That is the trivial upper bound, but a tight upper-bound is non-trivial.

    8. No. As per given case we cannot say that O(nⁿ) is a tight worst-case upper bound for f. The given upper bound for f(n) is n³, which basically means it has a binary nature.¹⁴ f(n) = O(n)*O(n)*O(n) = O(n³), which is not the same as O(nⁿ).¹⁵ O(nⁿ) is binary in nature,¹⁶ but it is different from O(n³) because O(n) is multiplying O(n) n number of times.

      14: What is a binary nature?

      15: True, but how is it different, and why is it important?

      16: How is it binary in nature?

    9. No. While O(nⁿ) is an upper bound for f, it is not a tight upper bound. If there exists a function of n that is between an upper bound and the actual work of an algorithm, then that function is a tighter upper bound. Tight upper bounds better represent the estimation of an algorithm's expected behavior. O(n³) is a tight worst-case upper bound¹⁷ since it is f, even more specific it is ≡ f.

      17: It is the trivial upper bound, which can't be tight.

    10. Yes, O(nⁿ) is a tight worst-case upper bound if f(n) = n³. This is true because O(n³) is the upper bound for f(n),¹⁸ therefore O(nⁿ) is also true.¹⁹

      18: That's so, but what does it mean?

      19: I don't know what it means for O(nⁿ) to be true. Do you mean it's a worst-case upper bound?

  2. Suppose

    while (p())
      while (p())
        // code
    

    has a worst-case upper-bound of O(n⁴). Can you conclude that p() has a worst-case upper bound of O(n²)? Justify your answer.


    No. The overall estimate for the loops is

    O(Iₒ)*(O(g) + O(Iᵢ)*(O(g) + O(bᵢ)))
    where O(Iₒ) and O(Iᵢ) are the worst-case upper bound estimates for the number of outer- and inner-loop iterations, respectively, O(g) is the worst-case upper bound estimate for the guard p(), and O(bᵢ) is the worst-case upper bound estimate for the inner-loop body. Any assignment of estimates such that the above formula is O(n⁴) will do, and there are assignments in which O(g) is not O(n²). For example, if O(Iᵢ) and O(Iₒ) are both O(n²) and O(g) and O(bᵢ) are both O(1), then the above formula is O(n⁴).
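
    As an illustration, annotating the loops with that example assignment (guard O(1), each loop iterating O(n²) times, body O(1)) gives one consistent picture in which p() is nowhere near O(n²):

      while (p())      O(n²) iterations, guard O(1)
        while (p())    O(n²) iterations per outer pass, guard O(1)
          // code      assumed O(1)
                       O(n²)*(O(1) + O(n²)*(O(1) + O(1))) = O(n⁴)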

    The answers, approximately verbatim, and in no particular order:

    1. No. This is because we don't know exactly what is p(). O(n⁴) can be coded in different ways¹ since it has two while loops.

      1: What does it mean for O(n⁴) to be coded in different ways?

    2. Yes, (p()) has worst-case upper bound as O(n²).
      • Since p() is a function & you cannot determine its complexity.²
      • it's a nested loop (if they are sequential the O(n²))³
      • ∴ overall complexity = O(n⁴).⁴
      We are left with p() = O(n²).

      2: But what prevents you from assuming a particular upper bound?

      3: Why is being a nested loop important?

      4: How?

    3. Yes, the while (p()) has a worst-case upper bound of O(n²). As given the two while loops have an upper bound of O(n⁴), it is clear⁵ because each loop has O(n²)⁶ as the algorithm in this case would be O(n)·O(n) = O(n²). As the loop will go n times i.e., O(n)⁷ and the loop is O(n), that makes p() an upper bound of O(n²).

      5: What is clear?

      6: What does it mean for each loop to have O(n²), and why is it necessary?

      7: Why is that?

    4. Yes, you can conclude that code (p()) has worst-case upper bound of O(n²). This is true because there are two statements that we need to estimate here: “while (p())” and “while (p())” again. Their estimates have to be O(n²) and O(n²) because the estimate of the algorithm is O(n⁴).⁸ So the product of O(n²)·O(n²) is equal to O(n⁴).

      8: Why does that need to be the case?

    5. Yes. The while loop has a worst-case upper-bound of O(n⁴) so the body of the while loop should be⁹ O(n³) = O(n)·O(n²).¹⁰ That means the code (p()) should have a worst-case upper bound of O(n²).

      while (p())    O(n) · O(n³)
        while (p())  O(n) · O(n²)
          // code    O(n²)
      

      9: Why?

      10: To what do O(n) and O(n²) correspond?

    6. Yes. If the above code is O(n⁴), then p() must be O(n²). Inner loop: the cost of the while loop is O(n) and the cost of p() is O(n).¹¹ Outer loop costs the same as inner loop,¹² so O(n) (= p()) * O(n) (= p()) = O(n²).

      11: But wait - you just wrote that p() has to be O(n²).

      12: How can that be, given that the outer loop has a bigger loop body than the inner loop?

    7. Yes, assuming “code” runs O(1),¹³ and p() runs O(n²).

      while (p())             ⇒ O(n)*(O(n²) + O(n³)) = O(n⁴)
        O(n)*(O(n²) + O(1))
      

      13: Why is it reasonable to assume this? Is it reasonable to assume some other estimate?

    8. For every while loop there are a minimum of two stack iterations performed.¹⁴ Hence the code p() has a worst-case upper bound of O(n²).

      14: What are stack iterations? And what's the relation between the number of stack iterations and n?

    9. We can't make that conclusion based off what is shown. If the hidden code represented by "//code" is O(1) then p() would have to be O(n²), since the product of estimations is the estimation of the product;¹⁵ i.e., O(n⁴) = O(n²·n²) = O(n²)·O(n²). However, if the hidden code is not O(1), we would not be able to estimate the loops without first analyzing the body of the loops.¹⁶

      15: What does this formula have to do with the estimate for p()?

      16: You could assume an estimate, as you did in the previous paragraph.

    10. Yes.

      while (p())   #3  O(n²) · O(n²) = O(n⁴)
        while (p()) #2  O(1) · O(n²) = O(n²)¹⁷
          // code   #1  O(1) // assumed
      

      The summation of the statements¹⁸ is max of the upper bounds for those statements.

      17: What is this formula computing? If it's computing the cost of the inner loop, why have you assumed it iterates a constant number of times?

      18: Why is the summation of statements involved?

  3. Is it possible to have an algorithm that has no worst-case upper bound? If it is possible for an algorithm to have no worst-case upper bound, show such an algorithm. If it is not possible, explain why.


    No. Structural analysis produces an estimate for any algorithm, and any estimate is its own, admittedly trivial, worst-case upper bound.
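
    As an illustration (a made-up fragment, not drawn from any answer), structural analysis assigns an estimate to every statement, sequence, branch, and loop, and whatever estimate comes out is itself a worst-case upper bound:

      x = a[0]                   # 1  O(1)
      for (i = 0; i < n; i++)    # 2  O(n) · O(1) = O(n)
        x = x + a[i]             # 1  O(1)
                                 #    sequence: O(1) + O(n) = max(O(1), O(n)) = O(n)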

    The answers, approximately verbatim, and in no particular order:

    1. No. Each algorithm should have worst-case upper bound because each algorithm should have a statement loop.¹ For each statement should have a worst-case upper bound such as the single statement still execute O(1).²

      1: Why is that true?

      2: Yes, but how many such statements are there in an algorithm?

    2. No, it is not possible to have an algorithm that has no worst-case upper bound.³

      3: Why not?

    3. No, every algorithm has a worst-case upper bound. If I use linear algorithm (O(n)) to build the skyline (as in assignment 2), then the upper bound is O(max(n, largest width)).⁴ It's not always O(n).

      4: That's true, but that's only one algorithm. What about the others?

    4. No, every algorithm has a worst-case upper bound, assuming it is not empty. If empty, there are no estimates.⁵ Worst-case upper bound touches all elements in the data structure.⁶ Given if n = 1, the best and worst case upper bound coincides, but they both exist.⁷

      5: Wouldn't an algorithm that does absolutely nothing have a constant-time estimate for any way you want to look at it?

      6: I'm not sure what this means. How does it apply to binary search?

      7: That may be so, but isn't the point to know what happens when n increases without bound?

    5. No. The fastest algorithms have O(1) if the work is independent of n.⁸ Even complicated algorithms will always have a worst-case upper bound. The upper bound will be some function of n. Once you analyze an algorithm and break it down to its key parts (statements, sequences, branches, and loops), you will be able to determine the upper bound.

      8: True, but why does it matter?

    6. No. Any algorithm is a sequence of statements with individual bounds.⁹ The upper bound of any algorithm is the summation of those statements, where s₁ + s₂ + … + sₙ = max(s₁, …, sₙ)

      9: What are a statement's individual bounds? Do you mean upper-bound estimate? If so, why is that true?

    7. No, it is not possible to have an algorithm that has no worst-case upper bound. Every algorithm has at least an O(log n) upper bound because of the number of iterations a loop goes through.¹⁰ Even if the code inside the loop takes constant time, the number of iterations will cause the loop to have at least an O(log n) upper bound.¹¹

      10: But is every algorithm a loop?

      11: But doesn't the loop while (false) { } do a constant number of iterations?

    8. Not possible. The algorithm was implemented by statements, loop, choice. Even a single statement in constant time has an upper-bound of O(1). So every algorithm should have worst-case upper bound.
    9. Yes, it is possible for an algorithm to have a worst-case upper bound.¹²

      int find(Q a[], Q n, Q x)
        for (i = 0; i < n - 1; i++)
          if (a[i] == x)
            return i
        return -1
      

      12: The question asked about not having a worst-case upper bound.

    10. No, it is not possible for an algorithm to have no worst-case upper bound. Each algorithm has a loop¹³ and that's what determines its nature in terms of Big O notation.¹⁴ The different kind of upper bound e.g., O(n), O(n²), are telling us the nature of the algorithm whether it's linear, sub linear, or some other.¹⁵ In terms of time, as which algorithm would be the fastest or slowest as per given case.

      13: Is that so?

      14: Are loops the only determinant, or the most important one among others?

      15: That's true, but what does it matter?

  4. Suppose algorithm A(T a[]) is structured with two loops. Assuming an asymptotic analysis on n, the number of elements in a, that counts the number of times elements of a are accessed (read or written), is it possible to conclude that O(n) is an upper-bound for A? Justify your answer.


    Yes, if the loops are sequential rather than nested, and each has an O(n) upper-bound estimate.
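
    For instance (a sketch of one such A under those assumptions, not the only possible shape):

      A(T a[])                     # 3  max(O(n), O(n)) = O(n)
        for (i = 0; i < n; i++)    # 2  O(n) · O(1) = O(n)
          x = a[i]                 # 1  one read, O(1)
        for (i = 0; i < n; i++)    # 2  O(n) · O(1) = O(n)
          a[i] = x                 # 1  one write, O(1)

    If the loops were nested instead, or if either iterated more than O(n) times, the conclusion would not follow.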

    The answers, approximately verbatim, and in no particular order:

    1. Yes. Only if the two loops were used consecutively in the function's body sequence of statements. Example:

      A(T a[])                              # 3  max(O(n), O(n)) = O(n)
        for (int n = 0; n < a.length; n++)  # 2  body O(1) · O(n) = O(n)
          // code of O(1)                   # 1
        for (int n = 0; n < a.length; n++)  # 2  body O(1) · O(n) = O(n)
          // code of O(1)                   # 1

    2. Yes, it is possible to conclude that O(n) is an upper bound for “A” because accessing an element will take constant time (O(1)).¹ However, the number of iterations will cause the estimate to be O(n).² Therefore if both loops have estimates O(n), then the max of (O(n), O(n))³ is O(n).

      1: You mean accessing an array element?

      2: How can you conclude that?

      3: Why is this formula appropriate?

    3. Yes, it is possible in the inner loop.

      for (i = 1; i < 100; i++)
        a[0] = 1
      

      only runs 100 times, not worst case n times.

      for (i = 0; i < n; i++)
        for (i = 1; i < 100; i++)
          a[0] = 1
          // code
          O(n)*(O(100)) = O(n)
      

    4. Yes, O(n) is⁴ an upper bound for A. Assuming loop 1 — iterates through ‘n’ elements for reading ∴ O(n). loop 2 — iterates through ‘n’ elements for writing ∴ T(n) = O(n). either case one is accessed at a time. ∴ the loops are not nested loops. The loops are sequential. Estimate O(f₁) + O(f₂) = max O(g₁, g₂). ∴ The worst case upper bound is O(n).

      4: Could be.

    5. Yes, it is possible to conclude that O(n) is an upper bound for A. Since there are two loops in the algorithm there are two possible iterations⁵ which results that O(n) is an upper bound for A.⁶

      5: Do you mean that each loop iterates twice each, or both together iterate a total of two times?

      6: How? Particularly if the loops are doing a constant number of iterations.

    6. Yes. If we use the first loop to generate elements (values) for an array a then the cost will be O(n). The second loop counts elements in a, O(n).

      for (int i = 0; i < n; i++)  ⇒ O(n)
        a[i] = y; ⇒ O(1)
      
      for (int j = 0; j < a.length; j++)  ⇒ O(n)
        count++ ⇒ O(1)
      
      max(n, n) = O(n)
      

    7. Yes, O(n) is⁷ an upper bound for A. Though if the algorithm has two loops, but it does a linear search. It goes through the loop and counts the number of times elements in ‘a’ are accessed. According to O(n), it can go through the loop constant times, and it would still be linear.

      7: O(n) could be

    8. No. Algorithm A has two loops, and counts the number of times elements of a are accessed.⁸ So the body of the loop should be O(n), and the first loop should be O(n)·O(n) = O(n²).

      8: No, the asymptotic analysis counts element access, not the algorithm.

    9. No, because if A() is structured with two loops, it O(n) can't be upper bound. This is because it O(n) runs on constant time.⁹

      9: What?

    10. Yes. For one, if the loops are not nested but are instead sequential, and each loop accesses every element once, then the sum of the estimations would be O(n) + O(n) = O(max(n, n)) = O(n). However, while this conclusion is possible, it is not definite. One must actually analyze the code to determine the upper bound. For example, if the loops were nested and each accessed every element, then A would be O(n²).

  5. Given

    A1 = for (i = 0; i < n; i = i + 2)
           f(i)
    

    and

    A2 = for (i = 0; i < n; i++)
           f(i)
    

    If the growth of A2 has worst-case upper bound O(n), is it possible that A1 is not O(n)? Justify your answer.


    No. Even though A1 performs half as many iterations as A2, the number of iterations for both loops grows linearly as n gets large.
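
    Concretely (a worked count, not part of the original answer):

      A2: n iterations        ⇒ O(n)
      A1: ⌈n/2⌉ iterations    ⇒ ⌈n/2⌉ ≤ 1·n for every n, so also O(n)

    Halving the iteration count only changes the constant factor, which the O(·) estimate ignores.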

    The answers, approximately verbatim, and in no particular order:

    1. The growth of A2 has worst-case upper bound O(n) but A1 will have an upper bound of O(n²).¹

      1: Why is that? Where could the looping behavior possibly occur?

    2. No. A1 can have a worst-case upper bound just like any algorithm code, but it will not be the worst case in this code² because of the fact that it is increasing i by 2.

      2: Which code? A2?

    3. No. The statement i + 2 = i is still an upper bound at O(n). Using this addition does not reduce the upper bound on the statement analysis for A1.

      A1 = for (i = 0; i < n; i = i + 2)  # 2  O(1) + O(n) = max(O(1), O(n)) = O(n)
             f(i)                         # 1  O(1)

      A2 = for (i = 0; i < n; i++)        # 2  O(1) + O(n) = max(O(1), O(n)) = O(n)
             f(i)                         # 1  O(n)
      

      If you were to use a constant multiplier, example: i = i*2, that would drive the upper bound to O(n log n) or below O(n).³

      3: Which is it? And for what reason would it happen?

    4. No, it is not possible for A1 to not have worst case upper bound O(n) if A2 also has worst case upper bound O(n). This is because we can assume “i = 0” for both A1 and A2 has upper bound O(1).⁴ We can also assume “i + 2” for A1 and “i++” for A2 also has upper bound O(1). This leaves “i < n” and “f(i)” for both A1 and A2, and since A2 has upper bound O(n), A1 must also have upper bound O(n).

      4: True, but that's only part of the for loop.

    5. No. Although the loop of A1 runs from i = 0 to n, it increases by 2 (i = i + 2). Therefore, the upper bound of A1 will be O(n/2).⁵

      5: Putting constant factors (1/2 in this case) in an estimate is bad form; just use O(n).

    6. Yes. A1 is also O(n). In A2, worst-case upper bound is O(n), because it goes through the loop constant times.⁶ Similarly in A1, though it has ‘i + 2’, but still, it's running through the loop constant number of times. Therefore, because of its linear nature A1 has a worst-case upper bound of O(n).

      6: How does it go through the loop constant times; that is, an amount independent of n?

    7. Yes, growth rate of A1 is not O(n); growth rate of A1 is O(log n). Worst-case upper bound touches each element⁷ of the data structure. Assuming n = 5, A1 touches 3 elements (i.e., closest to 5/2) ∴ O(log n).⁸ A2 touches 5 elements (all elements) ∴ O(n). Thus, proved A1 ≠ O(n).

      7: No, the algorithm touches the elements; the worst-case upper bound describes, in a loose way, the number of elements touched.

      8: Dividing by two once is a constant reduction, not a logarithmic one.

    8. No. For A2 to be O(n), since the loop runs n times, f(i) must be O(1). O(n)·O(1) = O(n·1) = O(n). Knowing that and seeing that A1 runs n/2 times then you can conclude that A1 is O(n/2), which after removing the constant coefficient is still O(n).
    9. Not possible. If A2 has O(n), then f(i) is O(1);⁹ i = 0 has O(1), i++ has O(1), leaving i < n. In order to know how many times i compares to n we have to understand how i increases. In A2, i++ is i = i + 1: 1 + 1 + … = n = O(n). In A1, i = i + 2: 1 + 2 + … = 1 + 2m ⇒ O(n). So, it is not possible.

      9: Why is this true?

    10. No. If A2 has worst-case upper bound O(n) that means A2 executes every statement and does linear search.¹⁰ A1 does the same linear search, so A1 should be O(n) too.

      A1 = for (i = 0; i < n; i = i + 2)  O(n)·O(1)
             f(i) : O(n)

      A2 = for (i = 0; i < n; i++)  O(n)·O(1)
             f(i) : O(n)
      

      10: Or some other linear-bounded algorithm.


This page last modified on 11 November 2010.