CS 512 Homework 4

hdr(2, Homework 4 - Complexity and Approximations)
hdr(3, CS 512, Algorithm Design)


  1. In the definition of polynomially reducible on page 342 of the text, why is it that Manber can assume the size of itl(u)2 is polynomial in the size of itl(u)1?


    Because a polynomial reduction does only a polynomial amount of work (or, equivalently, runs for only a polynomial amount of time), it can effect only a polynomially large change to the input string, and so the size of the output string itl(u)2 is bounded by a polynomial in the size of the input string itl(u)1.


  2. Show a polynomial reduction that transforms an instance of the Hamiltonian cycle problem into an instance of the traveling-salesman problem (TSP, Section 11.4.7). Assume undirected graphs; a complete graph is a graph in which any two vertices are connected by an edge.


    The Hamiltonian cycle problem accepts a graph and returns true if there exists a simple cycle that passes through every vertex in the graph exactly once. The TSP accepts a weighted, complete graph and an integer itl(k) and returns true if there is a simple cycle passing through every vertex in the graph with weight at most itl(k), where the weight of a cycle is the sum of the weights of the edges making up the cycle.

    The reduction from the Hamiltonian cycle problem to the TSP takes an instance of the Hamiltonian cycle problem and constructs an instance of the TSP as follows:

    1. take the graph itl(G) associated with the Hamiltonian cycle problem and form the weighted graph itl(G)' by copying itl(G) and assigning a weight of 1 to each edge in itl(G)'.

    2. turn itl(G)' into a complete weighted graph by adding an edge between every pair of vertices in itl(G)' that is not already joined by an edge in itl(G). Assign each new edge a weight of |itl(V)| + 1, where itl(V) is the set of vertices in itl(G)' (or itl(G)).

    3. assign |itl(V)| to itl(k).

    This reduction examines each pair of vertices at most once, and so does work polynomial (in fact, quadratic) in the number of vertices in the original graph itl(G).
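    As a concrete sketch of the construction (this code is not part of the text; the function name ham_to_tsp and the dictionary-of-weights representation are my own choices), the reduction might look like:

    from itertools import combinations

    def ham_to_tsp(vertices, edges):
        # Map a Hamiltonian-cycle instance (a vertex set and an undirected
        # edge set) to a TSP instance: a complete weighted graph and a bound k.
        n = len(vertices)
        weight = {}
        for u, v in combinations(sorted(vertices), 2):
            if (u, v) in edges or (v, u) in edges:
                weight[(u, v)] = 1       # edge of G: weight 1
            else:
                weight[(u, v)] = n + 1   # added edge: weight |V| + 1
        return weight, n                 # the bound k is |V|

    For example, ham_to_tsp({1, 2, 3, 4}, {(1, 2), (2, 3), (3, 4), (4, 1)}) gives each of the four cycle edges weight 1, the two diagonals weight 5, and itl(k) = 4.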

    Now to show the reduction preserves correctness. Let itl(H) be an instance of the Hamiltonian cycle problem and let the TSP instance itl(T) be the reduction of itl(H). Assume itl(H) returns true; that is, the given graph contains a Hamiltonian cycle. The same cycle serves as a TSP cycle for the graph in itl(T): it uses |itl(V)| edges, each of weight 1, so its weight is exactly |itl(V)| = itl(k), and itl(T) also returns true. Now assume itl(T) returns true; then there exists a simple cycle through all the vertices with weight at most itl(k) = |itl(V)|. Because each edge added to itl(G)' but not in itl(G) has weight |itl(V)| + 1, a cycle containing even one new edge would have weight greater than itl(k), so none of the new edges are part of the cycle. The cycle therefore uses only edges of itl(G) and serves as a Hamiltonian cycle for the graph in itl(H); itl(H) also returns true. Because the reduction maps solutions to solutions in both directions, it preserves correctness.


  3. A program graph is a connected, directed graph containing a single node - the program-entry node - with in-degree zero and a single node - the program-exit node - with out-degree zero. Given a program graph itl(G), the path-coverage problem for itl(G) involves constructing a set of paths containing all paths from the program-entry node to the program-exit node in itl(G). Show that any nondeterministic algorithm solving the path-coverage problem is not in NP.


    The key to solving this problem is to recognize that the set of all paths between two nodes in a graph can have size exponential in the size of the graph. For example, consider the total number of paths between the entry and exit nodes in the following program graph:

    [figure: a program graph]
    Because the set of paths has exponential size, any algorithm that generates the set must do an exponential amount of work (it takes exponential time just to write the set down), and so cannot be in NP. For example (and remember, examples are not proofs), consider a nondeterministic algorithm using correct choice:
      s <- { }
      while (p <- choice(G)) != nil
        s <- union(s, {p})
    
    Here the choice operator on each iteration constructs a path not already in the set itl(s), or returns nil if no such path exists. The while loop must iterate once for each path, and there are exponentially many paths, so the whole algorithm has exponential running time even with the help of correct choice (this analysis is sloppy because I'm ignoring the potentially exponential work of dealing with the set itl(s); however, because the iteration count alone is already exponential, the conclusion still holds).
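    To make the exponential growth concrete (the example graph and code are mine, not the text's), here is a small sketch that builds a program graph of itl(n) back-to-back branch-and-rejoin diamonds and counts its entry-to-exit paths; each diamond doubles the count, giving 2^itl(n) paths:

    def diamond_chain(n):
        # Build a program graph of n diamonds as an adjacency-list dict.
        # Node 0 is the program-entry node; the last node is the program-exit node.
        graph, node = {}, 0
        for _ in range(n):
            top, left, right, bottom = node, node + 1, node + 2, node + 3
            graph[top] = [left, right]   # the branch
            graph[left] = [bottom]       # one arm
            graph[right] = [bottom]      # the other arm
            node = bottom
        graph[node] = []                 # program-exit node, out-degree zero
        return graph, 0, node

    def count_paths(graph, u, exit_node):
        # Count entry-to-exit paths by exhaustive search.
        if u == exit_node:
            return 1
        return sum(count_paths(graph, v, exit_node) for v in graph[u])

    for n in range(1, 6):
        graph, entry, exit_node = diamond_chain(n)
        print(n, count_paths(graph, entry, exit_node))   # prints 2, 4, 8, 16, 32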


  4. Develop a backtracking algorithm for solving the knapsack problem (Section 11.4.7). The algorithm in Figure 5.10 is not an acceptable solution to this problem because it isn't a backtracking algorithm.


    The knapsack decision problem accepts as input a set itl(X) of elements, an integer size itl(S), and an integer value itl(V). Each element itl(x) of itl(X) has associated with it a size itl(size)(itl(x)) and a value itl(value)(itl(x)). The knapsack decision problem produces as output true if there exists a subset itl(K) of itl(X) such that the sum of the sizes of the elements in itl(K) is at most itl(S) and the sum of the values of the elements in itl(K) is at least itl(V), and false if no such subset exists.

    The knapsack algorithm must generate subsets of itl(X), examining each subset to see whether it is suitable in terms of itl(S) and itl(V), and backtracking to the next subset if the current one isn't. Perhaps the easiest way to generate subsets is to consider each element of itl(X) in turn. Assuming element itl(x) of itl(X) is part of an acceptable subset, the algorithm recurses to find a suitable subset of itl(X) - {itl(x)}. If it can't find one, it backtracks on the assumption that itl(x) is part of the subset and tries again, this time leaving itl(x) out of the subset. If the algorithm still can't find a suitable subset, then itl(X) contains no acceptable subset.

    bool knapsack(X, current_s, current_v)
      // total_s and total_v are the problem's bounds S and V

      if X == {}
        return current_s <= total_s and total_v <= current_v

      x <- an element of X

      // assume x is part of the subset
      if knapsack(X - {x}, current_s + size(x), current_v + value(x))
        return true

      // backtrack: try again without x in the subset
      if knapsack(X - {x}, current_s, current_v)
        return true

      return false

    The first if statement executes in Theta(1) time, and it takes Theta(1) time to pick an element of itl(X). Return statements execute in Theta(1) time, which means all the work in the next two if statements is done by evaluating the guards, which are recursive calls to knapsack(). Letting itl(T)(itl(n)) be the time it takes knapsack() to execute on an itl(n)-element set, the two if statements take
    itl(T)(itl(n) - 1) + itl(T)(itl(n) - 1) = 2*itl(T)(itl(n) - 1)
    time to execute (note that itl(T)() represents the exact solution to knapsack()'s behavior, and with exact solutions constants do matter; it would be incorrect to omit the constant 2). Expanding the recurrence gives itl(T)(itl(n)) = 2*itl(T)(itl(n) - 1) = 4*itl(T)(itl(n) - 2) = ... = 2^itl(n)*itl(T)(0), so itl(T)(itl(n)) = Theta(2^itl(n)) and overall knapsack() has Theta(2^itl(n)) worst-case behavior.

    We can do a little branch-and-bounding to try to improve knapsack()'s average-case behavior by replacing the first if statement with

    if current_s > total_s
      return false
    
    if X == {} 
      return total_v <= current_v
    
    That is, if the current size is greater than the maximum size, there's no use considering the current subset further because it can't lead to an acceptable knapsack packing (there is a critical assumption underlying this argument's correctness - what is it?).
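    For completeness, here is one runnable rendering of the backtracking algorithm with the branch-and-bound check folded in (the list-of-pairs representation and the default arguments are my own choices, not the text's):

    def knapsack(items, S, V, current_s=0, current_v=0):
        # items: the undecided elements, as (size, value) pairs.
        # Returns True if some subset of the original items has total size
        # at most S and total value at least V.
        if current_s > S:                # bound: this packing can't succeed
            return False
        if not items:
            return current_v >= V
        size, value = items[0]
        rest = items[1:]
        # assume the element is in the subset ...
        if knapsack(rest, S, V, current_s + size, current_v + value):
            return True
        # ... and backtrack to try again without it
        return knapsack(rest, S, V, current_s, current_v)

    print(knapsack([(3, 4), (2, 3), (4, 5)], 5, 7))   # True: take the first two elements
    print(knapsack([(3, 4), (2, 3), (4, 5)], 4, 7))   # False: no subset of size <= 4 reaches value 7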



This page last modified on 18 i2m(01) 2000.