-
Would you expect most typical implementations of insertion sort to be stable or
unstable? Justify your answer. If you have trouble interpreting
“typical” you can replace it with “more efficient.”
Most typical insertion-sort implementations slide the adjacent unordered element x through the ordered array part until x is adjacent to an element no greater than it on the left and larger than it on the right: ≤ x <. Because x does not slide past elements equal to itself, the relative order between equal values is preserved, and insertion sort is stable.
As an alternative, the insertion-sort implementation could slide x so that it is adjacent to an element less than it on the left and no smaller than it on the right: < x ≤. Because x moves past elements of the same value, the relative order among them is not preserved, and the insertion sort is unstable. The stable insertion sort stops sliding x earlier than the unstable insertion sort does, and so does less work for (approximately) the same result; it is more efficient and, perhaps, more typical.
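As a concrete illustration, here is a minimal Java sketch of the stable variant (ascending order over an int array; the method name is an invented example, not from the original exam). The strict a[j] > x test implements the ≤ x < rule: x stops sliding as soon as the element on its left is no greater than it, so it never passes an equal element. Changing the test to a[j] >= x gives the unstable < x ≤ variant.

static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int x = a[i];  // next element from the unordered part
        int j = i - 1;
        // Stable: slide x left only past strictly greater elements.
        // Using a[j] >= x here would slide x past its equals (unstable).
        while (j >= 0 && a[j] > x) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = x;
    }
}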
The answers, in no particular order and approximately verbatim:
-
I would expect the most typical implementation of insertion sort to be unstable because you are inserting and moving values around which in turn would cause it to be unstable.1
-
The most efficient implementation of insertion sort is always going to have worst case O(n²) behavior. This will have stable behavior on any unsorted array.2 Since it is unsorted it3 will work at an average case to be less than O(n²).4 However, when the array is already sorted it will either have O(n) behavior or O(n²) behavior and nothing in between.5 If the array is already sorted in the order it is sorting again it will iterate through every index doing O(1) work, which causes O(n) sorting. If the array is sorted in the opposite order, it will go through the loop n times passing through doing O(n) work causing O(n²) behavior.
-
I would expect most typical implementations of insertion sort to be unstable because
of the fact that6 you can only insert into data that has already been sorted.7 Therefore, if you need to insert more data (after you have already done an insertion) then your previous piece of data may be moved again.8 This could happen n amount of times.9
-
Insertion sorts are always stable because no matter how large the data, or how sorted it is, it10 does the same thing every time.11 Even if the array is sorted, it does the sort as if it wasn’t and then just inserts the values into their original position.12
-
Most efficient implementations of an insertion sort would be stable because this means that they are doing O(n²) work,13 with the constant being close to 1 (since it is an efficient search). And if this is the case the sort performs well and would be expected to be stable.14
-
Insertion sort goes through and slides values until they are all sorted properly. (The way you want it to be sorted) I would expect the most typical insertion implementation to be reasonably stable15 because it doesn’t require any chaotic sorting.16 It simply goes through multiple indices and slides them so that they are in proper order.17
-
The most typical implementations of insertion sort would be stable. An insertion sort supports itself before and after a sorting action is made,18 therefore keeping the sort stable.
-
I would expect it to be unstable. After you insert the first element19 into the sorted portion, when the second is compared to the same number,20 you no longer need to make any more comparisons.21 Putting it before the first element is more efficient and that causes the sort to be unstable.
-
Insertion sort is typically unstable, but in cases can be stable. Because of the nature of the sort, its performance is constantly changing,22 making efficient implementation of this difficult.23 The more values being sorted, the harder it is for an insertion sort to maintain efficient performance many values may need to be moved around.24
-
I would expect more efficient implementations of insertion sort to be unstable because
of the fact that25 each implementation of the sort is going through the array and touching multiple values at least once each time.26
-
Typical implementations of insertion sort would be stable because insertion sort has to do far less comparisons than other sorts.27 The others need O(n²) comparisons, versus O(1) for insertion sort.28
-
Suppose you implemented a variant of the heap sort that used a binary search
tree rather than a heap. What would you expect the worst-case performance of
your modified heap sort to be? Justify your answer.
The worst-case performance of a BST-modified heap sort is the cost of building the BST from the input array plus the cost of tearing it down into a sorted array. Building the BST can cost O(n²) in the worst case, which would be when the input array is sorted, or nearly so. Tearing down the BST can be done with an in-order tree walk, at an O(n) cost, or by using a node-by-node tear-down as is done by heap sort, at an O(n²) cost. In either case, the worst-case cost of building the tree dominates, for an O(n²) worst-case cost overall.
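For concreteness, a rough Java sketch of the variant follows (the names are invented for illustration); it uses naive, unbalanced insertion and an in-order walk for the tear-down. On an already-sorted input every insert walks to the bottom of the right spine, so building the tree costs 1 + 2 + … + (n - 1) = O(n²) comparisons, which dominates the O(n) walk.

static class Node {
    int key;
    Node left, right;
    Node(int k) { key = k; }
}

// Naive, unbalanced insertion: degenerates to a linked list (and an
// O(n²) total build cost) when the input arrives already sorted.
static Node insert(Node t, int k) {
    if (t == null) return new Node(k);
    if (k < t.key) t.left = insert(t.left, k);
    else t.right = insert(t.right, k);
    return t;
}

// In-order walk: copies the keys back out in sorted order, O(n) total.
static int walk(Node t, int[] out, int i) {
    if (t == null) return i;
    i = walk(t.left, out, i);
    out[i++] = t.key;
    return walk(t.right, out, i);
}

static void bstSort(int[] a) {
    Node root = null;
    for (int k : a) root = insert(root, k); // O(n²) worst case
    walk(root, a, 0);                       // O(n) tear-down
}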
The answers, in no particular order and approximately verbatim:
-
The heap sort’s performance is O(n log n). Since a binary search tree is like a heap with different rules, I believe they would have the same estimate, O(n log n).1
-
The worst case performance from many sorts is O(n2), but since this heap sort uses a binary search tree rather than a heap we can estimate the sort to perform at O(log n), because of the binary search tree property there is a large division with each implementation of the data making the amount searched less.2
-
O(n²) because a binary tree takes in 2 conditions a greater than and a less than so every time it was sorted you would need to go through 2 conditions.3
-
Estimated worst-case performance would be O(n log n) since maintaining both properties comes into play here,4 the more work to be done.5 If a new value most be inserted6 into the sorted BST, it must fit the entries7 of both properties. The BST causes O(log n) performance, which a heap sort could8 potentially every O(n) performance by itself. Combining the two would probably be O(n log n) in the worst case.
-
If a binary search has O(n) performance9 and a heap sort has O(n log n) performance. I would assume that the binary search would decrease the efficiency of the heap sort to O(n).10
-
O(n log n). The original worst-case performance of the methods used in a heap sort is n log n, which, after the heap becomes a binary search tree and uses those methods instead, becomes 2n log n, based on the proposed worst-case performance of a binary search tree.11
-
Binary Search Trees (BSTs) give O(log n) access to an element in the tree.12 When adding into a BST it is sorting the data according to the BST property which puts everything to the left of the node ≤ the node and everything to the right > node. The tree is doing the sorting for you.13 Now the sort just has to remove every node from the left of the root then the root itself, then remove the right side of the root.14 In each subtree it should be removed in the order LNR to keep sequence. This is using O(n) work to remove all the nodes at O(log n) per removal causing the sort to have O(n log n) worst-case behavior.
-
I would estimate the worst-case performance to be O(n). I said linear because a binary search tree already has an order to it.15 It would be quicker to do the heap sort because you could go through the BST one time.
-
Normally, if you sort with a heap it gives you O(n log n) worst-case behavior. However, if you implement the heapsort using a bst instead of a heap. The worst-case behavior of this modified heapsort would be O(log n),16 since this is the worst-case behavior for BSTs.17
-
A binary search tree has O(log n) behavior.18 A heap sort that implemented a binary search tree would not ever sort the data because the properties of a binary search tree conflict with the properties of a heap.19
-
I would estimate the worst-case performance to be O(n log n) because since it is a binary search tree rather than a heap this causes the sort to hold the BST property causing the heap sort to do O(n log n) work.20
-
One way to occasionally speed up a sort is to have it recognize when the
unsorted array part is properly sorted; the sort can stop at that point.
Explain how to modify selection sort to include this improvement. The quality
of your improvement is inversely proportional to the time and space it adds to
the sort.
When making a pass through the unsorted array part, keep track of the number of times the extreme-value index changes. If it changes for every element, then the elements in the unsorted array part are properly ordered, and the selection sort can stop.
The modification takes an extra integer variable to keep track of the number of times the extreme-value index is assigned to, an extra addition per comparison to maintain the count, and an extra test at the end of the scan to check the count; the overhead is constant in both time and space. (You can replace the integer count with a boolean value to note when the extreme-value index isn’t assigned to. This change replaces the addition with an assignment and makes the end-of-scan test trivial.)
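A minimal Java sketch of the counting version (ascending order; the names are invented for illustration). It selects the maximum of the unsorted part a[0..end]; if the extreme-value index was reassigned at every element, then a[0..end] was already in ascending order, and since everything past end is at least as large, the whole array is sorted. (With duplicate values the strict comparison can miss an already-ordered part, which costs only the early exit, never correctness.)

static void selectionSort(int[] a) {
    for (int end = a.length - 1; end > 0; end--) {
        int max = 0;      // index of the extreme value seen so far
        int changes = 0;  // times the extreme-value index was reassigned
        for (int j = 1; j <= end; j++) {
            if (a[j] > a[max]) {
                max = j;
                changes++;  // the one extra addition per comparison
            }
        }
        int tmp = a[max]; a[max] = a[end]; a[end] = tmp;
        if (changes == end) return; // index changed for every element: done
    }
}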
The answers, in no particular order and approximately verbatim:
-
The selection sort should keep track of the middle element1 and use comparisons to check if, compared to the position of that element, the other elements are sorted or not.2 This would require more time and additionally more space, but because a selection sort uses less space due to its use of less comparisons,3 the ability to end the program earlier should make up for the use of space.
-
Selection sort’s performance is O(n2). Before each comparison of values in the array, the selection sort could call a
checkIfSorted(array[])
method that goes through each value and checks it with the one before, making sure it’s bigger. If this check finishes and the array is sorted, the method would return true, and false if not sorted. Doing this is O(n) because it touches each value once4 so it wouldn’t change the selection sorts performance,5 but in the best case scenario where the array is already sorted, selection sort could be linear.
-
Selection sort uses an algorithm that compares the elements to each other at certain points in the array, & sorts it that way so in order to speed up this process you can have the program recognize which part of the array is already sorted and to do this in a selection sort you can compare the array selections at a time6 and divide up the work in order to achieve a faster sort.7 (Comparing elements is cheaper than assigning values to diff. places. So this is the most efficient way to compare a fraction of the array to another at a time and move through to find out what’s sorted & what isn’t.)
-
When selection sort begins to rearrange the data I would modify it in a way that when it checks one past of the array8 and sorts it, it would tell the selection sort how much unsorted space there is.9 As it continues, it should automatically recognize when the unsorted parts become sorted because when it goes to sort the already sorted areas it will move to another place.10
-
Selection sort keeps track of the index of the next smallest value while comparing elements in the array and does one swap in the end of the loop. This can be modified to recognize a sorted part of the array by adding the bubble sort comparison to the loop as well.11 If as we search for the minimum value we also compare adjacent indexes we can check to see if it is sorted. In each loop we can set a boolean value called sorted to true and as we move along the array if we find that any adjacent values are not in sequence we set it to false. However, if all the adjacent values are in order the boolean value sorted would remain true. If the value remains true we can end the loop because the array is sorted.
-
Have the sort go through and first check if any section12 is sorted than go through and sort the sectors that aren’t already sorted.13
-
As the sort is looking for its next value it can also make a comparison between the unsorted values.14 If you create a boolean value that becomes false when 2 elements are out of order you can determine if you need to repeat again. If you reach the end and the value is still true, you no longer need to sort.
-
A way to modify selection sort to recognize when the array is fully sorted is to keep track of when you reach the largest index (or smallest depending on which way you are sorting).15 Since selection sort checks every index it is therefore obvious that when you hit the last (or first) index you are done sorting the array.16
-
It could be modified to check the unsorted array after each sort.17 It would run through the unsorted part, comparing each value and only sort if it finds a value out of place.18 It would add alot19 of time because of all the extra comparisons.20
-
Assuming an array of integers being sorted into ascending order, starting from index 0. A simple modification would run through the entire array, comparing values in consecutive indices. The comparison must be true that the lesser index’s value, must be less than or equal to the next index’s value.21 Ex.22
for (int i = 0; i < array.length; i++)
    if (array[i] <= array[i + 1])
        return true;
    else
        // do some sort
-
One way to modify selection sort is to have the sort recognize when values are being moved over or when values are being moved over significant amounts.23 If you have a list similar to the one above and you use selection sort, you see that you don’t move any values until you get to the fifth index because everything is sorted.24 If you get the sort to realize it hasn’t moved any values you can cut down on time.25
-
Merge sort does something during a sort that quick sort doesn’t do, and, as a
consequence, merge sort has asymptotically better worst-case behavior than does
quick sort. Explain how this difference between sorts affects the worst-case
behavior of each sort.
Merge sort reliably splits the sorting task into two equally sized subtasks, which results in an O(n log n) worst-case sort. Quicksort’s partition is less reliable because it could split the sorting task into two maximally unequal subtasks, which results in an O(n²) worst-case sort.
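The difference shows up directly in the recurrences (a sketch, assuming the usual linear-time merge and partition passes): merge sort always solves T(n) = 2T(n/2) + cn, which is O(n log n), while quicksort against its worst-case input solves T(n) = T(n - 1) + cn = cn + c(n - 1) + … + c, which is O(n²).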
The answers, in no particular order and approximately verbatim:
-
Merge sort sorts the subsets1 and Quick sort does not. The worst case Quick sort scenario is that each element will have to be accessed and moved with every partition. With a merge sort, the worst case scenario is that they will all be accessed once to be put in their proper position.2
-
Quick sort is exactly that ???3 as so, is not as thorough as merge sort. Quick sort is asymptotically worse because it sorts based upon unless it is currently comparing, having the potential to need to resort values multiple times in order to get it right. Merge sort eliminates the problem of touching values multiple times4 giving better performance.
-
When merge sort first sorts its elements, it first takes half of the unsorted list and puts those in order first before sorting each element together. This leaves room for errors in sorting,5 such as needing to backtrack if one element remains unsorted,6 giving merge sorts a chance of running at their worst case behavior. Quick sort takes a more balanced approach in sorting, using the median of the left and right elements to determine the placement of elements. The problem with quicksort is that if the elements used to determine the pivot point aren’t even,7 the quicksort itself suffers, needing more time to sort and thus having worse worst-case behavior.
-
Merge separates the values of the array8 and then sorts them as it goes along potentially checking to see whether or not the array is already sorted. Quicksort worst-case behavior is O(n²) because if the list is sorted it still does the implementation going through and touching every value.9 Merge sort only merges already sorted arrays so sorted arrays can’t have an effect like they would with quick sort.
-
Merge sort cuts the array it is sorting in half every time at the middle until it is left with single element arrays. Quick sort does not always cut the array perfectly in half every time because it is picking a pivot value and shifting everything less than or equal to it to the left and everything greater than it to the right. This does not always cut the array in half because the pivot value could be an extreme high or low and therefore would not be cutting the array into halves. This difference affects worst case behavior because if Quicksort keeps setting the pivot value to be an extreme it produces O(n²) work because it calls partitioning n times doing O(n) work. With merge sort it keeps cutting the array in half no matter what doing O(log n) work and puts it together in O(n) work causing it to have worst case O(n log n) behavior.
-
Merge sort splits the sort in half and then reunites the halves when finished. It continues to do this as it passes through the data. This division allows logarithmic behavior which gives it better worst-case behavior. Quick-sort has no logarithmic behavior,10 which pushes its worst-case behavior in the wrong (less efficient) direction.11
-
Merge sort can put together indices to sort,12 because of this it can sort through arrays more efficiently. The quick sort cannot do this, thus having to sort more indices;13 consequently giving quick sort a worst-case behavior.
-
Quick sort runs through and sorts everything, merge sort separates it into 2 sections and than merges them together.14 The merge sort has better worst-case behavior because it sorts the 2 smaller sections quickly15 and than merging the 2 smaller sections together makes it extremely easy and efficient to sort, but the quicksort has to touch every element and individually sort it making it not so quick. [Doesn’t merge sort touch every element too?]
-
A mergesort splits the elements being sorted in half, sorts them, and then uses an algorithm to put them back together sorted. Quicksort splits the elements in half using partitioning, so in this worst case partitioning may not divide the elements in half (if pivot is the extreme values) and in this case, it does O(n) iterations and O(n) work for each iteration, making the final worst-case for quicksort O(n²). Since the mergesort doesn’t use partitioning.16 It has O(log n) worst case behavior because it will always split the elements in half and sort that way.
-
Quick sort, typically, has the same behavior as merge sort [O(n log n)] except when the array is sorted to begin with.17 This causes quick sort to behave like O(n²). Merge sort is able to keep its O(n log n) when the array is sorted to begin with.18
-
The difference between merge sort and quick sort is that merge sort halves and then places the two halves back together sorted. Quick sort uses a pivot.19 The halving makes merge sort use O(log n) for part of the sorting and O(n) for the other part finishing with O(n log n). Quick sort uses O(n²) as its asymptotic worst case. This is because it keeps going through the data multiple times.20