Implementation and Analysis of Quick Sort

Implementation of Quick Sort in C# and analysis of its running time in the best, worst and average cases


Category: Data Structure And Algorithms Tags: C#, Java, C++ Programming


Introduction

        Quick sort is one of the most widely used sorting algorithms and appears in many real-world applications. We have already seen Merge sort and Heap sort, both with running time O(n log n). Quick sort's average running time is also O(n log n), yet in practice it usually beats both of them and runs faster than any other comparison-based sorting algorithm.

        Quick sort's worst case running time is O(n^2), while merge sort and heap sort run in O(n log n) even in their worst case, so why is quick sort considered superior to merge/heap sort? We will answer this later in the article; first, let us analyze the algorithm.

Implementation

        We will use C# to implement the quick sort and structure of class would be:

public class Quick
{
    public int[] A; // Array which has to be sorted

    public Quick(int[] array) { A = array; } // Constructor

    public int[] Sort() { QuickSort(0, A.Length - 1); return A; }

    void Swap(int i, int j) { int temp = A[i]; A[i] = A[j]; A[j] = temp; } // Swaps values of two indexes

    // Recursion
    public void QuickSort(int p, int r) {...}

    // Gives the index where to partition the array, and sorts around the pivot
    public int Partition(int p, int r) {...}
}

        In the class above we can see that we have to implement two methods, QuickSort and Partition, both of which expect two parameters p and r: the first and last index of the (sub)array respectively. Below is the implementation of Partition:

/// <summary>
/// Partitions the subarray A[p..r] around the pivot (the last element)
/// and returns the pivot's final index, i.e. where we split the array
/// </summary>
/// <param name="p">Start Index</param>
/// <param name="r">End Index</param>
/// <returns>Final index of the pivot</returns>
public int Partition(int p, int r)
{
    int pivot = A[r];  //always last element
    int i = p - 1;
    for (int j = p; j < r; j++)
    {
        if (A[j] <= pivot)
        {
            i++;
            Swap(i, j);
        }
    }
    i++;
    Swap(i, r);
    return i;
}

        We choose the last element as the pivot. While scanning, every value smaller than or equal to the pivot is swapped toward the front of the subarray; once the scan finishes, the next position after those smaller values is the pivot's final position, and everything after it is larger than the pivot. The pivot's final index is therefore the partition point, so we return it to the QuickSort method, whose implementation follows the short trace below.
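To make the partition step concrete, here is a small hypothetical trace (it assumes the Quick class above and calls Partition directly on the sample array used later in this article):

Quick quick = new Quick(new int[] { 2, 1, 7, 4, 9, 3, 6 });
int q = quick.Partition(0, 6);   // pivot = A[6] = 6
// Values <= 6 (namely 2, 1, 4, 3) are swapped to the front, then the pivot is
// swapped into the next position, so A becomes { 2, 1, 4, 3, 6, 7, 9 }
// and Partition returns 4, the pivot's final index.
Console.WriteLine(q);            // prints 4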

/// <summary>
/// Makes recursive calls
/// </summary>
/// <param name="p">Start Index</param>
/// <param name="r">End Index</param>
public void QuickSort(int p, int r)
{
    if (p < r)
    {
        int q = Partition(p, r);
        QuickSort(p, q-1);
        QuickSort(q + 1, r);
    }
}

Let's run the above code:

Quick quick = new Quick(new int[] { 2, 1, 7, 4, 9, 3, 6 });
var sortedArr = quick.Sort();
foreach (int ele in sortedArr)
    Console.WriteLine(ele);
Console.ReadLine();

Output

1
2
3
4
6
7
9

Analysis

        Here performance depends on the partition, i.e. on where the pivot ends up. The worst case is when the pivot does not move from its last position, so the partition splits the subarray into parts of size (n-1) and 0.

 

Worst Case: This occurs when the pivot's final position is the same as its starting (last) position, i.e. when the array is already sorted, so

 

T(n) = T(n-1) + T(0) + O(n)

=> T(n) = T(n-1) + O(n)

=> T(n) = O(n) + O(n-1) + ... + O(1)   (expanding the recurrence n times)

=> T(n) = O(n^2)

 

        *So the worst case occurs when the array is already sorted.
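A quick way to see this (a hypothetical check, again using the Quick class above): on an already sorted array every element is less than or equal to the pivot, so the pivot stays at the last index and the split is (n-1) : 0.

Quick sortedInput = new Quick(new int[] { 1, 2, 3, 4, 5 });
int q = sortedInput.Partition(0, 4); // pivot = 5; nothing smaller is moved past it
Console.WriteLine(q);                // prints 4: the split is 4 elements vs 0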

 

Best Case: The best case occurs when the algorithm divides the array into two exactly equal halves, i.e. the pivot's final position is the middle, so

 

T(n) = 2T(n/2) + O(n)      ..eq1

=> T(n) = O(n log n)

 

        *The above recurrence (eq1) has already been solved in the Maximum Subarray problem and also in Merge Sort.
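As a short sketch of that solution (the standard recursion-tree expansion, with c standing for the constant hidden in O(n)): every level of the recursion does about c n work, and n can only be halved log2 n times, so

T(n) = 2T(n/2) + c n = 4T(n/4) + 2 c n = ... = 2^k T(n/2^k) + k c n

with 2^k = n, i.e. k = log2 n:

=> T(n) = n T(1) + c n log2 n

=> T(n) = O(n log n)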

 

Average Case: Strictly, the average case should be analyzed as a randomized algorithm, but to keep the analysis easy to follow we take a fixed split that is close to the worst case. Suppose every partition comes out as 9:1, i.e. the algorithm does really bad partitioning (very near the worst, since the worst is 10:0 as we saw before); the recurrence is then:

 

T(n) = T(9n/10) + T(n/10) + c n   ...eq1

 

by eq1,

T(9n/10) = T(9^2 n/10^2) + T(9n/10^2) + c 9n/10   ...eq2

and

T(n/10) = T(9n/10^2) + T(n/10^2) + c n/10     ...eq3

substituting eq2 and eq3 into eq1:

T(n) = [ T(9^2 n/10^2) + T(9n/10^2) + c 9n/10 ] + [ T(9n/10^2) + T(n/10^2) + c n/10 ] + c n

=> T(n) = [ T(9^2 n/10^2) + 2 T(9n/10^2) + T(n/10^2) ] + [ c 9n/10 + c n/10 + c n ]

=> T(n) = [ T(9^2 n/10^2) + 2 T(9n/10^2) + T(n/10^2) ] + 2 c n

generalizing to the k-th level (the bracket collects all the subproblems at that level):

=> T(n) = [ T(9^k n/10^k) + ... + 2^(k-1) T(9n/10^k) + T(n/10^k) ] + k c n

for the k where (10/9)^k = n, i.e. k = log_(10/9) n, or simply O(log n):

=> T(n) = T(1) + ... + 2^(k-1) T(1/9^(k-1)) + T(1/9^k) + c n log n

 

The terms T(1/9^(k-1)) and T(1/9^k) are constant-size subproblems no bigger than T(1); altogether the recursion reaches at most n such constant-size leaf subproblems, each costing constant time, so together they contribute O(n):

=> T(n) = O(n) + c n log n

=> T(n) = O(n log n)

 

                *The above equation says that the average case running time of quick sort is O(n log n); even if the algorithm divided the subarrays 99:1, the running time would stay the same.

 

        **Now to the question: why is quick sort superior to the other comparison-based algorithms even though its worst case running time is n^2? In real scenarios the input arrives in random order, so we get a mix of good and bad partitions; with random input it is practically impossible to always hit the worst split. Suppose we have to split an array of n elements and the first split is the worst case, (n-1) : 0; the next split might still be close to the best case and split roughly 1:1 (both parts about (n-1)/2). So unless the input is deliberately given already sorted, we can treat it as random, and if we also make the pivot selection random we can say the algorithm essentially always runs in its average running time, O(n log n). Quick sort is also better in practice because in the average case it does not move elements unnecessarily: it makes no extra effort to move elements the way heap sort does, and it does not merge like merge sort, so it works better than merge/heap sort in real scenarios.
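As a rough sketch of the randomized pivot idea (a hypothetical addition to the Quick class above, not part of the code shown earlier), one could swap a randomly chosen index into the last position and then reuse the existing Partition:

Random rnd = new Random(); // hypothetical extra field inside the Quick class

public int RandomizedPartition(int p, int r)
{
    int pivotIndex = rnd.Next(p, r + 1); // pick a random index in [p, r]
    Swap(pivotIndex, r);                 // move that element to the end, where Partition expects the pivot
    return Partition(p, r);              // partition exactly as before
}

QuickSort would then call RandomizedPartition instead of Partition; the rest of the class stays unchanged.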


Last modified on 29 October 2018
Nikhil Joshi