Algorithm cheatsheet: Quicksort

Click on the image to download this cheatsheet as a PDF!
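The cheatsheet itself covers Quicksort; as a minimal, non-in-place sketch of the idea (pick a pivot, partition into smaller/equal/greater, recurse), something like the following:

```python
def quicksort(a):
    # Base case: lists of 0 or 1 elements are already sorted.
    if len(a) <= 1:
        return a
    # Pick the middle element as pivot (any choice works for correctness;
    # the choice only affects performance).
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

This version trades the in-place partitioning of production quicksorts for readability; real implementations partition within the array to get the cache behavior mentioned in the comments below.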

7 thoughts on “Algorithm cheatsheet: Quicksort”

  1. Hi. While it is true that Mergesort’s average is better than Quicksort’s best (and, indeed, one can prove that in the worst case, Mergesort makes as many comparisons as Quicksort’s best case!), it is important to remember that other things factor into the discussion.
    The reason why Quicksort is so often used, for example, is mostly cache behavior and real-world performance induced from this behavior.
    Likewise, it’s important to remember the assumptions one makes (in terms of the computational model) when saying that Radix sort is “faster” than Quicksort. Namely, that the keys are integers bounded from above. This is not always a reasonable assumption to make.

    In practice, implementations often choose “intelligent quicksorts”, or “intelligent mergesorts”. See Timsort, or Introsort.

    That said, cool chart 🙂

  2. Also, recursively partitioning below eight elements or so causes the bookkeeping/stack frame overhead to eat up any advantage; I think most implementations use another algorithm once the recursively divided problem set gets this small.

    Also, quicksort is not stable in its most general form; adding comparisons to induce stability may eat up its speed advantage.

  3. I noticed that no one has compared these sorts to the PS9110 sort.
    The above sorts are basically O(N log N) sorts.
    The PS9110 sort is an O(N) sort.

    The second advantage concerns searching: a data set sorted by the above algorithms is typically searched with a binary search.
    Binary searches are O(log N) searches.
    The PS9110 search is an O(1) search.

    Linked lists have the problem of the resources needed to create and destroy the links that keep the data set in sorted order.

    Arrays have the problem of N/2 moves to add or delete an element from a sorted data set. {O(N) time}

    The PS9110 algorithms can insert and delete data from an array in O(1) time.

  4. @Marvon: It is relatively easy to prove that no comparison-based sorting algorithm can do better than Omega(n log n) comparisons (specifically, fewer than ceil(log_2(n!))) in the worst case. If you assume things about the data, better bounds are possible. Comparing algorithms that assume different things is a tricky business.
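On the Radix-sort point in comment 1: the O(N) bound only holds under the assumption that keys are bounded integers. A minimal LSD (least-significant-digit) radix sort sketch for non-negative integers, which runs in O(d · N) for d-digit keys:

```python
def radix_sort(nums, base=10):
    # LSD radix sort: bucket by each digit from least to most significant.
    # Assumes non-negative integer keys (the bounded-key assumption from
    # the comment above); runs in O(d * n) for d-digit keys.
    if not nums:
        return []
    nums = list(nums)
    max_val = max(nums)
    exp = 1
    while max_val // exp > 0:
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // exp) % base].append(x)
        # Concatenating buckets in order preserves stability, which is
        # what makes the digit-by-digit passes correct.
        nums = [x for bucket in buckets for x in bucket]
        exp *= base
    return nums
```

Because each pass is stable, earlier (less significant) digit orderings survive later passes, which is the invariant the algorithm relies on.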
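The small-partition cutoff described in comment 2 can be sketched as a hybrid quicksort that falls back to insertion sort below a threshold (the exact cutoff value varies by implementation; 8 here follows the comment's figure):

```python
CUTOFF = 8  # below this size, insertion sort beats the recursion overhead

def insertion_sort(a, lo, hi):
    # Sort a[lo..hi] (inclusive) in place.
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    # Small partitions: hand off to insertion sort instead of recursing.
    if hi - lo + 1 <= CUTOFF:
        insertion_sort(a, lo, hi)
        return
    # Lomuto partition with the last element as pivot.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    hybrid_quicksort(a, lo, i - 1)
    hybrid_quicksort(a, i + 1, hi)
```

Production sorts such as Introsort add a further fallback to heapsort when recursion gets too deep, which bounds the worst case at O(n log n).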
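The ceil(log_2(n!)) bound in comment 4 comes from the decision-tree argument: a comparison sort must distinguish all n! permutations, so its decision tree has at least n! leaves and hence depth at least log_2(n!). A tiny sketch that computes the bound:

```python
import math

def comparison_lower_bound(n):
    # Any comparison sort needs at least ceil(log2(n!)) comparisons in the
    # worst case: the decision tree must have n! leaves, one per permutation,
    # and a binary tree with n! leaves has depth >= log2(n!).
    return math.ceil(math.log2(math.factorial(n)))
```

By Stirling's approximation, log_2(n!) grows as n log_2 n, which is the familiar Omega(n log n) statement of the bound.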
