Tag Archives: Divide and conquer algorithm

Computer Algorithms: Strassen’s Matrix Multiplication

Introduction

Strassen’s method of matrix multiplication is a typical divide and conquer algorithm. We’ve already seen some divide and conquer algorithms, like merge sort and Karatsuba’s fast multiplication of large numbers. However, let’s revisit what’s behind the divide and conquer approach.

Unlike dynamic programming, where we “expand” the solutions of sub-problems in order to get the final solution, here we are talking more about joining sub-solutions together. The sub-problems we split the general problem into are of the same kind (and usually of equal size), and merging their solutions is well defined.

A typical example is the merge sort algorithm. In merge sort we have two sorted arrays, and all we want is to get the array representing their union, again sorted. Of course, the tricky part in merge sort is the merging itself, because we have to pass through both arrays, A and B, comparing pairs of items – one from A and one from B. A bit off topic, but this is the weak point of merge sort: although its worst-case time complexity is O(n.log(n)), quicksort is often preferred in practice because there is no “merge” step – quicksort just concatenates the two sub-arrays. Note that in quicksort the sub-arrays aren’t of equal length in general, and although its worst-case time complexity is O(n^2), it often outperforms merge sort.

This simple example shows that merging the solutions of two sub-problems isn’t always a trivial task. Thus we must be careful when applying any divide and conquer approach.
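To make that merge step concrete, here is a minimal Python sketch of merging two already-sorted arrays, A and B – my own illustration, not code from the post:

    def merge(a, b):
        """Merge two already-sorted lists into one sorted list."""
        result = []
        i = j = 0
        # Compare the current items of both lists and take the smaller one.
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                result.append(a[i])
                i += 1
            else:
                result.append(b[j])
                j += 1
        # One list is exhausted; append whatever remains of the other.
        result.extend(a[i:])
        result.extend(b[j:])
        return result

    print(merge([1, 4, 9], [2, 3, 10]))  # [1, 2, 3, 4, 9, 10]

Every pass through the loop costs a comparison, which is exactly the work quicksort avoids by simply concatenating its two partitions.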

History

Volker Strassen is a German mathematician born in 1936. He is well known for his work on probability, but in computer science and algorithms he is mostly recognized for his matrix multiplication algorithm, which is still one of the main methods that outperform the general matrix multiplication algorithm.

Strassen first published this algorithm in 1969, proving that the n^3 algorithm isn’t the optimal one. Actually the solution given by Strassen is only slightly better, but his contribution is enormous, because it triggered much more research on matrix multiplication, which led to some faster approaches, e.g. the Coppersmith-Winograd algorithm with O(n^2.3737). Continue reading Computer Algorithms: Strassen’s Matrix Multiplication

Computer Algorithms: Karatsuba Fast Multiplication

Introduction

Typically, multiplying two n-digit numbers requires n^2 single-digit multiplications. That is actually how we, humans, multiply numbers. Let’s take a look at an example where we have to multiply two 2-digit numbers.

12 x 15 = ?

OK, we know the answer is 180, and there are lots of intuitive methods that help us get there. Indeed 12 x 15 is just a bit more difficult to calculate than 10 x 15, because multiplying by 10 is really easy – we just append a 0 to the end of the number. Thus 15 x 10 equals 150. Now back to 12 x 15 – it equals 10 x 15 (which is 150) plus 2 x 15, which is also very easy to calculate and equals 30. The result of 12 x 15 is then 150 + 30, which fortunately isn’t difficult to compute and equals 180.
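That decomposition is nothing more than the distributive law, and it is easy to express in code. A tiny Python sketch of it – my own, purely illustrative:

    def multiply_by_parts(a, b):
        """Multiply a two-digit number a by b via a = tens*10 + units."""
        tens, units = divmod(a, 10)       # e.g. 12 -> (1, 2)
        return tens * 10 * b + units * b  # 10*15 + 2*15 = 150 + 30

    print(multiply_by_parts(12, 15))  # 180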

That was easy, but in some cases the calculations are a bit more difficult and we need a structured algorithm to get the right answer. What about 65 x 97? That is not as easy as 12 x 15, right?

The algorithm we know from primary school, described in the diagram below, is well structured and helps us multiply two numbers.

Typical Multiplication

We see that even for two-digit numbers this is quite a lot of work – we have 4 multiplications and some additions. Continue reading Computer Algorithms: Karatsuba Fast Multiplication
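For two 2-digit numbers the schoolbook scheme needs four single-digit multiplications; Karatsuba’s insight is that three recursive multiplications are enough. A minimal Python sketch of that idea – my own illustration, not the article’s code:

    def karatsuba(x, y):
        """Multiply two non-negative integers using three recursive
        multiplications instead of four."""
        if x < 10 or y < 10:  # single-digit base case: multiply directly
            return x * y
        half = max(len(str(x)), len(str(y))) // 2
        high_x, low_x = divmod(x, 10 ** half)  # x = high_x * 10^half + low_x
        high_y, low_y = divmod(y, 10 ** half)  # y = high_y * 10^half + low_y
        a = karatsuba(high_x, high_y)                  # high parts
        b = karatsuba(low_x, low_y)                    # low parts
        c = karatsuba(high_x + low_x, high_y + low_y)  # combined parts
        # c - a - b recovers both cross terms with a single extra multiplication
        return a * 10 ** (2 * half) + (c - a - b) * 10 ** half + b

    print(karatsuba(65, 97))  # 6305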

You think you know algorithms. Quiz results!

Finally, the results from “You think you know algorithms” are out. This time only 3 of you answered all the questions correctly.

1. Which string searching algorithm is faster?

  • Morris-Pratt – correct answer (ref)
  • Brute force
  • Rabin-Karp

Quiz results for "Which string searching algorithm is faster?"

Continue reading You think you know algorithms. Quiz results!

Computer Algorithms: Quicksort

Introduction

When it comes to sorting items by comparing them, merge sort is a very natural approach. It is natural because it simply divides the list into two equal sub-lists and then sorts these two partitions, applying the same rule recursively. That is a typical divide and conquer algorithm, and it follows the intuitive approach of speeding up sorting by reducing the number of comparisons. However, there are other “divide and conquer” sorting algorithms that do not follow the merge sort scheme, yet are practically just as successful. Such an algorithm is quicksort.

Overview

Back in 1960 C. A. R. Hoare came up with a brilliant sorting algorithm. In general quicksort consists of a few very simple steps. First we choose an element from the list (called a pivot), then we put all the elements with value less than the pivot on the pivot’s left side and all the items with value greater than the pivot on its right side. After that we repeat these steps for the left and the right sub-lists. That is quicksort! Simple and elegant!
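Those steps translate almost directly into code. Here is a short Python sketch – my own, not the post’s implementation – that uses the first element as the pivot:

    def quicksort(items):
        """Sort a list by partitioning around a pivot and recursing on both sides."""
        if len(items) <= 1:
            return items
        pivot = items[0]
        smaller = [x for x in items[1:] if x < pivot]   # values less than the pivot
        greater = [x for x in items[1:] if x >= pivot]  # values greater than or equal
        # Sort both sides and simply concatenate them; no merge step is needed.
        return quicksort(smaller) + [pivot] + quicksort(greater)

    print(quicksort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]

A production implementation would usually partition in place and choose the pivot more carefully, since a consistently bad pivot leads to quicksort’s O(n^2) worst case.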

Quicksort
Continue reading Computer Algorithms: Quicksort

Computer Algorithms: Merge Sort

Introduction

Basically, sorting algorithms can be divided into two main groups: those based on comparisons and those that are not. I have already posted about some algorithms from the first group. Insertion sort, bubble sort and Shell sort are based on the comparison model. The problem with these three algorithms is that their worst-case complexity is O(n^2), so they are very slow.

So is it possible to sort a list by comparing its items in less than O(n^2) time? The answer is yes, and here’s how we can do it.

The nature of those three algorithms mentioned above is that we compare almost every pair of items in the initial list.

Insertion sort and bubble sort make too many comparisons, exactly what merge sort tries to overcome!

This, of course, is not the best approach, and we don’t need to do that. Instead, we can divide the list into smaller lists and then sort them. After sorting the smaller lists, which is supposed to be easier than sorting the entire initial list, we can merge the results into one sorted list. This technique is typically known as “divide and conquer”.

Normally, if a problem is too difficult to solve, we can break it apart into smaller sub-problems and try to solve those. Then we can somehow merge the results of the solved sub-problems.

If it's too difficult to sort a large list of items, we can break it apart into smaller sub-lists and try to sort them!
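Putting the two ideas together – split in half, sort the halves, merge them back – a recursive merge sort can be sketched in a few lines of Python. This is my own illustration, not the code from the full post:

    def merge(a, b):
        """Merge two sorted lists into one sorted list."""
        result, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                result.append(a[i])
                i += 1
            else:
                result.append(b[j])
                j += 1
        return result + a[i:] + b[j:]  # append whatever is left of either list

    def merge_sort(items):
        """Split the list in half, sort each half, then merge the sorted halves."""
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]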

Continue reading Computer Algorithms: Merge Sort