Tag Archives: graph algorithms

Computer Algorithms: Topological Sort Revisited

Introduction

We already know what topological sort of a directed acyclic graph is. So why does this algorithm need a revision? First of all, I never mentioned its complexity, so to understand why a revision is needed, let’s go over the algorithm once again.

We have a directed acyclic graph (DAG). Since there are no cycles, we can put all the vertices of the graph in an order such that if there’s a directed edge (u, v), u precedes v in that order.

Topological Sort
 

The process of putting all the vertices of the DAG in such an order is called topological sorting. It’s commonly used in task scheduling and in finding shortest paths in a DAG.

The algorithm itself is pretty simple to understand and code. We must start from the vertices that have no predecessors (no incoming edges).

Topological Sort - step 1
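
To make this concrete, here’s a minimal sketch of the idea in Python, following Kahn’s approach of repeatedly removing vertices that have no predecessors. The adjacency list representation (a dict of successor lists) and all the names are my assumptions, not taken from the post.

from collections import deque

def topological_sort(graph):
    # graph: dict mapping each vertex to the list of its successors
    # (an adjacency list; this representation is an assumption).
    in_degree = {v: 0 for v in graph}
    for successors in graph.values():
        for v in successors:
            in_degree[v] = in_degree.get(v, 0) + 1

    # Start from the vertices that have no predecessors.
    queue = deque(v for v, d in in_degree.items() if d == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        # "Removing" u means each of its successors loses one predecessor.
        for v in graph.get(u, []):
            in_degree[v] -= 1
            if in_degree[v] == 0:
                queue.append(v)
    return order  # contains every vertex only if the graph is acyclic

dag = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(topological_sort(dag))  # e.g. ['a', 'b', 'c', 'd']

With an adjacency list this runs in O(|V| + |E|) time and space.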
 
Continue reading Computer Algorithms: Topological Sort Revisited

Computer Algorithms: Longest Increasing Subsequence

Introduction

A very common problem in computer programming is finding the longest increasing (decreasing) subsequence in a sequence of numbers (usually integers). Actually this is a typical dynamic programming problem.

Dynamic programming covers a broad class of computer science problems grouped by the way they are solved. Unlike divide and conquer, where we merge roughly equal sub-solutions in order to obtain a single solution to the problem, in dynamic programming we usually try to find an optimal sub-solution and then grow it.

Once we have an optimal sub-solution, at each step we try to extend it until it covers the whole problem. A typical member of the dynamic programming class is finding the longest increasing subsequence.
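
As an illustration of that “grow the sub-solution” idea, here’s a short sketch of the classic O(n^2) dynamic programming solution in Python; the function name and the example input are illustrative, not taken from the post.

def longest_increasing_subsequence(seq):
    # Names here are illustrative. best[i] holds the length of the
    # longest increasing subsequence ending at seq[i].
    if not seq:
        return 0
    best = [1] * len(seq)
    for i in range(1, len(seq)):
        for j in range(i):
            # seq[i] can extend any increasing subsequence ending in a smaller value.
            if seq[j] < seq[i] and best[j] + 1 > best[i]:
                best[i] = best[j] + 1
    return max(best)

print(longest_increasing_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))  # 4 (e.g. 1, 4, 5, 9)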

However, this problem is also interesting because it can be related to graph theory. Let’s find out how. Continue reading Computer Algorithms: Longest Increasing Subsequence

Computer Algorithms: Strassen’s Matrix Multiplication

Introduction

Strassen’s method of matrix multiplication is a typical divide and conquer algorithm. We’ve already seen some divide and conquer algorithms, like merge sort and Karatsuba’s fast multiplication of large numbers. However, let’s go over what’s behind the divide and conquer approach once again.

Unlike dynamic programming, where we “expand” the solutions of sub-problems in order to get the final solution, here we are talking more about joining sub-solutions together. The sub-problems are of roughly equal size, and merging their solutions is well defined.

A typical example is the merge sort algorithm. In merge sort we have two sorted arrays, and all we want is to get the single sorted array representing their union. Of course, the tricky part in merge sort is the merging itself: we have to pass through the two arrays, A and B, and compare pairs of items, one from A and one from B. A bit off topic, but this is the weak point of merge sort, and although its worst-case time complexity is O(n.log(n)), quicksort is often preferred in practice because there’s no “merge” step. Quicksort just concatenates the two sub-arrays. Note that in quicksort the sub-arrays aren’t of equal length in general, and although its worst-case time complexity is O(n^2), it often outperforms merge sort.

This simple example shows that merging the solutions of two sub-problems isn’t always a trivial task. Thus we must be careful when applying any divide and conquer approach.
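
Since the merging of two sorted arrays is the step discussed above, here’s a minimal sketch of it in Python; the function name and the example are mine, not from the post.

def merge(a, b):
    # Merge two already sorted lists into one sorted list (helper name is illustrative).
    result = []
    i = j = 0
    # Compare a pair of items, one from each list, and take the smaller one.
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    # One list is exhausted; the rest of the other is already sorted.
    result.extend(a[i:])
    result.extend(b[j:])
    return result

print(merge([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]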

History

Volker Strassen is a German mathematician born in 1936. He is well known for his work on probability, but in computer science and algorithms he’s mostly recognized for his matrix multiplication algorithm, which is still one of the main methods outperforming the straightforward matrix multiplication algorithm.

Strassen first published this algorithm in 1969 and thereby showed that the n^3 algorithm isn’t the optimal one. Actually the solution given by Strassen is only slightly better, but his contribution is enormous, because it triggered much more research on matrix multiplication, which led to some faster approaches, e.g. the Coppersmith-Winograd algorithm with O(n^2.3737). Continue reading Computer Algorithms: Strassen’s Matrix Multiplication

Computer Algorithms: Graph Breadth First Search

Introduction

Since we already know how to represent graphs, we can move on to some very simple approaches to walking through them. Visiting all the vertices of a graph is a fundamental technique for most graph algorithms, such as finding shortest/longest paths, etc.

The first thing to note is that, in most cases, graphs are not trees, so walking through them can’t start from a root as it does with trees. What we must do first is decide where to start: in other words, we must choose a starting vertex.

BFS Choosing a Starting Point
It’s clear that depending on the starting point we can get different traversals of the graph. Thus choosing a starting point can be very important for our algorithm!

After that we need to know how to proceed. There are two main approaches, known as “breadth first” and “depth first” search. While depth first search starts from a vertex, goes as far as possible, then walks back and passes through vertices that haven’t been visited yet, breadth first search visits all the neighbors of the current vertex first, and only then goes on to the next level.
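
Here’s a minimal sketch of breadth first search in Python, using a queue to visit the graph level by level; the adjacency list representation and the names are my assumptions, not the post’s.

from collections import deque

def breadth_first_search(graph, start):
    # graph: dict mapping each vertex to the list of its neighbors
    # (an adjacency list; this representation is an assumption).
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        # Pass through all unvisited neighbors of u before going one level deeper.
        for v in graph.get(u, []):
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(breadth_first_search(g, 'a'))  # ['a', 'b', 'c', 'd']
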
Continue reading Computer Algorithms: Graph Breadth First Search