Computer Algorithms: Kruskal’s Minimum Spanning Tree

Introduction

One of the two main algorithms for finding a minimum spanning tree is Kruskal’s algorithm. Before getting into the details, let’s go back to the principles of the minimum spanning tree.

We have a weighted graph and, of all its spanning trees, we’d like to find the one with minimal weight. As an example, in the picture below you see a spanning tree (T) of the graph (G), but it isn’t the minimum weight spanning tree!

A graph and a possible spanning tree
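As a quick preview of the idea the full post develops, here is a minimal sketch of Kruskal’s algorithm in Python: sort the edges by weight and greedily add every edge that doesn’t close a cycle. The edge-list representation, the tiny union-find helper and the example graph are illustrative assumptions of mine, not code from the post.

```python
def kruskal(num_vertices, edges):
    """Minimal Kruskal's algorithm: pick the cheapest edges that don't form a cycle.
    `edges` is a list of (weight, u, v) tuples; vertices are 0..num_vertices-1."""
    parent = list(range(num_vertices))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):      # greedily try edges by weight
        root_u, root_v = find(u), find(v)
        if root_u != root_v:                # different trees -> no cycle
            parent[root_u] = root_v         # union the two trees
            mst.append((u, v, weight))
    return mst

# A small example graph: 4 vertices, weighted edges given as (weight, u, v)
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # [(0, 1, 1), (1, 3, 2), (1, 2, 3)] with total weight 6
```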
 

Computer Algorithms: Minimum Spanning Tree

Introduction

Here’s a classical task on graphs. We have a group of cities and we must wire them to provide them all with electricity. Out of all possible connections we can make, which one uses the minimum amount of wire?

To wire N cities it’s clear that you need at least N-1 wires, each connecting a pair of cities. The problem is that sometimes you have more than one way to do it. Even for a small number of cities there can be more than one solution, as shown in the image below.

General Wiring Problem
 

Here we can wire these four nodes in several ways, but the question is which one is the best. By the way, defining the term “best” is also tricky. Most often it means the one that uses the least wire, but it can be anything else depending on the circumstances.

Since we’re talking about weighted graphs, we can speak more generally of a minimum weight solution spanning all the vertices of the graph.

By the way, there might be more than one equally optimal (minimal) solution.
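To make the “which wiring is best” question concrete, here is a tiny Python sketch. The four cities, the wire lengths and the two candidate wirings are made-up numbers, purely for illustration: both candidates use N-1 = 3 wires, yet one of them needs half the wire of the other.

```python
# Hypothetical wire lengths between four cities A, B, C, D
wire_length = {
    ("A", "B"): 4, ("A", "C"): 1, ("A", "D"): 5,
    ("B", "C"): 2, ("B", "D"): 3, ("C", "D"): 6,
}

def total_wire(wiring):
    """Sum the lengths of the chosen wires (each wire connects a pair of cities)."""
    return sum(wire_length[pair] for pair in wiring)

# Two different ways to connect all four cities with N-1 = 3 wires
candidate_1 = [("A", "B"), ("B", "C"), ("C", "D")]   # total = 4 + 2 + 6 = 12
candidate_2 = [("A", "C"), ("B", "C"), ("B", "D")]   # total = 1 + 2 + 3 = 6

print(total_wire(candidate_1))  # 12
print(total_wire(candidate_2))  # 6 -> far less wire for the same four cities
```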

Computer Algorithms: Shortest Path in a Directed Acyclic Graph

Introduction

We saw how to find the shortest path in a graph with positive edges using Dijkstra’s algorithm. We also know how to find the shortest paths from a given source node to all other nodes, even when there are negative edges, using the Bellman-Ford algorithm. Now we’ll see that there’s a faster algorithm, running in linear time, that can find the shortest paths from a given source node to all other reachable vertices in a directed acyclic graph, also known as a DAG.

Because a DAG is acyclic, we don’t have to worry about negative cycles. As we already know, it’s pointless to speak about shortest paths in the presence of a negative cycle, because we can “loop” over that cycle and our path will practically become shorter and shorter.

Negative Cycles
The presence of a negative cycle makes our attempt to find the shortest path pointless!

Thus we had two problems to overcome with Dijkstra’s and the Bellman-Ford algorithms: first of all we needed only positive weights, and secondly we didn’t want cycles. Well, this algorithm handles both cases.
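As a hedged preview of how it works, here is a minimal sketch in Python: topologically sort the DAG, then relax the outgoing edges of each vertex in that order. The adjacency-list representation and the example graph (including its negative edge) are my own illustrative assumptions.

```python
from collections import deque

def dag_shortest_paths(graph, source):
    """Shortest paths from `source` in a DAG given as {u: [(v, weight), ...]}.
    Works with negative edge weights because a DAG can't contain cycles."""
    # 1. Topological order via Kahn's algorithm (count incoming edges)
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v, _ in graph[u]:
            indegree[v] += 1
    queue = deque(u for u in graph if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)

    # 2. Relax the edges of each vertex in topological order
    dist = {u: float("inf") for u in graph}
    dist[source] = 0
    for u in order:
        if dist[u] == float("inf"):
            continue                    # u is not reachable from the source
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:   # relaxation step
                dist[v] = dist[u] + w
    return dist

# Example DAG with a negative edge
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", -2), ("C", 6)], "B": [("C", 3)], "C": []}
print(dag_shortest_paths(graph, "S"))  # {'S': 0, 'A': 1, 'B': -1, 'C': 2}
```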

Computer Algorithms: Bellman-Ford Shortest Path in a Graph

Introduction

As we saw in the previous post, Dijkstra’s algorithm is very useful when it comes to finding the shortest paths in a weighted graph. However, it has one major problem: it doesn’t work correctly when some of the edges have negative lengths.

We know that the algorithm works perfectly with positive edges, which is quite natural, because it optimizes based on the triangle inequality.

Dijkstra's Approach
Since all the edges are positive we get the closest one!

Since Dijkstra’s algorithm makes use of a priority queue, we normally get the shortest edge adjacent to the starting point first. In our very basic example we’ll first get the edge of length 3: (S, A).

However, when it comes to negative edges, we can no longer rely on a priority queue, so we need a different, yet working, solution.
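That different solution is what the rest of the post is about; as a hedged preview, here is a minimal Bellman-Ford sketch in Python that simply relaxes every edge |V|-1 times. The edge-list representation and the sample graph are illustrative assumptions of mine.

```python
def bellman_ford(vertices, edges, source):
    """Shortest paths from `source`; `edges` is a list of (u, v, weight) tuples.
    Returns (distances, has_negative_cycle)."""
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0

    # Relax every edge |V| - 1 times: enough for any shortest path to settle
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    # One extra pass: if anything still improves, there is a negative cycle
    has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_negative_cycle

# Small example with a negative (but non-cyclic) edge
vertices = ["S", "A", "B", "C"]
edges = [("S", "A", 3), ("S", "B", 5), ("A", "B", -4), ("B", "C", 2)]
print(bellman_ford(vertices, edges, "S"))
# ({'S': 0, 'A': 3, 'B': -1, 'C': 1}, False)
```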

Computer Algorithms: Dijkstra Shortest Path in a Graph

Introduction

We already know how we can find the shortest paths in a graph starting from a given vertex. In practice, we modified breadth-first search in order to calculate the distances from a source s to all other nodes reachable from s. We know that this works because BFS walks through the graph level by level.

BFS Shortest Paths
BFS is often used to find shortest paths between a starting node (s) and all other reachable nodes in a graph!
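Here is a minimal Python sketch of that idea: starting from s, BFS visits the graph level by level, so the level at which a node is first reached is its shortest (unweighted) distance from s. The adjacency list below, loosely mirroring the S, A, B, C nodes from the figures, is my own assumption.

```python
from collections import deque

def bfs_distances(graph, s):
    """Level-by-level BFS: the level at which a node is first reached
    is its shortest distance (in number of edges) from s."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:          # first time we reach v -> shortest level
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Example: the "balls and strings" picture as an adjacency list
graph = {
    "S": ["A", "B"],
    "A": ["S", "B", "C"],
    "B": ["S", "A", "C"],
    "C": ["A", "B"],
}
print(bfs_distances(graph, "S"))  # {'S': 0, 'A': 1, 'B': 1, 'C': 2}
```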

Some sources give a very simple explanation of how BFS finds the shortest paths in a graph: we just have to think of the graph as a set of balls connected through strings.

The Graph as Balls and Strings
We can think of a graph as a set of balls connected through strings!

As we can see, by lifting the ball called “S” all the other balls fall down. The closest balls are the ones directly connected to “S”, and they form the first level, while the outermost balls are those with the longest paths.

The Graph as Balls and Strings Levels
Breadth-first search works much like the image above: it explores the graph level by level, so we can be sure that all the paths it finds are the shortest!

Clearly, edges like the one between A and B don’t matter for our BFS algorithm, because they don’t make the path from S to C through B any shorter. This is also known as the triangle inequality, where the sum of the lengths of two sides of a triangle is always greater than (or equal to) the length of the third side.

Triangle inequality
What the triangle inequality tells us is that if we have a direct edge between two nodes, that must be the shortest path between them!

We only have to answer the question: is BFS the best algorithm for finding the shortest path between any two nodes of the graph? This is a reasonable question, because as we know, by using BFS we don’t find only the shortest path between the given vertices i and j, we also get the shortest paths between i and all the other vertices of G. This is information we don’t actually need, but can we find the shortest path between i and j without it?
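The full post answers that question with Dijkstra’s algorithm; as a hedged preview, here is a minimal sketch in Python that uses a priority queue (heapq) to always settle the closest unvisited node first. The example graph with non-negative weights is my own illustrative assumption.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source` in a graph with non-negative weights.
    `graph` is {u: [(v, weight), ...]}; uses a priority queue of (distance, node)."""
    dist = {u: float("inf") for u in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)     # closest unsettled node first
        if d > dist[u]:
            continue                   # stale queue entry, already improved
        for v, w in graph[u]:
            if d + w < dist[v]:        # relaxation (triangle inequality test)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Small example graph with non-negative weights
graph = {
    "S": [("A", 3), ("B", 5)],
    "A": [("B", 1), ("C", 6)],
    "B": [("C", 2)],
    "C": [],
}
print(dijkstra(graph, "S"))  # {'S': 0, 'A': 3, 'B': 4, 'C': 6}
```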