# Bellman-Ford shortest path algorithm Tips & Tricks

If you want to find the shortest distance between two vertices in a graph, Bellman-Ford is one of the algorithms you can use. To find out more about the Bellman-Ford algorithm, check this Wiki :

And this great interactive site showing step-by-step process:

Now, in this post I’m not trying to :

• Explain how Bellman-Ford works
• Compare it with other shortest path algorithms

But here are a few tips.

Ironically, one of the main points of confusion for me was: how do you actually get the shortest path out of Bellman-Ford? Wait a minute, isn’t that what the algorithm is supposed to do? Are we being ripped off?

Well, the algorithm is divided into two parts.

1. Calculating the graph weights
2. Calculating the shortest path

Most of the literature talks about the first part. You need to perform an additional step in the second part to actually get the shortest path.

For that, you need to keep track of a trail of vertices.

See this pseudo/C++ code :

```
// Dictionary keeping track of the trail of visited vertices:
// for each vertex, which vertex did the shortest path come from?
Dictionary<int, int> vertexTrail;
// Initialize the previous vertex of every vertex to INT_MIN.
// Note that the number of entries in this dictionary equals
// the number of vertices in the graph (one predecessor per vertex).

for (/*run through the list of all edges*/)
{
    // if the relaxation condition is met
    if ((vertexWeights[startVertexID] + curEdgeLength) < vertexWeights[endVertexID])
    {
        // update the vertex weight
        vertexWeights[endVertexID] = vertexWeights[startVertexID] + curEdgeLength;
        // and also keep track of the trail!
        // i.e. for this end vertex, the previous vertex on the shortest path
        // is the start vertex
        vertexTrail[endVertexID] = startVertexID;
    }
}
```

Now, the last line is interesting. For every vertex ID that satisfies the relaxation condition, we store the previous vertex ID. It’s a simple key-value dictionary.

Side Tip : No matter how you get your graph data [whether it’s network traffic, edges on a CAD model, geographic distances between points, etc.], it’s better to organize it with vertex IDs and edge IDs. An ID can be a simple integer.

And the snippet below actually gives the ‘Shortest Path’ (Phew!)

```
// endVertex : end vertex from which you want to trace the shortest path (back to the start vertex)
// vertexTrail : trail of vertices calculated in the previous step
// shortestPathEdges : output list of shortest path edge IDs
int GetShortestPath(int endVertex, Dictionary<int, int> vertexTrail,
                    List<int> shortestPathEdges)
{
    int resultCode = -1; // unknown error

    int currentVertexID = endVertex;

    if (vertexTrail.ContainsKey(currentVertexID))
    {
        int prevVertexID = vertexTrail[currentVertexID];

        // when you reach the start vertex, this loop will exit
        while (prevVertexID != INT_MIN)
        {
            int edgeID = GetEdgeBetweenVertices(prevVertexID, currentVertexID);

            // assuming GetEdgeBetweenVertices returns INT_MIN when no edge exists
            if (edgeID != INT_MIN)
            {
                shortestPathEdges.push_back(edgeID);
                resultCode = 0;
            }
            else
            {
                resultCode = -2; // path not found
                break;
            }

            // hop one step back: the previous vertex becomes the current vertex,
            // and we look up its previous vertex in turn
            currentVertexID = prevVertexID;
            prevVertexID = vertexTrail[currentVertexID];
        }
    }
    else
    {
        resultCode = -2; // path not found
    }

    // the list is populated from end to start, so reverse it
    shortestPathEdges.reverse();

    return resultCode;
}
```

The interesting thing is, you start from your end vertex and keep hopping back until you reach the start vertex.

The second thing is, Bellman-Ford works on directed graphs, i.e. the edges between vertices are supposed to have a direction. It’s kind of obvious, right? If you have an edge with a start point and an end point, you end up defining a vector.

But there are instances where we don’t have a directed graph. E.g. what if my graph is distances between cities? In that case the distance from London to Paris is the same as the distance from Paris to London. Well, in that case, you assume an edge in the opposite direction with the same weight for every single edge.

Third, say I calculate the graph and get the shortest path between two points. Now the user requests the shortest path between another two points. Should I calculate the graph again? Why should I? Haven’t I already done this?

Now, you need to understand that when you calculate the graph by Bellman-Ford, you calculate the shortest distance between the source vertex and every other vertex.

If your source vertex is different you must re-calculate your graph weights.

But if your source vertex is the same and only the end vertex changes, there is no need to recalculate the graph weights.

Hope you find this useful!

Images taken from here.

# Deep-Learning-Resources

Resources for deep learning

Below are some of the best resources I’ve come across for deep learning across various topics. These are probably the only resources you need to watch/read, whether you are refreshing your knowledge or learning from scratch. This list specifically targets Convolutional Neural Networks for imaging applications and the general theory of deep learning.

Obviously, I’ve not gone through all the material available on the net on the topic of deep learning. But of whatever I’ve come across, these are the best ones.

## Installation guide for tensorflow, keras on windows

One additional problem I came across was that TensorFlow only worked with Python 3.6 (at the time of writing). So, one needs to set up an environment for Python 3.6 and then do a pip install.

## Deep learning theory

Best course available.

Good Summary of all the nuts and bolts

## CNN nitty gritty

Vanishing gradient problem and ReLU

how exactly filter updates work in CNN

non-linearity

## CNN architectures

Overview of all popular architectures

## High level understanding of CNN / deep learning

Two lectures on ‘why to do deep learning’ rather than ‘how to do deep learning’

Brief overview of ‘why deep learning’

## Deep learning with tensorflow

One of the best hands-on demos.

## Tensorflow

TF course

Comprehensive TF resources

## Deep learning in-depth hands-on

Go through entire series

## Visualization of CNN’s

Demo of a deep learning visualization toolbox, showing a visual interpretation of what exactly the network ‘learns’.

## Where deep learning fails

Interesting insight into CNNs identifying random images with high confidence.

# Bellman-Ford shortest path algorithm optimization

This is the second part of a series on the Bellman-Ford shortest path algorithm.

For the first part, check this.

So, you’ve written your Bellman-Ford shortest path algorithm! It works great!

But it’s damn slow, right? And for a huge graph, say ~6,000 vertices and ~10,000 edges, you would rather take a coffee break, come back, and still… it’s running.

Let’s talk now about optimizing the algorithm. And I am not talking about premature optimization.

First Optimization:

Why does the Bellman-Ford outer loop have to run |V – 1| times? (where ‘V’ is the number of vertices)

Well, for a graph with ‘V’ vertices, the longest possible path that doesn’t revisit a vertex has |V – 1| edges. E.g. with 10 vertices, a path can contain at most 9 edges.

And during each iteration, correct shortest-path information propagates at least one edge further along such a path. Thus, in the worst case you must run the loop |V – 1| times.

But, is this always the case?

In the above graph we have :

Vertices V : 6

Number of longest connected edges : 4 (Am I right?)

Number of longest connected edges : you start from your source vertex and find the longest connected path ending at any other vertex, then count the number of edges in that path.

So, when you are in the 5th iteration, you are unnecessarily visiting every single edge to check the weights. (This is true for the 4th iteration too.)

We can optimize here.

See the snippet below:

```
for (int i = 1; i <= verticesCount - 1; i++)
{
    isAnyWeightUpdated = false;
    for (/*run through the list of all edges*/)
    {
        if ((vertexWeights[startVertexID] + curEdgeLength) < vertexWeights[endVertexID])
        {
            vertexWeights[endVertexID] = vertexWeights[startVertexID] + curEdgeLength;
            vertexTrail[endVertexID] = startVertexID;
            isAnyWeightUpdated = true;
        }
    }
    if (!isAnyWeightUpdated)
    {
        // there are no more paths having more than 'i' edges, so quit now
        break;
    }
}
```

As it turns out, in most cases the number of longest connected edges is significantly less than |V – 1|. In some cases I found improvements of up to 70x over the original implementation.

Second Optimization:

You would notice that during an iteration we update one or more vertex weights, and in the next iteration we visit every single edge to check whether its end vertex needs an update. Intuitively, unless a vertex’s weight was updated in the last iteration, the vertices connected to it will not see any weight change. If you look at the above graph: in this iteration the weights for vertices 2 and 3 are being updated. In the next iteration there is no point in visiting any vertices except 4 and 5, as all the directed edges coming out of vertices 2 and 3 end at 4 and 5.

Here we can check if the weights need to be updated for 4 and 5.

Now, if we update the weight of vertex 4 in this iteration, in the next iteration we’ll check only vertex 6.

If we update the weight of vertex 5 in this iteration, in the next iteration we’ll check only vertices 3, 4 and 6.

and so on. You get the idea. Have a look at this snippet:

```
List<int> updatedVertices;
// add the starting vertex as the seed vertex in the list

// do the relaxation only for edges whose start vertex has been updated
while (updatedVertices.size() != 0)
{
    sourceVertexID = updatedVertices.pop_front();

    currentEdges.clear();
    your_api_get_edges_coming_out_of_vertex(sourceVertexID, currentEdges);

    // for the list of all edges coming out of this vertex
    for (currentEdges.init(); tempEdge = currentEdges.next();)
    {
        // get the start and end vertex for the current edge
        startVertexID = tempEdge.startVertexID();
        endVertexID = tempEdge.endVertexID();
        curEdgeLength = tempEdge.length();

        if ((vertexWeights[startVertexID] + curEdgeLength) < vertexWeights[endVertexID])
        {
            vertexWeights[endVertexID] = vertexWeights[startVertexID] + curEdgeLength;
            vertexTrail[endVertexID] = startVertexID;

            // this end vertex changed, so its outgoing edges must be re-checked
            if (!updatedVertices.contains(endVertexID))
                updatedVertices.push_back(endVertexID);
        }
    }
}
```

This works at the expense of some extra memory, which we use to store the list of vertices that may need an update.

The algorithm will exit when the list contains no more vertices to be updated.

In some cases, this algorithm gave me an improvement of up to 10x over the first optimization, and up to 700x over the original algorithm.

There are other approaches, e.g. randomizing the order of vertices to be updated. But I did not get a chance to implement them. You can check the link below for more.

Hope this helps !

References :

You can use this link to check the step-by-step execution of the algorithm. It will give you a clear idea of the optimization ideas discussed in this post.

Have a look at this paper on Bellman-Ford optimization approaches.

Images taken from here.