This is the second part of a series on the Bellman-Ford shortest path algorithm.

For the first part, check this.

So, you’ve written your Bellman-Ford shortest path algorithm! It works great!

But it's damn slow. Right? And for a huge graph with ~6,000 vertices and ~10,000 edges, you could take a coffee break, come back, and still… it's running.

Let's now talk about optimizing the algorithm. And I am not talking about premature optimization.

**First Optimization:**

Why does the Bellman-Ford outer loop have to run |V – 1| times? (where 'V' is the number of vertices.)

Well, for a graph with 'V' vertices, the longest connected path can have at most |V – 1| edges. E.g., for 10 vertices a path can contain at most 9 edges.

And during each iteration, at least one more edge along a shortest path gets relaxed into its final value. Thus, in the worst case you must run it |V – 1| times.

But, is this always the case?

In the graph above we have:

Vertices V : 6

Number of Longest connected edges : 4 (Am I right?)

Number of longest connected edges: start from your source vertex and find the longest connected path ending at any other vertex. Count the number of edges in that path.

So, by the 5th iteration you're unnecessarily visiting every single vertex to check for weight updates. (This is true for the 4th iteration too.)

We can optimize here.

See the snippet below:

```cpp
for (int i = 1; i <= verticesCount - 1; i++) {
    isAnyWeightUpdated = false;
    for (/* run through list of all edges */) {
        if ((vertexWeights[startVertexID] + curEdgeLength) < vertexWeights[endVertexID]) {
            vertexWeights[endVertexID] = vertexWeights[startVertexID] + curEdgeLength;
            vertexTrail[endVertexID] = startVertexID;
            isAnyWeightUpdated = true;
        }
    }
    if (!isAnyWeightUpdated) {
        // There are no more paths having more than 'i' number of edges, so quit now.
        break;
    }
}
```

As it turns out, in most cases the number of longest connected edges is significantly less than |V – 1|. In some cases I found improvements of up to 70x over the original implementation.

**Second Optimization:**

You would notice that during an iteration we update one or more vertex weights, and in the next iteration we visit every single vertex to check if its weight needs an update. Intuitively, though, unless a vertex's weight was updated in the last iteration, the vertices connected to it cannot receive any new updates.

If you look at the graph above: in this iteration the weights of vertices 2 and 3 are being updated. In the next iteration there is no point in visiting any vertices other than 4 and 5, as all the directed edges coming out of vertices 2 and 3 end at 4 and 5.

Here we only need to check whether the weights of vertices 4 and 5 need an update.

Now, if we update the weight of vertex 4 in this iteration, in the next iteration we'll check only vertex 6.

If we update the weight of vertex 5 in this iteration, in the next iteration we'll check only vertices 3, 4 and 6.

and so on. You get the idea. Have a look at this snippet:

```cpp
List<int> updatedVertices;
// add starting vertex as seed vertex in the list
updatedVertices.add(seedVertexID);

// do the computation only for edges whose start vertex was updated
while (updatedVertices.size() != 0) {
    // take (and remove) the next updated vertex from the list
    sourceVertexID = updatedVertices.removeFirst();
    currentEdges.clear();
    // get the list of all edges coming out of this vertex
    your_api_get_edges_coming_out_of_vertex(sourceVertexID, currentEdges);
    for (currentEdges.init(); tempEdge = currentEdges.next();) {
        // get start and end vertex for the current edge
        startVertexID = tempEdge.startVertexID();
        endVertexID = tempEdge.endVertexID();
        curEdgeLength = tempEdge.length();
        if ((vertexWeights[startVertexID] + curEdgeLength) < vertexWeights[endVertexID]) {
            vertexWeights[endVertexID] = vertexWeights[startVertexID] + curEdgeLength;
            vertexTrail[endVertexID] = startVertexID;
            updatedVertices.add(endVertexID);
        }
    }
}
```

This works at the expense of some extra memory, which we use to store the list of vertices that may need an update.

The algorithm will exit when the list contains no more vertices to be updated.

In some cases, this algorithm gave me an improvement of up to 10x over the first optimization, and up to 700x over the original algorithm.

There are other approaches, such as randomizing the order in which vertices are updated, but I did not get a chance to implement them. You can check the link below for more.

Hope this helps !

References :

You can use this link to check the step-by-step execution of the algorithm. It will give you a clear idea of the optimization ideas discussed in this post.

Have a look at this paper on Bellman-Ford optimization approaches.

Images taken from here.