## Topic #29: SHORTEST PATH ALGORITHMS

### Introduction

The shortest-paths problem involves a weighted, possibly directed graph described by its sets of vertices and edges, (V, E). (For an introduction to the terminology used in graph descriptions, please see Topic #25.) Given a source vertex s, the goal is to find the shortest existing path between s and each of the other vertices in the graph. Each such path minimizes the sum of the weights w[u, v] of its component edges (u, v). The method described here is Dijkstra's algorithm.

Nota bene: Formally, what one tends to think of as the "length" of an edge is known as its "weight." Thus, a graph whose edges are all of equal "length" is unweighted, whereas a graph with edges of differing "lengths" is weighted. The term "weight" is used because graphs are not limited to representing locations on a plane or in space; consequently edges could represent time, cost, and so on, generally a quantity which is to be kept minimal when going from any vertex to another. As this document deals with 'shortest paths' however, we will often use the term "length" for the sake of clarity.

Shortest-path algorithms are closely related to breadth-first search (BFS). In fact, the BFS algorithm is used to determine the shortest path between two points in an unweighted graph. It can easily be adapted to search a weighted graph whose edges' weights are all integers (or, by extension, all multiples of a common factor).

All that is required is to replace each edge of weight w (where w > 1) with a chain of w - 1 "ghost" nodes. Thus, when the algorithm runs, it will correctly report the shortest path, as it 'perceives' the distance between nodes A and B as twice as long as that between B and C (see illustration).
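Under the assumption above (small integer weights), the ghost-node construction might be sketched as follows; the graph, names, and weights are made up for illustration:

```python
from collections import deque

def bfs_with_ghost_nodes(edges, source, target):
    """Shortest path length in a graph with small integer weights,
    by expanding each edge of weight w into a chain of w-1 ghost
    nodes and running ordinary BFS on the expanded graph."""
    adj = {}
    ghost_id = 0

    def link(a, b):
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)

    # Build the expanded (unweighted) adjacency list.
    for u, v, w in edges:
        prev = u
        for _ in range(w - 1):          # insert the w-1 ghost nodes
            ghost = ('ghost', ghost_id)
            ghost_id += 1
            link(prev, ghost)
            prev = ghost
        link(prev, v)

    # Standard BFS from the source; every hop now has length 1.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return dist[u]
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None                         # target unreachable

# Example: A-B has weight 2, B-C weight 1, A-C weight 4.
print(bfs_with_ghost_nodes([('A', 'B', 2), ('B', 'C', 1), ('A', 'C', 4)],
                           'A', 'C'))  # 3 (via A-B-C, not the direct edge)
```

Note that this trick only pays off when the weights are small: an edge of weight one million would add a million nodes to the graph.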

### Dijkstra's algorithm

When it is not convenient to employ the "ghost node" strategy, however, it is necessary to use Dijkstra's algorithm.

Edsger W. Dijkstra is a Dutch computer scientist and mathematician who has made a number of remarkable contributions to computer science. Among the many algorithms he originated, the shortest path algorithm explained here is probably the best known. He is also known for his outspoken views on everything concerning computer science, including some rather damning words about certain programming languages.

The basic reasoning behind the algorithm is as follows:

• Like BFS, the algorithm gradually creates a tree from the vertices and some of the edges in the graph; the root of this tree is the source of the search, vertex s, as specified at the outset of the algorithm.

Thus for each vertex u, we maintain a parent pointer p[u] and a distance field d[u] (the distance, of course, is measured from the source).

• Like BFS, the algorithm maintains a queue in which it keeps the vertices it has not yet examined. However, this is a priority queue, not a simple first-in-first-out queue. This priority queue is sorted using the d[] field of the vertices (which, as we will see, changes over the course of the execution).

• Unlike BFS, Dijkstra's algorithm maintains a solution set S which contains all the vertices examined so far whose minimal distance to the source has been determined. As the algorithm progresses and examines more and more vertices, this set grows.

• At the outset, we set all parent pointers to NIL and all distance fields to infinity, except that of the source which we set to zero. Likewise, we set the solution set to empty.

This reflects the fact that we have accumulated no information about any of the vertices. The priority queue, on the other hand, is filled with every vertex in the graph -- since none of them have been dealt with yet. However, since all the d[] fields are initially infinite, they will not be in any particular order -- except the source, which will therefore be the first to be extracted. This is important.

• Each vertex in the priority queue is extracted in sequence (remember, the priority queue is ordered based on the d[] field of the vertices).

• We add the vertex extracted, vertex u, to the solution set as is.

• For each neighbour v of this new vertex, we attempt to tighten down the d[] field -- that is, to make a better estimate than the one we already have. How can we do this? We take our current estimate and compare it to the sum of u's distance from the root (that is, d[u]) and the weight of the edge between u and v (that is, w[u, v]). If that sum is less than our current estimate, then our current estimate is replaced by it. This process is called relaxing the edge which links the two vertices.

The result is that at any given time during execution, the d[] field of vertices in the solution set is permanently locked in and represents the shortest path length from that vertex to the source. For vertices outside the solution set (except for those still set to infinity, sitting at the bottom of the priority queue), the d[] field is an upper bound -- a path length we already know is achievable by following an edge from one of its neighbours in the solution set. There may exist a shorter path which does not lead directly into the current solution set, in which case we will discover it at some later time, once the rest of the nodes along that better path have been worked out.

Note that if v's distance does indeed change, it will be necessary to adjust its position in the priority queue by "bubbling up" -- basically the same mechanism used when inserting items into a priority queue. The really clever among you may have used such a technique in the Balls & Strings assignment to reorganize the heap after each event.
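The "bubbling up" step could be sketched as follows, assuming the priority queue is a binary min-heap stored in a list alongside a position map (the names here are illustrative, not from the notes):

```python
def decrease_key(heap, pos, d, v, new_dist):
    """Lower v's distance estimate and restore heap order by
    bubbling v up toward the root.
    heap -- list of vertices arranged as a binary min-heap on d[]
    pos  -- dict mapping each vertex to its index in heap
    d    -- dict of current distance estimates"""
    d[v] = new_dist
    i = pos[v]
    while i > 0:
        parent = (i - 1) // 2
        if d[heap[parent]] <= d[heap[i]]:
            break                       # heap order is restored
        # Swap v with its parent, keeping the position map in sync.
        heap[i], heap[parent] = heap[parent], heap[i]
        pos[heap[i]], pos[heap[parent]] = i, parent
        i = parent

# Tiny demonstration: c's estimate drops from 9 to 1, so it bubbles
# up past a and b to the top of the heap.
d = {'a': 2, 'b': 5, 'c': 9}
heap = ['a', 'b', 'c']
pos = {'a': 0, 'b': 1, 'c': 2}
decrease_key(heap, pos, d, 'c', 1)
print(heap[0])  # c
```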

One can picture the solution set as a growing island, gradually assimilating more and more vertices which are floating about around it. At each iteration of the loop, we need to build a "bridge" from the "island" to one of the outlying vertices. The rule we use to pick the vertices to connect with this bridge (viz., chosen with the priority queue) stems from a very convenient property of this algorithm: By the time a vertex in the priority queue reaches the top position, its best bridge is the one which leads to the vertex on the island with the shortest path to the source. We will prove this to be true shortly.

• That's it! If we don't want to explore the whole graph but rather measure the distance to a specific destination, all we have to do is stop when the destination vertex enters S and follow the parent pointers back up to the source. If we do explore the entire graph, we'll get the shortest path to every point from the single source.
• Here is the formal notation of the algorithm:

```
1.      for each u ∈ V:
1.1.        d[u] ← infinity;
1.2.        p[u] ← NIL;
2.      d[s] ← 0;
3.      S ← Ø;
4.      Q ← V;
5.      while NotEmpty(Q):
5.1.        u ← DeleteTop(Q);
5.2.        S ← S ∪ { u };
5.3.        for each v adjacent to u:
5.3.1.          if d[v] > d[u] + w[u, v]:
5.3.1.1.            d[v] ← d[u] + w[u, v];
5.3.1.2.            p[v] ← u;
5.3.1.3.            DecreaseKey(v, Q);
```
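The pseudocode can be rendered as runnable Python. One caveat: Python's standard `heapq` module provides no DecreaseKey, so this sketch uses the common lazy-deletion workaround (push a fresh entry and skip stale ones on extraction), which has the same effect. The example graph at the end is made up for illustration:

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths on a graph with nonnegative
    edge weights. adj maps each vertex to a list of
    (neighbour, weight) pairs. Returns the distance and parent
    maps (d and p, as in the notes)."""
    d = {u: float('inf') for u in adj}      # steps 1-1.1
    p = {u: None for u in adj}              # step 1.2 (NIL -> None)
    d[s] = 0                                # step 2
    S = set()                               # step 3
    Q = [(0, s)]                            # step 4 (entries added lazily)
    while Q:                                # step 5
        du, u = heapq.heappop(Q)            # step 5.1 (DeleteTop)
        if u in S:
            continue                        # stale entry: u already settled
        S.add(u)                            # step 5.2
        for v, w in adj[u]:                 # step 5.3
            if d[v] > d[u] + w:             # step 5.3.1 (relax the edge)
                d[v] = d[u] + w
                p[v] = u
                heapq.heappush(Q, (d[v], v))  # stands in for DecreaseKey
    return d, p

# A small made-up graph: the direct s-a edge (weight 3) plus the
# detour s-b-c (weight 6) lose to the route s-a-c (weight 5).
adj = {
    's': [('a', 3), ('b', 1)],
    'a': [('s', 3), ('c', 2)],
    'b': [('s', 1), ('c', 5)],
    'c': [('a', 2), ('b', 5)],
}
d, p = dijkstra(adj, 's')
print(d['c'])  # 5
print(p['c'])  # a
```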

And here's an example of the algorithm in action: (The red edges are the ones which are put into the search tree).

In this frame, we can see the beginning of the algorithm. Note node s, the source of the search. It is the first to be assimilated into the solution set; at the outset only its distance field is anything other than infinity.

The edges leading to the source's two neighbours are "relaxed" (see above): their distance fields are re-evaluated. The one with the lowest field, b, is extracted from the priority queue.

Now the same thing happens to b; its only neighbour sees its distance field updated to 4. Now the queue contains a (d[]=3), d (d[]=4) and c and e (d[] = infinity). Therefore the next vertex to be extracted will be a.

This process is repeated for subsequent nodes...

... until we finally arrive at the last one.
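As noted above, to answer a single-destination query we stop once the destination enters S and walk the parent pointers back to the source. A minimal sketch of that reconstruction (the parent map here is hypothetical):

```python
def path_to(p, s, t):
    """Reconstruct the shortest path from s to t by walking the
    parent pointers p[] back from t, then reversing."""
    if p.get(t) is None and t != s:
        return None                     # t was never reached
    path = []
    v = t
    while v is not None:
        path.append(v)
        v = p[v]                        # step toward the source
    path.reverse()
    return path

# Hypothetical parent map where s -> b -> d was the route found.
p = {'s': None, 'b': 's', 'd': 'b'}
print(path_to(p, 's', 'd'))  # ['s', 'b', 'd']
```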

Dijkstra's algorithm is an example of what are known as greedy algorithms. This means that it follows a general philosophy of finding a locally optimal solution -- considering only a small part of the available data, in this case adding the closest vertex to the solution set -- with the assumption that repeating this process will allow the solution to "grow" into one that is globally optimal. Although this does not in fact hold for all greedy algorithms, it is true in the case of Dijkstra's.

### Proof that it works

Here we use the technique of proof by contradiction. That is to say, we start by making an assumption and, through logical deductions, arrive at a conclusion which contradicts that assumption, thereby proving it false.

We can prove that by the time we get around to assimilating any given vertex, the path leading up to that vertex will be the shortest one as long as all the preceding vertices were assimilated using their shortest path. Therefore, this is also proof by induction. For our base case we can simply use the assimilation of the source, which of course has its distance field set to zero.

Here is a 'snapshot' of the algorithm. At this point, u has made it to the top of the priority queue. Now the question is: Where is the shortest path to the source? Either it is indicated by p[u], which takes it directly into the solution set and thence, via p1, to the source, or there exists some other unknown path which does not follow p[u] and then go directly to the source (two possibilities, p2 and p3, are indicated in red). In order to prove that p[u] does in fact indicate the shortest path, we begin by assuming the opposite: that the shortest path to u from inside the solution set (the "island") is some other path (p2, p3, etc.), which may even weave in and out of S, possibly several times -- basically, it could be anything. Any such path implies the existence of some vertex x, outside the solution set and on the supposedly shortest path to the source -- for example, x1 and x2 on the illustration.

Here is our contradiction: Given that x comes before u on u's supposedly shorter path, and given that all edge weights are nonnegative (an assumption Dijkstra's algorithm requires), d[x] can be no greater than the length of that path's prefix ending at x, so d[x] must be less than d[u]. However, since u reached the top of the priority queue before x, it must be that d[x] is at least d[u]. As this is obviously impossible, our original assumption, that p[u] is not the beginning of u's shortest path to the source, must be false.

Finally, now that we've proven that d[u] is correct for every u we insert into the solution set, we only need to prove that we correctly deal with updating the d[] field of vertices outside the solution set for each iteration of the loop. We will dispense with a formal proof, however, and only state that the relaxation algorithm correctly takes care of this issue.

QED!

### Complexity analysis

We cannot analyze the complexity of Dijkstra's algorithm without having picked an ADT implementation for the priority queue, since we do not know the complexities of the individual operations (MakeNull, DeleteTop and DecreaseKey). However, we can determine the number of times each operation is executed, and decide later what the total will be.

• MakeNull, of course, is only used once.
• The while loop (5.), including DeleteTop, executes |V| times: that is to say, once for each vertex in the queue, which at the outset contains all of them. Additionally, nothing is ever added to the queue, so no vertex is ever processed twice.
• As for DecreaseKey, it cannot occur more than once for every edge in the graph, since it is only ever called in conjunction with the removal of a vertex from the priority queue, for each of the edges leading away from that vertex.
### Shopping around for an ADT implementation

Our options are summarized in the following table. We can either use a simple, unsorted table (surprisingly, this is a realistic choice for certain kinds of graphs), a standard heap, or a rather remarkable data structure called a Fibonacci heap. This is a rather complex implementation, and we won't go into its features here.
| Operation (times executed) | Unsorted table | "Standard" heap | Fibonacci heap |
|----------------------------|----------------|------------------|-----------------|
| MakeNull (once) | O(1), total O(1) | O(1), total O(1) | O(1), total O(1) |
| DeleteTop (\|V\| times) | O(\|V\|), total O(\|V\|²) | O(log\|V\|), total O(\|V\|log\|V\|) | O(log\|V\|), total O(\|V\|log\|V\|) |
| DecreaseKey (\|E\| times) | O(1), total O(\|E\|) | O(log\|V\|), total O(\|E\|log\|V\|) | O(1), total O(\|E\|) |
| Grand total | O(\|V\|²) | O((\|V\|+\|E\|)log\|V\|) | O(\|V\|log\|V\|+\|E\|) |

(The Fibonacci heap's bounds for DeleteTop and DecreaseKey are amortized.)

### Java applet

This applet simulates the use of Dijkstra's algorithm. The instructions are pretty much self-explanatory. You can also take a look at the source.

• Course notes: More information on Dijkstra's Algorithm, complete with proof. These pages were taken from an internet course on Data Structures and Algorithms by John Morris at the University of Western Australia.
• Slideshow: A slide show of Dijkstra's Shortest Path Algorithm and of the Relaxation Algorithm. Written by Peter M. Maurer at the University of South Florida. You can find slide shows of many other graph algorithms here.
• More Dijkstra...: Description and Java applet of Dijkstra's Algorithm written by James W. Jackson, Jr. at the Georgia Institute of Technology.
• Walkthrough: A walkthrough of how Dijkstra's Shortest Path Algorithm works. This "WebTutor" is believed to be written for Math1030 by John R. Wicks, Assistant Professor of Mathematics at North Park College in Chicago, Illinois.
• Another cup, anyone?: A Dijkstra's Shortest Path Algorithm animation in Java with source code, written by Carla Laffra at Pace University in New York.
• There once was a man named...: A page containing history and information on Edsger W. Dijkstra, written by Jessica Englin at the University of Minnesota, Morris.
• Complexity analysis: An analysis of the performance of Dijkstra's Algorithm. These pages were taken from "Designing and Building Parallel Programs (Online)" by Ian Foster, an online publishing project meant to complement a published textbook of the same name from Addison-Wesley.
• More class notes: Class notes on Shortest Path Algorithms from the University of Ottawa, pulled from online class notes for a data structures course prepared by Professor Robert C. Holte.
• Click, whirrrrr...: Another slide show on Dijkstra's Algorithm, written by Donna Reese at Mississippi State University. You can also find many slide shows there relating to graph problems.

### References

This material is based on course notes and the compulsory text, Introduction to Algorithms by Cormen, Leiserson and Rivest (MIT Press/McGraw-Hill 1994, ISBN 0-262-03141-8 (MIT Press) and ISBN 0-07-013143-0 (McGraw-Hill)).

The HTML was written using SLED and Windows NotePad and tested using Netscape 2.02 and Netscape 3.01. (Like you care, but anyways.)

Graphics were created using Visio and Deluxe Paint II Enhanced. They were converted to GIF format using xv and CompuShow 2000.

Java was written using Microsoft's J++ package.

### Web page creators

This document is on the following persons' conscience:

• Java applet: Thomas Contaxakis (tconta@po-box.mcgill.ca)
• Graphics and references: Robert Lo (rlo9@po-box.mcgill.ca)
• HTML: Steve Freeland (sfreel@cs.mcgill.ca)
• Please feel free to hassle us about anything you see fit. :^>

Last updated April 22nd, 1997 by Steve Freeland.

Copyright © 1997, Steve Freeland, Thomas Contaxakis and Robert Lo #9. All rights reserved. (Whoooo! I've never reserved my rights before! I feel all tingly!) Reproduction of all or part of this work is permitted for educational or research use provided that this copyright notice is included in any copy. Disclaimer: this collection of notes is experimental, and does not serve as-is as a substitute for attendance in the actual class.
