Solution: Network Delay Time

Statement

A network of n nodes labeled 1 to n is provided along with a list of travel times for directed edges represented as times[i] = (xᵢ, yᵢ, tᵢ), where xᵢ is the source node, yᵢ is the target node, and tᵢ is the delay time from the source node to the target node.

Considering we have a starting node, k, we have to determine the minimum time required for all the remaining n - 1 nodes to receive the signal. Return -1 if it is not possible for all n - 1 nodes to receive the signal.
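For example, if n = 4, k = 2, and times = [[2, 1, 1], [2, 3, 1], [3, 4, 1]], the signal sent from node 2 reaches nodes 1 and 3 after 1 unit of time and reaches node 4 through node 3 after 2 units, so the answer is 2.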

Constraints:

  • 1 ≤ k ≤ n ≤ 100
  • 1 ≤ times.length ≤ 6000
  • times[i].length == 3
  • 1 ≤ x, y ≤ n
  • x != y
  • 0 ≤ t ≤ 100
  • Unique pairs of (x, y), which means that there should be no multiple edges between any two nodes

Solution

So far, you have probably brainstormed some approaches and have an idea of how to solve this problem. Let’s explore some of these approaches and figure out which one to follow based on considerations such as time complexity and any implementation constraints.

Naive approach

The naive approach is to use a simple brute-force method. The algorithm starts by initializing all distances (the time to travel from the source node to a target node) to infinity, representing disconnection between the two nodes. Then, for each node, a nested loop goes through all other nodes and updates their distances using the given travel times. If there is no connection between a source and a target node, its distance is not updated. After updating all distances, the algorithm finds the maximum distance among all nodes, which represents the minimum time it takes for all nodes to receive the signal. If any node cannot be reached from node k, its distance remains infinity, and the algorithm returns -1.

This approach has a time complexity of O(n²), where n is the number of nodes in the graph.
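To make this concrete, here is a minimal Python sketch of one way the brute-force idea could look (the function name, the helper structures, and the exact loop organization are our assumptions rather than code from the original solution): in each of n rounds, a nested loop scans all nodes for the closest unreached one and then updates its neighbors' distances using the given travel times.

```python
from collections import defaultdict

def network_delay_time_naive(times, n, k):
    INF = float("inf")
    graph = defaultdict(list)
    for x, y, t in times:
        graph[x].append((y, t))

    # All distances start at infinity, representing disconnection from node k
    dist = {node: INF for node in range(1, n + 1)}
    dist[k] = 0  # the signal starts at node k at time 0
    visited = set()

    for _ in range(n):
        # Nested loop: pick the unvisited node with the smallest distance so far
        node, best = None, INF
        for candidate in range(1, n + 1):
            if candidate not in visited and dist[candidate] < best:
                node, best = candidate, dist[candidate]
        if node is None:
            break  # the remaining nodes cannot be reached from node k
        visited.add(node)
        # Update each neighbor's distance using the given travel times
        for neighbor, travel in graph[node]:
            dist[neighbor] = min(dist[neighbor], best + travel)

    # The answer is the largest distance; -1 if any node is still unreachable
    longest = max(dist.values())
    return -1 if longest == INF else longest
```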

Optimized approach using Dijkstra’s algorithm

Dijkstra’s algorithm is widely used for finding the shortest path between nodes in a graph. This makes it ideal for finding the minimum delay time in a network.

We will use an adjacency dictionary in which each source node is a key, and the value is a list of tuples containing a destination node and the time for the signal to travel from the source to that destination. A priority queue (a queue where elements are assigned a priority and served according to their priority value; in our context, lower-priority elements are served before higher-priority ones, and elements with the same priority are served in the order they were added) is initialized with a tuple of time 0 and the starting node k. The priority queue ensures that the node with the minimum time is retrieved in each iteration. We iterate over the priority queue to traverse the nodes in the network. If the retrieved node has not been visited, its time is compared to the current delay time, and the delay time is updated accordingly. The neighbors of the retrieved node are found using the adjacency dictionary and are added to the queue with their arrival times set to the retrieved node's time plus the travel time of the connecting edge.

Finally, if all the network nodes have been visited, we will return the computed time. Otherwise, -1 will be returned.
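The following Python sketch shows how these steps might fit together (the function and variable names are illustrative and may differ from the actual implementation):

```python
from collections import defaultdict
from heapq import heappop, heappush

def network_delay_time(times, n, k):
    # Adjacency dictionary: source node -> list of (destination, travel time)
    adjacency = defaultdict(list)
    for x, y, t in times:
        adjacency[x].append((y, t))

    pq = [(0, k)]   # priority queue of (time so far, node), starting at node k
    visited = set()
    delay = 0       # minimum time for all visited nodes to receive the signal

    while pq:
        time, node = heappop(pq)   # node with the smallest time so far
        if node in visited:
            continue
        visited.add(node)
        delay = max(delay, time)   # update the current delay time
        # Add unvisited neighbors with their updated arrival times
        for neighbor, travel in adjacency[node]:
            if neighbor not in visited:
                heappush(pq, (time + travel, neighbor))

    # Return the computed time only if every node received the signal
    return delay if len(visited) == n else -1
```

For the earlier sample input (n = 4, k = 2, times = [[2, 1, 1], [2, 3, 1], [3, 4, 1]]), this sketch returns 2.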

