The current algorithm is exhaustive: it searches from source to sink using a deque and heappop. Heappop assures that the "shortest" edge is explored first, so arrival at the sink is guaranteed to happen via the sum of the shortest edges. Asymptotically it is not possible to search faster.
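The heappop-driven search described above can be sketched roughly as follows. This is a minimal illustration of the principle, not graph-theory's actual implementation; the edge-dictionary format is assumed for the example:

```python
import heapq

def dijkstra(edges, source, sink):
    """Return (cost, path) of the cheapest route from source to sink.

    `edges` maps node -> list of (neighbour, weight) pairs.
    Popping from the heap always yields the provisionally cheapest
    frontier node, so the first time `sink` is popped its cost is final.
    """
    heap = [(0, source, [source])]  # (cost so far, node, path taken)
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == sink:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, weight in edges.get(node, ()):
            if nxt not in visited:
                heapq.heappush(heap, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []
```

Because the heap orders candidates by accumulated cost, no cheaper route to the sink can remain unexplored once the sink is popped, which is why the search cannot terminate early with a wrong answer.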
Question 1: What does numpy, scipy or networkx do?
There is no implementation in numpy. The closest alternatives are scipy and networkx.
Scipy does not implement a shortest path per se. It implements Floyd-Warshall $O(N^3)$, Dijkstra $O(N(Nk + N\log N))$, Bellman-Ford and Johnson's. However, as graph-theory's algorithms are directed, this leads to another complication:
"As currently implemented, Dijkstra's algorithm and Johnson's algorithm do not work for graphs with direction-dependent distances" (scipy docs reference).
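For contrast with the Dijkstra bound above, the Floyd-Warshall $O(N^3)$ cost comes from three nested loops over the nodes. A minimal pure-Python sketch (the triple-edge input format is an assumption for the example; scipy operates on adjacency matrices instead):

```python
def floyd_warshall(n, edges):
    """All-pairs shortest distances via three nested loops: O(n^3).

    `edges` is a list of (u, v, weight) triples; nodes are 0..n-1.
    dist[u][v] ends up as the cheapest cost from u to v.
    """
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)  # directed edge
    for k in range(n):                   # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

Note that because every pair is computed, the answer is wasteful if only one source-sink pair is needed, which is the use case discussed here.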
networkx has nx.shortest_path(...) (doc, source), which by default uses nx.bidirectional_dijkstra(G, source, target, weight), or alternatively nx.bellman_ford_path(G, source, target, weight). The implementation of bidirectional_dijkstra(G, source, target, weight="weight") is, by comparison, identical to graph-theory's.
networkx also implements Floyd-Warshall but this requires preprocessing of the data and uses a loop that is external to numpy. It could be faster than graph-theory for very large highly connected graphs, but I have not seen tests that challenge the assumption that the overhead of copy to numpy plus preprocessing is justifiable.
Answer 1: No.
Question 2: Can the implementation be augmented to search faster? Little tricks or lower level implementation?
The implementation in graph-theory is context-unaware: all the information it has concerns the edges. This opens an avenue for advantage by adding contextual knowledge.
The options I can think of are:
Memoization: this permits caching of repeated calls. It is implemented under the keyword memoize for SSSP, but not for bidirectional search.
Preprocessing: OSM implements all-pairs shortest path, which is very time- and space-consuming, but permits $O(1)$ lookups afterwards.
Addition of regions: by establishing boundaries, the entry and exit edges for a region can be pre-processed, so that regions can be bypassed if they do not contain the sink. This increases the memory footprint, but reduces the search to "border-crossings" plus checking whether a given region contains the sink.
Addition of 2D or 3D information: the A* algorithm and the Simple Stupid Funnel Algorithm are the fastest methods to navigate space. Currently graph-theory doesn't have the geometric properties needed to express spatial conditions (required for SSFA), so A* is probably the best alternative.
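To illustrate the last option: A* is Dijkstra plus an admissible heuristic that steers the search toward the goal. A minimal sketch on a 4-connected grid, assuming nodes are (x, y) tuples with unit edge costs (the grid setting is an assumption for the example, not graph-theory's model):

```python
import heapq

def a_star(start, goal, blocked=frozenset()):
    """A* on a 4-connected grid with a Manhattan-distance heuristic.

    The heuristic never overestimates the true remaining cost, so the
    first time `goal` is popped off the heap the path is optimal.
    """
    def h(p):  # admissible heuristic: straight-line grid distance to goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    heap = [(h(start), 0, start, [start])]  # (estimate, cost so far, node, path)
    seen = set()
    while heap:
        _, cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in seen and nxt not in blocked:
                heapq.heappush(heap, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return float("inf"), []
```

The speed-up over plain Dijkstra comes entirely from the geometric information in the heuristic, which is exactly what graph-theory currently cannot express.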