I'm developing a pathfinding program. In theory, A* is supposed to be better than Dijkstra; in fact, Dijkstra is a special case of A* (with a zero heuristic). However, after testing on real-world data, I'm starting to doubt whether A* really is better.
I used the New York City data from the 9th DIMACS Implementation Challenge - Shortest Paths, in which each node's latitude and longitude are given.
When applying A*, I need to calculate the spherical distance between two points using the Haversine formula, which involves sin, cos, arcsin, and a square root. All of these are very time-consuming.
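For reference, the heuristic I compute looks roughly like this (a minimal sketch, assuming coordinates in degrees and a mean Earth radius of about 6371 km; the names are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres (assumed value)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))
```

Every call pays for several trigonometric evaluations plus a square root, and it runs once per candidate node.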
The results are:

Using Dijkstra: 39.953 ms, 256540 nodes expanded.
Using A*: 108.475 ms, 255135 nodes expanded.
Notice that A* expanded 1405 fewer nodes. However, the time spent computing the heuristic was far more than the time those fewer expansions saved.
To my understanding, the reason is that in a very large real-world graph, the heuristic's influence on the node ordering is very small, so its pruning effect can be ignored, while the time spent computing it dominates.
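A rough back-of-envelope from the numbers above makes this concrete (assuming the entire time difference between the two runs is heuristic overhead):

```python
# Timings and node counts reported above.
dijkstra_ms, astar_ms = 39.953, 108.475
dijkstra_nodes, astar_nodes = 256540, 255135

heuristic_overhead_ms = astar_ms - dijkstra_ms           # ~68.5 ms extra spent on the heuristic
per_expansion_ms = dijkstra_ms / dijkstra_nodes           # ~0.00016 ms per node expansion
savings_ms = (dijkstra_nodes - astar_nodes) * per_expansion_ms  # ~0.22 ms saved by 1405 fewer expansions

print(heuristic_overhead_ms, savings_ms)
```

Under that assumption, the heuristic costs far more time than the expansions it avoids ever save.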
I think you're more or less missing the point of A*. It is meant to be a very fast algorithm, partly by intentionally doing more exploration with a cheap heuristic, and you're undermining that when you burden it with a heavy, extremely accurate distance computation.
For node selection in A*, you should just use a cheap approximation of the distance. Simply using (latdiff^2)+(lngdiff^2) as the approximate distance should make your algorithm much more performant than Dijkstra, and it shouldn't give noticeably worse results in any real-world scenario. In fact, the results should be exactly the same as long as you compute the travelled distance to a selected node properly with the Haversine formula. Use the cheap calculation only for choosing which node to traverse next, as in the sketch below.
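A minimal sketch of what I mean (hypothetical helper name; the value is in squared degrees, not metres, so it only serves to rank candidate nodes, while the actual path cost is still accumulated from the real edge weights):

```python
def cheap_heuristic(lat1, lon1, lat2, lon2):
    # Squared coordinate difference: no trig, no square root.
    # Not a real distance, just a cheap "how far off am I" score
    # used when picking which node to expand next.
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    return dlat * dlat + dlon * dlon
```

The point is that the per-node selection cost becomes a couple of multiplications instead of a handful of trigonometric calls.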