I have two rays on a 2D plane that extend to infinity from their starting points. Each ray is described by its starting point and a direction vector along which it extends to infinity. I want to find out whether the two rays intersect, but I don't need to know where they intersect (it's part of a collision detection algorithm).
Everything I have looked at so far describes finding the intersection point of two lines or line segments. Is there a fast algorithm to solve this?
I am sorry to disagree with the answer of Peter Walser. Writing ray A as as + u * ad (start point as, direction ad, parameter u >= 0) and ray B as bs + v * bd, then setting as + u * ad = bs + v * bd and solving for u and v gives on my desk:
u = ((bs.y - as.y) * bd.x - (bs.x - as.x) * bd.y) / (bd.x * ad.y - bd.y * ad.x)
v = ((bs.y - as.y) * ad.x - (bs.x - as.x) * ad.y) / (bd.x * ad.y - bd.y * ad.x)
Factoring out the common terms, this comes to:
dx = bs.x - as.x
dy = bs.y - as.y
det = bd.x * ad.y - bd.y * ad.x
u = (dy * bd.x - dx * bd.y) / det
v = (dy * ad.x - dx * ad.y) / det
Five subtractions, six multiplications and two divisions.
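For illustration, a minimal C sketch of this computation; the Vec2 struct and the ray_params name are just my choices for the example, not anything from the original post:

#include <stdio.h>

typedef struct { double x, y; } Vec2;

/* Solve as + u*ad = bs + v*bd for the ray parameters u and v.
 * Assumes det != 0 (non-parallel rays); see the note below. */
static void ray_params(Vec2 as, Vec2 ad, Vec2 bs, Vec2 bd,
                       double *u, double *v)
{
    double dx  = bs.x - as.x;                /* bs - as, x component */
    double dy  = bs.y - as.y;                /* bs - as, y component */
    double det = bd.x * ad.y - bd.y * ad.x;  /* common denominator   */
    *u = (dy * bd.x - dx * bd.y) / det;
    *v = (dy * ad.x - dx * ad.y) / det;
}

int main(void)
{
    double u, v;
    /* Ray A from (0,0) along (1,1); ray B from (4,0) along (-1,1).
     * They meet at (2,2), so we expect u = 2 and v = 2.            */
    ray_params((Vec2){0, 0}, (Vec2){1, 1}, (Vec2){4, 0}, (Vec2){-1, 1}, &u, &v);
    printf("u = %g, v = %g -> %s\n", u, v,
           (u >= 0 && v >= 0) ? "rays intersect" : "no intersection");
    return 0;
}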
If you only need to know whether the rays intersect, the signs of u and v are enough: the rays intersect iff both are non-negative. Each of the two divisions can then be replaced by a sign test such as num * det < 0 or sign(num) != sign(det), depending on what is more efficient on your target machine.
Please note that the rare case of det == 0 means the rays are parallel and do not intersect (one additional comparison); strictly speaking, collinear overlapping rays are the one degenerate exception, which you would need to handle separately if it matters for your collision test.
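Putting it all together, the whole test fits in one small division-free function. Again just a sketch, with the same illustrative Vec2 struct and a rays_intersect name of my choosing; it treats det == 0 as "no intersection":

#include <stdbool.h>

typedef struct { double x, y; } Vec2;  /* same illustrative struct as above */

/* True iff the rays as + u*ad and bs + v*bd (u, v >= 0) intersect. */
static bool rays_intersect(Vec2 as, Vec2 ad, Vec2 bs, Vec2 bd)
{
    double dx  = bs.x - as.x;
    double dy  = bs.y - as.y;
    double det = bd.x * ad.y - bd.y * ad.x;

    if (det == 0.0)                        /* parallel rays: no intersection */
        return false;

    double u_num = dy * bd.x - dx * bd.y;  /* numerator of u */
    double v_num = dy * ad.x - dx * ad.y;  /* numerator of v */

    /* u >= 0 iff u_num has the same sign as det (or u_num == 0);
     * likewise for v, so no division is needed. */
    return u_num * det >= 0.0 && v_num * det >= 0.0;
}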