I'm just starting to learn about parallel programming, and I'm looking at binary search.
This can't really be optimized by throwing more processors at it, right? I know it's supposedly dividing and conquering, but you're really "decreasing and conquering" (from Wikipedia).
Or could you possibly parallelize the comparisons? (if X is less than array[mid], search from low to mid - 1; else if X is greater than array[mid], search from mid + 1 to high; else return mid, the index of X)
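To spell out what I mean, here is the usual serial loop (a rough Python sketch, just to make the comparison step concrete):

```python
def binary_search(array, x):
    """Plain serial binary search: each step makes one comparison
    and discards half of the remaining range."""
    low, high = 0, len(array) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < array[mid]:
            high = mid - 1      # keep searching low .. mid - 1
        elif x > array[mid]:
            low = mid + 1       # keep searching mid + 1 .. high
        else:
            return mid          # found: the index of x
    return -1                   # x is not in the array
```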
Or how about you give half of the array to one processor to do binary search on, and the other half to another? Wouldn't that be wasteful, though, because it's decreasing and conquering rather than simply dividing and conquering? Thoughts?
You can easily use parallelism.
With k < n processors, split the sorted array into k contiguous groups of roughly n/k elements each and give one group to each processor. Each processor runs an ordinary binary search on its own group, so the time drops from log(n) to log(n/k).
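A minimal sketch of that idea in Python, assuming a sorted list; the threads here only illustrate the structure (in CPython you would need processes for a real CPU-bound speedup), and search_chunk / parallel_search are just illustrative names:

```python
from bisect import bisect_left
from concurrent.futures import ThreadPoolExecutor

def search_chunk(array, x, lo, hi):
    """Binary search for x within array[lo:hi]; return a global index or -1."""
    i = bisect_left(array, x, lo, hi)
    return i if i < hi and array[i] == x else -1

def parallel_search(array, x, k):
    """Split the sorted array into k contiguous chunks of roughly n/k elements
    and search all chunks concurrently; each chunk costs about log(n/k) steps."""
    n = len(array)
    bounds = [(i * n // k, (i + 1) * n // k) for i in range(k)]
    with ThreadPoolExecutor(max_workers=k) as pool:
        results = pool.map(lambda b: search_chunk(array, x, *b), bounds)
    return next((r for r in results if r != -1), -1)
```

Note that only one of the k chunks can actually contain x, so the other k - 1 searches are exactly the "wasted" work you were worried about; the payoff is only the modest drop from log(n) to log(n/k).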
There is also a CREW PRAM method that runs in log(n)/log(k+1) time: in each step the k processors compare X against k evenly spaced pivots simultaneously, which cuts the remaining range into k+1 pieces instead of 2.
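Here is a sequential simulation of that scheme; the k comparisons inside each round are independent, which is what a CREW PRAM exploits by running them on k processors at once (k_ary_search is just an illustrative name):

```python
def k_ary_search(array, x, k):
    """Each round compares x against k evenly spaced pivots (k >= 1), shrinking
    the range to one of k + 1 segments, so it takes about log(n)/log(k+1) rounds.
    On a CREW PRAM, the k comparisons of a round would run simultaneously."""
    low, high = 0, len(array) - 1
    while low <= high:
        span = high - low + 1
        # k evenly spaced pivot positions inside [low, high]
        pivots = sorted({low + (span * (i + 1)) // (k + 1) for i in range(k)})
        next_low, next_high = low, high
        for p in pivots:                 # done in parallel on a CREW machine
            if array[p] == x:
                return p
            if array[p] < x:
                next_low = max(next_low, p + 1)
            else:
                next_high = min(next_high, p - 1)
        low, high = next_low, next_high
    return -1
```

With k = 1 this degenerates to ordinary binary search; larger k buys a bigger branching factor, but only a logarithmic improvement per processor.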