Why are both little- and big-endian still in use today, after ~40 years of binary computer science? Are there algorithms or storage formats that work better with one and much worse with the other? Wouldn't it be better if we all switched to one and stuck with it?
When adding two numbers (on paper or in a machine), you start with the least significant digits and work towards the most significant digits. (The same goes for many other operations.)
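To make that concrete, here is a minimal sketch in C (the function name and the fixed 4-byte width are mine, purely for illustration) of adding two numbers stored as little-endian byte arrays. Because the carry starts at the least significant byte, the loop simply walks memory from the lowest address upward:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch: add two multi-byte unsigned numbers stored little-endian
 * (byte 0 is the least significant byte). The carry propagates from
 * the least significant byte upward, so the loop walks the bytes in
 * increasing address order. */
void add_le(const uint8_t a[4], const uint8_t b[4], uint8_t sum[4])
{
    unsigned carry = 0;
    for (int i = 0; i < 4; i++) {          /* byte 0 first = LSB first */
        unsigned t = a[i] + b[i] + carry;
        sum[i] = (uint8_t)t;               /* low 8 bits of the partial sum */
        carry  = t >> 8;                   /* carry into the next byte */
    }
}

int main(void)
{
    /* 0x000001FF + 0x00000001 = 0x00000200, written LSB first */
    uint8_t a[4] = { 0xFF, 0x01, 0x00, 0x00 };
    uint8_t b[4] = { 0x01, 0x00, 0x00, 0x00 };
    uint8_t s[4];
    add_le(a, b, s);
    printf("%02X %02X %02X %02X\n", s[0], s[1], s[2], s[3]); /* 00 02 00 00 */
    return 0;
}
```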
On the Intel 8088, which had 16-bit registers but an 8-bit data bus, being little-endian allowed such instructions to start operating after the first memory cycle. (Of course, it should be possible for the memory fetches of a word to be done in decreasing order rather than increasing, but I suspect this would have complicated the design a little.)
On most processors the bus width matches the register width, so this no longer confers an advantage.
Big-endian numbers, on the other hand, can be compared starting with the MSB (although many compare instructions actually do a subtract, which needs to start with the LSB anyway). The sign bit is also very easy to get: it is the top bit of the first byte.
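As a sketch (helper names are mine): with unsigned values stored big-endian, a plain byte-wise lexicographic comparison such as memcmp gives the same ordering as a numeric comparison, because the most significant byte comes first, and the sign of a signed value is just the top bit of byte 0:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Byte-wise compare of two 4-byte unsigned big-endian values:
 * lexicographic order equals numeric order. */
int compare_be(const uint8_t a[4], const uint8_t b[4])
{
    return memcmp(a, b, 4);                /* <0, 0, >0 like a numeric compare */
}

/* The sign bit of a signed big-endian value lives in the first byte. */
int is_negative_be(const uint8_t v[4])
{
    return (v[0] & 0x80) != 0;
}

int main(void)
{
    uint8_t x[4] = { 0x00, 0x00, 0x02, 0x00 };  /* 0x00000200 */
    uint8_t y[4] = { 0x00, 0x00, 0x01, 0xFF };  /* 0x000001FF */
    printf("x > y:    %d\n", compare_be(x, y) > 0);  /* 1 */
    printf("y < 0:    %d\n", is_negative_be(y));     /* 0 */
    return 0;
}
```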
Are there algorithms or storage formats that work better with one and much worse with the other?
No. There are small advantages here and there but nothing major.
I actually think little-endian is more natural and consistent: the significance of a bit is 2 ^ (bit_pos + 8 * byte_pos), whereas with big-endian the significance of a bit is 2 ^ (bit_pos + 8 * (word_size - byte_pos - 1)).
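A small sketch of that point (function names are mine): reconstructing a value from its bytes needs only the byte index under little-endian, whereas under big-endian the word size has to enter the weight of each byte.

```c
#include <stdint.h>
#include <stdio.h>

/* Little-endian: the weight of byte_pos is 2^(8 * byte_pos),
 * independent of the word size. */
uint64_t value_le(const uint8_t *bytes, int word_size)
{
    uint64_t v = 0;
    for (int byte_pos = 0; byte_pos < word_size; byte_pos++)
        v |= (uint64_t)bytes[byte_pos] << (8 * byte_pos);
    return v;
}

/* Big-endian: the weight of byte_pos is 2^(8 * (word_size - byte_pos - 1)),
 * so the word size is part of the formula. */
uint64_t value_be(const uint8_t *bytes, int word_size)
{
    uint64_t v = 0;
    for (int byte_pos = 0; byte_pos < word_size; byte_pos++)
        v |= (uint64_t)bytes[byte_pos] << (8 * (word_size - byte_pos - 1));
    return v;
}

int main(void)
{
    uint8_t le[4] = { 0x78, 0x56, 0x34, 0x12 };  /* 0x12345678 stored LSB first */
    uint8_t be[4] = { 0x12, 0x34, 0x56, 0x78 };  /* 0x12345678 stored MSB first */
    printf("%llx %llx\n",
           (unsigned long long)value_le(le, 4),
           (unsigned long long)value_be(be, 4));  /* 12345678 12345678 */
    return 0;
}
```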
Wouldn't it be better if we all switched to one and stuck with it?
Due to the dominance of x86, we've definitely gravitated towards little-endian. The ARM chips in many mobile devices have configurable endianness, but they are usually run little-endian for better compatibility with the x86 world. Which is fine by me.