I have seen this quoted in a lot of places:
"It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures." —Alan Perlis
But I have never seen it explained why this should be true. Is it just the idea that you should try to derive the other 9 data structures from the first to avoid duplicating the data? I feel like I'm missing some context.
The quote is from Alan Perlis' Epigrams on Programming, which was published in 1982.
The meaning of this quote is embodied well in Lisp, where a multitude of functions operate specifically on lists. You can accomplish a great deal with lists alone plus that shared assortment of functions, which makes lists far more powerful than any single-purpose data structure.
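The same spirit carries over to more mainstream languages. As a minimal sketch in Java (assuming nothing beyond the standard java.util library), notice how many generic functions already operate on one plain List:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class OneStructure {
    public static void main(String[] args) {
        // One data structure: a plain List of integers.
        List<Integer> xs = new ArrayList<>(List.of(3, 1, 4, 1, 5, 9, 2, 6));

        // Many generic functions operate on it, free of charge.
        Collections.sort(xs);                             // [1, 1, 2, 3, 4, 5, 6, 9]
        Collections.reverse(xs);                          // [9, 6, 5, 4, 3, 2, 1, 1]
        System.out.println(Collections.max(xs));          // 9
        System.out.println(Collections.frequency(xs, 1)); // 2
        System.out.println(xs.subList(0, 3));             // [9, 6, 5]
    }
}
```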
Lua, as another example, uses tables to simulate classes. Why use a table to make objects instead of adding language-level classes and objects the way object-oriented languages do? Since your object is now a table, you can use any of the functions defined for tables on your object, free of charge! Better yet, the language isn't cluttered with class-specific syntax, and none of the table functions have to be redefined for classes.
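To make that concrete without switching languages, here is a rough Java analogue, assuming we model an object as a plain Map the same way Lua models one as a table; the point object and its fields are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TableAsObject {
    public static void main(String[] args) {
        // A hypothetical "point" object, modeled as a Map the way Lua
        // models objects as tables (LinkedHashMap keeps field order).
        Map<String, Object> point = new LinkedHashMap<>();
        point.put("x", 3);
        point.put("y", 4);

        // Every generic Map operation now works on our "object" for free.
        System.out.println(point.keySet());         // field names: [x, y]
        System.out.println(point.containsKey("z")); // false
        point.putIfAbsent("z", 0);                  // add a field at runtime
        System.out.println(point);                  // {x=3, y=4, z=0}
    }
}
```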
What Perlis said is definitely a prominent mode of thought in Lisp, and in functional programming in general. Those 100 functions on your one data structure can be composed in lots of ways precisely because they all share that structure; the 10 functions on 10 data structures don't mix nearly as well, since each was defined to work only on its own particular structure.
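Here is a minimal sketch of that composability in Java, with two made-up helpers, doubled and evensOnly, that both map a List<Integer> to a new List<Integer> and therefore snap together in either order:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class Composition {
    public static void main(String[] args) {
        // Two small functions, both from List<Integer> to List<Integer>.
        Function<List<Integer>, List<Integer>> doubled =
            xs -> xs.stream().map(x -> x * 2).collect(Collectors.toList());
        Function<List<Integer>, List<Integer>> evensOnly =
            xs -> xs.stream().filter(x -> x % 2 == 0).collect(Collectors.toList());

        // Because they share one data structure, they compose freely.
        System.out.println(evensOnly.andThen(doubled).apply(List.of(1, 2, 3, 4))); // [4, 8]
        System.out.println(doubled.andThen(evensOnly).apply(List.of(1, 2, 3, 4))); // [2, 4, 6, 8]
    }
}
```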
A simpler, more modern variation of this is to think in terms of abstractions. If we were coding in Java, would you rather write 100 functions against the List interface, or the same set of ten functions over and over: once for ArrayList, once for LinkedList, once for....
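As a sketch of the payoff, here is a made-up sum function written once against the List interface and reused, unchanged, with two different implementations:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class OneInterface {
    // Written once, against the List interface...
    static int sum(List<Integer> xs) {
        int total = 0;
        for (int x : xs) {
            total += x;
        }
        return total;
    }

    public static void main(String[] args) {
        // ...and called with any List implementation, no rewriting needed.
        System.out.println(sum(new ArrayList<>(List.of(1, 2, 3))));  // 6
        System.out.println(sum(new LinkedList<>(List.of(1, 2, 3)))); // 6
    }
}
```

Swap in any other List implementation and sum keeps working; only the performance characteristics change.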