Short and sweet: I've seen several sources talking about "supercompilation", but I have yet to find a single document anywhere on the face of the Internet that describes what it actually is. Presumably it seems simple enough to those in the know that it isn't even worth explaining.
Does anybody know what this actually is?
Supercompilation can be approached as a generalization of partial evaluation. The idea behind partial evaluation is that many parts of a program can be evaluated at compile time, and so should be. Supercompilation extends this, also evaluating things that can't be fully done at compile time, like turning map f (map g xs) into map (f . g) xs using nothing but the definition of map.
(At least I think I've got partial evaluation right - I've mostly read about supercompilation.)
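To make the map example concrete, here is a small hand-written sketch (my own illustration, not the output of any actual supercompiler) showing the before and after side by side - a supercompiler would, in effect, derive the fused version from the definition of map alone:

```haskell
-- Before: the composed pipeline builds an intermediate list
-- (the result of map g xs) that is immediately consumed by map f.
nested :: (b -> c) -> (a -> b) -> [a] -> [c]
nested f g xs = map f (map g xs)

-- After: the kind of residual program supercompilation can reach,
-- traversing the list once with no intermediate structure.
fused :: (b -> c) -> (a -> b) -> [a] -> [c]
fused f g = map (f . g)

-- e.g. in ghci: nested (+1) (*2) [1,2,3] and fused (+1) (*2) [1,2,3]
-- produce the same list.
```

The names nested and fused are mine, purely for illustration; the point is that both compute the same function, but the second does one traversal instead of two.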
Another way to look at it is as a combination of many other optimizations, like deforestation, specialization, and inlining. By acting as if it already knew the inputs to functions and evaluating accordingly, it can find a more direct way of computing the result: it can get rid of intermediate data structures by seeing how they will be used, or it can plug in all possible values and then wrap the result in a case expression, or do something else with its pretend values.
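As a rough sketch of what "getting rid of intermediate data structures" looks like in practice (again my own hand-written before/after under that assumption, not real supercompiler output): evaluating sum and map against an unknown list forces a split on the two possible shapes of the list - that's the "wrap the result in a case" step - and the two traversals collapse into one:

```haskell
-- Before: builds an intermediate list of squares, then sums it.
sumSqNaive :: [Int] -> Int
sumSqNaive xs = sum (map (\x -> x * x) xs)

-- After: the residual program obtained by driving the definitions of
-- sum and map with an unknown xs. The case split on [] vs (x:xs)
-- comes from evaluating against the two possible list shapes; the
-- intermediate list of squares is gone entirely.
sumSqFused :: [Int] -> Int
sumSqFused []       = 0
sumSqFused (x : xs) = x * x + sumSqFused xs
```

Here sumSqNaive and sumSqFused are hypothetical names for illustration; the shape of the fused version is what deforestation-style supercompilation aims to produce.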
Max Bolingbroke has a number of useful papers on the subject - I recommend the first one, Supercompilation by Evaluation, as an introduction. Section 2 introduces the subject by example and the rest, while a bit difficult to get through, is very informative about the process. Neil Mitchell also has a number of good presentations describing it.
I hope that helps.