I have heard of deferred evaluation in Python (for example here). Is it just referring to how lambdas are evaluated by the interpreter only when they are called? Or is it the proper term for how, due to Python's dynamic design, many errors are not caught until runtime?
Or am I missing something entirely?
Deferred evaluation is when an expression isn't evaluated until it's needed. In most languages, you use something like lambda
to make this work. Here's a contrived example that shows part of the concept:
import os

def list_files():
    for fn in os.listdir('.'):
        # Bind fn as a default argument so each thunk remembers its own
        # filename even if the pairs are collected before any thunk is called.
        yield fn, lambda fn=fn: open(fn, 'r').read()

for fn, body in list_files():
    if fn.endswith('.txt'):
        print(body())
Here, list_files yields pairs of a filename and a "thunk" (a lambda taking no arguments) that returns the file's contents. The thunk is the deferred evaluation. Using thunks lets you separate your concerns: list_files could be replaced with list_ftp_files or list_zip_archive without changing the consuming loop, and list_files doesn't need to know which files will actually be read, so it never has to read every single file up front.

In proper deferred evaluation, evaluating the thunk once would replace it with its computed value, so evaluating it twice is no more work than evaluating it once. There are other ways to accomplish the same thing, such as classes and objects that cache values.
Deferred evaluation is a (relatively) common idiom in Scheme. In Haskell, evaluation is deferred by default and you don't need any syntax to get it (there is special syntax for turning it off instead).
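Python also defers some evaluation by default: a generator expression is a rough, small-scale analogue of this, since no element is computed until something iterates over it.

```python
import itertools

# Nothing on the right-hand side runs yet: squares is a recipe, not a list.
squares = (n * n for n in itertools.count())

# Elements are computed only as they are requested.
first_three = list(itertools.islice(squares, 3))  # → [0, 1, 4]
```

The infinite `itertools.count()` makes the deferral visible: building the full sequence eagerly would never terminate, yet slicing off three elements is instant.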