I am probably missing something obvious, but anyway:
When you import a package like os in Python, you can use any of its submodules/subpackages right off the bat. For example, this works:
>>> import os
>>> os.path.abspath(...)
However I have my own package which is structured as follows:
FooPackage/
    __init__.py
    foo.py
and here the same logic does not work:
>>> import FooPackage
>>> FooPackage.foo
AttributeError: 'module' object has no attribute 'foo'
What am I doing wrong?
When you import FooPackage, Python searches the directories on sys.path (which includes any listed in PYTHONPATH) until it finds a file called FooPackage.py or a directory called FooPackage containing a file called __init__.py. However, having found the package directory, it does not then scan that directory and automatically import every .py file it contains.
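The behaviour is easy to reproduce. The sketch below builds a throwaway FooPackage in a temporary directory (the module contents, like the answer = 42 variable, are made up for illustration) and shows that the submodule only becomes an attribute of the package after an explicit import:

```python
import os
import sys
import tempfile

# Build a throwaway FooPackage on disk to demonstrate (contents are hypothetical)
root = tempfile.mkdtemp()
pkg = os.path.join(root, "FooPackage")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()   # empty __init__.py
with open(os.path.join(pkg, "foo.py"), "w") as f:
    f.write("answer = 42\n")

sys.path.insert(0, root)

import FooPackage
print(hasattr(FooPackage, "foo"))   # False: the submodule was not auto-imported

import FooPackage.foo               # the explicit import binds FooPackage.foo
print(FooPackage.foo.answer)        # 42
```

So the immediate fix for the question is simply to write import FooPackage.foo (or from FooPackage import foo) instead of relying on import FooPackage alone.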
There are two reasons for this behaviour. The first is that importing a module executes Python code, which may take time, use memory, or have side effects. So you might want to import a.b.c.d without necessarily pulling in the whole of a huge package a. It's up to the package designer to decide whether a's __init__.py explicitly imports its modules and subpackages so that they are always available, or whether it leaves the client program free to pick and choose what gets loaded.
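If you do want the eager behaviour, the package designer opts into it in __init__.py. Here is the same throwaway-package sketch as a demonstration, with __init__.py containing a single from . import foo line (again, the module contents are made up):

```python
import os
import sys
import tempfile

# Same throwaway layout, but __init__.py eagerly imports its submodule
root = tempfile.mkdtemp()
pkg = os.path.join(root, "FooPackage")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from . import foo\n")   # eager import: clients get foo for free
with open(os.path.join(pkg, "foo.py"), "w") as f:
    f.write("answer = 42\n")

sys.path.insert(0, root)
sys.modules.pop("FooPackage", None)  # defensive: ensure we load this fresh copy
sys.modules.pop("FooPackage.foo", None)

import FooPackage
print(FooPackage.foo.answer)         # works without 'import FooPackage.foo'
```

With that one line in __init__.py, the original session in the question would behave the way os/os.path does.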
The second is a bit more subtle, and also a showstopper. Without an explicit import statement (either in FooPackage/__init__.py or in the client program), Python doesn't necessarily know what name it should import foo.py as. On a case-insensitive file system (such as the default on Windows), this file could represent a module named foo, Foo, FOO, fOo, foO, FoO, FOo, or fOO. All of these are valid, distinct Python identifiers, so Python simply doesn't have enough information from the file alone to know which one you mean. Therefore, in order to behave consistently on all systems, it requires an explicit import statement somewhere to clarify the name, even on file systems where full case information is available.
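That ambiguity is easy to check: every one of those spellings is a legal Python identifier, and they are all distinct names, so nothing about the file foo.py by itself singles one out. A minimal sanity check:

```python
import keyword

# The eight spellings listed above: all legal, mutually distinct identifiers
variants = ["foo", "Foo", "FOO", "fOo", "foO", "FoO", "FOo", "fOO"]
print(all(v.isidentifier() and not keyword.iskeyword(v) for v in variants))  # True
print(len(set(variants)))  # 8 possible module names for a single file
```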