I'm having trouble getting Alembic to autogenerate candidate migrations from changes to classes that use db.Model (Flask-SQLAlchemy) instead of Base.
I've modified env.py to create my Flask app, import all relevant models, initialize the database, and then run migrations:
...
uri = 'mysql://user:password@host/dbname?charset=utf8'

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = uri
app.config['SQLALCHEMY_ECHO'] = True
db.init_app(app)

with app.test_request_context():
    # db.Model.metadata is populated once the models have been imported
    target_metadata = db.Model.metadata
    config.set_main_option('sqlalchemy.url', uri)

    if context.is_offline_mode():
        run_migrations_offline()
    else:
        run_migrations_online()
...
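The elided pieces are the stock run_migrations_offline()/run_migrations_online() functions that alembic init generates. As a rough sketch of the online one, assuming the default template (the imports shown here normally live at the top of env.py):

from alembic import context
from sqlalchemy import engine_from_config, pool

def run_migrations_online():
    # Build an engine from the sqlalchemy.url option set above.
    engine = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    connection = engine.connect()
    # target_metadata (db.Model.metadata) is what autogenerate compares
    # against the live database.
    context.configure(connection=connection,
                      target_metadata=target_metadata)

    try:
        with context.begin_transaction():
            context.run_migrations()
    finally:
        connection.close()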
This approach works fine for drop_all() and create_all() (for example, when recreating a test db for unit testing), but it seems to fall flat in this case. The auto-generated version scripts always have empty upgrade and downgrade methods, e.g.,
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    pass
    ### end Alembic commands ###


def downgrade():
    ### commands auto generated by Alembic - please adjust! ###
    pass
    ### end Alembic commands ###
My changes have included renaming columns, changing column definitions, etc., not just changes to indices and foreign keys.
Is anyone out there using Alembic with Flask-SQLAlchemy? Any idea where I'm going wrong?
Thanks much!
Alembic cannot automatically detect table or column renames. By default it will not look for column type changes either, but the compare_type option can be enabled for this.
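With the standard env.py layout, enabling it is a small addition to the context.configure() call inside run_migrations_online() (a sketch; connection and target_metadata are whatever your env.py already defines):

context.configure(
    connection=connection,
    target_metadata=target_metadata,
    compare_type=True,             # detect column type changes
    compare_server_default=True)   # optionally, compare server defaults as well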
Excerpt from the Alembic documentation:

Autogenerate will by default detect:
- Table additions, removals.
- Column additions, removals.
- Change of nullable status on columns.

Autogenerate can optionally detect:
- Change of column type. This will occur if you set compare_type=True on EnvironmentContext.configure(). The feature works well in most cases, but is off by default so that it can be tested on the target schema first. It can also be customized by passing a callable here; see the function's documentation for details.
- Change of server default. This will occur if you set compare_server_default=True on EnvironmentContext.configure(). This feature works well for simple cases but cannot always produce accurate results. The Postgresql backend will actually invoke the “detected” and “metadata” values against the database to determine equivalence. The feature is off by default so that it can be tested on the target schema first. Like type comparison, it can also be customized by passing a callable; see the function's documentation for details.

Autogenerate can not detect:
- Changes of table name. These will come out as an add/drop of two different tables, and should be hand-edited into a name change instead.
- Changes of column name. Like table name changes, these are detected as a column add/drop pair, which is not at all the same as a name change.
- Special SQLAlchemy types such as Enum when generated on a backend which doesn’t support ENUM directly - this because the representation of such a type in the non-supporting database, i.e. a CHAR+CHECK constraint, could be any kind of CHAR+CHECK. For SQLAlchemy to determine that this is actually an ENUM would only be a guess, something that’s generally a bad idea. To implement your own “guessing” function here, use the sqlalchemy.events.DDLEvents.column_reflect() event to alter the SQLAlchemy type passed for certain columns and possibly sqlalchemy.events.DDLEvents.after_parent_attach() to intercept unwanted CHECK constraints.

Autogenerate can't currently, but will eventually detect:
- Free-standing constraint additions and removals, like CHECK, UNIQUE, FOREIGN KEY - these aren't yet implemented. Right now you'll get constraints within new tables, PK and FK constraints for the “downgrade” to a previously existing table, and the CHECK constraints generated with a SQLAlchemy “schema” types Boolean, Enum.

UPDATE: some of the items in this last list are supported in the Alembic 0.7.x releases.
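Renames therefore have to be hand-edited into the generated script. A minimal sketch, assuming a hypothetical user table whose name column was renamed to full_name (MySQL requires existing_type when renaming):

from alembic import op
import sqlalchemy as sa


def upgrade():
    # Autogenerate would at best emit a drop_column/add_column pair here;
    # rewrite it as an explicit rename so no data is lost.
    op.alter_column('user', 'name',
                    new_column_name='full_name',
                    existing_type=sa.String(length=255))


def downgrade():
    op.alter_column('user', 'full_name',
                    new_column_name='name',
                    existing_type=sa.String(length=255))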