Never mind that I'm the one who created the texture in the first place and should know perfectly well how many mipmaps I loaded or generated for it; I'm doing this for a unit test. There doesn't seem to be a glGetTexParameter parameter to find this out. The closest I've come is something like this:
int max_level;
glGetTexParameteriv( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, &max_level );
int max_mipmap = -1;
for ( int i = 0; i < max_level; ++i )
{
    int width;
    glGetTexLevelParameteriv( GL_TEXTURE_2D, i, GL_TEXTURE_WIDTH, &width );
    if ( 0 == width )
    {
        max_mipmap = i - 1;
        break;
    }
}
Anyhow, glGetTexLevelParameter() will return a width of 0 for a nonexistent mipmap on an NVidia GPU, but with Mesa it raises GL_INVALID_VALUE, which leads me to believe that this is very much the Wrong Thing To Do.
How do I find out which mipmap levels I've populated a texture with?
The spec is kinda fuzzy on this. It says that you will get GL_INVALID_VALUE if the level parameter is "larger than the maximum allowable level-of-detail", but exactly how that maximum is defined is not stated.
The documentation for the function clears it up a bit, saying that the limit is the maximum possible number of levels-of-detail for the largest possible texture, i.e. log2 of GL_MAX_TEXTURE_SIZE. Other similar functions, like the glFramebufferTexture family, explicitly state this as the limit for GL_INVALID_VALUE, so I would expect the same limit to apply here.
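For reference, here's a small sketch of how that upper bound would be computed under this reading (the variable name is just illustrative, not a GL constant):

// Largest level index that glGetTexLevelParameteriv should accept under
// this reading, i.e. floor(log2(GL_MAX_TEXTURE_SIZE)).
GLint max_size = 0;
glGetIntegerv( GL_MAX_TEXTURE_SIZE, &max_size );
int max_allowable_level = 0;   // illustrative name only
while ( ( max_size >> 1 ) > 0 )
{
    max_size >>= 1;
    ++max_allowable_level;
}
// Querying levels 0..max_allowable_level should be legal; anything beyond
// that is where GL_INVALID_VALUE would be the expected result.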
Therefore, Mesa has a bug. However, you could work around this by assuming that either a width of 0 or a GL_INVALID_VALUE error means you've walked off the end of the mipmap array.
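As a sketch of that workaround (untested, and it assumes the error queue is empty before the loop starts):

int num_levels = 0;
for ( ;; )
{
    int width = 0;
    glGetTexLevelParameteriv( GL_TEXTURE_2D, num_levels, GL_TEXTURE_WIDTH, &width );
    // Either behaviour counts as "off the end": NVidia reports a width of 0,
    // Mesa raises GL_INVALID_VALUE.
    if ( glGetError() == GL_INVALID_VALUE || width == 0 )
        break;
    ++num_levels;
}
// num_levels is the count of populated levels; the last populated level
// index is num_levels - 1.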
That being said, I would suggest employing glTexStorage and never having to ask the question again. This will forcibly prevent someone from setting GL_TEXTURE_MAX_LEVEL to a value that's too large, because the number of mipmap levels is fixed when the storage is allocated. It's pretty new, from GL 4.2, but it's implemented (or will be very soon) across all non-Intel hardware that's still being supported.
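A minimal sketch of what that looks like (assuming GL 4.2 or ARB_texture_storage is available; the size and level count here are arbitrary):

GLuint tex;
glGenTextures( 1, &tex );
glBindTexture( GL_TEXTURE_2D, tex );
// Allocate immutable storage with exactly 5 mipmap levels for a 256x256 texture.
glTexStorage2D( GL_TEXTURE_2D, 5, GL_RGBA8, 256, 256 );
// Fill the levels with glTexSubImage2D (or glGenerateMipmap); levels 0..4
// always exist and no level beyond that can ever be created, so there is
// nothing left to query.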