I have been working on an area lighting implementation in WebGL similar to this demo:
http://threejs.org/examples/webgldeferred_arealights.html
The above implementation in three.js was ported from the work of ArKano22 over on gamedev.net:
http://www.gamedev.net/topic/552315-glsl-area-light-implementation/
Though these solutions are very impressive, they both have a few limitations. The primary issue with ArKano22's original implementation is that the calculation of the diffuse term does not account for surface normals.
I have been augmenting this solution for some weeks now, working with the improvements by redPlant to address this problem. Currently I have normal calculations incorporated into the solution, BUT the result is also flawed.
Here is a sneak preview of my current implementation:
The steps for calculating the diffuse term for each fragment are as follows:
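In short: project the fragment onto the light's plane, clamp that projection to the light's bounds to get the nearest point on the light, and compute the diffuse term from the fragment normal and the direction to that nearest point. Here is a minimal GLSL sketch of this nearest-point approach (the names lightCenter, lightRight, lightUp, lightHalfSize, vWorldPosition and vWorldNormal are placeholders for illustration, not my actual variables):

// Illustrative sketch only: shade from the nearest point on the area light.
uniform vec3 lightCenter;    // centre of the area light (world space)
uniform vec3 lightRight;     // unit vector along the light's width
uniform vec3 lightUp;        // unit vector along the light's height
uniform vec2 lightHalfSize;  // half-width and half-height of the light

varying vec3 vWorldPosition; // fragment position (world space)
varying vec3 vWorldNormal;   // fragment normal (world space)

vec3 nearestPointOnAreaLight( vec3 p ) {
    vec3 toPoint = p - lightCenter;
    // coordinates of p projected into the light's plane, clamped to the light's extents
    float u = clamp( dot( toPoint, lightRight ), -lightHalfSize.x, lightHalfSize.x );
    float v = clamp( dot( toPoint, lightUp ),    -lightHalfSize.y, lightHalfSize.y );
    return lightCenter + u * lightRight + v * lightUp;
}

void main() {
    vec3 nearest  = nearestPointOnAreaLight( vWorldPosition );
    vec3 toLight  = normalize( nearest - vWorldPosition );
    float diffuse = max( dot( normalize( vWorldNormal ), toLight ), 0.0 );
    gl_FragColor  = vec4( vec3( diffuse ), 1.0 );
}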
The issue with this solution is that the lighting calculations are done from the nearest point and do not account for other points on the light's surface that could be illuminating the fragment even more so. Let me try to explain why…
Consider the following diagram:
The area light is both perpendicular to the surface and intersects it. Each of the fragments on the surface will always return a nearest point on the area light where the surface and the light intersect. Since the surface normal and the vertex-to-light vectors are always perpendicular, the dot product between them is zero. Consequently, the calculated diffuse contribution is zero despite there being a large area of light looming over the surface.
I propose that rather than calculate the light from the nearest point on the area light, we calculate it from a point on the area light that yields the greatest dot product between the vertex-to-light vector (normalised) and the vertex normal. In the diagram above, this would be the purple dot, rather than the blue dot.
And so, this is where I need your help. In my head, I have a pretty good idea of how this point can be derived, but don't have the mathematical competence to arrive at the solution.
Currently I have the following information available in my fragment shader:
To put all this information into a visual context, I created this diagram (hope it helps):
To test my proposal, I need the casting point on the area light – represented by the red dots, so that I can perform the dot product between the vertex-to-casting-point (normalised) and the vertex normal. Again, this should yield the maximum possible contribution value.
I have created an interactive sketch over on CodePen that visualises the mathematics that I currently have implemented:
The relevant code that you should focus on is line 318. castingPoint.location is an instance of THREE.Vector3 and is the missing piece of the puzzle. You should also notice that there are 2 values at the lower left of the sketch – these are dynamically updated to display the dot product between the relevant vectors.
I imagine that the solution would require another pseudo plane that aligns with the direction of the vertex normal AND is perpendicular to the light's plane, but I could be wrong!
The good news is that there is a solution; but first, the bad news.
Your approach of using the point that maximizes the dot product is fundamentally flawed, and not physically plausible.
In your first illustration above, suppose that your area light consisted of only the left half.
The "purple" point -- the one that maximizes the dot-product for the left half -- is the same as the point that maximizes the dot-product for both halves combined.
Therefore, if one were to use your proposed solution, one would conclude that the left half of the area light emits the same radiation as the entire light. Obviously, that is impossible.
The solution for computing the total amount of light that the area light casts on a given point is rather complicated, but for reference, you can find an explanation in the 1994 paper The Irradiance Jacobian for Partially Occluded Polyhedral Sources here.
I suggest you look at Figure 1, and a few paragraphs of Section 1.2 -- and then stop. :-)
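For reference, the quantity the shader below accumulates is the vector irradiance of the (unoccluded) polygonal source; ignoring the light's own colour and intensity, it is roughly:

\[
\vec{\phi}(x) \;=\; \sum_{i} \Theta_i(x)\,\hat{\Gamma}_i(x),
\qquad
\mathrm{factor}(x) \;=\; \frac{ \max\!\big( \vec{\phi}(x)\cdot\hat{n},\; 0 \big) }{ 2\pi }
\]

where \( \Theta_i \) is the angle subtended at \( x \) by the i-th edge of the light polygon (the acos term in the code), \( \hat{\Gamma}_i \) is the unit normal of the plane spanned by \( x \) and that edge (the normalized cross product), and \( \hat{n} \) is the surface normal.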
To make it easy, I have coded a very simple shader that implements the solution using the three.js WebGLRenderer -- not the deferred one.
EDIT: Here is an updated fiddle: http://jsfiddle.net/hh74z2ft/1/
The core of the fragment shader is quite simple:
// direction vectors from point to area light corners
for( int i = 0; i < NVERTS; i ++ ) {
    lPosition[ i ] = viewMatrix * lightMatrixWorld * vec4( lightverts[ i ], 1.0 ); // in camera space
    lVector[ i ] = normalize( lPosition[ i ].xyz + vViewPosition.xyz ); // dir from vertex to areaLight
}

// vector irradiance at point
vec3 lightVec = vec3( 0.0 );
for( int i = 0; i < NVERTS; i ++ ) {
    vec3 v0 = lVector[ i ];
    vec3 v1 = lVector[ int( mod( float( i + 1 ), float( NVERTS ) ) ) ]; // ugh...
    lightVec += acos( dot( v0, v1 ) ) * normalize( cross( v0, v1 ) );
}

// irradiance factor at point
float factor = max( dot( lightVec, normal ), 0.0 ) / ( 2.0 * 3.14159265 );
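As a rough usage example, the factor then simply scales the light's contribution; something like the following, where lightColor and lightIntensity are assumed uniforms and not part of the snippet above:

// scale an assumed light colour and intensity by the irradiance factor
vec3 outgoing = lightColor * lightIntensity * factor;
gl_FragColor  = vec4( outgoing, 1.0 );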
More Good News:
Caveats:
WebGLRenderer does not support area lights, so you can't "add the light to the scene" and expect it to work. This is why I pass all necessary data into the custom shader. ( WebGLDeferredRenderer does support area lights, of course. )
three.js r.73