OK, so we have a sphere, now how about some lighting… Basically what we need to do is determine the orientation of each surface (its normal) and then the angle between that normal and the direction of the light. The greater that angle, the brighter the surface should be. If this doesn’t make sense, think of a surface whose normal points in the same direction the light is travelling: it would be facing away from the light, and should be totally dark.
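To make the idea concrete, here’s a minimal sketch of that brightness rule outside of Flash, in TypeScript. The Vec3 type and the helper functions are my own stand-ins for Vector3D, and the linear angle-to-brightness mapping is just the simplest choice, not necessarily what the final code uses:

```typescript
// Brightness grows with the angle between the surface normal and the
// direction the light travels (both treated as plain 3-component vectors).
type Vec3 = [number, number, number];

function dot(a: Vec3, b: Vec3): number {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function length(v: Vec3): number {
  return Math.sqrt(dot(v, v));
}

// Angle between two vectors, in radians (0..PI).
function angleBetween(a: Vec3, b: Vec3): number {
  return Math.acos(dot(a, b) / (length(a) * length(b)));
}

// Map the angle to a 0..1 brightness: a normal parallel to the light
// direction (angle 0) faces away from the light, so it is dark; a normal
// pointing straight back at the light (angle PI) is fully lit.
function brightness(normal: Vec3, lightDir: Vec3): number {
  return angleBetween(normal, lightDir) / Math.PI;
}

const lightDir: Vec3 = [0, 0, 1]; // light travelling along +z
console.log(brightness([0, 0, 1], lightDir));  // faces away: 0
console.log(brightness([0, 0, -1], lightDir)); // faces the light: 1
```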
A while back I came up with a solution for lighting a mesh object using the Flash 10 3D APIs. This was my own idea, based on the knowledge that the normal of a triangle can be computed by getting a vector which is perpendicular to two of its sides. The following crude diagram may help… the triangle is ABC, the normal is AN. The normal is considered to be facing away from the surface. You’ll need to imagine that ABC is not flat on the picture plane, with B pointing away from you, and with NA perpendicular to both AC and AB (which it doesn’t really look like in the picture, but there’s only so much you can do with ASCII art :p)
      N
      |
      |
      A
     / \
    /   \
   B-----C
So the plan then is to iterate over every triangle in the sphere, and determine its normal, and then the luminosity of that surface by getting the difference between it and the light. The math for this involves vectors, and was new stuff for me. I recommend the excellent book ‘3D Math Primer for Graphics and Game Development’, but any introductory text to 3D graphics should be helpful to grok these concepts. Fortunately the Vector3D class has some handy methods to do such calculations, and you just need to know which method to call.
To get the normal of a triangle you need to convert the points to vectors, and then get the cross-product of two of these vectors. I just created 3 Vector3D objects for each point and subtracted them from each other to get the vectors representing the sides of the triangle. E.g. subtracting point A from point B in the above triangle gives you the vector AB. Note that you need to use the Vector3D subtract() method, not a regular ‘-’, since subtract() subtracts each component of the vector for you. At this point the code may help clarify (I will give complete code later):
// pt1, pt2, pt3 are Vector3D objects representing the points of a triangle
var d1:Vector3D = pt3.subtract(pt1);
var d2:Vector3D = pt3.subtract(pt2);
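For the curious, subtract() is just component-wise subtraction. A quick TypeScript sketch of the same operation (my own stand-in, not the Flash API):

```typescript
// Subtracting point A from point B, component by component, yields the
// vector from A to B (the edge AB of the triangle).
type Vec3 = [number, number, number];

function subtract(b: Vec3, a: Vec3): Vec3 {
  return [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
}

const A: Vec3 = [1, 0, 0];
const B: Vec3 = [3, 4, 0];
const AB = subtract(B, A); // [2, 4, 0] -- the edge vector from A to B
```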
Then the final step to get the normal of the triangle surface is to get the cross-product of the two vectors we just derived.
// get the cross-product of the results to get the normal
var normal:Vector3D = d1.crossProduct(d2);
normal.normalize();
The normalize() method scales the vector to a length of 1 (a unit vector), keeping its direction but simplifying the later math. Now to get the difference between the surface normal and the light vector, we can use the angleBetween() method.
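If it helps to see the math behind those Vector3D calls, here is the same normal computation sketched in TypeScript; Vec3 and the helper functions are my own stand-ins for crossProduct() and normalize():

```typescript
// Normal of a triangle: cross product of two edge vectors, then scale
// the result to unit length.
type Vec3 = [number, number, number];

function cross(a: Vec3, b: Vec3): Vec3 {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

function normalize(v: Vec3): Vec3 {
  const len = Math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2);
  return [v[0] / len, v[1] / len, v[2] / len];
}

// A triangle lying flat in the xy-plane:
const d1: Vec3 = [1, 0, 0]; // pt3 - pt1
const d2: Vec3 = [0, 1, 0]; // pt3 - pt2
const normal = normalize(cross(d1, d2)); // [0, 0, 1] -- points along +z
```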
Then I converted this difference to a color, and drew it onto the bitmapData being used as the texture of the object. I just used the drawing API to draw each triangle on a Sprite, then drew the Sprite onto the bitmapData passed to beginBitmapFill() just before the call to drawTriangles() in the render method.
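In case it helps, here is one possible angle-to-color mapping, sketched in TypeScript. The linear 0..π to 0..255 grayscale mapping is my own choice for illustration, not necessarily what the original code does:

```typescript
// Map the normal/light angle (0..PI radians) to a flat grayscale fill
// color, packed as 0xRRGGBB for use as a solid triangle fill.
function angleToColor(angleRadians: number): number {
  const v = Math.round((angleRadians / Math.PI) * 255); // 0..255
  return (v << 16) | (v << 8) | v; // same value in R, G and B = gray
}

angleToColor(0);       // 0x000000 -- facing away from the light, black
angleToColor(Math.PI); // 0xFFFFFF -- facing the light, white
```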
When I first did this I found that when I rotated the sphere, the lighting rotated along with it. So I tried transforming the light vector using the same Matrix3D I was using to rotate the sphere. This did not work as expected. Then I realized I had to transform the light vector using a Matrix3D which was the inverse of the sphere’s rotation. So I thought I could just call the invert() method on the projection matrix and use that to transform the light. But it turns out that the projection matrix, which includes a perspective projection, is not invertible. So… I had to make another Matrix3D, and rotate it in the opposite direction, in order to use it for correcting the light direction.
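The idea behind that fix can be sketched outside of Flash too. Assuming the object’s transform is a pure rotation, its inverse is just the same rotation with the angle negated (this TypeScript helper is my own illustration, not the Flash API):

```typescript
// If the object is rotated by R, transform the light direction by R's
// inverse so the lighting stays fixed in world space. For a pure rotation
// about an axis, the inverse is the same rotation with a negated angle.
type Vec3 = [number, number, number];

function rotateY(v: Vec3, angle: number): Vec3 {
  const c = Math.cos(angle), s = Math.sin(angle);
  return [c * v[0] + s * v[2], v[1], -s * v[0] + c * v[2]];
}

const light: Vec3 = [0, 0, 1];
const spin = Math.PI / 3; // the sphere's current rotation about y

// Light expressed in the sphere's rotated frame:
const correctedLight = rotateY(light, -spin);

// Sanity check: rotating the corrected light by the sphere's rotation
// recovers the original world-space light direction.
const roundTrip = rotateY(correctedLight, spin); // approximately [0, 0, 1]
```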
Here’s the working example, followed by the code. There’s a lot of it, but that will improve soon… see comments after code.
So although I was happy that this worked at all, it has some problems… performance has taken a hit, since we are re-creating the shadow/texture map on every frame. And the ‘texture map’ at this point only has lighting in it, so we’d have to merge the lighting onto a real texture map (with other colors). And the sphere looks more like a disco ball, not smooth at all, so if we wanted a smooth appearance, we’d have to increase the number of points, which would further degrade performance. I did get some improvement by blurring the shadow-map, but this would only help with smooth curved surfaces. So this is not a scalable solution at all. I almost didn’t want to show this version, but it does introduce the concept of normals, and how to get the lighting on a surface.
After a while it occurred to me that maybe I should really look into normal maps as a way to improve my lighting code. I was also hoping that I could optimize performance by using PixelBender to do the lighting calculations based on the normal map. That hunch turned out to be correct, so stay tuned for the next exciting installment…