I am not in the game industry. But I like learning about these graphics engines that magically create a world inside a device; I am basically interested in how they work. So I started reading about OpenGL ES today. But I think I will forget stuff as I go. Therefore, I am going to log it here in a question-and-answer format.
1. What is OpenGL-ES?
It is an API for programmers who want to write software for graphics hardware in portable devices. How the hardware understands the code is not the software developer's problem. 🙂
2. What has actually been cut off from OpenGL to make OpenGL-ES?
Where there was more than one way of performing the same operation, only the most useful method was kept and the redundant techniques were removed.
Something called “precision qualifiers” was introduced to reduce power consumption and increase performance. I have no idea what that is yet. We will see later.
3. How do you use OpenGL-ES API? What happens?
Basically, the API accepts the 3D object (a world or an object made of polygons, I believe) as an array of vertices, a shader program, and textures. Then it does all sorts of stuff, sending them through a production pipeline, and produces pixels to show on the device screen.
After reading a bit more, I understood that there is something called a “Vertex Shader” which accepts the vertex attribute array (I believe this has something to do with the color and position of each point in the world's polygons), some constant stuff called “Uniforms” (Why the hell are they using such funny names? argh!), Samplers (Again, argh!) and shader programs.
These shader programs tell OpenGL-ES what to do with the attributes.
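As a mental model (this is NOT real OpenGL ES code, and every name below is made up for illustration), here is a tiny Python sketch of that contract: attributes differ per vertex, uniforms stay constant for the whole draw call, and the shader program is just a function run once per vertex.

```python
# Hypothetical sketch of the vertex-shader contract, not real OpenGL ES code.

# Attributes: one set of values per vertex (only a position here).
attributes = [
    {"position": (0.0, 0.5, 0.0)},
    {"position": (-0.5, -0.5, 0.0)},
    {"position": (0.5, -0.5, 0.0)},
]

# Uniforms: constants shared by every vertex in the draw call.
uniforms = {"scale": 2.0}

def vertex_shader(attribute, uniforms):
    # The "program" decides what to do with the attributes.
    x, y, z = attribute["position"]
    s = uniforms["scale"]
    return (x * s, y * s, z * s)

# The pipeline runs the shader once per vertex.
outputs = [vertex_shader(a, uniforms) for a in attributes]
print(outputs)  # every vertex scaled by the same shared uniform
```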
4. What do people normally expect “Vertex Shaders” to do?
Basically, the 3D models have to be squashed to 2D eventually, because in reality all we see on the screen is pixels on a 2D plane. The colours, lighting effects, distances between pixels and shading make it look 3D. So all the work of flattening the design (it is called transforming the positions by matrix multiplications; these matrices are supplied as uniforms), applying lighting effects to the colour value of each vertex, and all the other stuff a paper artist normally does should happen at this stage.
5. What happens after the Vertex shader?
The first block in the pipeline, the “Vertex Shader”, produces something called a “Varying” (Argh, you son of the motherless goat!) per attribute (and each vertex can have many attributes, mind it). Between the vertex shader and the next stage, these per-vertex varyings are blended across the surface of each triangle; this process is called “Interpolation”. The interpolated varyings become the input for the next block in the pipeline: the Fragment Shader.
Each vertex from the vertex array may output many varyings (colour is an example) plus a special value called gl_Position from the vertex shader.
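A small Python sketch of what that interpolation amounts to (the colours and weights are invented for illustration): each fragment gets a weighted blend of the per-vertex varyings, with weights that sum to 1.

```python
# Sketch of how per-vertex varyings get blended for one fragment.
# All values here are made up for illustration.

def interpolate(values, weights):
    # Blend per-vertex values using weights that sum to 1
    # (these are the "barycentric" weights of the fragment).
    return tuple(
        sum(w * v[i] for w, v in zip(weights, values))
        for i in range(len(values[0]))
    )

# A red, a green and a blue vertex (an RGB colour varying each).
vertex_colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

# A fragment exactly in the middle of the triangle weighs all
# three vertices equally, giving an even mix of the colours.
center = interpolate(vertex_colors, (1/3, 1/3, 1/3))
print(center)  # a grey-ish mix of all three colours
```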
6. You were talking about some matrices up there? What are they?
I understood it by reading this article. Basically, you have your object, say a monkey model. You apply a scale*rotation*translation matrix (or Model matrix) to move it into world space. Then, if we want to see the monkey from the back, we apply another matrix (the View matrix) to rotate the world. Now we have to splat everything onto the device's surface, taking the depth of each vertex into account; we use another matrix called the Projection matrix. This combination of Model, View and Projection matrices is what we were talking about earlier. 🙂
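A toy Python sketch of how those matrices compose (all the numbers are made up, and the projection is left as the identity to keep it short). The point is that the three matrices multiply into one “MVP” matrix that is applied to every vertex.

```python
# Sketch of combining Model, View and Projection matrices.
# 4x4 matrices as row-major lists of rows; all values are toy examples.

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

# Model matrix: scale the monkey, then move it into the world.
model = mat_mul(translation(5, 0, 0), scale(2))
# View matrix: "move the world" the opposite way of the camera.
view = translation(-5, 0, 0)
# Projection omitted here; identity for simplicity.
identity = scale(1)

mvp = mat_mul(identity, mat_mul(view, model))

# A vertex at the model's local origin: scaled, pushed to x=5 by the
# model matrix, then pulled back to x=0 by the view matrix.
print(mat_vec(mvp, (0, 0, 0, 1)))  # (0, 0, 0, 1)
```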