OpenGL ES Shading Language

1. How do “Variables and Variable types” look in OpenGL ES?

We have scalars (eg: float, int, bool), floating-point vectors (eg: vec2-vec4), integer vectors (eg: ivec2-ivec4), boolean vectors (eg: bvec2-bvec4) and matrices (mat2-mat4).

2. How do vector and matrix construction and component selection work?

eg1: 

vec3 myVec3 = vec3(0.0,1.0,2.0);

vec3 temp = myVec3.zyx; // temp = {2.0, 1.0, 0.0};

eg2:

float m2_2 = myMat4[2].z; // column 2, row 2 (GLSL matrices are indexed by column)
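A couple more constructor tricks worth noting (a quick sketch; myVec3 is the vector from eg1):

vec4 myVec4 = vec4(myVec3, 3.0); // build a vec4 from a vec3 plus a scalar
vec2 myVec2 = myVec4.xy; // take just the first two components
myVec4.xz = vec2(5.0, 6.0); // swizzles can be written to as well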

3. How do you define Constants?

const mat4 identity = mat4(1.0);

4. How do you define Structures?

struct fogStruct {

vec4 color;

float start;

float end;

} fogVar;

fogVar = fogStruct(vec4(0.0, 1.0, 0.0, 0.0), 0.5, 2.0);

5. How do you define Arrays?

float floatArray[5];

vec4 vecArray[2];

6. Tell me something about the Operators in OpenGL ES?

They behave much as in other C-like languages, but multiplication (*) is overloaded: it can perform matrix-matrix and matrix-vector multiplications as well as component-wise vector multiplication.
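A minimal sketch of the overloaded * (made-up locals):

mat4 m = mat4(1.0);
vec4 v = vec4(1.0, 2.0, 3.0, 1.0);
vec4 mv = m * v; // matrix * vector: a real linear-algebra multiply
vec4 cw = v * v; // vector * vector: component-wise multiply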

7. Tell me something about the Control flow?

Nothing special about it: if-else, for, while and the rest work much like in other languages.
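A toy example:

float x = 0.0;
for (int i = 0; i < 4; i++) { // ES 2.0 only guarantees loops with compile-time constant bounds
    x += 1.0;
}
if (x > 2.0)
    x = 2.0;
else
    x = 0.0;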

8. Tell me something about the Functions?

Like the Verilog language, this also has in, out and inout parameter qualifiers; in is the default.

eg: vec4 myFunc (inout float myFloat, out vec4 myVec4, mat4 myMat4); //declaration

// the definition follows the usual C syntax; a sketch is below.
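A sketch of what that definition could look like:

vec4 myFunc(inout float myFloat, out vec4 myVec4, mat4 myMat4)
{
    myFloat *= 2.0; // inout: read and written, copied back to the caller
    myVec4 = myMat4[0]; // out: write-only result parameter
    return myMat4 * vec4(myFloat); // myMat4 is in (read-only) by default
}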

And, OpenGL ES provides some great built-in functions like dot, pow and normalize out of the box.

9. Tell me something about Attributes and Uniforms?

Uniforms normally store constant transformation matrices, light parameters and colors. But they are still called variables because their values are not known at compile time.

eg: uniform mat4 viewProjMatrix;

uniform vec3 lightPosition;

Attributes are only available in the vertex shader and are used to specify the per-vertex input to the vertex shader.

eg: attribute vec4 a_position;
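On the application side they are fed in roughly like this (a sketch; program, matrix and vertices are assumed to already exist):

GLint mLoc = glGetUniformLocation(program, "viewProjMatrix");
glUniformMatrix4fv(mLoc, 1, GL_FALSE, matrix); // matrix: a float[16], column-major
GLint aLoc = glGetAttribLocation(program, "a_position");
glEnableVertexAttribArray(aLoc);
glVertexAttribPointer(aLoc, 4, GL_FLOAT, GL_FALSE, 0, vertices); // per-vertex data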

10. Tell me something about the Varyings?

Varyings are the outputs of the vertex shader and the inputs of the fragment shader; the rasterizer interpolates them across each primitive. So, for every varying written in the vertex shader you will find a matching varying declaration in the fragment shader.
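For example, a matching pair (the names are mine):

// vertex shader
attribute vec4 a_position;
attribute vec4 a_color;
varying vec4 v_color; // written here...
void main()
{
    v_color = a_color;
    gl_Position = a_position;
}

// fragment shader
varying lowp vec4 v_color; // ...and read here, after interpolation
void main()
{
    gl_FragColor = v_color;
}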

11. Tell me something about the Preprocessor and directives?

The # directives (#define, #if, #ifdef, #error, #pragma and friends) are mostly equivalent to those of other C-like languages; #version and #extension are shader-specific.
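A few shader-flavoured examples:

#version 100 // shading language version (GLSL ES 1.00)
#ifdef GL_ES // GL_ES is predefined in ES shaders
precision mediump float; // see precision qualifiers below
#endif
#define MAX_LIGHTS 4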

12. Uniforms and Varying packing.

This is about how OpenGL ES utilizes the hardware storage by packing uniforms and varyings together and saving space efficiently. Nothing programmers need to worry about, I guess.

13. What can you tell about precision qualifiers?

Precision qualifiers enable the shader author to specify the precision with which computations for a shader variable are performed.

eg: varying lowp vec4 color;
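A default precision can also be set per type; in ES 2.0 fragment shaders this is mandatory for float:

precision mediump float; // default for every float with no explicit qualifier
precision highp int;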

14. What is invariance?

Due to compiler optimisations (eg: using different precisions depending on the source expression), the same code on the same inputs might produce slightly different results. This matters when the same object is drawn on top of itself using alpha blending (so-called multi-pass shader effects). Applying the invariant keyword to varyings avoids this.

eg: invariant gl_Position;

invariant varying vec2 texCoord;

#pragma STDGL invariant (all)


OpenGL-ES : Shaders and Programs

1. What are shader and program objects?

The shader object is an object that contains a single shader. The source code is given to the shader object and then the shader object is compiled into object form.

Then the shader object is attached to a program object.

2. Explain the Process.

1. glCreateShader(type) // Create a shader type: vertex or fragment

2. glShaderSource(); // bind the shader source to the object

3. glCompileShader(); // compile the shader

4. glCreateProgram(); // Create a program

5. glAttachShader(program, shader); // Attach shader to program

6. glLinkProgram(program); // link the program object

7. glUseProgram(program); //Set the Active program
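Put together in C, a minimal sketch (error handling trimmed; shaderSrc is assumed to hold the GLSL source string):

GLuint shader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(shader, 1, &shaderSrc, NULL);
glCompileShader(shader);

GLint compiled;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled); // check for compile errors

GLuint program = glCreateProgram();
glAttachShader(program, shader); // attach the fragment shader the same way
glLinkProgram(program);
glUseProgram(program);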

3. What do you have to remember when it comes to Uniforms?

Uniforms are variables (???) that store read-only constant values that are passed in by the application through the OpenGL-ES API to the shader.

If a uniform is declared in both a vertex and fragment shader, it must have the same type and its value will be the same in both shaders. (Uniforms are shared across a program)
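So both shaders can declare, say, uniform vec3 lightPosition; and the application sets it once through a single location (a sketch; program is assumed to be linked already):

GLint loc = glGetUniformLocation(program, "lightPosition");
glUniform3f(loc, 0.0f, 10.0f, 0.0f); // the value is visible to both shaders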

EGL Skim

1. Where does EGL come into the Scene?

When communicating with the native windowing system of the OS, EGL acts as the layer between OpenGL-ES and the OS window system.

1- eglGetDisplay()   //open a connection

2- eglInitialize()       //then initialize EGL

To query available types and configurations of drawing surfaces.

3- eglGetConfigs() // query every surface configuration (optional)

4- eglGetConfigAttrib() // analyze each config and make the choice, or

5- eglChooseConfig() // have EGL make the choice of matching configs (create a list of “nice to have”s and pass it)

To create drawing surfaces

6- eglCreateWindowSurface() // Create window with the chosen configs

7- eglCreatePbufferSurface() // optional offscreen buffer; don’t confuse with frame buffers

8- eglCreateContext()          //Create rendering context

9- eglMakeCurrent() // associate a particular context with a surface

The rendering context also manages rendering resources, eg: texture maps. (I have no idea why the resources have to go through this.)
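The whole sequence in C looks roughly like this (attribList, contextAttribs and nativeWindow are placeholders for your own values):

EGLint major, minor, numConfigs;
EGLConfig config;

EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, &major, &minor);
eglChooseConfig(display, attribList, &config, 1, &numConfigs);

EGLSurface surface = eglCreateWindowSurface(display, config, nativeWindow, NULL);
// for ES 2.0, contextAttribs = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE}
EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);
eglMakeCurrent(display, surface, surface, context); // draw and read on the same surface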

OpenGL-ES EGL

1. What is EGL?

EGL is an interface between OpenGL ES and the native window system (I believe this has something to do with the hardware manufacturers).

An OpenGL ES application should query the displays and initialise them using EGL before doing any sort of rendering. Then it should ask EGL to create a rendering surface.

2. What is a rendering context?

It stores the OpenGL ES state throughout the application.

3. Explain how to Initialise an OpenGL-ES Application?

First, ES is initialised and the ES code framework is booted. [esInitialize]

Then a window is created, passing that context, its size and other rendering requirements (for example an RGB frame buffer). [esCreateWindow]

Then the context is initialised. That means: the shaders are compiled; a program object is created and the shaders are attached to it; attribute locations are bound in the program object; the program object is linked and stored into the context’s “userdata”. (I have no idea what this Program Object is all about.)

Then the context is registered with a Draw function which is called to render each frame. [esRegisterDrawFunc] The Draw method will initialise the viewport, clear the buffers, ask ES to use the program object we created earlier, load the vertex data of the object, ask ES to draw with a primitive type, and swap the buffers when the back buffer is ready, roughly as in the sketch below.
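A sketch of such a Draw body (program, vertices, width, height, display and surface are assumed to come from the setup above, and the position attribute is assumed bound at location 0):

glViewport(0, 0, width, height); // initialise the viewport
glClear(GL_COLOR_BUFFER_BIT); // clear the buffers
glUseProgram(program); // use the program object created earlier
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vertices); // load the vertex data
glDrawArrays(GL_TRIANGLES, 0, 3); // draw with a primitive type
eglSwapBuffers(display, surface); // swap when the back buffer is ready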

Then the context enters the main message-processing loop until the window is closed. [esMainLoop]

OpenGL-ES Continued

7. Before going deeper, tell me all the stages in the Pipeline. First you said Vertex shader. Then, is it the fragment shader?

No, I was wrong. The order is like this:

Vertex Shader -> Primitive Assembly -> Rasterization -> Fragment Shader -> Per-Fragment Operations -> Frame Buffer.

8. So, we give our model as an array of vertices, with all the Matrices and shaders, to the Vertex shader. And it emits “varyings” and gl_Position, you said. What happens after that?

Primitive Assembly is this stage. What we mean by primitives are triangles, lines and points. From what I understand now, the gl_Position values still stay in 3D (clip) space, not screen space. (That means how I understood the projection matrix was wrong.) The shaded vertices of the model are assembled into individual geometric primitives. The primitives are “clip”ped and “cull”ed before being sent to the next stage.

9. What the hell is Clipping?

When you apply the projection matrix, a few vertices may fall outside the viewing volume (the view frustum). When the assembler creates primitives with those vertices, part of or the whole primitive may lie outside the volume, so those primitives are clipped. This is called clipping. After this stage the vertices are converted to screen coordinates.

10. So who will explain culling to me?

This happens in the screen coordinate space. It means discarding primitives which are facing backward. It is like not considering the back side of a polygon model when the camera looks at the front side.

11. What is the next stage after Primitive Assembly?

The primitives are converted into two-dimensional fragments, which are then processed by the fragment shader. These two-dimensional fragments represent pixels that can be drawn on the screen. This stage is called “Rasterization”.

12. What happens at the “Fragment Shader” module?

It can discard a fragment, or generate a color value called gl_FragColor from the screen coordinates, color, depth and stencil values of the fragment.
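A tiny fragment shader showing both options (v_color is a varying of my own naming):

precision mediump float;
varying vec4 v_color; // interpolated by the rasterizer
void main()
{
    if (v_color.a < 0.1)
        discard; // throw this fragment away entirely
    gl_FragColor = v_color; // otherwise emit a color
}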

13. What happens to the shaded fragment then?

It is checked whether the location of the fragment is owned by the current OpenGL context and whether it is inside OpenGL’s scissor rectangle. After that, stencil and depth tests are performed. (I have no idea what they are.) Then the fragment color may be blended with the color which is already in the frame buffer at that position. Then something called “dithering” is done to reduce artifacts. These steps are collectively called the Per-Fragment Operations stage.

Some OpenGL-ES

I am not in the game industry. But I like to learn about these Graphics Engines which magically create a world inside a device. I am basically interested in how it works. So I started to read about OpenGL ES today. But I think I will forget stuff as I go. Therefore, I am going to log it here in a question-answer format.

1. What is OpenGL-ES?

An API for programmers who want to write software for graphics hardware in portable devices. How the hardware understands the code is not the software developer’s problem. 🙂

2. What has actually been cut off from OpenGL to make OpenGL-ES?

When there is more than one way of performing the same operation, only the most useful method was kept and the redundant techniques were removed.

Something called “precision qualifiers” was introduced to reduce power consumption and increase performance. I have no idea what that is. We will see later.

3. How do you use OpenGL-ES API? What happens?

Basically, the API accepts the 3D object (a world or an object made of polygons, I believe) as an array of vertices, a shader program and textures. Then it does all sorts of stuff by sending them through a production pipeline and creates pixels to show on the device screen.

After reading a bit more, I understood there is something called a “Vertex Shader” which accepts the vertex attribute array (I believe this has something to do with the color and position of each point in the world polygon), some constant stuff called “Uniforms” (why the hell are they using such funny names? argh!), Samplers (again, argh!) and shader programs.

These shader programs will tell the OpenGL-ES what to do with the Attributes.

4. What do people normally expect “Vertex shaders” to do?

Basically, the 3D models have to be squashed to 2D eventually, because in reality all we see on the screen is pixels on a 2D plane. The colours, lighting effects, distance between pixels and shading make it look like 3D. So all the work to flatten the design (it is called transforming the positions by matrix multiplications; these matrices are supplied as Uniforms), the lighting effects on the colour value of each pixel, and all the stuff a paper artist normally does should happen at this stage.

5. What happens after the Vertex shader?

The first block in the Pipeline, the “Vertex Shader”, produces something called a “Varying” (argh, you son of a motherless goat!) per attribute (and each vertex can have many attributes, mind it). These varyings are later smoothed across each primitive in a process that seems to be called “Interpolation”. And these “varying”s will be the input for the next block in the pipeline: the Fragment Shader.

Each vertex from the vertex array may get many varyings (color is an example) and something called gl_Position from the vertex shader.

6. You were talking about some matrix up there? What is it?

I understood it by reading this article. Basically, you have your object, say a monkey model. You apply a scale*rotation*translation matrix (or Model matrix) to move it to the world space. Then, if we want to see the monkey from the back, we need to apply another matrix (the View matrix) to rotate the world. Now we have to splat everything onto the device’s surface, taking the depth of each vertex into account. We use another matrix called the Projection matrix. This combination of Model, View and Projection matrices is what we were talking about earlier. 🙂
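In shader terms that chain looks roughly like this (the uniform and attribute names are mine):

uniform mat4 u_modelMatrix; // object space -> world space
uniform mat4 u_viewMatrix; // world space -> camera space
uniform mat4 u_projMatrix; // camera space -> clip space
attribute vec4 a_position;
void main()
{
    gl_Position = u_projMatrix * u_viewMatrix * u_modelMatrix * a_position;
}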