Friday, June 17, 2011

GLSL

Good references
http://www.opengl.org/sdk/docs/tutorials/TyphoonLabs/


http://www.lighthouse3d.com/tutorials/glsl-tutorial/?pipeline
Pictures and text are taken from the references above.



- Vertex Transformation
  • A vertex is a set of attributes such as its position in space, its color, its normal, and its texture coordinates, among others. This stage performs:
    • Vertex position transformation
    • Lighting computations per vertex
    • Generation and transformation of texture coordinates
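
A minimal vertex shader touching all three tasks might look like the sketch below (assumptions: one texture unit and a directional light 0; it uses the GLSL 1.x built-ins discussed throughout this post):

```glsl
void main()
{
    // 1. Vertex position transformation
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    // 2. Per-vertex lighting (diffuse only; light 0 assumed directional)
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 l = normalize(vec3(gl_LightSource[0].position));
    gl_FrontColor = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse
                    * max(dot(n, l), 0.0);

    // 3. Texture coordinate generation/transformation (unit 0)
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
```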


- Rasterization
  • Rasterization determines the fragments and pixel positions covered by the primitive.
  • A fragment in this context is a piece of data that will be used to update a pixel in the frame buffer at a specific location.
  • A fragment contains not only color, but also normals and texture coordinates, amongst other possible attributes, that are used to compute the new pixel’s color.
  • Ex) 1. Each vertex has a transformed position and a color.
       2. A fragment’s color is the triangle’s vertex colors weighted by the relative distances of the vertices to the fragment.

- Fragment Texturing and Coloring
  • The common end result of this stage, per fragment, is a color value and a depth for the fragment.
  • No access to the frame buffer.
- Raster Operations
  • Scissor test, Alpha test, Stencil test, Depth test
  • Notice that blending occurs only at this stage because the Fragment Texturing and Coloring stage has no access to the frame buffer. The frame buffer is only accessible at this stage.





Creating a Shader

- You can create as many shaders as you want to add to a program, but remember that there can be only one main function among the set of vertex shaders and only one main function among the set of fragment shaders in each single program.



GLuint glCreateShader(GLenum shaderType);
void glShaderSource(GLuint shader, int numOfStrings, const char **strings, int *lenOfStrings);
void glCompileShader(GLuint shader);

GLhandleARB glCreateShaderObjectARB(GLenum shaderType);
void glShaderSourceARB(GLhandleARB shader, int numOfStrings, const char **strings, int *lenOfStrings);
void glCompileShaderARB(GLhandleARB shader);
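
A typical compile sequence with error checking can be sketched as follows (a sketch only: it assumes a current OpenGL 2.0 context, `<stdio.h>` plus the GL headers, and `source` holding the shader text; the helper name `compileShader` is chosen here for illustration):

```c
/* Sketch: compile one shader and report errors via the info log. */
GLuint compileShader(GLenum type, const char *source)
{
    GLuint shader = glCreateShader(type);
    /* one string; NULL length => treat the string as null-terminated */
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint ok;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}
```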

  




You can create as many programs as you want. While rendering, you can switch from program to program, and even go back to fixed functionality during a single frame. For instance you may want to draw a teapot with refraction and reflection shaders, while having a cube map displayed for background using OpenGL’s fixed functionality. 

If you have a pair vertex/fragment of shaders you’ll need to attach both to the program. You can have many shaders of the same type (vertex or fragment) attached to the same program.

GLuint glCreateProgram(void);
void glAttachShader(GLuint program, GLuint shader);
void glLinkProgram(GLuint program);
void glUseProgram(GLuint prog);

void glDetachShader(GLuint program, GLuint shader);
void glDeleteShader(GLuint id);
void glDeleteProgram(GLuint id);
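
Putting the two sets of calls together, a typical setup sequence looks like the sketch below (`vertexSource` and `fragmentSource` are assumed to be null-terminated strings already loaded by the application):

```c
/* Sketch: build and activate a program from one vertex and one fragment shader. */
GLuint v = glCreateShader(GL_VERTEX_SHADER);
GLuint f = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(v, 1, &vertexSource, NULL);
glShaderSource(f, 1, &fragmentSource, NULL);
glCompileShader(v);
glCompileShader(f);

GLuint p = glCreateProgram();
glAttachShader(p, v);
glAttachShader(p, f);
glLinkProgram(p);

glUseProgram(p);   /* render with the shaders */
/* ... draw ... */
glUseProgram(0);   /* back to fixed functionality */
```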



Communication OpenGL -> Shaders


- The shader has access to part of the OpenGL state, therefore when an application alters this subset of the OpenGL state it is effectively communicating with the shader. So for instance if an application wants to pass a light color to the shader it can simply alter the OpenGL state as it is normally done with the fixed functionality.
- However,  consider a shader that requires a variable to tell the elapsed time to perform some animation. There is no suitable named variable in the OpenGL state for this purpose.
- Fortunately, GLSL allows the definition of user defined variables for an OpenGL application to communicate with a shader.

- GLSL has two types of variable qualifiers (more qualifiers are available to use inside a shader as detailed in Data Types and Variables subsection):
  • Uniform
  • Attribute
Variables defined in shaders using these qualifiers are read-only as far as the shader is concerned.

- There is yet another way of sending values to shaders: using textures. A texture doesn’t have to represent an image; it can be interpreted as an array of data. In fact, using shaders you’re the one who decides how to interpret your texture’s data, even when it is an image.


Uniform Variables

- A uniform variable can have its value changed at most once per primitive, i.e., its value can’t be changed between a glBegin/glEnd pair.
- Uniform variables are suitable for values that remain constant across a primitive, a frame, or even the whole scene. Uniform variables can be read (but not written) in both vertex and fragment shaders.
- The first thing you have to do is to get the memory location of the variable. Note that this information is only available after you link the program.
- GLint glGetUniformLocation(GLuint program, const char *name);
- void glUniform{1,2,3,4}fv(GLint location, GLsizei count, const GLfloat *v);
- void glUniformMatrix{2,3,4}fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat *v);


- A similar set of functions is available for the integer data type, where “f” is replaced by “i”. There are no functions specifically for bools or boolean vectors; just use the float or integer functions and set zero for false and anything else for true.
- The values that are set with these functions keep their values until the program is linked again. Once a new link process is performed, all values are reset to zero. 


    loc1 = glGetUniformLocation(p, "specIntensity");
    glUniform1f(loc1, specIntensity);

    loc2 = glGetUniformLocation(p, "specColor");
    glUniform4fv(loc2, 1, sc);

    loc3 = glGetUniformLocation(p, "t");
    glUniform1fv(loc3, 2, threshold);

    loc4 = glGetUniformLocation(p, "colors");
    glUniform4fv(loc4, 3, colors);
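
On the shader side, the host calls above assume declarations like the following (a sketch matching the names and counts used in the C code):

```glsl
// Shader-side declarations matching the glUniform* calls above.
uniform float specIntensity;
uniform vec4  specColor;
uniform float t[2];       // two floats, set with glUniform1fv(loc3, 2, ...)
uniform vec4  colors[3];  // three vec4s, set with glUniform4fv(loc4, 3, ...)
```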


Attribute Variables

- If variables need to be set per vertex, then attribute variables must be used. In fact, attribute variables can be updated at any time. Attribute variables can only be read (not written) in a vertex shader. This is because they contain vertex data, hence are not directly applicable in a fragment shader (see the section on varying variables).

- GLint glGetAttribLocation(GLuint program, const char *name);
- void glVertexAttrib{1,2,3,4}fv(GLuint location, const GLfloat *v);

- Vertex Arrays can also be used together with attribute variables. The first thing to be done is to enable the arrays. 

- void glEnableVertexAttribArray(GLuint loc);
- void glVertexAttribPointer(GLuint loc, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const void *pointer);


- Note that, unlike uniform variables, there is no call that submits a whole array of attribute values at once. The vector (“v”) versions are just another way of submitting the components of a single attribute variable.

loc = glGetAttribLocation(p,"height");

glBegin(GL_TRIANGLE_STRIP);
    glVertexAttrib1f(loc,2.0);
    glVertex2f(-1,1);

    glVertexAttrib1f(loc,2.0);
    glVertex2f(1,1);

    glVertexAttrib1f(loc,-2.0);
    glVertex2f(-1,-1);

    glVertexAttrib1f(loc,-2.0);
    glVertex2f(1,-1);
glEnd();
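
The matching vertex shader declares the attribute and can use it, for example, to displace each vertex (a sketch):

```glsl
// Vertex shader consuming the per-vertex "height" attribute set above.
attribute float height;

void main()
{
    vec4 v = gl_Vertex;
    v.z = height;  // use the attribute as the vertex's z coordinate
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
```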


Data Types and Variables

  • vec{2,3,4} a vector of 2,3,or 4 floats
  • bvec{2,3,4} bool vector
  • ivec{2,3,4} vector of integers
  • mat2
  • mat3
  • mat4
-  A set of special types are available for texture access. These are called samplers and are required to access texture values, also known as texels.
  • sampler1D – for 1D textures
  • sampler2D – for 2D textures
  • sampler3D – for 3D textures
  • samplerCube – for cube map textures
  • sampler1DShadow – for shadow maps
  • sampler2DShadow – for shadow maps
- Arrays: arrays can be declared using the same syntax as in C. However, arrays can’t be initialized when declared. Accessing an array’s elements is done as in C.

- Structures: structures are also allowed in GLSL. The syntax is the same as in C, and the structure name doubles as a constructor:
struct dirlight {        // type definition
    vec3 direction;
    vec3 color;
};
dirlight d2 = dirlight(vec3(1.0,1.0,0.0), vec3(0.8,0.8,0.4));

- Declaring the other types of variables follows the same pattern, but there are differences between GLSL and C regarding initialization: GLSL relies heavily on constructors for initialization and type casting.
- Matrices are assigned in column-major order.

- It is possible to use the letters x,y,z,w to access a vector’s components. For colors, r,g,b,a can be used; for texture coordinates, the available selectors are s,t,p,q.
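
A few lines illustrating constructors, column-major matrix initialization, and the component selectors (a sketch):

```glsl
vec3 a = vec3(1.0, 0.5, 0.0);
vec4 b = vec4(a, 1.0);   // constructor builds a vec4 from a vec3 plus a float
float red = b.r;         // same component as b.x
vec2 st = b.st;          // swizzle: first two components as texture coordinates

// mat2 constructor takes values in column-major order:
// column 0 = (1.0, 2.0), column 1 = (3.0, 4.0)
mat2 m = mat2(1.0, 2.0, 3.0, 4.0);
float v = m[1][0];       // element at column 1, row 0, i.e. 3.0
```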


Variable Qualifiers

  • const – The declaration is of a compile time constant
  • attribute – Global variables that may change per vertex, passed from the OpenGL application to vertex shaders. This qualifier can only be used in vertex shaders. For the shader this is a read-only variable. See the Attribute section.
  • uniform – Global variables that may change per primitive (may not be set inside glBegin/glEnd), passed from the OpenGL application to the shaders. This qualifier can be used in both vertex and fragment shaders. For the shaders this is a read-only variable. See the Uniform section.
  • varying – used for interpolated data between a vertex shader and a fragment shader. Available for writing in the vertex shader, and read-only in a fragment shader.

    Functions

    The parameters of a function have the following qualifiers available:
    • in – for input parameters
    • out – for outputs of the function. The return statement is also an option for sending the result of a function.
    • inout – for parameters that are both input and output of a function
    vec4 toonify(in float intensity)
    {
        vec4 color;
        if (intensity > 0.98)
            color = vec4(0.8,0.8,0.8,1.0);
        else if (intensity > 0.5)
            color = vec4(0.4,0.4,0.8,1.0);
        else if (intensity > 0.25)
            color = vec4(0.2,0.2,0.4,1.0);
        else
            color = vec4(0.1,0.1,0.1,1.0);       

        return(color);
    }


    Varying Variables

    - As mentioned before we have two types of shaders: vertex and fragment shaders. In order to compute values per fragment it is often required to access vertex interpolated data. For instance, when performing lighting computation per fragment, we need to access the normal at the fragment. However in OpenGL, the normals are only specified per vertex. These normals are accessible to the vertex shader, but not to the fragment shader since they come from the OpenGL application as an attribute variable.

    - After the vertices, including all the vertex data, are processed they move on to the next stage of the pipeline (which still remains fixed functionality) where connectivity information is available. It is in this stage that the primitives are assembled and fragments computed. For each fragment there is a set of variables that are interpolated automatically and provided to the fragment shader. An example is the color of the fragment. The color that arrives at the fragment shader is the result of the interpolation of the colors of the vertices that make up the primitive.

    - This type of variable, where the fragment receives interpolated data, is a “varying variable”. GLSL has some predefined varying variables, such as the above mentioned color. GLSL also allows user defined varying variables. These must be declared in both the vertex and fragment shaders, for instance:
    varying float intensity;
    A varying variable must be written on a vertex shader, where we compute the value of the variable for each vertex. In the fragment shader the variable, whose value results from an interpolation of the vertex values computed previously, can only be read.
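
    A minimal pair of shaders sharing a user-defined varying can be sketched as follows (light 0 is assumed directional, with its direction stored normalized):

```glsl
// Vertex shader: writes the varying once per vertex.
varying float intensity;

void main()
{
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    intensity = max(dot(n, normalize(vec3(gl_LightSource[0].position))), 0.0);
    gl_Position = ftransform();
}
```

```glsl
// Fragment shader: reads the interpolated value (read-only here).
varying float intensity;

void main()
{
    gl_FragColor = vec4(intensity, intensity, intensity, 1.0);
}
```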


    Hello World in GLSL


    In order to output the transformed vertex, the shader must write to the predefined variable gl_Position.
    It is now possible to write a vertex shader that will do nothing more than transform vertices. Note that all other functionality will be lost, meaning, for instance, that lighting computations will not be performed.

    vTrans = projection * modelview * incomingVertex
    ==> (into GLSL)

    uniform mat4 gl_ModelViewMatrix;
    uniform mat4 gl_ProjectionMatrix;
    uniform mat4 gl_ModelViewProjectionMatrix;
    attribute vec4 gl_Vertex;
    vec4 gl_Position;        // predefined output variable, written by the vertex shader
    vec4 ftransform(void);   // predefined function, implicitly declared

    void main()
    {
        //gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
        //gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        gl_Position = ftransform();
    }


    cf. gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    The end result is of course the same. Does this guarantee the same transformation as in the fixed functionality? In theory yes, but in practice the process of transforming the vertices may not follow the same order of operations. Vertex transformation is normally a highly optimized task in a graphics card, and a special function is provided to take advantage of that optimization. Another reason for this function is the limited precision of the float data type: when calculations are performed in a different order, different results may be obtained. Hence GLSL provides a function that guarantees not only the best performance but also a result identical to the fixed functionality. This magical function is ftransform().


    Color Shader

    - GLSL has an attribute variable that keeps track of the current color. It also provides varying variables to carry the color from the vertex shader to the fragment shader:

    attribute vec4 gl_Color;
    varying vec4 gl_FrontColor; // writable on the vertex shader
    varying vec4 gl_BackColor; // writable on the vertex shader
    varying vec4 gl_Color; // readable on the fragment shader


    The idea is as follows:
    1. The OpenGL application sends a color using the glColor function
    2. The vertex shader receives the color value in the attribute gl_Color
    3. The vertex shader computes the front face and back face colors, and stores them in gl_FrontColor, and gl_BackColor respectively
    4. The fragment shader receives an interpolated color in the varying variable gl_Color, depending on the orientation of the current primitive, i.e. the interpolation is done using either the gl_FrontColor or the gl_BackColor values.
    5. The fragment shader sets gl_FragColor based on the value of gl_Color
    - The concept here is that we have two variables in the vertex shader, namely gl_FrontColor and gl_BackColor, and these are used to derive automatically the value of gl_Color depending on the orientation of the current face.
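
    The corresponding shaders can be as small as this (a sketch; gl_BackColor is only consumed when two-sided color handling is enabled):

```glsl
// Vertex shader: pass the application-supplied color through.
void main()
{
    gl_FrontColor = gl_Color;   // gl_Color here is the attribute set via glColor*
    gl_BackColor  = gl_Color;
    gl_Position   = ftransform();
}
```

```glsl
// Fragment shader: gl_Color here is the interpolated varying, not the attribute.
void main()
{
    gl_FragColor = gl_Color;
}
```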


    Flatten Shader

    void main(void)
    {
        vec4 v = vec4(gl_Vertex);
        v.z = sin(5.0*v.x )*0.25;

        gl_Position = gl_ModelViewProjectionMatrix * v;
    }


    Toon Shading – Version I

    lightDir = normalize(vec3(gl_LightSource[0].position));
    intensity = dot(lightDir, gl_Normal);

    - The vertex shader has access to the normals, as specified in the OpenGL application, through the attribute variable gl_Normal. This is the normal as defined in the OpenGL application with the glNormal function, hence in model local space.

    - The final result doesn’t look very nice, does it? The main problem is that we’re interpolating the intensity. This is not the same as computing the intensity with the proper normal for the fragment.


    Toon Shader – Version II

    - In this section we will compute the toon shading effect per fragment. In order to do that, we need access to each fragment’s normal. Hence the vertex shader only needs to write the normal into a varying variable, so that the fragment shader receives the interpolated normal.
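
    The version II pair can be sketched as follows (the thresholds and colors are illustrative, in the spirit of the toonify example above):

```glsl
// Vertex shader: only forwards the normal to the fragment stage.
varying vec3 normal;

void main()
{
    normal = gl_Normal;
    gl_Position = ftransform();
}
```

```glsl
// Fragment shader: per-fragment intensity with the interpolated normal.
varying vec3 normal;

void main()
{
    vec3 n = normalize(normal);  // interpolation denormalizes; renormalize here
    vec3 lightDir = normalize(vec3(gl_LightSource[0].position));
    float intensity = dot(lightDir, n);

    vec4 color;
    if (intensity > 0.95)      color = vec4(1.0, 0.5, 0.5, 1.0);
    else if (intensity > 0.5)  color = vec4(0.6, 0.3, 0.3, 1.0);
    else if (intensity > 0.25) color = vec4(0.4, 0.2, 0.2, 1.0);
    else                       color = vec4(0.2, 0.1, 0.1, 1.0);

    gl_FragColor = color;
}
```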


    Toon Shader – Version III

    struct gl_LightSourceParameters {
        vec4 ambient;
        vec4 diffuse;
        vec4 specular;
        vec4 position;
        ...
    };

    uniform gl_LightSourceParameters gl_LightSource[gl_MaxLights];

     - The OpenGL specification states that when a light position is set, it is automatically converted to eye space coordinates, i.e. camera coordinates. We will assume that the light direction remains normalized after this automatic conversion.

    - We have to convert the normal to eye space coordinates as well to compute the dot product, as it only makes sense to compute angles, or cosines in this case, between vectors in the same space, and as mentioned before the light position is stored in eye coordinates.

     - To transform the normal to eye space, we will use the predefined uniform variable mat3 gl_NormalMatrix. This matrix is the transpose of the inverse of the upper-left 3×3 submatrix of the modelview matrix. We will do the normal transformation per vertex.
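
    In the vertex shader this transformation is a one-liner (a sketch):

```glsl
// Transform the normal to eye space and renormalize (gl_NormalMatrix is mat3).
varying vec3 normal;

void main()
{
    normal = normalize(gl_NormalMatrix * gl_Normal);
    gl_Position = ftransform();
}
```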



    Lighting

    struct gl_LightSourceParameters {
        vec4 ambient;
        vec4 diffuse;
        vec4 specular;
        vec4 position;
        vec4 halfVector;
        vec3 spotDirection;
        float spotExponent;
        float spotCutoff; // (range: [0.0,90.0], 180.0)
        float spotCosCutoff; // (range: [1.0,0.0],-1.0)
        float constantAttenuation;
        float linearAttenuation;
        float quadraticAttenuation;
    };
    uniform gl_LightSourceParameters gl_LightSource[gl_MaxLights];


    struct gl_LightModelParameters {

        vec4 ambient;
    };
    uniform gl_LightModelParameters gl_LightModel;


    struct gl_MaterialParameters {
        vec4 emission;
        vec4 ambient;
        vec4 diffuse;
        vec4 specular;
        float shininess;
    };
    uniform gl_MaterialParameters gl_FrontMaterial;
    uniform gl_MaterialParameters gl_BackMaterial;

    Directional Lights I

    - The diffuse lighting in OpenGL assumes that the light is perceived with the same intensity regardless of the viewer's position. Its intensity is proportional to both the (1) light's diffuse intensity as well as (2) material’s diffuse reflection coefficient. The intensity is also proportional to (3) the cosine of the angle between the light direction and the surface normal.












    I = Ld × Md × cos(λ) = Ld × Md × (L · N)

    where I is the reflected intensity, Ld is the light’s diffuse color (gl_LightSource[0].diffuse), Md is the material’s diffuse coefficient (gl_FrontMaterial.diffuse), and λ is the angle between the normalized light direction L and the normal N.

    - Since we’re not using the fixed functionality, there is no need to enable the lights.
    - Note that, for directional lights, OpenGL stores the light direction as the vector from the vertex to the light source, which is the opposite of the direction in which the light travels.
    - OpenGL stores the light’s direction in eye space coordinates; hence we need to transform the normal to eye space in order to compute the dot product.
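
    Putting the three points together, a per-vertex diffuse shader for light 0 can be sketched as:

```glsl
// Diffuse-only directional light, computed per vertex.
void main()
{
    vec3 normal = normalize(gl_NormalMatrix * gl_Normal);
    // For directional lights, position.xyz already points from the surface
    // toward the light, in eye space.
    vec3 lightDir = normalize(vec3(gl_LightSource[0].position));

    float NdotL = max(dot(normal, lightDir), 0.0);
    gl_FrontColor = NdotL * gl_FrontMaterial.diffuse
                          * gl_LightSource[0].diffuse;

    gl_Position = ftransform();
}
```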





    Directional Lights II

    - The Phong model says that the specular component is proportional to the cosine of the angle between the light reflection vector and the eye vector.

    L is the vector from the light to the vertex being shaded. N is the normal vector, and Eye is the vector from the vertex to the eye, or camera. R is the vector L mirror reflected on the surface. The specular component is proportional to the cosine of alpha.

    - If the eye vector coincides with the reflection vector then we get the maximum specular intensity. As the eye vector diverges from the reflection vector the specular intensity decays. The rate of decay is controlled by a shininess factor.

    - Blinn-Phong model: Blinn proposed a simpler and faster model, known as the Blinn-Phong model, that is based on the half-vector. The half-vector is a vector with a direction half-way between the eye vector and the light vector.


    if (NdotL > 0.0) {
            NdotHV = max( dot(normal, gl_LightSource[0].halfVector.xyz), 0.0);
            specular = gl_FrontMaterial.specular * gl_LightSource[0].specular * pow(NdotHV,gl_FrontMaterial.shininess * 1000);       
    }

    Point Light Per Pixel

    Spot Light Per Pixel

    Simple Texture

    - In order to perform texturing operations in GLSL we need to have access to the texture coordinates per vertex. GLSL provides some attribute variables, one for each texture unit:
    attribute vec4 gl_MultiTexCoord{0, 1, 2, ..., 7};
     
    - GLSL also provides access to the texture matrices for each texture unit in a uniform array.
    uniform mat4 gl_TextureMatrix[gl_MaxTextureCoords];

    - The vertex shader has access to the attributes defined above to get the texture coordinates specified in the OpenGL application. Then it must compute the texture coordinate for the vertex and store it in the predefined varying variable gl_TexCoord[i], where i indicates the texture unit.
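
    A minimal texturing pair then looks like this (a sketch; texture unit 0 is assumed, and the sampler uniform name `tex` is chosen here for illustration):

```glsl
// Vertex shader: transform and forward the texture coordinate for unit 0.
void main()
{
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    gl_Position = ftransform();
}
```

```glsl
// Fragment shader: sample the texture bound to the unit that "tex" refers to.
uniform sampler2D tex;

void main()
{
    gl_FragColor = texture2D(tex, gl_TexCoord[0].st);
}
```

    The application still has to tell the sampler which texture unit to use, e.g. glUniform1i(glGetUniformLocation(p, "tex"), 0) for unit 0.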



















