**PV112: Computer Graphics API** Course Notes

Introduction
===============================================================================

This course expects knowledge at the level of the PB071 course. In case you
have not yet attended that course, or you are attending it alongside this one,
we provide a short introduction to the tools you need to know.

## Structure of the Course

We are currently holding the course live in the D2 and B311 lecture rooms.
However, we will closely follow the development of the COVID-19 pandemic and
the preventive measures at Masaryk University to ensure your safety.

- We will make the recordings of the lectures public in the study materials,
  but please note that the quality may not be optimal.
- We will make written transcripts of the lectures available in the study
  materials.
- Every week there will be a seminar group which is not compulsory (except the
  first one), but you are encouraged to join. Note that the lecturers are
  *NOT* obliged to answer questions outside these seminar groups, so use this
  time wisely.
- **You are also required to deliver a working project at the end of the
  semester (the assignment is already available in the study materials).**
  Again, the lecturers will be happy to help you with the assignment if they
  see that you are attending the seminars.

## CMake

CMake is used as the build tool for the project on both Windows and Linux. You
should get acquainted with the basics at the
[CMake Tutorial](https://cmake.org/cmake-tutorial/) and the
[Modern CMake Tutorial](https://cliutils.gitlab.io/modern-cmake/). You can use
CMake from the command line directly or use your favorite IDE; in this course
Visual Studio Code is used.

For the PV112 lessons the project is structured in the following way:

- We use VCPKG for downloading and compiling the required libraries. You can
  use the same method to include libraries of your own choice (assuming VCPKG
  supports them) for the final project.
- To link a library with the target executables, you need to update the
  `vcpkg.json` file and use the CMake command
  [target_link_libraries](https://cmake.org/cmake/help/latest/command/target_link_libraries.html).
- To simplify the lectures, we created our own framework. You can find all its
  sources in the `framework` folder.
- **Finally, you can find the individual lectures under `courses/PV112`. Each
  produces one executable that you can run. You will have to add any new
  lectures provided to you during the course here. A new one will be published
  every week alongside a solution to the previous one.** The lectures are
  registered by the `visitlab_add_subdirectory(FOLDER_NAME)` command in the
  course `CMakeLists.txt`. Note that the commands are already there,
  automatically ignoring missing folders.

## Visual Studio Code

We will be using Visual Studio Code as the default IDE. Please use the
following tutorial to set up your computer:
[https://visitlab.pages.fi.muni.cz/tutorials/vs-code/index.html](https://visitlab.pages.fi.muni.cz/tutorials/vs-code/index.html).
Make sure that you install all extensions required for PV112.

## Window Initialization

- OpenGL SuperBible: Chapter 2. Our First OpenGL Program
- OpenGL Programming Guide: Chapter 1. Introduction to OpenGL
- Learn OpenGL: [Creating a Window](https://learnopengl.com/Getting-started/Creating-a-window),
  [Hello Window](https://learnopengl.com/Getting-started/Hello-Window)

Before being able to work with OpenGL, we must do two steps first:

1. Dynamically load the OpenGL functions. This is necessary because the OpenGL
   implementation is usually provided by a separately installed driver and not
   built into the OS. Since the loading is OS-specific and tedious work, we
   will use the [glad](https://github.com/Dav1dde/glad) library instead.
2. Create a window we can render to. As with step 1, this is OS-specific,
   therefore the cross-platform [glfw](https://www.glfw.org/) library will be
   used in these lessons.

!!! WARNING
    You may find that GLFW supports Apple systems.
    However, we limit ourselves to modern OpenGL 4.5 (released in 2014) and
    up. Apple hasn't updated their implementation since 2011 and recently
    officially deprecated OpenGL in its operating systems. This limitation
    cannot be bypassed using a virtual machine. We advise you to install
    either Windows or Linux alongside/instead of macOS, or use the school
    computers.

### Task: Set the Viewport

- [glViewport](http://docs.gl/gl4/glViewport)

Your first task consists of setting up a viewport for OpenGL to draw to. A
`viewport` is defined by OpenGL as the area of a window where it should output
the result of its computation. For our purposes it is desirable to set the
origin of the area to the starting coordinates `[0; 0]` and its size to the
window's `width` and `height`, effectively covering the entire window. Set the
viewport at the beginning of the `render` function. Use the `glViewport`
function for this task.

### Task: Clear the Window

- [glClearColor](http://docs.gl/gl4/glClearColor)
- [glClear](http://docs.gl/gl4/glClear)

Now that the window and the viewport are set up, your task is to clear the
window using any color. This is done every frame so that the user doesn't see
overlapping results of the computation. First choose your color using
`glClearColor`. Then clear the screen with it using the `glClear` function
inside the `render` function.

### Optional Exercise

Change the color based on some kind of input or time (for example using the
sine function).

- To retrieve the current time use [glfwGetTime](https://www.glfw.org/docs/3.0/group__time.html).
- Learn how to use input at the [GLFW Input Guide](https://www.glfw.org/docs/latest/input_guide.html).

### Optional homework

If you want a deeper understanding of the inner workings of the libraries
used, look at the documentation of the APIs for your operating system and
create a template reliant solely on them. For Windows that means including
*windows.h*, then calling *wglGetProcAddress* to load the OpenGL functions and
*CreateWindow* to (no surprise) create a window.
For Linux there are the *glXGetProcAddress* and *Xlib* functions.

Shader Compilation
===============================================================================

- OpenGL SuperBible: Chapter 2. Our First OpenGL Program
- OpenGL Programming Guide: Chapter 2. Shader Fundamentals
- Learn OpenGL: [Hello Triangle](https://learnopengl.com/Getting-started/Hello-Triangle)
Before being able to draw anything with OpenGL, we need to compile the shaders
which instruct the GPU how to draw. In this lesson there are two shaders,
`main.vert` and `main.frag`, prepared for you to compile, link, and use.

Task: Compile Shaders
-------------------------------------------------------------------------------

- [glCreateShader](http://docs.gl/gl4/glCreateShader)
- [glShaderSource](http://docs.gl/gl4/glShaderSource)
- [glCompileShader](http://docs.gl/gl4/glCompileShader)

First compile both shaders. You can create your own function to avoid code
duplication.

Task: Create Program
-------------------------------------------------------------------------------

- [glCreateProgram](http://docs.gl/gl4/glCreateProgram)
- [glAttachShader](http://docs.gl/gl4/glAttachShader)
- [glLinkProgram](http://docs.gl/gl4/glLinkProgram)
- [glDetachShader](http://docs.gl/gl4/glDetachShader)
- [glDeleteShader](http://docs.gl/gl4/glDeleteShader)

After successfully compiling both shaders, link them together to form a whole
program to run on the GPU.

Task: Use the Program
-------------------------------------------------------------------------------

- [glUseProgram](http://docs.gl/gl4/glUseProgram)

Lastly, call `glUseProgram` in the `render` function before the call to
`glDrawArrays` to see the result of the shaders. Ignore the rest of the calls
for now; they will be explained in further lessons.

Task: Create New Shaders
-------------------------------------------------------------------------------

Create a new set of shaders and a new program that will render the geometry in
a different color. Use the `glViewport` function from the previous lesson to
render the given triangle on the left side of the window using the first
program and on the right side using the second program.

Task: Switch Shaders on Input
-------------------------------------------------------------------------------

Switch the order of the programs used for drawing on pressing a keyboard key
of your choice.
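Before `glShaderSource` can be called, the shader source has to be in memory
as a single string. A minimal sketch of a file-loading helper you might write
for this (the name `load_text_file` is our own, not part of the course
framework):

```cpp
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>

// Read a whole text file (e.g. main.vert) into a string so that it can be
// passed to glShaderSource as one C string.
std::string load_text_file(const std::string& path) {
    std::ifstream file(path);
    if (!file) {
        throw std::runtime_error("Cannot open file: " + path);
    }
    std::ostringstream content;
    content << file.rdbuf(); // copy the entire stream into the buffer
    return content.str();
}
```

The result can then be handed to OpenGL, for example:
`const std::string code = load_text_file("main.vert"); const char* source = code.c_str(); glShaderSource(shader, 1, &source, nullptr);`.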
Drawing, Buffers & Vertex Attributes
===============================================================================

- OpenGL SuperBible:
  - Chapter 2. Our First OpenGL Program
  - Chapter 5. Data - Buffers
  - Chapter 7. Vertex Processing and Drawing Commands
- OpenGL Programming Guide:
  - Chapter 3. Drawing with OpenGL - Data in OpenGL Buffers, Vertex Specification
- Learn OpenGL: [Hello Triangle](https://learnopengl.com/Getting-started/Hello-Triangle)

In order to draw, OpenGL needs three things:

1. A bound program describing how to draw the supplied data.
2. A Vertex Array Object describing how to fetch the data from buffers on the GPU.
3. A draw command describing which part of the data should be taken in what order.

The first one was covered in the previous lesson. Now we will focus on the
last one. OpenGL has many draw commands (as of OpenGL 4.5 there are 25), but
we will focus on the simplest one right now, `glDrawArrays`. The `Arrays` set
of draw functions takes the data in consecutive order and assigns each vertex
a number `gl_VertexID` which you can use in the shader. In our first shader we
supply no external data and instead fetch them from a constant array based on
the `gl_VertexID` number. You can see this in the illustration below. You can
offset the number using the `first` parameter. You can also try supplying
different primitives besides triangles; however, we will stay with triangles
in these lessons, as this is the native geometry supported by today's GPUs and
the most widely used.
******************************************************************************* * * * * * ^ Y+ * * | * * | * * gl_VertexID = 2 | * * [0.0; 0.5] o * * /|\ * * / | \ * * / | \ X+ * * ----------+---+---+----------> * * / | \ * * / | \ * * o------+------o * * [-0.5; -0.5] | [0.5; -0.5] * * gl_VertexID = 0 | gl_VertexID = 1 * * | * * | * ******************************************************************************* Task: Draw a Square ------------------------------------------------------------------------------- - [glDrawArrays](http://docs.gl/gl4/glDrawArrays) Modify main.vert such that there are 6 vertices ******************************* for 2 triangles forming square. Modify also * 2=5 4 * `glDrawArrays` to draw both of them. * o o-----o * (Alternatively use two calls!) * |\ \ | * * | \ \ | * * | \ \ | * * | \ \ | * * | \ \| * * o-----o o * * 0 1=3 * ******************************* Task: Draw a Square Using Triangle Strip ------------------------------------------------------------------------------- - [Triangle Strip](https://en.wikipedia.org/wiki/Triangle_strip) Modify main.vert such that there are 4 vertices ******************************* for 2 triangles forming square. Use * 2 3 * GL_TRIANGLE_STRIP mode. * o-----o * * |\ | * * | \ | * * | \ | * * | \ | * * | \| * * o-----o * * 0 1 * ******************************* Task: Copy Vertex Data to GPU ------------------------------------------------------------------------------- - [glCreateBuffers](http://docs.gl/gl4/glCreateBuffers) - [glNamedBufferStorage](http://docs.gl/gl4/glBufferStorage) - ([glNamedBufferData](http://docs.gl/gl4/glBufferData)) - [glDeleteVertexArrays](https://docs.gl/gl4/glDeleteVertexArrays) - [glDeleteBuffers](https://docs.gl/gl4/glDeleteBuffers) Acquaint yourself with the data for this lesson in `data.hpp` file. For every buffer that is located there allocate one corresponding buffer on GPU with the same size and copy the data from CPU to GPU. 
Remember to deallocate them in the destructor of the `Application`. You can
create the data for each object in their respective tasks as needed, or create
them all upfront; that is up to you.

!!! note
    You can use the sizeof() operator in C++ to retrieve the size. Recall that
    calling sizeof on an array gives you the size of the entire array only if
    it is static (which it is in this case), but gives you the size of a
    pointer in the case of a dynamic array. A cleaner and more general
    solution is to use the length of the array and the sizeof the type it
    contains, for example: square_vertices_length * sizeof(uint32_t).

With the data on the GPU, continue with tasks 3.4-3.7. All of them are
exercises in the creation of Vertex Array Objects. At the end of each one,
draw the object as described in the previous lessons. You can skip to Task 3.8
after the first one if you want to see them colored, and then go back to Task
3.5.

Task: Create Vertex Array Object #1 (Diamond)
-------------------------------------------------------------------------------

- [glCreateVertexArrays](http://docs.gl/gl4/glCreateVertexArrays)
- [glVertexArrayVertexBuffer](http://docs.gl/gl4/glBindVertexBuffer)
- [glEnableVertexArrayAttrib](http://docs.gl/gl4/glEnableVertexAttribArray)
- [glVertexArrayAttribFormat](http://docs.gl/gl4/glVertexAttribFormat)
- [glVertexArrayAttribBinding](http://docs.gl/gl4/glVertexAttribBinding)

In this lesson you will practice sending data to the GPU and giving the GPU
their description so that it knows how to fetch them and interpret them as
input to the shaders. Each input/output is described with the following syntax
in the shader:

```GLSL
layout(location = attribute_index) in/out type name;
```

where `attribute_index` is a unique number chosen by you, the `in`/`out`
keyword says whether it is an input or an output (we will learn about outputs
later in the lesson), and the `name` is a unique identifier, chosen by you as
well. The number serves as an identification for the CPU and driver side.
The `name` is used only in the main function of the respective shader where it
is declared. We will have two inputs in this lesson, position and color, at
locations 0 and 1 respectively.

```GLSL
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
```

We will pass the position inside the shader to the `gl_Position` built-in
variable and ignore the color for now.

The first object has both attributes saved in separate buffers. Each buffer
corresponds to one binding, and each of those bindings corresponds to one
location. This is the most trivial and intuitive example. However, it doesn't
show why there is a binding point.

1. Create a VAO for this object.
2. Associate each buffer with a binding number.
3. For each attribute:
    1. Enable it.
    2. Describe its format.
    3. Associate it with a binding.

****************************************************
*               +---+---+---+---+---+---+---+      *
*   Buffer 0:   | X | Y | Z | X | Y | Z |...|      *
*               +---+---+---+---+---+---+---+      *
*                                                  *
*               +---+---+---+---+---+---+---+      *
*   Buffer 1:   | R | G | B | R | G | B |...|      *
*               +---+---+---+---+---+---+---+      *
****************************************************

***********************************************************************
* +----------+      +-----------+      +----------------------+       *
* | Buffer 0 +----->| Binding 0 +----->| Location 0: Position |       *
* +----------+      +-----------+      +----------------------+       *
*                                                                     *
* +----------+      +-----------+      +----------------------+       *
* | Buffer 1 +----->| Binding 1 +----->| Location 1: Color    |       *
* +----------+      +-----------+      +----------------------+       *
***********************************************************************

Task: Create Vertex Array Object #2 (Square)
-------------------------------------------------------------------------------

The second object has both attributes saved in one buffer, **deinterleaved**.
That means you will have to provide an offset into the buffer for the second
binding. From the bindings onward, the description remains the same.

*******************************************************************************
*               +---+---+---+---+---+---+---+---+---+---+---+---+---+---+    *
*   Buffer 0:   | X | Y | Z | X | Y | Z |...| R | G | B | R | G | B |...|    *
*               +---+---+---+---+---+---+---+---+---+---+---+---+---+---+    *
*******************************************************************************

***********************************************************************
* +----------+       +-----------+      +----------------------+      *
* | Buffer 0 +---+-->| Binding 0 +----->| Location 0: Position |      *
* +----------+   |   +-----------+      +----------------------+      *
*                |                                                    *
*                |   +-----------+      +----------------------+      *
*                +-->| Binding 1 +----->| Location 1: Color    |      *
*                    +-----------+      +----------------------+      *
***********************************************************************

Task: Create Vertex Array Object #3 (Triangle)
-------------------------------------------------------------------------------

The third object has both attributes in one buffer, **interleaved**. In
addition, the color is saved as 8-bit unsigned bytes. Take care of the fact
that we want the color to be normalized to the range [0.0, 1.0] before being
passed to the shader. We defined the interleaved attributes as an array of
structs for easier consumption.

```C
struct Vertex {
    float position[3];
    uint8_t color[3];
};
```

!!! note
    You can use the offsetof(struct, member) macro to retrieve the (relative)
    offset of an attribute.
****************************************************************
*               +---+---+---+---+---+---+---+---+---+---+      *
*               |        Vertex 0       |  Vertex 1 ... |      *
*   Buffer 0:   +---+---+---+---+---+---+---+---+---+---+      *
*               | X | Y | Z | R | G | B | X | Y | Z |...|      *
*               +---+---+---+---+---+---+---+---+---+---+      *
****************************************************************

***********************************************************************
* +----------+      +-----------+       +----------------------+      *
* | Buffer 0 +----->| Binding 0 +---+-->| Location 0: Position |      *
* +----------+      +-----------+   |   +----------------------+      *
*                                   |                                 *
*                                   |   +----------------------+      *
*                                   +-->| Location 1: Color    |      *
*                                       +----------------------+      *
***********************************************************************

Task: Create Vertex Array Object #4 (Indexed Diamond)
-------------------------------------------------------------------------------

- [glVertexArrayElementBuffer](http://docs.gl/gl4/glVertexArrayElementBuffer)
- [glDrawElements](http://docs.gl/gl4/glDrawElements)

This object/task is exactly the same as 3.2, except now the data are indexed.
There is one additional buffer for the indices that you have to associate with
the VAO. In order to draw this object you have to use `glDrawElements` instead
of `glDrawArrays`.

Task: Interpolate Color
-------------------------------------------------------------------------------

In order to see the color, it is necessary to pass it from the vertex to the
fragment shader. This can be done by adding an output attribute to the vertex
shader that we will write the input color to.

**Add an output to your vertex shader**:

```
layout(location = 0) out vec3 vs_color;
```

and write the input to it. Notice the different name and the same location as
the input. The name has to be different so that it doesn't collide with the
input. However, the number can be the same, as there is no place where it
would create ambiguity.
The output location serves only for matching between the vertex and fragment
shaders. **Add the output vertex color to the fragment shader as an input**.
Notice that the name can be different in the fragment shader, but **the
location and type must be the same to match**.

```GLSL
layout(location = 0) in vec3 fs_color;
```

See the illustration of the flow below.

************************************************************************************
*                                                                                  *
*    Vertex Shader                                                                 *
*   +----------------------------------+   +-------------------------------------+ *
*-->| (location = 1) in vec3 color     +-->| (location = 0) out vec3 vs_color    | *
*   +----------------------------------+   +------------------+------------------+ *
*                                                             |                    *
*    Fragment Shader                                          v                    *
*   +----------------------------------+   +-------------------------------------+ *
*-->| (location = 0) in vec3 fs_color  +-->| (location = 0) out vec4 final_color +-->
*   +----------------------------------+   +-------------------------------------+ *
*                                                                                  *
************************************************************************************

At the end the result should look like this:

![Solution to 04](./images/04_final.png)

Task: Look at the result in RenderDoc
-------------------------------------------------------------------------------

- [RenderDoc Getting Started](https://renderdoc.org/docs/getting_started/quick_start.html)

Run your finished application in RenderDoc and capture a single frame. Spend
some time tinkering with it and see if you can collect all the information you
gave OpenGL through the glVertexArray* commands. RenderDoc is widely used in
the industry, and it is therefore beneficial to have a good handle on it.

![Solution 03 in RenderDoc](./images/renderdoc.png)

Optional Homework
-------------------------------------------------------------------------------

You can analyze a frame of your favorite game.
An example of such an analysis can be seen at
[Deus Ex: Human Revolution - Graphics Study](http://www.adriancourreges.com/blog/2015/03/10/deus-ex-human-revolution-graphics-study/).

Fixed Function Pipeline
===============================================================================

- OpenGL SuperBible: Chapter 9. Fragment Processing and the Framebuffer
- OpenGL Programming Guide: Chapter 8. Color, Pixels, and Framebuffers

!!! note
    At the beginning of OpenGL, shaders didn't exist and drawing was done
    using draw commands that were executed immediately. For example:
    ```
    glBegin()
    glVertex(...)
    glVertex(...)
    glVertex(...)
    glEnd()
    ```
    would draw a triangle. This set of functions is called the fixed
    functions. However, do not confuse it with what you are about to learn in
    this lesson, which deals with the *configurable* but *non-programmable*
    parts of the GPU's pipeline.

So far we have successfully pushed data to the GPU and rendered them through
programmable shaders. In this lesson we take a look at the parts of the
pipeline which are only configurable through state-changing functions.

Task: Polygon Rasterisation Mode
-------------------------------------------------------------------------------

- Functions: [glPolygonMode](http://docs.gl/gl4/glPolygonMode)

The GPU's rasterizer does not necessarily need to fill the specified geometry.
You can let the GPU render a geometry as a set of lines or only as points. Let
the user of your application change these modes by pressing keys on the
keyboard: pressing F will change the rendering mode to fill, and pressing L
will change the rendering mode to lines.

![Line rendering](./images/05_lines.png)

Task: Depth
-------------------------------------------------------------------------------

- [glEnable](http://docs.gl/gl4/glEnable)
- Learn OpenGL: [Depth testing](https://learnopengl.com/Advanced-OpenGL/Depth-testing)

The color buffer is not the only one OpenGL uses during rendering. You should
know about the depth buffer by now from the lectures.
Enable the use of the depth buffer and clear it along with the color buffer
before rendering.

![Diamond in the upper right corner is properly rendered with the depth buffer](./images/05_depth.png)

Task: Multisampling
-------------------------------------------------------------------------------

- Learn OpenGL: [Anti Aliasing](https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing)

You may have noticed the jagged edges on the rendered triangles. From the
course PB009 you may remember that this artifact is called aliasing. Enable
the simplest anti-aliasing method, multisampling, by creating a multisampled
main framebuffer in `main.cpp` using the
`manager.set_multisampling_per_pixel(count)` function. Note that this custom
method internally changes the number of samples by calling
`glfwWindowHint(GLFW_SAMPLES, samples_per_pixel);` when the window is created.
Use `glEnable` again, now to enable multisampling, in `Application.cpp`.

!!! WARNING
    If your code seems to be fine but you see no visual difference between
    different sample counts, your graphics driver might be the reason, as it
    may automatically apply some anti-aliasing. To fix this behaviour in the
    case of an NVIDIA GPU, open the NVIDIA Control Panel, go to "Manage 3D
    Settings" and verify that "Antialiasing - Mode" is set to
    "Application-controlled".

![Left: without multisampling, Right: with 4x multisampling](./images/05_multisampling.png)

Task: Color Blending
-------------------------------------------------------------------------------

- Functions: [glBlendFunc](http://docs.gl/gl4/glBlendFunc), [glBlendEquation](http://docs.gl/gl4/glBlendEquation)
- Learn OpenGL: [Blending](https://learnopengl.com/Advanced-OpenGL/Blending)

!!! WARNING
    Disable depth testing before this exercise. Depth and blending do not play
    well together.

When OpenGL computes a fragment, it has to decide what to do with the previous
one at the same position. This process is called blending and can be
configured using the functions `glBlendFunc` and `glBlendEquation`.
The blending equation has the form:

$ C_{result} = F_{source} * C_{source} \otimes F_{destination} * C_{destination} $

where source is the incoming, recently computed fragment and destination the
previous fragment, $C$ is the color of a fragment, $F$ a blending factor, and
$\otimes$ the blending equation. Typically, blending is used for transparency
in the following setup:

```
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
```

$ C_{result} = \alpha_{source} * C_{source} + (1.0 - \alpha_{source}) * C_{destination} $

where $\alpha_{source}$ is the alpha component of the source color. Enable
blending and try out different equations.

![Transparency using blending function](./images/05_blending_transparent.png)

Task: Face Culling
-------------------------------------------------------------------------------

- Functions: [glCullFace](http://docs.gl/gl4/glCullFace)
- Learn OpenGL: [Face Culling](https://learnopengl.com/Advanced-OpenGL/Face-culling)

Normally, triangles are drawn from both sides. To avoid the costly
rasterisation of triangles that may not be visible, a process called culling
is applied. Each triangle is classified as either front-facing or back-facing
based on the winding order of its points. Culling may be configured in OpenGL
to ignore either back or front faces. Typically, triangles given in
counter-clockwise order are classified as front faces, and back faces are
culled. Enable culling and set back faces to be culled.

![Culled diamond in the left corner.](./images/05_culling.png)

Uniforms
===============================================================================

- [glProgramUniform*](http://docs.gl/gl4/glProgramUniform)

In order to keep things simple, we will use only the first arrangement of data
from task 4.2 for the remaining lessons. We also provide a simple Geometry
class inside the framework which provides the ability to load objects from
`.obj` files using the [tinyobjloader](https://github.com/syoyo/tinyobjloader)
library. The class also has a few specialized inherited classes for some basic
objects such as a cube, sphere, and teapot.
```
class Geometry {
    ...
    void draw(...);
    static Geometry from_file(std::filesystem::path file_path);
private:
    GLuint positions_vbo;
    ...
    GLuint vao;
};

class Sphere : public Geometry {};
```

In the previous lessons you learned how to provide geometry data to a shader
program, a case where each vertex receives a unique value. In this lesson, you
will provide uniform data and use them for projection, lighting, and
materials. The geometry data are already prepared for you, so you will find in
the shader:

```
layout(...) in vec3 position;
layout(...) in vec3 normal;
layout(...) in vec2 texture_coordinate;
```

All of them are passed through to the fragment shader. The normal vector is
multiplied by the normal matrix as described in the lecture.

Task: Apply Model + View + Projection Transformations
-------------------------------------------------------------------------------

- OpenGL SuperBible:
- OpenGL Programming Guide: Chapter 5. Viewing Transformations
- Learn OpenGL: [Coordinate Systems](https://learnopengl.com/Getting-started/Coordinate-Systems)
- [GLM Manual](https://github.com/g-truc/glm/blob/0.9.9.2/doc/manual.pdf)

Add the MVP matrices as uniforms to main.vert. Create the MVP matrices on the
CPU side and upload them to the shader program. Use a different model matrix
for each object.

![Scene after using MVP matrices.](./images/uniforms_mvp_final.png)

Task: Apply Lighting
-------------------------------------------------------------------------------

- [Phong Reflection Model](https://en.wikipedia.org/wiki/Phong_reflection_model)
- [Blinn-Phong Shading Model](https://en.wikipedia.org/wiki/Blinn–Phong_shading_model)
- Learn OpenGL: [Lighting](https://learnopengl.com/Lighting/Basic-Lighting), [Materials](https://learnopengl.com/Lighting/Materials)

Add light and material properties as uniforms to main.vert and main.frag.
Differentiate between a directional and a point light based on the .w value of
its position. Map the change between the directional and point light to a key
on the keyboard.
Calculate the Blinn-Phong shading. Use any attenuation for the point lights; a
simple [inverse square law](https://en.wikipedia.org/wiki/Inverse-square_law)
is enough.

![Scene with added directional light.](./images/uniforms_light_final.png)

Textures
===============================================================================

Task: Load and Create Texture
-------------------------------------------------------------------------------

- [glCreateTextures](http://docs.gl/gl4/glCreateTextures)
- [glDeleteTextures](http://docs.gl/gl4/glDeleteTextures)
- [glTextureStorage2D](http://docs.gl/gl4/glTexStorage2D)
- [glTextureSubImage2D](http://docs.gl/gl4/glTexSubImage2D)
- [glGenerateTextureMipmap](http://docs.gl/gl4/glGenerateMipmap)
- [glTextureParameteri](http://docs.gl/gl4/glTexParameter)

To load pixel data from a file, you can use a function from the
[stbi](https://github.com/nothings/stb/blob/master/stb_image.h) library in the
following manner (given that filename is a `std::filesystem::path`):

```C
int width, height, channels;
unsigned char *data = stbi_load(filename.generic_string().data(), &width, &height, &channels, 4);
```

The last argument `4` will force the library to always load the data in the
RGBA8 format. The `width` and `height` variables will contain the size of the
loaded image; you will use those in `glTextureStorage2D` and
`glTextureSubImage2D`.

With the pixel data loaded from the disk, start by initializing the texture
using `glCreateTextures` and `glTextureStorage2D`.

```C
GLuint wood_texture;
glCreateTextures(GL_TEXTURE_2D, 1, &wood_texture);
glTextureStorage2D(wood_texture,
                   std::log2(width), // NUMBER OF MIPMAP LEVELS
                   GL_RGBA8,         // SIZED INTERNAL FORMAT
                   width, height);
```

`glTextureStorage2D` allocates the memory needed to store a texture. For that
we need to specify its resolution (`width` and `height`) and the size of each
pixel, specified as the number of channels and their type. Look at the full
table of all possible pixel formats at http://docs.gl/gl4/glTexStorage2D.
The `sized internal format` has the following syntax:
`GL_[components][size][type]`. For our use-case `GL_RGBA8` will suffice, which
corresponds to 4 channels with the pixel data stored as unsigned normalized
integers (denoted by the absence of a type suffix). We want the maximum number
of [mipmap](https://en.wikipedia.org/wiki/Mipmap) levels for proper
interpolation. An easy way to calculate the maximum number of mipmap levels is
to take the log2 of the width of the image.

To upload the data from the CPU to the GPU, use `glTextureSubImage2D`. The
`format` and `type` parameters now correspond to the data on the CPU side,
which are (because of the stbi library) `GL_RGBA` and `GL_UNSIGNED_BYTE`.
`level` will be *0*, as we want to fill the base mip. Finally, we can call
`glGenerateTextureMipmap()` to let the driver calculate the mipmap levels from
the base mip.

You can adjust several attributes by calling `glTextureParameter(i/f/iv/fv/...)`.
Some of the most useful are:

- `GL_TEXTURE_MIN_FILTER` and `GL_TEXTURE_MAG_FILTER`, which set the
  interpolation method. Typically `GL_LINEAR_MIPMAP_LINEAR` and `GL_LINEAR`,
  respectively, for the best quality.
- Wrapping: `GL_TEXTURE_WRAP_S/T/R`, which set the wrapping behaviour in x, y,
  and z.

Task: Apply Texture
-------------------------------------------------------------------------------

- [glBindTextureUnit](http://docs.gl/gl4/glBindTextureUnit)
- [GLSL texture()](https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/texture.xhtml)

Add a texture binding to the fragment shader and use `texture_coordinates` to
sample from it using the function `texture(sampler, tex_coord)`. Make the
fetched sample the final output color.

```
layout(binding = 0) uniform sampler2D diffuse_texture;

void main() {
    vec4 texture_color = texture(diffuse_texture, texture_coordinates);
}
```

!!! WARNING
    Note that samplers use **bindings** and NOT locations.

Bind the created textures to the sampler binding point you created in the
fragment shader using `glBindTextureUnit`.
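On the mipmap level count used in `glTextureStorage2D` above: for a square
power-of-two image, `std::log2(width)` is close, but note that the complete
mipmap chain down to a 1x1 level (also for non-square sizes) has
`1 + floor(log2(max(width, height)))` levels. A small sketch of that
computation (the helper name `full_mipmap_levels` is ours):

```cpp
#include <algorithm>
#include <cstdint>

// Number of levels in a full mipmap chain down to 1x1:
// 1 + floor(log2(max(width, height))).
int full_mipmap_levels(std::uint32_t width, std::uint32_t height) {
    std::uint32_t size = std::max(width, height);
    int levels = 1; // the base level (mip 0) always exists
    while (size > 1) {
        size /= 2; // each level halves the larger dimension
        ++levels;
    }
    return levels;
}
```

For example, a 1024x1024 image has 11 levels (1024, 512, ..., 2, 1), so
passing that count to `glTextureStorage2D` allocates the whole chain.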
Task: Turn Material Properties to Textures
-------------------------------------------------------------------------------

Add a diffuse texture to the code from the previous lesson. Multiply the diffuse texture by the diffuse value to create a textured, lit object.

![Objects with diffuse material multiplied by texture value.](./images/textures.png)

Framebuffers
===============================================================================

- OpenGL SuperBible: Chapter 9. Off-Screen Rendering
- OpenGL Programming Guide: Chapter 4. Framebuffers
- Learn OpenGL: [Framebuffers](https://learnopengl.com/Advanced-OpenGL/Framebuffers)

Task: Create a framebuffer
-------------------------------------------------------------------------------

- [glCreateFramebuffers](http://docs.gl/gl4/glCreateFramebuffers)
- [glDeleteFramebuffers](http://docs.gl/gl4/glDeleteFramebuffers)
- [glNamedFramebufferDrawBuffers](http://docs.gl/gl4/glDrawBuffers)
- [glNamedFramebufferTexture](https://docs.gl/gl4/glFramebufferTexture)

Create your own framebuffer along with two textures: one for the render output and one for the depth. Bind both textures to the framebuffer.

Task: Pass the output of the main shader through the post-process shader
-------------------------------------------------------------------------------

- [glClearNamedFramebufferfv](https://docs.gl/gl4/glClearBuffer)

A shader that simply renders a texture over the entire screen is already prepared for you. Give the shader your texture as an input and run it on the default framebuffer.

Task: Apply Post-Process Functions
-------------------------------------------------------------------------------

- [Conversion to Grayscale](https://en.wikipedia.org/wiki/Grayscale#Luma_coding_in_video_systems)
- https://en.wikipedia.org/wiki/Kernel_(image_processing)
- [texelFetch](https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/texelFetch.xhtml)

Now you can apply any post-process effects in the `postprocess` shader to your liking.
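A common first effect is grayscale conversion: a weighted sum (luma) of the RGB channels, as described in the grayscale link above. A CPU-side sketch of what the shader computes, using the Rec. 709 weights (in GLSL this would be `dot(color.rgb, vec3(0.2126, 0.7152, 0.0722))`):

```cpp
#include <cassert>
#include <cmath>

// Rec. 709 luma: a perceptually weighted sum of the RGB channels.
// Green contributes the most because the eye is most sensitive to it.
float luma(float r, float g, float b) {
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}
```

The weights sum to 1, so white stays white and black stays black; writing `vec4(vec3(luma), 1.0)` in the shader produces the grayscale image.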
Try making the screen grayscale as a start. Then try to apply different kernel functions such as blur, sharpen, etc.

![Scene with edge detection applied.](./images/postprocess.png)

Interface Layouts, UBO, SSBO
===============================================================================

- OpenGL Programming Guide: Chapter 2. Interface Blocks

Task: Bind UBO buffers
-------------------------------------------------------------------------------

- [glBindBufferBase](http://docs.gl/gl4/glBindBufferBase)

Uniform inputs can be logically grouped into UBOs in the shaders. For example, all material properties can be grouped into a single block:

```
layout(binding = 0) uniform Material {
    vec4 ambient;
    vec4 diffuse;
    ...
} material;
```

These UBOs can be mirrored on the C++ side:

```
struct Material {
    glm::vec4 ambient;
    glm::vec4 diffuse;
    ...
};
```

- Create buffers for Camera, Light, and Object (glCreateBuffers, [glNamedBufferStorage](http://docs.gl/gl4/glBufferStorage) with `GL_DYNAMIC_STORAGE_BIT`)
- Bind them all using glBindBufferBase (as `GL_UNIFORM_BUFFER`)
- Draw any object with them

Task: Shader Storage Buffer Objects (SSBO) - Many Lights
-------------------------------------------------------------------------------

Replace the single light in the shader with an SSBO containing a dynamic array:

```
struct Light {
    vec4 position;
    vec4 ambient_color;
    vec4 diffuse_color;
    vec4 specular_color;
};

layout(binding = 1, std430) buffer Lights {
    Light lights[];
};
```

You can get the length of the array by calling the `.length()` method on it.

- Bind the lights buffer before drawing
- Iterate over every light in the shader and accumulate the lighting

Task: Instanced lights
-------------------------------------------------------------------------------

- [glDrawElementsInstanced](http://docs.gl/gl4/glDrawElementsInstanced)

Use the `draw_light_program` to visualize where the lights are.
- Bind the lights buffer as before
- Call `glDrawElementsInstanced` with the size of the vector and an arbitrary object. Use the members and methods of the `Mesh` class to bind and fill the parameters of the draw call (`mesh.bind_vao()`, `mesh.mode`, `mesh.draw_elements_count`).

Task: Instanced objects
-------------------------------------------------------------------------------

Now draw many objects using the instanced draw call. Modify the vertex shader to retrieve the object from an SSBO using its instance ID.

```
struct Object {
    mat4 model_matrix;
    vec4 ambient_color;
    vec4 diffuse_color;
    vec4 specular_color;
};

layout(binding = 2, std430) buffer Objects {
    Object objects[];
};

...

Object object = objects[gl_InstanceID];
```

To use the object in the fragment shader as well, you will need to pass `gl_InstanceID` along as an attribute, because the variable is not available in the fragment shader. Since the attribute needs to stay the same for every vertex, without any interpolation, the `flat` modifier must be used.

```
// vertex
layout(location = 2) out flat int fs_instance_id;
...
fs_instance_id = gl_InstanceID;

// fragment
layout(location = 2) in flat int fs_instance_id;
```
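When filling the `Objects` SSBO from C++, the C++ struct must match the std430 layout of the GLSL struct: under std430 a `mat4` occupies 64 bytes and each `vec4` 16 bytes, with no extra padding between these members. A sketch with compile-time checks (plain `float` arrays are used instead of `glm::mat4`/`glm::vec4` so it stands alone; the glm types have the same size and layout):

```cpp
#include <cassert>
#include <cstddef>

// C++ mirror of the GLSL Object struct above, laid out as std430 expects:
// mat4 at offset 0 (64 bytes), then three vec4s (16 bytes each) = 112 bytes.
struct Object {
    float model_matrix[16];   // mat4, offset 0
    float ambient_color[4];   // vec4, offset 64
    float diffuse_color[4];   // vec4, offset 80
    float specular_color[4];  // vec4, offset 96
};

// Catch any accidental padding at compile time before it corrupts the SSBO.
static_assert(sizeof(Object) == 112, "Object must match the std430 layout");
static_assert(offsetof(Object, ambient_color) == 64, "mat4 must occupy 64 bytes");
```

With the layout verified, a `std::vector<Object>` can be uploaded with `glNamedBufferStorage` and indexed in the shader with `gl_InstanceID` exactly as shown above.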