As you might know, about a year ago I finally broke through the barrier and managed to figure out OpenGL 3.x. This was no easy task, because the information available on the internet about 3.x is completely overshadowed by the vast body of articles, guides and documentation based on 2.x, using immediate mode no less. The few sources of information I could find at the time were inadequate, didn’t use 3.x properly, or were just plain misleading. The official OpenGL documentation wasn’t much help either, because it tells you how to call things without ever explaining what they actually are.
That said, I obviously managed it, so it’s not impossible to do. But it could be made so much easier if every guide didn’t start with absolutely no explanation of what things actually are. To that end, this won’t be a tutorial; it’ll be an explanation of the concepts involved in your first OpenGL program, and it will be most useful to people who have already started trying to learn OpenGL 3.x, or who are coming from OpenGL 2.x. On its own, it’s not a useful article. See the end of the article for links to proper guides to use as a base.
What is GL/GLEW/GLFW/GLUT/etc? This is something everyone I’ve seen starting out has a bit of trouble with. There are a lot of intertwined, similar-sounding libraries. To put it simply: GL is the library that handles drawing and the graphics pipeline; GLUT/freeGLUT/GLFW/etc. are libraries that create the OpenGL context and window and handle things like timing and input; GLEW/GL3W are extension ‘wranglers’ that give your program access to OpenGL “extensions” (on most platforms, anything beyond ancient OpenGL has to be loaded at run time as if it were an extension, so most functionality you’ll need goes through one of these).
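To make the division of labour concrete, here is a minimal sketch of how the three kinds of library fit together, assuming GLFW 3 and GLEW (other combinations follow the same pattern):

```c
/* Sketch: who does what. Assumes you link against GLFW 3, GLEW,
   and your platform's GL library. */
#include <GL/glew.h>
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit()) return -1;

    /* GLFW (context/window library): ask for a 3.x core context. */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow *window = glfwCreateWindow(640, 480, "Demo", NULL, NULL);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window);

    /* GLEW (extension wrangler): load the actual GL function pointers. */
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK) { glfwTerminate(); return -1; }

    /* GL itself: drawing and the graphics pipeline. */
    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window);   /* GLFW again: presentation and input */
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```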
Do I have to use shaders? Basically, yes, but it doesn’t have to be complicated. Short version: shaders dictate three things: what each point in 3D space can be described by (“attributes” like position, colour, lighting values, etc.), where each point in 3D space shows up on your 2D screen (the vertex shader), and, still using those attributes, what colour each resulting pixel is (the fragment shader). We’ll go into more detail after we go over the other concepts.
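For a feel of how small this can be, here is a minimal GLSL 3.30 shader pair, written as C string literals the way you would feed them to glShaderSource (the attribute names are my own choices, not anything mandated):

```c
/* The vertex shader maps each point into screen space; the fragment
   shader decides the colour of each resulting pixel. */
static const char *vertex_src =
    "#version 330 core\n"
    "in vec3 position;\n"        /* attribute: where the point is  */
    "in vec3 colour;\n"          /* attribute: what colour it has  */
    "out vec3 frag_colour;\n"
    "void main() {\n"
    "    frag_colour = colour;\n"
    "    gl_Position = vec4(position, 1.0);\n"  /* 3D point -> 2D screen */
    "}\n";

static const char *fragment_src =
    "#version 330 core\n"
    "in vec3 frag_colour;\n"
    "out vec4 out_colour;\n"
    "void main() {\n"
    "    out_colour = vec4(frag_colour, 1.0);\n" /* final pixel colour */
    "}\n";
```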
What are VBOs/VAOs, and why should I use them? The first real problem I had understanding 3.x code involved what VBOs and VAOs were, and why they existed. I struggled with this right up until I actually finished my first project, simply because there was no information on what they were, only on how to use them. As you might intuit, this makes it hard to actually use them, because you don’t know why things do what they do.
VAOs (or Vertex Array Objects) are what you would consider a fully-fledged graphical object. At creation, a VAO knows nothing about itself, other than that it has several user-defined “attributes” which can be used to store, for example: vertex position, colour, texture coordinates, etc. These attributes are declared by you when you write your shaders, and apply to all VAOs. However, if you don’t bind anything to a VAO’s attribute slot, that slot simply gets ignored. So just because you want some VAOs to have texturing doesn’t mean you have to add texture coordinates to every VAO, just to the ones that are textured. VAOs do not contain the actual data for those attributes; they only store which attributes they have and where the data lives (as a reference). For the actual data storage, we use…
VBOs (or Vertex Buffer Objects) are essentially what you would consider arrays, despite the name. When you create a buffer, you allocate memory in graphics memory which can then store any data you want to give it, like vertex positions, colours, texture coordinates, etc. VBOs don’t know what their data means (see Gotcha below), only how it is stored. You have a starting offset, a ‘stride’ (the spacing between consecutive elements, which GL measures in bytes; conceptually, tightly packed data looks like [element1, element2, ...], while a larger stride gives you [element1, something_unrelated, element2, something_also_unrelated, element3, ...]), a type of data (GL_FLOAT, etc.), and a size.
Gotcha: VBOs don’t know what their data means, but they do need to have a “target” set, which tells the drawing system whether to touch them, and in what way. The target isn’t as specific as ‘vertex positions’ or ‘colours’; it’s more like ‘use this as drawing data’, ‘use this as indexing data’, or ‘this is a memory-stored texture’. The usual targets are GL_ARRAY_BUFFER for ‘drawing data’ (which covers anything from vertex positions to texture coordinates) and GL_ELEMENT_ARRAY_BUFFER for ‘indexing data’ (for indexed drawing).
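Putting the two objects together, here is a sketch of the usual creation sequence, assuming a current 3.x context (so it won’t run on its own):

```c
/* Sketch: one VAO whose attribute 0 reads positions from a VBO
   bound through the GL_ARRAY_BUFFER target. */
GLuint vao, vbo;
GLfloat triangle[] = {              /* three vertex positions */
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);  /* attribute setup below is recorded in this VAO */

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);  /* target: "use this as drawing data" */
glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);

/* Attribute 0: 3 floats per vertex, tightly packed (stride 0), offset 0. */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
glEnableVertexAttribArray(0);
```

Note how the VAO never receives the data directly: it only records which buffer attribute 0 points into, which is exactly the “reference, not storage” split described above.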
Gotcha 2: From the official docs, this quote is misleading: “When a buffer object is bound to a target, the previous binding for that target is automatically broken.” You might expect this to mean you can only have one buffer per target on every VAO, but that isn’t true. The targets are ‘slots’: only one buffer can occupy a given slot at any one time, but any number of buffers can end up attached to a VAO, because the VAO captures whichever buffer is in the slot at the moment you point an attribute at it. The quote only means that you have to bind same-target VBOs to the same VAO sequentially. You can’t, for example, bind VBO1 and VBO2 to the GL_ARRAY_BUFFER target at once and then attach both to VAO1. What you have to do is bind VBO1 to GL_ARRAY_BUFFER, attach VBO1 to VAO1, then bind VBO2 to GL_ARRAY_BUFFER and attach VBO2 to VAO1.
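The sequential dance looks like this in code (a sketch; vao1, vbo_positions and vbo_colours are assumed to exist and already hold data):

```c
/* Sketch: attaching two GL_ARRAY_BUFFER VBOs to one VAO, one at a time. */
glBindVertexArray(vao1);

/* VBO1 takes the GL_ARRAY_BUFFER slot... */
glBindBuffer(GL_ARRAY_BUFFER, vbo_positions);
/* ...and is captured into vao1's attribute 0 at this call. */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
glEnableVertexAttribArray(0);

/* VBO2 now evicts VBO1 from the slot ("the previous binding is broken")... */
glBindBuffer(GL_ARRAY_BUFFER, vbo_colours);
/* ...but vao1 still remembers VBO1 for attribute 0; attribute 1 gets VBO2. */
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
glEnableVertexAttribArray(1);
```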
Why use them? What this system allows, first and foremost, is sharing the same sets of data between multiple independent objects. For example, it would let you have one VAO per unit in a strategy game, where all of those VAOs share the same VBOs. But you can use instancing for that, I hear you say. Indeed, but something else you can do with this system is, for example, “palette swapping” units: have two VBOs for colour data, one with red units, one with blue units, and attach one or the other to any VAO as applicable. This is a simple example, and can be done in other ways easily, but it shows the flexibility that VAOs and VBOs offer.
What happens to the VAOs when you draw them? To start out, the system reads the VBOs referenced by the VAO (the draw call tells it how many vertices to process). For every one of these vertices, in parallel, it “runs” the shaders. Basically, each point in space that you defined gets processed “at the same time” as all the others. “Runs” isn’t exactly the correct word for it, however. What drawing does is pass one point of data to the first shader (the vertex shader). This shader gives out a new set of data based on the input, which the “graphics pipeline” (the system controlling all this) passes to the next shader, and so on, until it gets the final answer regarding what colour that specific point on the screen is. At essentially the same time, it does this to all the points of data in the VBOs. No point knows anything about any of the other points. (This trait is what allows the GPU to run everything at once, and process lots of data really fast.)
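From the application side, kicking all of this off is tiny (a sketch; program and vao are assumed to have been created and set up beforehand):

```c
/* Sketch: one draw call runs the whole pipeline on every vertex at once. */
glUseProgram(program);            /* which shaders process each point     */
glBindVertexArray(vao);           /* which attributes/VBOs feed them      */
glDrawArrays(GL_TRIANGLES, 0, 3); /* process 3 vertices, in parallel      */
```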
To represent what we’ve learned so far graphically:
Future articles in this series will cover, probably in this order: shaders and all associated concepts, lighting, texturing.
This article is meant to be only supplementary to proper guides for OpenGL 3.x. My recommended series is this one. Other references:
- List of some OpenGL examples.
- A great OpenGL 3.x guide.
- Good general portal for OpenGL things.
- Collection of guides.
- Good presentation on OpenGL 3.x specifics.
- Good repository of a simple GL 3.x graphics engine in development.
I appreciate any recommendations to add to this list!