Tutorial 22 - Loading models using the Open Asset Import Library

We have made it thus far using manually generated models. As you can imagine, the process of specifying the position and other attributes for each and every vertex in an object does not scale well. A box, pyramid and a simple tiled surface are OK, but what about something like a human face? In the real world of games and commercial applications the process of mesh creation is handled by artists that use modeling programs such as Blender, Maya and 3ds Max. These applications provide advanced tools that help the artist create extremely sophisticated models. When the model is complete it is saved to a file in one of the many available formats. The file contains the entire geometry definition of the model. It can now be loaded into a game engine (provided the engine supports the particular format) and its contents can be used to populate vertex and index buffers for rendering. Knowing how to parse the geometry definition file format and load professional models is crucial in order to take your 3D programming to the next level.

Developing the parser on your own can consume quite a lot of your time. If you want to be able to load models from different sources, you will need to study each format and develop a specific parser for it. Some of the formats are simple but some are very complex, and you might end up spending too much time on something which is not exactly core 3D programming. Therefore, the approach pursued by this tutorial is to use an external library to take care of parsing and loading the models from files.

The Open Asset Import Library, or Assimp, is an open source library that can handle many 3D formats, including the most popular ones. It is portable and available for both Linux and Windows. It is very easy to use and integrate into programs written in C/C++.

There is not much theory in this tutorial. Let's dive right in and see how we can integrate Assimp into our 3D programs (before you start, make sure Assimp is installed on your system).

Code Walkthrough


class Mesh
{
public:
    bool LoadMesh(const std::string& Filename);

    void Render();

private:
    bool InitFromScene(const aiScene* pScene, const std::string& Filename);
    void InitMesh(unsigned int Index, const aiMesh* paiMesh);
    bool InitMaterials(const aiScene* pScene, const std::string& Filename);
    void Clear();

    struct MeshEntry {
        bool Init(const std::vector<Vertex>& Vertices,
                  const std::vector<unsigned int>& Indices);

        GLuint VB;
        GLuint IB;
        unsigned int NumIndices;
        unsigned int MaterialIndex;
    };

    std::vector<MeshEntry> m_Entries;
    std::vector<Texture*> m_Textures;
};

The Mesh class represents the interface between Assimp and our OpenGL program. An object of this class takes a file name as a parameter to the LoadMesh() function, uses Assimp to load the model and then creates vertex buffers, index buffers and Texture objects that contain the data of the model in the form that our program understands. In order to render the mesh we use the function Render(). The internal structure of the Mesh class matches the way that Assimp loads models. Assimp uses an aiScene object to represent the loaded mesh. The aiScene object contains mesh structures that encapsulate parts of the model. There must be at least one mesh structure in the aiScene object. Complex models can contain multiple mesh structures. The m_Entries member of the Mesh class is a vector of the MeshEntry struct where each structure corresponds to one mesh structure in the aiScene object. That structure contains the vertex buffer, index buffer and the index of the material. For now, a material is simply a texture, and since mesh entries can share materials we keep a separate vector for them (m_Textures). MeshEntry::MaterialIndex points into one of the textures in m_Textures.


bool Mesh::LoadMesh(const std::string& Filename)
{
    // Release the previously loaded mesh (if it exists)
    Clear();

    bool Ret = false;
    Assimp::Importer Importer;

    const aiScene* pScene = Importer.ReadFile(Filename.c_str(), aiProcess_Triangulate | aiProcess_GenSmoothNormals | aiProcess_FlipUVs);

    if (pScene) {
        Ret = InitFromScene(pScene, Filename);
    }
    else {
        printf("Error parsing '%s': '%s'\n", Filename.c_str(), Importer.GetErrorString());
    }

    return Ret;
}

This function is the starting point of loading the mesh. We create an instance of the Assimp::Importer class on the stack and call its ReadFile function. This function takes two parameters: the full path of the model file and a mask of post-processing options. Assimp is capable of performing many useful processing actions on the loaded models. For example, it can generate normals for models that lack them, optimize the structure of the model to improve performance, etc. The full list of options is available here. In this tutorial we use three options. The first, aiProcess_Triangulate, translates models that are made of non-triangle polygons into triangle-based meshes. For example, a quad mesh can be translated into a triangle mesh by creating two triangles out of each quad. The second option, aiProcess_GenSmoothNormals, generates vertex normals in case the original model does not already contain them. Note that the post-processing options are basically non-overlapping bitmasks, so you can combine multiple options by simply ORing their values. The final option, aiProcess_FlipUVs, flips the texture coordinates along the Y axis. This was required in order to render the Quake model that was used for the demo correctly. You will need to tailor the options that you use according to the input data. If the mesh was loaded successfully, we get a pointer to an aiScene object. This object contains the entire model contents, divided into aiMesh structures. Next we call the InitFromScene() function to initialize the Mesh object.


bool Mesh::InitFromScene(const aiScene* pScene, const std::string& Filename)
{
    m_Entries.resize(pScene->mNumMeshes);
    m_Textures.resize(pScene->mNumMaterials);

    // Initialize the meshes in the scene one by one
    for (unsigned int i = 0 ; i < m_Entries.size() ; i++) {
        const aiMesh* paiMesh = pScene->mMeshes[i];
        InitMesh(i, paiMesh);
    }

    return InitMaterials(pScene, Filename);
}

We start the initialization of the Mesh object by setting up space in the mesh entries and texture vectors for all the meshes and materials we will need. The numbers are available in the aiScene object members mNumMeshes and mNumMaterials, respectively. Next we scan the mMeshes array in the aiScene object and initialize the mesh entries one by one. Finally, the materials are initialized.


void Mesh::InitMesh(unsigned int Index, const aiMesh* paiMesh)
{
    m_Entries[Index].MaterialIndex = paiMesh->mMaterialIndex;

    std::vector<Vertex> Vertices;
    std::vector<unsigned int> Indices;

We start the initialization of the mesh by storing its material index. This will be used during rendering to bind the proper texture. Next we create two STL vectors to store the contents of the vertex and index buffers. An STL vector has the nice property of storing its contents in a contiguous buffer. This makes it easy to load the data into the OpenGL buffer (using the glBufferData() function).


    const aiVector3D Zero3D(0.0f, 0.0f, 0.0f);

    for (unsigned int i = 0 ; i < paiMesh->mNumVertices ; i++) {
        const aiVector3D* pPos = &(paiMesh->mVertices[i]);
        const aiVector3D* pNormal = paiMesh->HasNormals() ? &(paiMesh->mNormals[i]) : &Zero3D;
        const aiVector3D* pTexCoord = paiMesh->HasTextureCoords(0) ? &(paiMesh->mTextureCoords[0][i]) : &Zero3D;

        Vertex v(Vector3f(pPos->x, pPos->y, pPos->z),
                 Vector2f(pTexCoord->x, pTexCoord->y),
                 Vector3f(pNormal->x, pNormal->y, pNormal->z));

        Vertices.push_back(v);
    }


Here we prepare the contents of the vertex buffer by populating the Vertices vector. We use the following attributes of the aiMesh class:

  1. mNumVertices - the number of vertices.
  2. mVertices - an array of mNumVertices vectors that contain the position.
  3. mNormals - an array of mNumVertices vectors that contain the vertex normals.
  4. mTextureCoords - an array of mNumVertices vectors that contain the texture coordinates. This is actually a two dimensional array because each vertex can hold several texture coordinates.

So basically we have three separate arrays that contain everything we need for the vertices, and we need to pick out each attribute from its corresponding array in order to build the final Vertex structure. This structure is pushed back into the vertex vector (maintaining the same index as in the three aiMesh arrays). Note that some models do not have texture coordinates, so before accessing the mTextureCoords array (and possibly causing a segmentation fault) we check whether texture coordinates exist by calling HasTextureCoords(). In addition, a mesh can contain multiple texture coordinates per vertex. In this tutorial we take the simple way of using only the first texture coordinate. So the mTextureCoords array (which is two dimensional) is always accessed on its first row, and the HasTextureCoords() function is always called for the first row. If a texture coordinate does not exist, the Vertex structure is initialized with the zero vector.


    for (unsigned int i = 0 ; i < paiMesh->mNumFaces ; i++) {
        const aiFace& Face = paiMesh->mFaces[i];
        assert(Face.mNumIndices == 3);
        Indices.push_back(Face.mIndices[0]);
        Indices.push_back(Face.mIndices[1]);
        Indices.push_back(Face.mIndices[2]);
    }

Next we create the index buffer. The mNumFaces member in the aiMesh class tells us how many polygons exist, and the array mFaces contains their data (namely, the indices of their vertices). First we verify that the number of indices in the polygon is indeed 3 (when loading the model we requested that it be triangulated, but it is always good to check this). Then we extract the indices from the mIndices array and push them into the Indices vector.


    m_Entries[Index].Init(Vertices, Indices);

Finally, the MeshEntry structure is initialized using the vertex and index vectors. There is nothing new in the MeshEntry::Init() function so it is not quoted here. It uses glGenBuffers(), glBindBuffer() and glBufferData() to create and populate the vertex and index buffers. See the source file for more details.


bool Mesh::InitMaterials(const aiScene* pScene, const std::string& Filename)
{
    // ... extract the directory part of Filename into Dir (not quoted here)

    bool Ret = true;

    for (unsigned int i = 0 ; i < pScene->mNumMaterials ; i++) {
        const aiMaterial* pMaterial = pScene->mMaterials[i];

This function loads all the textures that are used by the model. The mNumMaterials attribute in the aiScene object holds the number of materials, and mMaterials is an array of that size containing pointers to aiMaterial structures. The aiMaterial structure is a complex beast, but it hides its complexity behind a small number of API calls. In general the material is organized as a stack of textures, and between consecutive textures the configured blend and strength functions must be applied. For example, the blend function can tell us to add the colors from the two textures, and the strength function can tell us to multiply the result by half. The blend and strength functions are part of the aiMaterial structure and can be retrieved. To make our life simpler, and to match the way our lighting shader currently works, we ignore the blend and strength functions and simply use the texture as is.


        m_Textures[i] = NULL;

        if (pMaterial->GetTextureCount(aiTextureType_DIFFUSE) > 0) {
            aiString Path;

            if (pMaterial->GetTexture(aiTextureType_DIFFUSE, 0, &Path, NULL, NULL, NULL, NULL, NULL) == AI_SUCCESS) {
                std::string FullPath = Dir + "/" + Path.data;
                m_Textures[i] = new Texture(GL_TEXTURE_2D, FullPath.c_str());

                if (!m_Textures[i]->Load()) {
                    printf("Error loading texture '%s'\n", FullPath.c_str());
                    delete m_Textures[i];
                    m_Textures[i] = NULL;
                    Ret = false;
                }
            }
        }

A material can contain multiple textures, and not all of them have to contain colors. For example, a texture can be a height map, normal map, displacement map, etc. Since our lighting shader currently uses a single texture for all the light types we are interested only in the diffuse texture. Therefore, we check how many diffuse textures exist using the aiMaterial::GetTextureCount() function. This function takes the type of the texture as a parameter and returns the number of textures of that specific type. If at least one diffuse texture is available we fetch it using the aiMaterial::GetTexture() function. The first parameter to that function is the type. Next comes the index and we always use 0. After that we need to specify the address of a string where the texture file name will go. Finally, there are five address parameters that allow us to fetch various configurations of the texture such as the blend factor, map mode, texture operation, etc. These are optional and we ignore them for now so we just pass NULL. We are interested only in the texture file name and we concatenate it to the directory where the model is located. The directory was retrieved at the start of the function (not quoted here) and the assumption is that the model and the texture are in the same subdirectory. If the directory structure is more complex you may need to search for the texture elsewhere. We create our texture object as usual and load it.


        if (!m_Textures[i]) {
            m_Textures[i] = new Texture(GL_TEXTURE_2D, "./white.png");
            Ret = m_Textures[i]->Load();
        }
    }

    return Ret;
}

The above piece of code is a small workaround to a problem you may encounter if you start loading models you find on the net. Sometimes a model does not reference a texture, and in that case you will not see anything, because the color sampled from a non-existent texture is black by default. One way to deal with this is to detect the case and handle it with a special case in the shader, or with a dedicated shader. This tutorial takes the simpler approach of loading a texture that contains a single white texel (you will find this texture in the attached sources). This makes the base color of all pixels white. It will probably not look great, but at least you will see something. This texture takes very little space and allows us to use the same shader for both cases.


void Mesh::Render()
{
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);

    for (unsigned int i = 0 ; i < m_Entries.size() ; i++) {
        glBindBuffer(GL_ARRAY_BUFFER, m_Entries[i].VB);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)12);
        glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)20);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_Entries[i].IB);

        const unsigned int MaterialIndex = m_Entries[i].MaterialIndex;

        if (MaterialIndex < m_Textures.size() && m_Textures[MaterialIndex]) {
            m_Textures[MaterialIndex]->Bind(GL_TEXTURE0);
        }

        glDrawElements(GL_TRIANGLES, m_Entries[i].NumIndices, GL_UNSIGNED_INT, 0);
    }

    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    glDisableVertexAttribArray(2);
}
This function encapsulates the rendering of a mesh and separates it from the main application (in previous tutorials it was part of the application code itself). The m_Entries array is scanned and the vertex buffer and index buffer of each node are bound. The material index of the node is used to fetch the texture object from the m_Textures array, and the texture is also bound. Finally, the draw command is executed. Now you can have multiple mesh objects that have been loaded from files and render them one by one by calling the Mesh::Render() function.



The last thing we need to study is something that was left out in previous tutorials. If you go ahead and load models using the code above you will probably encounter visual anomalies in your scene. The reason is that triangles that are further from the camera are drawn on top of closer ones. In order to fix this we need to enable the famous depth test (a.k.a. Z-test). When the depth test is enabled, the rasterizer compares the depth of each pixel prior to rendering with the pixel already at the same location on the screen. The pixel whose color is eventually used is the one that "wins" the depth test (i.e. the one closer to the camera). The depth test is not enabled by default; turning it on with glEnable(GL_DEPTH_TEST) is part of the OpenGL initialization code in the function GLUTBackendRun(). This is just the first of three pieces that are required for the depth test (see below).



The second piece is the initialization of the depth buffer. In order to compare the depth of two pixels, the depth of the "old" pixel must be stored somewhere (the depth of the "new" pixel is available because it was passed down the pipeline from the vertex shader). For this purpose we have a special buffer known as the depth buffer (or Z-buffer). It has the same dimensions as the screen, so that each pixel in the color buffer has a corresponding slot in the depth buffer. That slot always stores the depth of the closest pixel so far, and it is used in the depth test for the comparison.



The last thing we need to do is to clear the depth buffer at the start of a new frame. If we don't, the buffer will contain stale values from the previous frame, and the depth of the pixels of the new frame will be compared against them. As you can imagine, this causes serious corruption (try it!). The glClear() function takes a bitmask of the buffers it needs to operate on. Up until now we've only cleared the color buffer. Now it's time to clear the depth buffer as well.