Sunday, 10 February 2013

Displaying 3D polygon animations

To display 3D polygon animation you need an application programming interface (API) that actually displays the 3D animation; two of the best-known are Direct3D and OpenGL. These APIs take the shape data and draw it on the screen.
The graphics pipeline is the sequence of stages that takes a three-dimensional object and displays it on a two-dimensional screen. To do this it gathers information on the vertices, the main control points of the 3D object, and essentially crops the 3D model so that it only displays what can actually be seen. For example, if we have a model of a shoe and we are looking at the shoe side-on, the API will process only the information for the visible side of the shoe and cut out the parts that should not be visible. A vertex carries several pieces of information, including its position in x-y-z coordinates, its texture, its reflectivity (or specular values) and its RGB (red, green, blue) values. The basic primitives within a 3D model or 3D graphic are lines and triangles, which create the shape and the illusion of depth. However, the program goes through several steps before the 3D model or graphic is displayed. These are: modelling, lighting, viewing, projection, clipping, viewport transformation, scan conversion, texturing & shading and then finally the display.




Each of these stages has a very important job in creating the image displayed.
First of all, modelling. In the modelling stage the whole scene is generated using vertices, edges and faces.
Secondly, the lighting stage. This is where the surfaces in the scene are lit according to the position and location of the light sources in the scene.
Then there is the viewing stage. This is where the virtual camera is placed, and based on the position of the camera the 3D environment is transformed into the camera's coordinate system.
After the viewing stage there is the projection stage. This is where the 3D illusion is created using perspective projection, meaning that more distant objects appear smaller.
The fifth stage is clipping. This is the stage where objects that are outside of view are not generated; this isn't strictly necessary, but it helps improve the overall performance of rendering.
Once the clipping process is done, the next stage is viewport transformation. In this stage the post-clip vertices are given screen coordinates, which feed into the next stage, called scan conversion. In the scan conversion stage rasterisation is used to determine the final pixel values.
Once the scan conversion stage is done, the individual pixels are given colours depending on the values given by the rasterised vertices or by images created to be applied to specific areas. This is the texturing and shading stage. Then the final stage is the display, where the finished image of the scene, with all its coloured pixels, is shown on a 2D screen.
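The projection and viewport-transformation stages described above can be sketched in a few lines of Python. This is a simplified illustration with a made-up focal_length parameter; real APIs such as OpenGL or Direct3D use 4x4 matrices and homogeneous coordinates instead.

```python
def project_vertex(x, y, z, focal_length=1.0):
    """Perspective projection: divide by depth, so more
    distant points land closer to the centre (appear smaller)."""
    return (focal_length * x / z, focal_length * y / z)

def viewport_transform(px, py, width, height):
    """Map projected coordinates in [-1, 1] to pixel
    coordinates on a width x height screen."""
    sx = int((px + 1) * 0.5 * (width - 1))
    sy = int((1 - (py + 1) * 0.5) * (height - 1))  # flip y: screen y grows downward
    return (sx, sy)

# A vertex twice as far away projects half as far from the centre:
near = project_vertex(1.0, 1.0, 2.0)   # (0.5, 0.5)
far = project_vertex(1.0, 1.0, 4.0)    # (0.25, 0.25)
print(viewport_transform(*near, 640, 480))
```

The divide-by-depth in project_vertex is exactly why distant objects appear smaller in the projection stage.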

Once a model is finished, a final render is made, which puts all the information together and gives the best quality image depending on the selected settings. The four main ways in which these completed models and environments can be rendered are: rasterisation, ray-casting, ray-tracing and radiosity.

Rasterisation - this is basically where the renderer takes the shapes and information from the scene and creates a raster image (an image made of pixels), which can then be displayed on a TV or monitor, or saved as an image file.
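As a rough illustration, here is a minimal Python sketch of rasterising one triangle: each pixel centre is tested against the triangle's three edges using signed-area "edge functions". Real rasterisers are far more optimised; this is just one common scheme, shown under simplified assumptions.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: the sign tells which side of edge a->b point p is on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterise_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixels whose centres fall inside the triangle."""
    pixels = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel centre
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # inside if all three edge tests agree in sign
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                pixels.add((x, y))
    return pixels

covered = rasterise_triangle((0, 0), (8, 0), (0, 8), 8, 8)
```

Looping over every pixel of the screen per triangle is deliberately naive; hardware rasterisers restrict the loop to the triangle's bounding box and test many pixels in parallel.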

Ray-casting - this basically sends a ray towards each pixel and collects information from whatever it hits, such as its RGB value and opacity value. From this information it projects an image of the scene from the camera's perspective, using few effects. It is basically a lower-quality form of ray-tracing, but because of this lower quality it is more efficient and less intensive on hardware, which makes it much better for interactive purposes such as video games or 3D animated tours.

Ray-tracing - this method of rendering is a much more intensive version of ray-casting. Not only does it collect information for each pixel, it also replicates lighting, shadows and refractions. Because it is so intensive it is slower to render, meaning it isn't as efficient as ray-casting for interactive applications.
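The core operation both ray-casting and ray-tracing rely on is intersecting a ray with scene geometry. A minimal Python sketch for a sphere, solving the quadratic |origin + t*direction - centre|^2 = radius^2 (an illustrative standalone function, not any particular renderer's API):

```python
import math

def ray_sphere_hit(origin, direction, centre, radius):
    """Return the distance t along the ray to the nearest hit, or None on a miss."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = ox - centre[0], oy - centre[1], oz - centre[2]
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4 * a * c          # discriminant of the quadratic in t
    if disc < 0:
        return None                   # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t >= 0 else None

# A ray along +z from the origin hits a sphere at (0, 0, 5) with radius 1 at t = 4:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Ray-casting stops at this first hit; ray-tracing then spawns further rays from the hit point (towards lights, along reflections and refractions), which is where the extra cost comes from.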

Radiosity - this is used alongside other rendering techniques and simulates more realistic lighting: it calculates how light passes through and bounces around a scene, behaving as real lighting would. For example, it will illuminate an environment with more realistic shadows and lighting levels.

Now, talking about lighting, there are three different types of light that affect models and environments. These are: ambient light, diffuse light and specular light.
First of all, ambient light. This is like the base colour of the model; it is often a dull, flat colour on its own, which is why we add the other types of lighting to an object. Next is diffuse lighting, which again is usually a darker colour and adds the general shading across the object. Then finally we have specular light, which gives the object its reflectiveness, whether it be high or low.
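A common way these three contributions are combined is the Phong reflection model. The sketch below is illustrative Python, not any real API; the coefficients ka, kd and ks (ambient, diffuse, specular strength) and the shininess value are made up for the example.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalise(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def phong_intensity(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """Combine ambient, diffuse and specular light for one surface point."""
    n, l, v = normalise(normal), normalise(light_dir), normalise(view_dir)
    ambient = ka                               # flat base level, direction-independent
    diffuse = kd * max(dot(n, l), 0.0)         # brighter when facing the light
    # reflect the light direction about the normal, compare with the view direction
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = ks * max(dot(r, v), 0.0) ** shininess  # tight, shiny highlight
    return ambient + diffuse + specular
```

With the light and viewer both straight along the normal, all three terms contribute (total 1.0 here); with the light edge-on, only the ambient base level survives, which is exactly the role ambient light plays.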



Textures can also be applied to models; these are basically images that get wrapped around the model. Textures can be extremely realistic when you take into account how the texture should behave, e.g. any transparency or reflectivity, and how rough or smooth the object should be. These can all be controlled by creating different layers of the texture. For example, first we could put a basic diffuse bitmap on the model (this is essentially the base texture), but then we can make it look more 3D by adding a bump map layer, which for instance makes an image of a brick wall look more three-dimensional, in the sense that the rough texture looks clearer and more realistic. We could then also add a specular map to control how reflective each part of the image is, and so on.
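The "wrapping" works by giving each point on the model texture coordinates (u, v) in the range 0 to 1, then looking up the matching texel in the image. A minimal nearest-neighbour lookup in Python (the checkerboard texture here is an invented example; real engines also offer filtered lookups):

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour lookup: map (u, v) in [0, 1] onto a texel
    in a texture stored as rows of RGB tuples."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)  # clamp so u = 1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]

# a 4x4 black-and-white checkerboard as a stand-in texture
checker = [[(255, 255, 255) if (x + y) % 2 == 0 else (0, 0, 0)
            for x in range(4)] for y in range(4)]
print(sample_texture(checker, 0.0, 0.0))  # (255, 255, 255)
```

A bump or specular map is sampled in exactly the same way; the difference is only how the fetched value is used (perturbing the normal, or scaling the specular term) rather than how it is looked up.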

Another technique used to make interactive applications that use all these methods more efficient is fogging. This method places a distance fog in the scene so that polygons far from the camera are rendered at lower quality and hazed out, unlike the textures closer to the camera; the full-detail polygons are then displayed as the camera gets closer. This method is definitely easier on the hardware and improves the processing speed of the environment and the polygons in the scene.
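The haze itself is usually a simple blend between the polygon's colour and a fog colour, driven by distance. A sketch in Python of linear distance fog (the start/end distances are illustrative values, not from any particular engine):

```python
def linear_fog_factor(distance, fog_start, fog_end):
    """Return a blend factor: 1.0 = fully clear, 0.0 = fully fogged.
    Fades linearly between fog_start and fog_end."""
    f = (fog_end - distance) / (fog_end - fog_start)
    return max(0.0, min(1.0, f))

def apply_fog(colour, fog_colour, factor):
    """Blend a fragment colour toward the fog colour by the given factor."""
    return tuple(factor * c + (1 - factor) * fc
                 for c, fc in zip(colour, fog_colour))

# halfway through the fog band, a red surface is half mixed with grey fog:
f = linear_fog_factor(30.0, 10.0, 50.0)                 # 0.5
print(apply_fog((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), f))   # (0.75, 0.25, 0.25)
```

Because anything past fog_end blends entirely into the fog colour, the renderer can skip drawing it at all, which is where the performance saving comes from.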

Shading is another technique that is used to help processing speed, and there are three main methods of shading. First of all, flat shading: this basically generates one colour for each of the polygons in view. For example, if you have a sphere made up of square polygons it will look more like a disco ball than a sphere, meaning it looks very unrealistic; however, it is incredibly quick to process. The second method of shading is Gouraud shading. This is similar to flat shading, but rather than calculating a colour for each polygon it calculates one for each of the vertices and blends between them, giving much cleaner shading; however, it can actually miss some effects, such as small specular highlights that fall between vertices. This method is still fast, but as mentioned it can miss effects, leading to slight unrealism or lower quality. The final method is Phong shading. This is a much more precise approach that interpolates the surface normals across each polygon and lights every pixel, which, using the sphere example again, is more accurate at creating the realistic curve of the sphere. However, this means many more calculations, and therefore slower processing speeds when rendering.
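The difference between flat and Gouraud shading comes down to where the colour is computed. A small Python sketch (the barycentric weights w0-w2 describe where inside the triangle a pixel sits and must sum to 1; the averaging in flat_pixel is one simple convention, not the only one):

```python
def gouraud_pixel(c0, c1, c2, w0, w1, w2):
    """Gouraud: blend the three per-vertex colours across the triangle
    using barycentric weights, giving a smooth gradient."""
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))

def flat_pixel(c0, c1, c2):
    """Flat: one colour for the whole polygon (here, the vertex average),
    so every pixel of the triangle comes out identical."""
    return tuple((a + b + c) / 3 for a, b, c in zip(c0, c1, c2))

# at a vertex, Gouraud reproduces that vertex's colour exactly;
# flat shading gives the same single colour everywhere on the face
corner = gouraud_pixel((1, 0, 0), (0, 1, 0), (0, 0, 1), 1.0, 0.0, 0.0)
face = flat_pixel((1, 0, 0), (0, 1, 0), (0, 0, 1))
```

Phong shading would interpolate the *normals* with the same weights and run the lighting calculation per pixel instead, which is why it catches highlights Gouraud misses but costs more.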

The final method of increasing efficiency is varying the level of detail in the models. For example, close to the camera a model of a human can be extremely detailed, with facial features etc.; however, move the human model a certain distance away and we can swap it for a less detailed model that keeps some of the main features; then, if we move the model even further from the camera, we could swap the model again for a basic outline of a human with slight depth cues.
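Choosing which version of the model to draw is then just a comparison against distance thresholds. A sketch in Python (the threshold values and variant names are invented for illustration; real engines tune these per model):

```python
def select_lod(distance, thresholds=(10.0, 40.0)):
    """Pick a model variant based on distance from the camera.
    Nearer than the first threshold: full detail; beyond the
    second: the cheapest stand-in."""
    if distance < thresholds[0]:
        return "high_detail"
    if distance < thresholds[1]:
        return "medium_detail"
    return "low_detail"

print(select_lod(5.0), select_lod(20.0), select_lod(100.0))
```

The renderer runs this check each frame, so a character walking away from the camera is silently swapped through progressively cheaper meshes.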