
Each of these stages plays an important part in creating the displayed image.
First of all is the modelling stage, where the whole scene is built from vertices, edges and faces.
Secondly, the lighting stage: this is where the surfaces in the scene are lit according to the positions of the light sources in the scene.
Then there is the viewing stage. This is where the virtual camera is placed; based on the position of the camera, the 3D environment is transformed into the camera's co-ordinate system.
The next stage is clipping. This is the stage where objects outside the camera's view are discarded rather than processed further; this step isn't strictly required, but it improves the overall performance of rendering.
Once the clipping process is done, the next stage is viewport transformation. In this stage, screen co-ordinates are assigned to the post-clip vertices; these co-ordinates feed into the next stage, scan conversion, where rasterisation is used to determine the final pixel values.
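As a rough illustration, the viewport transformation above can be sketched in Python; the function name and the assumption that post-clip vertices arrive in a [-1, 1] normalized range are mine, not from any particular graphics API:

```python
def viewport_transform(ndc_x, ndc_y, width, height):
    """Map a post-clip vertex in normalized device coordinates
    (both axes in [-1, 1]) to pixel coordinates on the screen."""
    px = (ndc_x + 1) * 0.5 * width
    py = (1 - ndc_y) * 0.5 * height  # flip y: screen origin is top-left
    return px, py

# the centre of the normalized range lands in the middle of the screen
print(viewport_transform(0.0, 0.0, 800, 600))  # (400.0, 300.0)
```

The pixel positions this produces are exactly what the scan conversion stage then fills in.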
Once the scan conversion stage is done, the individual pixels are given colours, either from the values interpolated across the rasterised vertices or from images created to be applied to specific areas; this is the texturing and shading stage. Then the final stage is display, where the finished image of the scene, with all its coloured pixels, is shown on a 2D screen.
Now, once a model is finished, a final render is made, which brings all this information together and produces the best quality image possible for the selected settings. The four main ways in which completed models and environments can be rendered are: rasterisation, ray-casting, ray-tracing and radiosity.
Rasterisation - this basically takes the shapes and information from the scene and creates a raster image (an image made of pixels), which can then be displayed on a TV or monitor, or saved as an image file.
Ray-casting - this basically sends a ray to each pixel and collects information from it, such as its RGB and opacity values. From this information it projects an image of the scene from the camera's perspective, with few effects applied. It is essentially a lower-quality form of ray-tracing, but because of that lower quality it is more efficient and less demanding on hardware, making it much better suited to interactive purposes such as video games or 3D animated tours.
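To make the one-ray-per-pixel idea concrete, here is a minimal ray-casting sketch in Python. The scene (a single sphere), the camera set-up and the image plane at z = -1 are all assumptions invented for this example:

```python
import math

def ray_sphere_hit(origin, direction, centre, radius):
    """Return the nearest positive hit distance along a unit-length ray, or None."""
    ox = origin[0] - centre[0]
    oy = origin[1] - centre[1]
    oz = origin[2] - centre[2]
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c  # direction is unit length, so the quadratic's a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def cast_scene(width, height):
    """Send one primary ray per pixel; mark 1 where the sphere is hit, 0 elsewhere."""
    centre, radius = (0.0, 0.0, -3.0), 1.0  # assumed scene: one sphere ahead of the camera
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # map the pixel to a point on an image plane at z = -1
            px = 2 * (x + 0.5) / width - 1
            py = 1 - 2 * (y + 0.5) / height
            length = math.sqrt(px * px + py * py + 1)
            direction = (px / length, py / length, -1 / length)
            row.append(1 if ray_sphere_hit((0.0, 0.0, 0.0), direction, centre, radius) else 0)
        image.append(row)
    return image
```

A full ray-tracer would continue from each hit point, spawning extra rays for shadows, reflections and refractions, which is exactly where the extra cost described below comes from.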
Ray-tracing - this method of rendering is a much more intensive version of ray-casting, because it not only collects information for each pixel but also replicates lighting, shadows and refractions. Being so intensive makes it slower to render, so it isn't as efficient as ray-casting for interactive applications.
Radiosity - this is used alongside other rendering techniques and simulates more realistic lighting: it calculates how light passes through a scene and behaves as real lighting would. For example, it will illuminate an environment with more realistic shadows and lighting levels.


Now, talking about lighting, there are three main types of light that affect models and environments: ambient light, diffuse light and specular light.
First of all, ambient light is like the base colour of the model; this is often a dull, fairly plain colour on its own, which is why we add the other types of lighting to an object. Next is diffuse lighting, which again is usually a darker colour and adds the general shading to the object. Then finally we have specular light, which gives the surface its reflectiveness, whether high or low.
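A common way to combine these three contributions is the Phong model; the sketch below assumes unit-length vectors pointing away from the surface, and the coefficient values (0.1 ambient, 0.7 diffuse, 0.2 specular) are made up for the example:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, light_dir, view_dir,
                    ka=0.1, kd=0.7, ks=0.2, shininess=16):
    """Combine the ambient, diffuse and specular contributions for one light.
    All direction vectors are assumed unit length, pointing away from the surface."""
    diffuse = max(dot(normal, light_dir), 0.0)  # Lambert's cosine term
    # reflect the light direction about the normal for the specular highlight
    reflected = tuple(2 * diffuse * n - l for n, l in zip(normal, light_dir))
    specular = max(dot(reflected, view_dir), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

# light and viewer directly above a flat surface: full intensity
print(round(phong_intensity((0, 0, 1), (0, 0, 1), (0, 0, 1)), 6))  # 1.0
```

When the light is behind the surface only the ambient term survives, which is exactly the dull base colour described above.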
Textures can also be applied to models; a texture is basically an image that gets wrapped around the model. Textures can be extremely realistic when you take into account how the surface should behave, e.g. any transparency or reflectivity, or how rough or smooth the object should be. These properties can all be modified by creating different layers of the texture. For example, we could first put a basic diffuse bitmap on the model (this is the base texture), then make it look more 3D by adding a bump map layer (which, for example, makes an image of a brick wall look more three-dimensional, with the rough texture appearing clearer and more realistic). We could then also add a specular map to control the reflectivity of different areas, and so on.
Another technique used to make interactive applications using all these methods more efficient is fogging. With this method, a distance fog is applied so that polygons far from the camera are rendered at lower quality and hazed out, unlike the textures closer to the camera; the polygons are then displayed in full as the camera gets closer. This method is definitely easier on the hardware and improves the processing speed of the environment and the polygons in the scene.
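A simple linear distance fog can be sketched like this; the start/end distances and the colours are invented for the example:

```python
def linear_fog_factor(distance, fog_start, fog_end):
    """Return 1.0 for a fully clear fragment, 0.0 for a fully fogged one,
    fading linearly between fog_start and fog_end."""
    factor = (fog_end - distance) / (fog_end - fog_start)
    return max(0.0, min(1.0, factor))

def apply_fog(colour, fog_colour, factor):
    """Blend the fragment colour toward the fog colour by the fog factor."""
    return tuple(factor * c + (1 - factor) * f for c, f in zip(colour, fog_colour))

# a fragment halfway into the fog band keeps half of its own colour
print(linear_fog_factor(30.0, 10.0, 50.0))  # 0.5
```

Anything past the fog's end distance blends entirely into the fog colour, so the renderer can skip its detail altogether.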
The final method of increasing efficiency is varying the level of detail in the models. For example, close to the camera a model of a human can be extremely detailed, with facial features and so on; move the model a certain distance away and we can swap it for a less detailed version that keeps the main features; move it even further from the camera and we can swap it again for a basic outline of a human with slight depth cues.
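Distance-based model swapping like this often comes down to a simple threshold check; the distances below are invented for the example:

```python
def select_lod(distance, thresholds=(10.0, 30.0)):
    """Pick a level of detail from the camera distance:
    0 = full detail, 1 = simplified, 2 = basic outline."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)

print([select_lod(d) for d in (5.0, 20.0, 100.0)])  # [0, 1, 2]
```

Each level would map to a progressively lighter mesh, so only models near the camera cost full polygon counts.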