[File:TextureMapping.png - Spherical texture mapping]

Texture mapping is a method of adding realism to a computer-generated graphic. An image (the texture) is added (mapped) to a simpler shape that is generated in the scene, like a decal pasted onto a flat surface. This reduces the amount of computation needed to create the shapes and textures in the scene. For instance, a sphere may be generated and a face texture mapped onto it, removing the need to model the shape of the nose and eyes.

As graphics cards become more powerful, texture mapping for lighting should, in theory, become less necessary, with bump mapping or increased polygon counts taking over. In practice, however, the trend has recently been towards larger and more varied texture images, together with increasingly sophisticated ways to combine multiple textures for different aspects of the same object. (This is more significant in real-time graphics, where the number of textures that may be displayed simultaneously is limited by the available graphics memory.)

How the resulting pixels on the screen are calculated from the texels (texture pixels) is governed by texture filtering. The fastest method is to use exactly one texel for every pixel, but more sophisticated techniques exist.
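
As a rough illustration of the difference (this sketch is not taken from any particular graphics API; the class name, the flat grey-value texture array and the assumption that u and v lie in [0, 1] are all made up for the example), nearest-neighbor sampling picks one texel per pixel, while bilinear filtering blends the four nearest texels:

    /* Illustrative only: a texture stored as a flat array of grey values in 0..255. */
    public final class TextureFilter {

        /* Nearest-neighbor: exactly one texel per pixel (the fastest method). */
        public static int nearest(int[] texels, int width, int height, double u, double v) {
            int x = Math.min(width  - 1, (int) (u * width));
            int y = Math.min(height - 1, (int) (v * height));
            return texels[y * width + x];
        }

        /* Bilinear filtering: blend the four surrounding texels, weighted by distance. */
        public static double bilinear(int[] texels, int width, int height, double u, double v) {
            double fx = u * (width - 1), fy = v * (height - 1);
            int x0 = (int) fx, y0 = (int) fy;
            int x1 = Math.min(x0 + 1, width - 1), y1 = Math.min(y0 + 1, height - 1);
            double tx = fx - x0, ty = fy - y0;
            double top    = texels[y0 * width + x0] * (1 - tx) + texels[y0 * width + x1] * tx;
            double bottom = texels[y1 * width + x0] * (1 - tx) + texels[y1 * width + x1] * tx;
            return top * (1 - ty) + bottom * ty;
        }
    }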

Example Code

The following is a Java snippet that produces texture coordinates for a good-looking sphere, like the one in the picture. Note that it does not implement perspective-correct texture mapping, which is a more involved operation.

The result of this function, the (u, v) vector, is passed to a further procedure that looks up an image (or mipmap chain) and extracts the color associated with the given point.

In practice this function is usually not written by hand; instead, hardware facilities are relied upon for maximum performance.


    public double[] sphereMap(double x, double y, double z, double radius)
    {
        /* (x, y, z) is the vector from the sphere's centre to the intersection point,
           i.e. intersect_point - sphere_centre; its length equals the radius.         */
        double v = Math.acos(z / radius) / Math.PI;            /* latitude, mapped to 0..1 */
        double s = radius * Math.sin(Math.PI * v);             /* zero at the poles        */
        if (s == 0.0) {
            return new double[] { 0.0, v };                    /* any u will do at a pole  */
        }
        double a = Math.acos(Math.max(-1.0, Math.min(1.0, x / s)));  /* clamp against rounding */
        double u = (y > 0.0)
                 ? a / (2 * Math.PI)                           /* y > 0 half maps to 0..0.5   */
                 : (2 * Math.PI - a) / (2 * Math.PI);          /* y <= 0 half maps to 0.5..1  */

        return new double[] { u, v };
    }
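
As a purely hypothetical usage (the centre, hit point and radius below are invented for the example), the caller subtracts the sphere's centre from the intersection point and passes the components together with the radius:

    /* Hypothetical usage: the intersection point and sphere centre would come
       from the renderer; the vector between them is what sphereMap expects.   */
    double[] centre = { 0.0, 0.0, 5.0 };
    double[] hit    = { 0.6, 0.8, 5.0 };   /* a point on a unit sphere around centre */
    double   radius = 1.0;

    double[] uv = sphereMap(hit[0] - centre[0],
                            hit[1] - centre[1],
                            hit[2] - centre[2],
                            radius);
    /* uv[0] and uv[1] are then used to look up the texture image (or mipmap chain). */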

History of Real-Time Texture Mapping

Before about 1990, affine texture mapping was commonplace, since it works well with fixed-point fractions. Any polygon is first split into triangles, its vertices are aggressively rounded and then converted into fractions. Bresenham's line algorithm is used to trace the edges of each triangle, the texture coordinates are interpolated along those edges, and finally the texture is interpolated within each horizontal span.
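
A minimal sketch of such an affine span fill is given below; the 16.16 fixed-point format, the power-of-two texture size (which makes tiling a cheap bit mask) and every name in it are illustrative assumptions, not code from any particular engine.

    /* Illustrative affine span: u and v advance by a constant 16.16 fixed-point step
       per pixel, so the inner loop needs only additions, shifts and a table look-up.
       texWidth and texHeight are assumed to be powers of two so that wrapping is a mask. */
    static void affineSpan(int[] frame, int frameWidth, int y, int xStart, int xEnd,
                           int u, int v,            /* 16.16 texture coordinates at xStart */
                           int du, int dv,          /* constant per-pixel steps            */
                           int[] texture, int texWidth, int texHeight) {
        for (int x = xStart; x < xEnd; x++) {
            int tx = (u >> 16) & (texWidth  - 1);   /* integer texel coordinates, tiled */
            int ty = (v >> 16) & (texHeight - 1);
            frame[y * frameWidth + x] = texture[ty * texWidth + tx];
            u += du;                                /* no divides inside the loop */
            v += dv;
        }
    }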

Between 1990 and 2000, various hybrid methods existed, mixing floating-point, fractional and fixed-point numbers, and mixing affine with perspective-correct texture mapping. The mix basically uses perspective-correct texture mapping on a large scale, but divides every polygon in 2D image space into either square blocks (Terminal Velocity: 8x8), small spans (Descent: 4x1, Quake: 16x1) or lines of constant z (Duke Nukem 3D, System Shock and Flight Unlimited), within which cheaper affine interpolation suffices. The constant-z approach is known from Pseudo-3D. Pseudo-3D does not allow rotation of the camera at all, while Doom and Wacky Wheels restrict it to a single axis. Before Descent and Duke Nukem 3D successfully used portal rendering and arbitrarily oriented walls, 2D ray tracing of a grid had been added to the mix under the name ray-casting; this was used in Wolfenstein 3D and Ultima Underworld. Demos often used static screen-to-texture look-up tables generated by a ray tracer to render and rotate simple symmetrical objects such as spheres and cylindrical tunnels.
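
The span-subdivision variant can be sketched as below; the 16-pixel span length echoes the Quake figure quoted above, but the routine itself, its names and the clamping of u and v to a [0, 1] texture are illustrative assumptions rather than code from any of those engines.

    /* Illustrative hybrid span: u/z, v/z and 1/z are linear in screen space, so the
       true perspective divide is done only at span endpoints, with affine
       interpolation of u and v for the pixels in between.                          */
    static final int SPAN = 16;

    static void hybridSpan(int[] frame, int frameWidth, int y, int xStart, int xEnd,
                           double uOverZ, double vOverZ, double oneOverZ,
                           double duOverZ, double dvOverZ, double dOneOverZ,
                           int[] texture, int texWidth, int texHeight) {
        double uLeft = uOverZ / oneOverZ, vLeft = vOverZ / oneOverZ;
        int x = xStart;
        while (x < xEnd) {
            int len = Math.min(SPAN, xEnd - x);
            double uOverZEnd   = uOverZ   + duOverZ   * len;   /* values at the span's right end */
            double vOverZEnd   = vOverZ   + dvOverZ   * len;
            double oneOverZEnd = oneOverZ + dOneOverZ * len;
            double uRight = uOverZEnd / oneOverZEnd;            /* the only divides per span */
            double vRight = vOverZEnd / oneOverZEnd;
            for (int i = 0; i < len; i++, x++) {
                double t = (double) i / len;                    /* affine within the span */
                double u = uLeft + (uRight - uLeft) * t;
                double v = vLeft + (vRight - vLeft) * t;
                int tx = Math.min(texWidth  - 1, Math.max(0, (int) (u * texWidth)));
                int ty = Math.min(texHeight - 1, Math.max(0, (int) (v * texHeight)));
                frame[y * frameWidth + x] = texture[ty * texWidth + tx];
            }
            uOverZ = uOverZEnd; vOverZ = vOverZEnd; oneOverZ = oneOverZEnd;
            uLeft = uRight;     vLeft = vRight;
        }
    }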

After 2000, perspective-correct texture mapping became widely used, implemented with floating-point numbers. Perspective-correct texture mapping adds complexity, which can easily be parallelized and pipelined at the cost of only some silicon, and it adds one divide per pixel. In this respect, a graphics card has two advantages over a CPU. First, it can trade latency for throughput. Second, it often already has a similar z and 1/z available from an earlier calculation. Floating-point numbers have the advantage that some of the bits belong to the exponent, which only needs to be added when multiplying. The improvement from using longer (higher-precision) floating-point numbers is immense, as rounding error causes several problems during rendering. For instance (this is not a collection of examples, but a complete list for the basic texture mapper): in the transformation stage, polygons do not stay convex and have to be split into trapezoids afterwards; in the edge interpolation, the polygons do not stay flat, and back-face culling has to be repeated for every span, otherwise the renderer may crash (with longer variables, this bug may take hours or even years to show up); and because of rounding in the span interpolation, the texture coordinates may overflow, so a guard band and/or tiling is used.
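
For comparison with the span-subdivided sketch above, a fully per-pixel version might look like the following; again, every name is illustrative, and u/z, v/z and 1/z are assumed to have been set up per edge so that they vary linearly across the span.

    /* Illustrative per-pixel perspective correction: u/z, v/z and 1/z are stepped by
       addition, and the single divide per pixel recovers the true texture coordinates. */
    static void perspectiveSpan(int[] frame, int frameWidth, int y, int xStart, int xEnd,
                                double uOverZ, double vOverZ, double oneOverZ,
                                double duOverZ, double dvOverZ, double dOneOverZ,
                                int[] texture, int texWidth, int texHeight) {
        for (int x = xStart; x < xEnd; x++) {
            double z = 1.0 / oneOverZ;                 /* the one divide per pixel */
            double u = uOverZ * z;
            double v = vOverZ * z;
            int tx = Math.min(texWidth  - 1, Math.max(0, (int) (u * texWidth)));
            int ty = Math.min(texHeight - 1, Math.max(0, (int) (v * texHeight)));
            frame[y * frameWidth + x] = texture[ty * texWidth + tx];
            uOverZ   += duOverZ;
            vOverZ   += dvOverZ;
            oneOverZ += dOneOverZ;
        }
    }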

Ray tracers are able to run in real time or at high resolution. They use barycentric coordinates, which can produce holes at the vertices, but due to the high precision used in ray tracing, it is unlikely that any ray will pass through these holes.
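
For illustration, assuming a ray-triangle intersection test (such as Möller-Trumbore) has already produced the barycentric coordinates b1 and b2 of the hit point, the texture coordinates follow as a weighted sum of the three vertex UVs; the function below is a made-up helper, not part of any particular ray tracer.

    /* Illustrative barycentric interpolation of per-vertex texture coordinates.
       (b0, b1, b2) sum to 1; uv0..uv2 are the UVs at the triangle's vertices.  */
    static double[] interpolateUV(double b1, double b2,
                                  double[] uv0, double[] uv1, double[] uv2) {
        double b0 = 1.0 - b1 - b2;
        return new double[] {
            b0 * uv0[0] + b1 * uv1[0] + b2 * uv2[0],
            b0 * uv0[1] + b1 * uv1[1] + b2 * uv2[1]
        };
    }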
