Computer Graphics

Computer graphics (CG) is the field of visual computing, in which computers are used both to generate visual images synthetically and to integrate or alter visual and spatial information sampled from the real world.

The first major advance in computer graphics was the development of Sketchpad in 1962 by Ivan Sutherland.

This field can be divided into several areas: real-time 3D rendering (often used in video games), computer animation, video capture and video creation, special effects editing (often used for movies and television), image editing, and modeling (often used for engineering and medical purposes). Development in computer graphics was first fueled by academic interest and government sponsorship. However, as real-world applications of computer graphics in broadcast television and movies proved a viable alternative to more traditional special effects and animation techniques, commercial parties have increasingly funded advances in the field.

It is often thought that the first feature film to use computer graphics was 2001: A Space Odyssey (1968), which attempted to show how computers would be much more graphical in the future. However, all the "computer graphic" effects in that film were hand-drawn animation, and the special effects sequences were produced entirely with conventional optical and model effects.

Perhaps the first use of computer graphics in a feature film, fittingly to depict computer graphics themselves, was in Futureworld (1976), which included an animation of a human face and a hand produced by Ed Catmull and Fred Parke at the University of Utah.

2D

The first advance in computer graphics was the use of CRTs as display devices. There are two approaches to 2D computer graphics: vector graphics and raster graphics.

Vector graphics store precise geometric data, topology, and style, such as the coordinate positions of points, the connections between points (to form lines or paths), and the color, thickness, and possible fill of the shapes. Most vector graphics systems can also use primitives of standard shapes such as circles and rectangles. In most cases, a vector image has to be converted to a raster image to be viewed.

Raster graphics represent an image as a uniform 2-dimensional grid of pixels. Each pixel has a specific value such as brightness, color, transparency, or a combination of such values. A raster image has a finite resolution of a specific number of rows and columns; standard computer displays show raster images at resolutions such as 1280 (columns) × 1024 (rows) pixels. Today, raster and vector graphics are often combined in compound file formats (PDF, SWF).
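
The conversion from vector to raster form can be illustrated with line rasterization. The following is a minimal sketch in Python, written for this article rather than taken from any particular library, that rasterizes a vector line segment onto a small pixel grid using Bresenham's line algorithm (listed under Miscellaneous below); the grid size and endpoints are arbitrary example values.

```python
def rasterize_line(x0, y0, x1, y1):
    """Convert a vector line segment (two endpoints) into the list of
    pixel coordinates that approximate it (Bresenham's line algorithm)."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pixels

# A tiny 10x10 raster: '#' marks the pixels covered by the vector segment (1,1)-(8,5).
grid = [["." for _ in range(10)] for _ in range(10)]
for x, y in rasterize_line(1, 1, 8, 5):
    grid[y][x] = "#"
for row in grid:
    print("".join(row))
```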

3D

With the birth of workstation computers (like LISP machines, paintbox computers and Silicon Graphics workstations) came 3D computer graphics, based on vector graphics. Instead of the computer storing information about points, lines, and curves on a 2-dimensional plane, the computer stores the location of points, lines, and, typically, faces (to construct a polygon) in 3-dimensional space.

3-dimensional polygons are the lifeblood of virtually all 3D computer graphics. As a result, most 3D graphics engines are based around storing points (single 3-dimensional coordinates), lines that connect those points together, faces defined by the lines, and then a sequence of faces that together form a 3D polygonal model.
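
As a concrete illustration of this organization, here is a minimal sketch of an indexed mesh in Python (an illustrative layout, not any particular engine's file format): vertex positions are stored once, each face refers to its vertices by index, and lines (edges) can be derived from the faces.

```python
# A minimal indexed-mesh sketch: vertices are 3D points and faces are tuples of
# vertex indices. The unit tetrahedron below is just an arbitrary example shape.
vertices = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (0.0, 1.0, 0.0),  # vertex 2
    (0.0, 0.0, 1.0),  # vertex 3
]

faces = [
    (0, 2, 1),
    (0, 1, 3),
    (1, 2, 3),
    (2, 0, 3),
]

def edges(faces):
    """Collect the unique edges (pairs of vertex indices) implied by the faces."""
    found = set()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            found.add((min(a, b), max(a, b)))
    return sorted(found)

print(f"{len(vertices)} vertices, {len(edges(faces))} edges, {len(faces)} faces")
```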

Modern computer graphics software goes far beyond the simple storage of polygons in computer memory. Today's graphics are the product not only of massive collections of polygons arranged into recognizable shapes, but also of techniques in shading, texturing, and rasterization.

Shading

The process of shading (in the context of 3D computer graphics) involves the computer simulating (or, more accurately, calculating) how the faces of a polygon will look when illuminated by a virtual light source. The exact calculation varies depending not only on what data is available about the face being shaded, but also on the shading technique used. Common techniques include:

  • Flat shading: A technique that shades each polygon of an object based on the polygon's "normal" and the position and intensity of a light source.
  • Gouraud shading: Invented by Henri Gouraud in 1971, a fast and resource-conscious technique used to simulate smoothly shaded surfaces by interpolating vertex colors across a polygon's surface.
  • Texture mapping: A technique for simulating surface detail by mapping images (textures) onto polygons.
  • Phong shading: Invented by Bui Tuong Phong, a smooth shading technique that approximates curved-surface lighting by interpolating the vertex normals of a polygon across the surface; the lighting model includes glossy reflection with a controllable level of gloss.
  • Bump mapping: Invented by Jim Blinn, a normal-perturbation technique used to simulate bumpy or wrinkled surfaces.
  • Ray tracing: A method based on the physical principles of geometric optics that can simulate multiple reflections and transparency.
  • Radiosity: A technique for global illumination that uses radiative transfer theory to simulate indirect (reflected) illumination in scenes with diffuse surfaces.
  • Blobs: A technique for representing surfaces without specifying a hard boundary representation, usually implemented as a procedural surface like a Van der Waals equipotential (in chemistry).

Image-Based Rendering

Computer graphics is largely concerned with obtaining 2D images from 3D models. To get highly accurate and photo-realistic images, the input 3D models must be very accurate in terms of geometry and color. Simulating a real-world scene this way is difficult, because obtaining accurate 3D geometry of the world is difficult. Instead of building 3D models, image-based rendering (IBR) starts from images taken from particular viewpoints and tries to synthesize new images from other viewpoints. Though the term "image-based rendering" is relatively recent, the underlying ideas have been in practice since the inception of research in computer vision. In 1996, two image-based rendering techniques were presented at SIGGRAPH: light field rendering and Lumigraph rendering. These techniques received special attention in the research community, and many representations for IBR have since been proposed. One popular method is view-dependent texture mapping, an IBR technique from the University of Southern California. Andrew Zisserman et al. of Oxford University have applied machine-learning concepts to IBR.
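
Returning to the simplest of the shading techniques listed above, here is a minimal sketch of flat (Lambertian) shading in Python; the vector helpers, light direction, and intensity parameter are illustrative assumptions rather than any particular engine's API. It computes a single brightness value for a whole triangular face from the face normal and the light direction.

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def normalize(v):
    length = math.sqrt(dot(v, v))
    return (v[0] / length, v[1] / length, v[2] / length)

def flat_shade(triangle, light_dir, intensity=1.0):
    """One brightness value for the whole face: the cosine of the angle between
    the face normal and the light direction, clamped at zero (Lambert's law)."""
    v0, v1, v2 = triangle
    normal = normalize(cross(sub(v1, v0), sub(v2, v0)))
    return intensity * max(0.0, dot(normal, normalize(light_dir)))

# Example: a triangle facing +z, lit head-on and then from 45 degrees off-axis.
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(flat_shade(tri, light_dir=(0.0, 0.0, 1.0)))  # 1.0
print(flat_shade(tri, light_dir=(1.0, 0.0, 1.0)))  # ~0.707
```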

Texturing

Polygon surfaces (sequences of faces) can carry more than a single color: in more advanced software, a face or series of faces can serve as a virtual canvas for a picture or other rasterized image. Such an image, placed onto a face or series of faces, is called a texture.

Textures add a new degree of control over how faces and polygons will ultimately look after being shaded, depending on the shading method and on how the image is interpreted during shading.
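
As a sketch of how a texture is applied, the following illustrative Python maps texture coordinates (u, v) on a face to a texel in a small raster texture using nearest-neighbor lookup; the checkerboard texture and the sampling policy are assumptions made for this example, not a description of any specific renderer.

```python
# An 8x8 checkerboard standing in for a raster texture (0 = dark, 1 = light).
TEXTURE = [[(x + y) % 2 for x in range(8)] for y in range(8)]

def sample_texture(u, v, texture=TEXTURE):
    """Nearest-neighbor lookup: map texture coordinates (u, v) in [0, 1]
    to a single texel in the raster image."""
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# During shading, each point on a textured face carries interpolated (u, v)
# coordinates; its base color is fetched from the texture instead of being flat.
print(sample_texture(0.10, 0.10))  # 0: a dark square
print(sample_texture(0.20, 0.10))  # 1: a light square
```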

See Also

Several important topics in 2D and 3D graphics include:

  • Color theory
  • Raster graphics
  • Vector graphics
  • Geometric surface representations
    • including polygons, Bézier surfaces, splines, subdivision surfaces, implicit surfaces, point-set surfaces, and NURBS
  • Material properties, including BRDFs
  • Image compression
  • Animation
  • Rendering
  • Compositing
  • Projection
  • 3D projection
  • Hidden surface determination
  • Vertex shaders and pixel shaders
  • Full screen effects
  • Non-photorealistic rendering
  • Real-time computer graphics

Toolkits & APIs

For an application relying heavily on computer graphics, the following could be useful:

  • Adobe Systems
  • Autodesk
  • Blender3d
  • BRL-CAD
  • Computer Graphics Metafile (CGM)
  • Crystal Space
  • Power Render
  • DirectX
  • GLUT
  • Graphical Kernel System (GKS)
  • Macromedia Flash
  • Macromedia Shockwave
  • Open Inventor
  • OpenGL
  • Pixia
  • PostScript
  • Scalable Vector Graphics (SVG)
  • svgalib
  • X Window System

Miscellaneous

  • Bresenham's line algorithm
  • Computer-generated imagery
  • Digital image editing
  • Timeline of CGI in films
  • Computer vision
  • Digital image processing
  • Digital geometry
  • Graphics processing unit
  • POV-ray
  • Graphical output devices
  • List of computer graphics and descriptive geometry topics
  • Utah Teapot
  • Stanford Bunny
  • SIGGRAPH
  • ASCII art
