3D Modeling WikiBook Mike Van Voorhis 2/3/2014
PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Mon, 03 Feb 2014 19:37:10 UTC
Contents

Articles
3D modeling  1
3D computer graphics  5
3D computer graphics software  8
3D computer vision  10
3D data acquisition and object reconstruction  12
3D reconstruction  15
Binary space partitioning  16
Bounding interval hierarchy  21
Bounding volume  24
Bounding volume hierarchy  27
Box modeling  29
Catmull–Clark subdivision surface  30
Cloth modeling  32
COLLADA  34
Computed Corpuscle Sectioning  39
Computer representation of surfaces  40
Constructive solid geometry  44
Conversion between quaternions and Euler angles  47
Crowd simulation  50
Cutaway drawing  52
Demoparty  55
Depth map  59
Digital puppetry  61
Dilution of precision (computer graphics)  64
Doo–Sabin subdivision surface  64
Draw distance  65
Edge loop  67
Euler operator  68
Explicit modeling  69
False radiosity  70
Fiducial marker  71
Fluid simulation  73
Forward kinematic animation  75
Forward kinematics  75
Freeform surface modelling  77
Geometry instancing  80
Geometry pipelines  81
Geometry processing  82
Gimbal lock  83
Glide API  88
GloriaFX  90
Hemicube (computer graphics)  95
Image plane  95
Image-based meshing  96
Inflatable icons  98
Interactive Digital Centre Asia  98
Interactive skeleton-driven simulation  99
Inverse kinematics  101
Isosurface  105
Joint constraints  106
Kinematic chain  106
Lambert's cosine law  109
Light stage  112
Light transport theory  113
Loop subdivision surface  114
Low poly  115
Marching cubes  117
Mesh parameterization  119
Metaballs  120
Micropolygon  122
Morph target animation  122
Motion capture  124
Newell's algorithm  133
Non-uniform rational B-spline  134
Nonobtuse mesh  143
Normal (geometry)  143
Painter's algorithm  147
Parallax barrier  149
Parallel rendering  154
Particle system  156
Point cloud  159
Polygon (computer graphics)  161
Polygon mesh  161
Polygon soup  170
Polygonal modeling  170
Pre-rendering  174
Precomputed Radiance Transfer  176
Procedural modeling  177
Procedural texture  178
Progressive meshes  182
3D projection  183
Projective texture mapping  186
Pyramid of vision  188
Quantitative Invisibility  188
Quaternions and spatial rotation  189
Andreas Raab  200
RealityEngine  202
Reflection (computer graphics)  204
Relief mapping (computer graphics)  206
Retained mode  207
Scene description language  207
Schlick's approximation  209
Sculpted prim  210
Silhouette edge  212
Skeletal animation  213
Sketch-based modeling  215
Smoothing group  216
Soft body dynamics  217
Solid modeling  221
Sparse voxel octree  227
Specularity  227
Static mesh  228
Stereoscopic acuity  229
Subdivision surface  231
Supinfocom  234
Surface caching  235
Surfel  235
Suzanne Award  236
Time-varying mesh  243
Timewarps  243
Triangle mesh  244
Vector slime  245
Vertex (geometry)  247
Vertex Buffer Object  249
Vertex (computer graphics)  254
Vertex pipeline  255
Viewing frustum  256
Viewport  257
Virtual actor  258
Virtual environment software  260
Virtual replay  262
Volume mesh  262
Voxel  262
Web3D  266

References
Article Sources and Contributors  268
Image Sources, Licenses and Contributors  274

Article Licenses
License  278
3D modeling
In 3D computer graphics, 3D modeling is the process of developing a mathematical representation of any three-dimensional surface of an object (either inanimate or living) via specialized software. The product is called a 3D model. It can be displayed as a two-dimensional image through a process called 3D rendering, used in a computer simulation of physical phenomena, or physically created using 3D printing devices. Models may be created automatically or manually; the manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. New approaches that depart from the traditional techniques have also started to emerge, such as curve-controlled modeling,[2] which emphasizes modeling the movement of a 3D object rather than its static shape.
Models
3D models represent a 3D object using a collection of points in 3D space, connected by various geometric entities such as triangles, lines and curved surfaces. Being a collection of data (points and other information), 3D models can be created by hand, algorithmically (procedural modeling), or by scanning. 3D models are widely used in 3D graphics; in fact, their use predates the widespread use of 3D graphics on personal computers, and many computer games used pre-rendered images of 3D models as sprites before computers could render them in real time.
3D model of a spectrograph
Today, 3D models are used in a wide variety of fields. The medical industry uses detailed models of organs. The movie industry uses them as characters and objects for animated and live-action motion pictures. The video game industry uses them as assets for computer and video games. The science sector uses them as highly detailed models of chemical compounds. The architecture industry uses them to demonstrate proposed buildings and landscapes through software architectural models. The engineering community uses them for designs of new devices, vehicles and structures, as well as a host of other uses. In recent decades the earth science community has started to construct 3D geological models as standard practice. 3D models can also be the basis for physical devices built with 3D printers or CNC machines.
Representation
Almost all 3D models can be divided into two categories.
• Solid - These models define the volume of the object they represent (like a rock). They are more realistic but more difficult to build. Solid models are mostly used for nonvisual simulations such as medical and engineering simulations, for CAD, and for specialized visual applications such as ray tracing and constructive solid geometry.
• Shell/boundary - These models represent the surface, i.e. the boundary of the object, not its volume (like an infinitesimally thin eggshell). They are easier to work with than solid models. Almost all visual models used in games and film are shell models.
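To make the shell/boundary idea concrete, such a model is at bottom a set of shared vertices plus faces that index into them. The data and helper below are purely illustrative, not taken from any particular package:

```python
import math

# A tetrahedron as a shell/boundary model: shared vertices plus
# triangle faces that index into the vertex list.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

def triangle_area(a, b, c):
    """Area of a 3D triangle: half the magnitude of the edge cross product."""
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return 0.5 * math.sqrt(sum(x * x for x in cross))

# The boundary alone suffices for surface quantities such as area.
surface_area = sum(triangle_area(*(vertices[i] for i in f)) for f in faces)
```

The same vertex-plus-face structure underlies the polygon meshes discussed below; a solid model would additionally have to define what lies inside the boundary.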
A modern render of the iconic Utah teapot model developed by Martin Newell (1975). The Utah teapot is one of the most common models used in 3D graphics education.
Because the appearance of an object depends largely on its exterior, boundary representations are common in computer graphics. Two-dimensional surfaces are a good analogy for the objects used in graphics, though quite often these objects are non-manifold. Since surfaces are not finite, a discrete digital approximation is required: polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have been gaining some popularity in recent years. Level sets are a useful representation for deforming surfaces which undergo many topological changes, such as fluids. The process of transforming representations of objects, such as the center point coordinate of a sphere and a point on its circumference, into a polygon representation of a sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations ("primitives") such as spheres and cones into so-called meshes, which are nets of interconnected triangles. Meshes of triangles (instead of e.g. squares) are popular as they have proven to be easy to render using scanline rendering.[3] Polygon representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the transition from abstract representation to rendered scene.
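The sphere-to-mesh tessellation can be sketched as a simple UV-sphere generator. The routine below is illustrative only (the triangles touching the poles are degenerate slivers, which a production tessellator would special-case):

```python
import math

def tessellate_sphere(center, radius, stacks=8, slices=16):
    """Tessellate a sphere primitive (center + radius) into a triangle mesh."""
    cx, cy, cz = center
    verts = []
    for i in range(stacks + 1):
        phi = math.pi * i / stacks              # latitude, pole to pole
        for j in range(slices):
            theta = 2.0 * math.pi * j / slices  # longitude
            verts.append((cx + radius * math.sin(phi) * math.cos(theta),
                          cy + radius * math.sin(phi) * math.sin(theta),
                          cz + radius * math.cos(phi)))
    tris = []
    for i in range(stacks):
        for j in range(slices):
            a = i * slices + j                  # four corners of one patch
            b = i * slices + (j + 1) % slices
            c = (i + 1) * slices + j
            d = (i + 1) * slices + (j + 1) % slices
            tris.append((a, b, c))              # split the quad into
            tris.append((b, d, c))              # two triangles
    return verts, tris

verts, tris = tessellate_sphere((0.0, 0.0, 0.0), 1.0)  # unit sphere at origin
```

Every generated vertex lies exactly on the sphere; the mesh only approximates the surface between vertices, which is why more stacks and slices give a smoother result.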
Modeling process
There are three popular ways to represent a model:
1. Polygonal modeling - Points in 3D space, called vertices, are connected by line segments to form a polygonal mesh. The vast majority of 3D models today are built as textured polygonal models, because they are flexible and because computers can render them quickly. However, polygons are planar and can only approximate curved surfaces using many polygons.
2. Curve modeling - Surfaces are defined by curves, which are influenced by weighted control points. The curve follows (but does not necessarily interpolate) the points; increasing the weight for a point will pull the curve closer to that point. Curve types include nonuniform rational B-splines (NURBS), splines, patches and geometric primitives.
3. Digital sculpting - Still a fairly new method of modeling, 3D sculpting has become very popular in the few years it has been around.[citation needed] There are currently three types of digital sculpting: displacement (the most widely used among applications at this moment), volumetric, and dynamic tessellation. Displacement uses a dense model (often generated by subdivision surfaces of a polygon control mesh) and stores new locations for the vertex positions through use of a 32-bit image map that stores the adjusted locations. Volumetric sculpting, based loosely on voxels, has similar capabilities to displacement but does not suffer from polygon stretching when there are not enough polygons in a region to achieve a deformation. Dynamic tessellation is similar to the voxel approach but divides the surface using triangulation to maintain a smooth surface and allow finer details. These methods allow for very artistic exploration, as the model will have a new topology created over it once its form and possibly details have been sculpted.
3D polygonal modeling of a human face.
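The pull of a weighted control point can be sketched with a rational Bézier curve, the building block that NURBS generalize. The control points and weights below are purely illustrative:

```python
import math

def rational_bezier(ctrl, weights, t):
    """Evaluate a 2D rational Bézier curve at parameter t in [0, 1]."""
    n = len(ctrl) - 1
    num_x = num_y = den = 0.0
    for i, ((x, y), w) in enumerate(zip(ctrl, weights)):
        # Bernstein basis function, scaled by the control point's weight.
        b = math.comb(n, i) * (1.0 - t) ** (n - i) * t ** i
        num_x += b * w * x
        num_y += b * w * y
        den += b * w
    return (num_x / den, num_y / den)

ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
p_equal = rational_bezier(ctrl, [1.0, 1.0, 1.0], 0.5)  # plain Bézier midpoint
p_heavy = rational_bezier(ctrl, [1.0, 5.0, 1.0], 0.5)  # heavier middle point
# With the larger weight, the curve sits closer to the middle control point.
```

Note that neither curve passes through the middle control point: the curve follows the points without interpolating them, exactly as described above.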
The new mesh will usually have the original high-resolution mesh information transferred into displacement data or normal map data if it is intended for a game engine. The modeling stage consists of shaping individual objects that are later used in the scene. There are a number of modeling techniques, including:
• constructive solid geometry
• implicit surfaces
• subdivision surfaces
Modeling can be performed by means of a dedicated program (e.g., Cinema 4D, form•Z, Maya, 3DS Max, Blender, Lightwave, Modo, solidThinking), an application component (Shaper, Lofter in 3DS Max), or some scene description language (as in POV-Ray). In some cases there is no strict distinction between these phases; in such cases modeling is just part of the scene creation process (this is the case, for example, with Caligari trueSpace and Realsoft 3D). Complex materials such as blowing sand, clouds, and liquid sprays are modeled with particle systems: a mass of 3D coordinates which have either points, polygons, texture splats, or sprites assigned to them.
Compared to 2D methods
3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Advantages of wireframe 3D modeling over exclusively 2D methods include:
• Flexibility: the ability to change angles or animate images with quicker rendering of the changes;
• Ease of rendering: automatic calculation and rendering of photorealistic effects rather than mentally visualizing or estimating them;
• Accurate photorealism: less chance of human error in misplacing, overdoing, or forgetting to include a visual effect.
A fully textured and lit rendering of a 3D model.
Disadvantages compared to 2D photorealistic rendering may include a software learning curve and difficulty achieving certain photorealistic effects. Some photorealistic effects may be achieved with special rendering filters included in the 3D modeling software. For the best of both worlds, some artists use a combination of 3D modeling followed by editing the 2D computer-rendered images from the 3D model.
3D model market
A large market for 3D models (as well as 3D-related content such as textures and scripts) still exists, either for individual models or large collections. Online marketplaces for 3D content, such as TurboSquid, The3DStudio, CreativeCrash, CGTrader, NoneCG, CGPeopleNetwork and DAZ 3D, allow individual artists to sell content that they have created. Often, the artists' goal is to get additional value out of assets they have previously created for projects. By doing so, artists can earn more money from their old content, and companies can save money by buying pre-made models instead of paying an employee to create one from scratch. These marketplaces typically split the sale between themselves and the artist who created the asset; artists get 40% to 95% of the sales, according to the marketplace. In most cases, the artist retains ownership of the 3D model; the customer only buys the right to use and present the model. Some artists sell their products directly in their own stores, offering them at a lower price by not using intermediaries.
3D printing
3D printing is a form of additive manufacturing technology in which a three-dimensional object is created by laying down successive layers of material.
Human models
The first widely available commercial application of human virtual models appeared in 1998 on the Lands' End web site. The human virtual models were created by the company My Virtual Model Inc. and enabled users to create a model of themselves and try on 3D clothing. There are several modern programs that allow for the creation of virtual human models (Poser being one example).
Uses
3D modelling is used in various industries such as film, animation and gaming, interior design and architecture. It is also used in the medical industry for interactive representations of anatomy. A wide range of 3D software is also used to construct digital representations of mechanical models or parts before they are actually manufactured. CAD/CAM-related software is used in such fields, and with it one can not only construct the parts but also assemble them and observe their functionality. 3D modelling is also used in the field of industrial design, wherein products are modelled in 3D before being presented to clients. In the media and event industries, 3D modelling is used in stage and set design.
References
[1] http://en.wikipedia.org/w/index.php?title=Template:3D_computer_graphics&action=edit
[2] Ding, H., Hong, Y. (2003), "NURBS curve controlled modeling for facial animation", Computers and Graphics, 27(3):373-385
[3] Jon Radoff, "Anatomy of an MMORPG" (http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/), August 22, 2008
3D computer graphics
3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real-time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and 3D may use 2D rendering techniques. 3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences. A 3D model is the mathematical representation of any three-dimensional object. A model is not technically a graphic until it is displayed. Due to 3D printing, 3D models are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.
History
William Fetter was credited with coining the term computer graphics in 1961[1] to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had originally appeared in the 1971 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke.
Overview
3D computer graphics creation falls into three basic phases:
• 3D modeling – the process of forming a computer model of an object's shape
• Layout and animation – the motion and placement of objects within a scene
• 3D rendering – the computer calculations that, based on light placement, surface types, and other qualities, generate the image
Modeling
The modeling stage describes the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation. Basically, a 3D model is formed from points called vertices (or vertexes) that define the shape and form polygons. A polygon is an area formed from at least three vertices (a triangle). A four-point polygon is a quad, and a polygon of more than four points is an n-gon.[citation needed] The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons.
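Because most renderers ultimately consume triangles, quads and n-gons are commonly reduced to triangles. For convex polygons the simplest scheme is a triangle fan from the first vertex; the helper below is an illustrative sketch:

```python
def triangulate_fan(polygon):
    """Split a convex n-gon (a list of vertex indices) into n - 2 triangles
    by fanning out from the first vertex."""
    return [(polygon[0], polygon[i], polygon[i + 1])
            for i in range(1, len(polygon) - 1)]

quad_tris = triangulate_fan([0, 1, 2, 3])  # a quad becomes two triangles
```

The fan is only valid for convex polygons; concave n-gons need a more careful method such as ear clipping.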
Layout and animation
Before rendering into an image, objects must be placed (laid out) in a scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object, i.e., how it moves and deforms over time. Popular methods include keyframing, inverse kinematics, and motion capture. These techniques are often used in combination. As with modeling, physical simulation also specifies motion.
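Keyframing, the first of the methods listed, reduces in its simplest form to interpolating between (time, value) pairs. Production systems use splines and easing curves, but a linear sketch (hypothetical helper, illustrative values) captures the idea:

```python
def sample_keyframes(keys, t):
    """Linearly interpolate a value from (time, value) keyframes,
    clamping outside the keyed range."""
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # fraction of the way between keys
            return v0 + a * (v1 - v0)

# An object's x-position keyed at frames 0, 10 and 20; sample frame 4.
x_at_4 = sample_keyframes([(0, 0.0), (10, 5.0), (20, 5.0)], 4)
```

The animator only poses the keyframes; every in-between frame is computed, which is what makes keyframing cheaper than drawing each frame by hand.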
Rendering
Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions.
Examples of 3D rendering
Left: A 3D rendering with ray tracing and ambient occlusion using Blender and YafaRay. Center: A 3D model of a Dunkerque-class battleship rendered with flat shading. Right: During the 3D rendering step, the number of reflections "light rays" can take, as well as various other attributes, can be tailored to achieve a desired visual effect. Rendered with Cobalt.
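The scattering side of rendering is easiest to illustrate with Lambert's cosine law (covered in its own article later in this collection): diffuse reflection falls off with the cosine of the angle between the surface normal and the light direction. A minimal sketch, with illustrative values:

```python
import math

def lambert_diffuse(normal, light_dir, albedo, light_intensity):
    """Diffuse shading by Lambert's cosine law: reflected radiance scales
    with cos(theta) between surface normal and light direction, clamped at 0."""
    def normalize(v):
        m = math.sqrt(sum(x * x for x in v))
        return tuple(x / m for x in v)
    n = normalize(normal)
    l = normalize(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * light_intensity * cos_theta

flat_on = lambert_diffuse((0, 0, 1), (0, 0, 1), 0.8, 1.0)  # light overhead
oblique = lambert_diffuse((0, 0, 1), (1, 0, 1), 0.8, 1.0)  # light at 45 degrees
below = lambert_diffuse((0, 0, 1), (0, 0, -1), 0.8, 1.0)   # light behind: dark
```

Evaluating such a scattering model at every visible surface point, fed by a transport step that decides how much light arrives there, is the core loop of realistic rendering.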
Communities
There are a multitude of websites designed to help educate and support 3D graphic artists. Some are managed by software developers and content providers, but there are standalone sites as well. These communities allow members to seek advice, post tutorials, provide product reviews or post examples of their own work.
Distinction from photorealistic 2D graphics
Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photorealistic effects without the use of filters. See also still life.[citation needed]
References
[1] "Computer Graphics", comphist.org (http://www.comphist.org/computing_history/new_page_6.htm)
External links • A Critical History of Computer Graphics and Animation (http://accad.osu.edu/~waynec/history/lessons.html) • How Stuff Works - 3D Graphics (http://computer.howstuffworks.com/3dgraphics.htm) • History of Computer Graphics series of articles (http://hem.passagen.se/des/hocg/hocg_1960.htm)
3D computer graphics software
3D computer graphics software produces computer-generated imagery (CGI) through 3D modeling and 3D rendering.
Classification
Modeling
3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual programs of this class are called modeling applications or modelers. 3D modelers allow users to create and alter models via their 3D mesh. Users can add, subtract, stretch and otherwise change the mesh to their desire. Models can be viewed from a variety of angles, usually simultaneously, and can be rotated and zoomed. 3D modelers can export their models to files, which can then be imported into other applications as long as the metadata are compatible. Many modelers allow importers and exporters to be plugged in, so they can read and write data in the native formats of other applications. Most 3D modelers contain a number of related features, such as ray tracers and other rendering alternatives and texture mapping facilities. Some also contain features that support or allow animation of models, and some may be able to generate full-motion video of a series of rendered scenes (i.e. animation).
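As a concrete example of model export, the plain-text Wavefront OBJ interchange format uses "v x y z" lines for vertices and "f i j k" lines for faces, with 1-based indices. A minimal writer (the function name is illustrative) might look like:

```python
def write_obj(path, vertices, faces):
    """Write a mesh as a minimal Wavefront OBJ file.
    OBJ face indices are 1-based, so each index is shifted by one."""
    with open(path, "w") as fh:
        for x, y, z in vertices:
            fh.write(f"v {x} {y} {z}\n")
        for face in faces:
            fh.write("f " + " ".join(str(i + 1) for i in face) + "\n")

# A single triangle exported to a file another application can import.
write_obj("tri.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

Real exporters also emit normals, texture coordinates and material references, but this vertex/face core is what makes OBJ such a common lowest-common-denominator format between modelers.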
Rendering
Although 3D modeling and CAD software may perform 3D rendering as well (e.g. Autodesk 3ds Max or Blender), dedicated 3D rendering software also exists.
Computer-aided design
Computer-aided design software may employ the same fundamental 3D modeling techniques that 3D modeling software uses, but their goals differ. CAD software is used in computer-aided engineering, computer-aided manufacturing, finite element analysis, product lifecycle management, 3D printing and computer-aided architectural design.
Complementary tools
After producing video, studios then edit or composite the video using programs such as Adobe Premiere Pro or Final Cut Pro at the low end, or Autodesk Combustion, Digital Fusion, or Shake at the high end. Match moving software is commonly used to match live video with computer-generated video, keeping the two in sync as the camera moves. Use of real-time computer graphics engines to create a cinematic production is called machinima.
External links
• 3D Tools table (http://wiki.cgsociety.org/index.php/Comparison_of_3d_tools) from the CGSociety wiki
• Comparison of 10 most popular modeling software (http://tideart.com/?id=4e26f595) from TideArt
3D computer vision
3D computer vision, or human-like computer vision, is the ability of devices powered by a dual-camera processor to acquire a real-time picture of the world in three dimensions. Such instant 3D capture was not achievable with the traditional technology of stereo cameras because of the huge resources required to process and compare/combine images received from two misaligned image sensors. In 2006 Science Bureau Inc. proposed an approach for seamlessly transitioning from 2D to 3D technology in personal computers and other mobile devices, to enable them to see the world as humans do. The idea behind the invention was quite simple: to avoid the enormous processing resources needed to compensate for the misalignment of two image sensors, the sensors are precisely aligned so that the rows of their sensing elements are parallel to the line connecting their optical centers. The rows can then be compared on the fly. There is no further need for powerful image processors, which makes the technology inexpensive and suitable for low-budget mass implementation. The idea was introduced to all major companies in the home electronics and computer games market back in 2007 but was not acquired by any of them. In 2010, US Patent 7,729,530 [1] was issued to protect the intellectual rights. The same year, all kinds of 3D devices began flooding the North American market.
3D Computer Vision System
Despite this recent breakthrough in 3D technologies, there is still a lack of real-time 3D vision computer systems on the market. A few high-profile products come close to achieving instant 3D image reconstruction, but they are still far from providing real-time image and gesture recognition for computer games and device control:
1. Microsoft's Kinect for Xbox 360.
The product uses the suggested advanced technology in part, in that it has two image sensors with aligned rows of sensing elements. However, Microsoft uses a special light source that projects a large pattern onto surrounding objects, to be captured and recognized by the imaging part. Because of the specifics of the pattern, the image resolution is very low and the device is only capable of recognizing major body movements. The device uses low-resolution image sensors and is still not fast enough to process the received images.
2. Fuji's stereo camera: precisely aligned sensors with high-grade optics. It could provide a great real-time 3D image if connected to and controlled by a computer.
3. Panasonic's 3D camcorder: a great idea, with mechanically alignable sensors for capturing 3D video.
4. HTC has unveiled the EVO 3D, a follow-up to Sprint Nextel's breakout smartphone. It has a 4.3-inch (110 mm) touchscreen that can display eye-popping 3D without needing glasses. Users will also be able to capture photos and videos in 3D using a pair of cameras on the back.
5. LG Electronics has been working for a year and a half on a 3D smartphone of its own. The Optimus 3D, as it has been called, will launch on AT&T Mobility's network under the name Thrill 4G. LG developers spent a great deal of time fine-tuning the pair of 5-megapixel cameras to accurately capture 3D media; calibrating the cameras to produce good-looking stills and video is more difficult than pulling off a glasses-free display.
6. Nintendo's 3DS also has a pair of cameras for capturing scenes in 3D, and it works quite well. Being the first out of the gate to offer a mainstream glasses-free 3D gadget, Nintendo expected to find competitors, and it soon did when LG announced its phone.
7. Both LG and HTC are planning to debut tablet computers that, like their phones, should be able to capture 3D with a pair of cameras.
All of the above companies are building their products on the technique of aligning two image sensors as precisely as possible. If the technology keeps moving in this direction, we may soon see computers recognizing and communicating with their users; robots doing everything from surgery to driving cars; 3D virtual games with instant avatar creation of the players; and 3D technologies everywhere from smartphones to TVs.
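With rectified (row-aligned) sensors as described above, once a feature is matched along a row, depth follows from the standard pinhole stereo relation Z = f * B / d. This is textbook stereo geometry, not the patented method itself, and the camera values below are hypothetical:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo depth: Z = f * B / d, with the focal length f in
    pixels, baseline B in metres, and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("matched features must have positive disparity")
    return focal_px * baseline_m / disparity_px

# A 14-pixel shift with a 700 px focal length and a 12 cm baseline: ~6 m away.
z = depth_from_disparity(700.0, 0.12, 14.0)
```

The relation also shows why alignment matters: if the rows are not parallel to the baseline, the simple per-row disparity search (and hence this formula) no longer applies without expensive rectification.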
References
[1] http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=7729530.PN.&OS=PN/7729530&RS=PN/7729530
• Faugeras, Olivier (1999). Three-dimensional computer vision: a geometric viewpoint (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8427) (3rd print ed.). Cambridge, Mass.: MIT Press. ISBN 978-0-262-06158-2. Retrieved 21 August 2012.
• Trucco, Emanuele; Verri, Alessandro (1998). Introductory techniques for 3-D computer vision. Upper Saddle River, NJ: Prentice Hall. ISBN 9780132611084.
• Klette, Reinhard; Schlüns, Karsten; Koschan, Andreas (1998). Computer vision: three-dimensional data from images (http://www.springer.com/computer/ai/book/978-981-3083-71-4). Singapore: Springer. ISBN 978-9813083714. Retrieved 21 August 2012.
External links
• "3-D smartphones ditch the glasses", CNN, 03/24/2011 (http://www.cnn.com/2011/TECH/mobile/03/24/3d.phones.tablets/index.html?hpt=Sbin)
• "Finepix Real 3DW1 Stereo Camera by Fuji" (http://www.fujifilm.com/products/3d/camera/finepix_real3dw1/)
• "3D Camcorder by Panasonic" (http://www2.panasonic.com/consumer-electronics/shop/Cameras-Camcorders/Camcorders/model.HDC-SDT750K_11002_7000000000000005702)
• "Kinect Xbox 360" (http://www.xbox.com/en-ca/kinect/?WT.srch=1)
• "United States Patent and Trademark Office: US Patent 7,729,530" (http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/PTO/srchnum.htm&r=1&f=G&l=50&s1=7729530.PN.&OS=PN/7729530&RS=PN/7729530)
3D data acquisition and object reconstruction
3D data acquisition and reconstruction is the generation of three-dimensional or spatiotemporal models from sensor data. The techniques and theories, generally speaking, work with most or all sensor types, including optical, acoustic, laser scanning, radar, thermal, and seismic.[1][2]
Acquisition
Acquisition can occur from a multitude of methods, including 2D images, acquired sensor data, and on-site sensors.
Acquisition from 2D images
3D data acquisition and object reconstruction can be performed using stereo image pairs. Stereo photogrammetry, or photogrammetry based on a block of overlapping images, is the primary approach for 3D mapping and object reconstruction using 2D images. Close-range photogrammetry has also matured to the level where cameras can be used to capture close-look images of objects, e.g. buildings, and reconstruct them using the very same theory as aerial photogrammetry. An example of software which could do this was Vexcel FotoG 5.[3][4] This software has since been replaced by Vexcel GeoSynth.[5] Another similar software program is Microsoft Photosynth.[6][7]
A semi-automatic method for acquiring 3D topologically structured data from 2D aerial stereo images has been presented by Sisi Zlatanova. The process involves manually digitizing a number of points necessary for automatically reconstructing the 3D objects. Each reconstructed object is validated by superimposing its wireframe graphics on the stereo model. The topologically structured 3D data is stored in a database and is also used for visualization of the objects. Software used for 3D data acquisition from 2D images includes, e.g., the ENSAIS Engineering College's TIPHON (Traitement d'Image et PHOtogrammétrie Numérique),[8] CyberCity 3D Modeler, ORPHEUS, ...
A method for semi-automatic building extraction, together with a concept for storing building models alongside terrain and other topographic data in a topographical information system, has been developed by Franz Rottensteiner. His approach is based on integrating building parameter estimation into the photogrammetry process, applying a hybrid modeling scheme. Buildings are decomposed into a set of simple primitives that are reconstructed individually and are then combined by Boolean operators.
The internal data structure of both the primitives and the compound building models is based on boundary representation methods.[9][10]

Zeng's approach to surface reconstruction likewise uses multiple images. A central idea is to explore the integration of both 3D stereo data and 2D calibrated images. This approach is motivated by the fact that only robust and accurate feature points that survive the geometric scrutiny of multiple images are reconstructed in space. The density insufficiency and the inevitable holes in the stereo data can then be filled in using information from multiple images. The idea is thus to first construct small surface patches from stereo points, then to progressively propagate only reliable patches into their neighborhood from images into the whole surface using a best-first strategy. The problem thus reduces to searching for an optimal local surface patch going through a given set of stereo points from images.

Multi-spectral images are also used for 3D building detection. The first and last pulse data and the normalized difference vegetation index are used in the process.[11]

New measurement techniques are also employed to obtain measurements of, and between, objects from single images by using the projection or the shadow as well as their combination. This technology is gaining attention given its fast processing time and far lower cost than stereo measurements. GeoTango SilverEye technology is the first commercial product of this kind that can produce very realistic city models and buildings from single satellite and aerial images.
3D data acquisition and object reconstruction
Acquisition from acquired sensor data

Semi-automatic building extraction from LIDAR data and high-resolution images is also a possibility. Again, this approach allows modelling without physically moving towards the location or object.[12] From airborne LIDAR data, a digital surface model (DSM) can be generated, and objects higher than the ground are then automatically detected from the DSM. Based on general knowledge about buildings, geometric characteristics such as size, height and shape information are then used to separate the buildings from other objects. The extracted building outlines are then simplified using an orthogonal algorithm to obtain better cartographic quality. Watershed analysis can be conducted to extract the ridgelines of building roofs. The ridgelines, as well as slope information, are used to classify the buildings by type. The buildings are then reconstructed using three parametric building models (flat, gabled, hipped).[13]
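The object-detection step described above amounts to thresholding a normalised surface model. A minimal sketch in plain Python (the grid values, function names and the 2 m height threshold are illustrative assumptions, not taken from the source):

```python
def normalised_dsm(dsm, dtm):
    """Subtract the bare-ground terrain model from the surface model cell by cell."""
    return [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

def building_mask(ndsm, min_height=2.0):
    """Cells rising more than `min_height` metres above ground are object candidates."""
    return [[h > min_height for h in row] for row in ndsm]

# Toy 4x4 example: flat terrain at 100 m with a 2x2-cell, 5 m tall "building".
dtm = [[100.0] * 4 for _ in range(4)]
dsm = [row[:] for row in dtm]
for r in (1, 2):
    for c in (1, 2):
        dsm[r][c] += 5.0
mask = building_mask(normalised_dsm(dsm, dtm))
print(sum(sum(row) for row in mask))  # 4 candidate cells
```

A real pipeline would follow this with the size/shape filtering and outline simplification the text describes.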
Acquisition from on-site sensors

LIDAR and other terrestrial laser scanning technology[14] offers the fastest, automated way to collect height or distance information. LIDAR or laser for height measurement of buildings is becoming very promising.[15] Commercial applications of both airborne LIDAR and ground laser scanning technology have proven to be fast and accurate methods for building height extraction. The building extraction task is needed to determine building locations, ground elevation, orientations, building size, rooftop heights, etc. Most buildings are described in sufficient detail in terms of general polyhedra, i.e., their boundaries can be represented by a set of planar surfaces and straight lines. Further processing, such as expressing building footprints as polygons, is used for storing the data in GIS databases.

Using laser scans and images taken from ground level and a bird's-eye perspective, Fruh and Zakhor present an approach to automatically create textured 3D city models. This approach involves registering and merging the detailed facade models with a complementary airborne model. The airborne modeling process generates a half-meter-resolution model with a bird's-eye view of the entire area, containing terrain profile and building tops. The ground-based modeling process results in a detailed model of the building facades. Using the DSM obtained from airborne laser scans, they localize the acquisition vehicle and register the ground-based facades to the airborne model by means of Monte Carlo localization (MCL). Finally, the two models are merged at different resolutions to obtain a 3D model.

Using an airborne laser altimeter, Haala, Brenner and Anders combined height data with the existing ground plans of buildings. The ground plans of buildings had already been acquired either in analog form from maps and plans or digitally in a 2D GIS.
The project was done in order to enable automatic data capture through the integration of these different types of information. Afterwards, virtual reality city models are generated in the project by texture processing, e.g. by mapping terrestrial images. The project demonstrated the feasibility of rapid acquisition of 3D urban GIS.

Ground plans are another very important source of information for 3D building reconstruction. Compared to the results of automatic procedures, these ground plans proved more reliable since they contain aggregated information which has been made explicit by human interpretation. For this reason, ground plans can considerably reduce costs in a reconstruction project. An example of existing ground plan data usable in building reconstruction is the Digital Cadastral map, which provides information on the distribution of property, including the borders of all agricultural areas and the ground plans of existing buildings. Additional information such as street names and the usage of buildings (e.g. garage, residential building, office block, industrial building, church) is provided in the form of text symbols. At the moment the Digital Cadastral map is built up as a database covering an area, mainly composed by digitizing preexisting maps or plans.
Software

Software used for airborne laser scanning includes OPALS (Orientation and Processing of Airborne Laser Scanning data), ...[16]
Cost

• Terrestrial laser scanning devices (pulse or phase devices), together with processing software, generally start at a price of 150,000 €. Some less precise devices (such as the Trimble VX) cost around 75,000 €.
• Terrestrial LIDAR systems cost around 300,000 €.
• Systems using regular still cameras mounted on RC helicopters (photogrammetry) are also possible, and cost around 25,000 €. Systems that use still cameras with balloons are even cheaper (around 2,500 €), but require additional manual processing. As the manual processing takes around one month of labor for every day of taking pictures, this is still an expensive solution in the long run.
• Obtaining satellite images is also an expensive endeavor. High-resolution stereo images (0.5 m resolution) cost around 11,000 €. Image satellites include QuickBird and Ikonos. High-resolution monoscopic images cost around 5,500 €. Somewhat lower-resolution images (e.g. from the CORONA satellite, with a 2 m resolution) cost around 1,000 € per 2 images. Note that Google Earth images are too low in resolution to make an accurate 3D model.[17]
Object reconstruction

After the data has been collected, the acquired (and sometimes already processed) data from images or sensors needs to be reconstructed. This may be done in the same program, or in some cases the 3D data needs to be exported and imported into another program for further refinement and/or to add additional data. Such additional data could be GPS location data, ... Also, after the reconstruction, the data might be directly implemented into a local (GIS) map[18][19] or a worldwide map such as Google Earth.
Software

Several software packages are used into which the acquired (and sometimes already processed) data from images or sensors is imported. The software packages include[20] (in alphabetical order):
• 3DF Zephyr
• Canoma
• Cyclone
• Leica Photogrammetry Suite
• MountainsMap SEM (microscopy applications only)
• Neitra 3D pro
• Orthoware
• PhotoModeler
• SketchUp
• Smart3Dcapture (acute3D)
• Rhinophoto
References

[1] Seismic 3D data acquisition (http://www.georgedreher.com/3D_Seismic.html)
[2] Optical and laser remote sensing (http://www.lr.tudelft.nl/live/pagina.jsp?id=17783744-e048-4707-a38a-b3b9e2574d03&lang=en)
[3] Vexcel FotoG (http://www.highbeam.com/doc/1G1-88825431.html)
[4] 3D data acquisition (http://www.directionsmag.com/article.php?article_id=628)
[5] Vexcel GeoSynth (http://www.vexcel.com/geospatial/geosynth/index.asp)
[6] Photosynth (http://photosynth.net/about.aspx)
[7] 3D data acquisition and object reconstruction using photos (http://grail.cs.washington.edu/rome/)
[8] 3D data acquisition and modeling in a Topographic Information System (http://www.ifp.uni-stuttgart.de/publications/commIV/koehl2neu.pdf)
[9] Franz Rottensteiner article (http://www.commission3.isprs.org/pia/papers/pia03_s2p1.pdf)
[10] Semi-automatic extraction of buildings based on hybrid adjustment using 3D surface models and management of building data in a TIS, by F. Rottensteiner
[11] Multi-spectral images for 3D building detection (http://www.cmis.csiro.au/Hugues.Talbot/dicta2003/cdrom/pdf/0673.pdf)
[12] Semi-Automatic building extraction from LIDAR Data and High-Resolution Image (http://www.gisdevelopment.net/application/urban/products/mi08_226.htm)
[13] Building extraction from airborne LIDAR data (http://www.icrest.missouri.edu/Projects/NASA/FeatureExtraction-Buildings/Building Extraction)
[14] Terrestrial laser scanning (http://geoweb.ugent.be/3dda/areas/)
[15] Terrestrial laser scanning project (http://www.ifp.uni-stuttgart.de/publications/1998/ohio_laser.pdf)
[16] OPALS (http://www.ipf.tuwien.ac.at/opals/opals_docu/index.php)
[17] Ghent University, Department of Geography
[18] Implementing data to GIS map (http://www.geo.tudelft.nl/frs/papers/2001/ildi_manchester.pdf)
[19] 3D data implementation to GIS maps (http://www.itc.nl/personal/vosselman/papers/suveg2001.bmvc.pdf)
[20] Reconstruction software (http://www.springerlink.com/content/v48q80865254jl08/)
3D reconstruction

In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be accomplished either by active or passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction.
Active methods

These methods actively interfere with the reconstructed object, either mechanically or radiometrically. A simple example of a mechanical method would use a depth gauge to measure a distance to a rotating object put on a turntable. More applicable radiometric methods emit radiance towards the object and then measure its reflected part. Examples range from moving light sources, colored visible light and time-of-flight lasers to microwaves or ultrasound. See 3D scanning for more details.

[Figure: 3D reconstruction of the general anatomy of the right side view of a small marine slug Pseudunela viatoris.]
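The time-of-flight principle mentioned here reduces to a one-line computation: the emitted pulse travels to the object and back, so the distance is half the round-trip time multiplied by the propagation speed. A sketch (constant and function names are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s):
    """One-way distance from a time-of-flight measurement: d = c * t / 2."""
    return C * round_trip_time_s / 2.0

# A laser pulse returning after about 66.7 ns corresponds to roughly 10 m.
print(round(tof_distance(66.7e-9), 2))  # 10.0
```

For ultrasound the same formula applies with the speed of sound (~343 m/s in air) in place of c.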
Passive methods

Passive methods of 3D reconstruction do not interfere with the reconstructed object; they only use a sensor to measure the radiance reflected or emitted by the object's surface to infer its 3D structure. Typically, the sensor is an image sensor in a camera sensitive to visible light, and the input to the method is a set of digital images (one, two or more) or video. In this case we talk about image-based reconstruction, and the output is a 3D model.
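For the two-image case, the classic pinhole-stereo relation illustrates how a passive method can infer depth: a scene point's disparity between the two images is inversely proportional to its depth. A textbook sketch (not a formula stated in this article; names are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo: depth Z = f * B / d, with focal length f in pixels,
    camera baseline B in metres and disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 0.5 m baseline, 25 px disparity -> 20 m depth.
print(depth_from_disparity(1000.0, 0.5, 25.0))  # 20.0
```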
External links

• 3D Reconstruction from Multiple Images [1]
References

[1] http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/MOHR_TRIGGS/node51.html
Binary space partitioning

In computer science, binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of objects within the space by means of a tree data structure known as a BSP tree.

Binary space partitioning was developed in the context of 3D computer graphics, where the structure of a BSP tree allows spatial information about the objects in a scene that is useful in rendering, such as their ordering from front to back with respect to a viewer at a given location, to be accessed rapidly. Other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D video games, ray tracing and other computer applications that involve handling of complex spatial scenes.
Overview

Binary space partitioning is a generic process of recursively dividing a scene in two until the partitioning satisfies one or more requirements. It can be seen as a generalisation of other spatial tree structures such as k-d trees and quadtrees, one where the hyperplanes that partition the space may have any orientation, rather than being aligned with the coordinate axes as they are in k-d trees or quadtrees. When used in computer graphics to render scenes composed of planar polygons, the partitioning planes are frequently (but not always) chosen to coincide with the planes defined by polygons in the scene.

The specific choice of partitioning plane and criterion for terminating the partitioning process varies depending on the purpose of the BSP tree. For example, in computer graphics rendering, the scene is divided until each node of the BSP tree contains only polygons that can be rendered in arbitrary order. When back-face culling is used, each node therefore contains a convex set of polygons, whereas when rendering double-sided polygons, each node of the BSP tree contains only polygons in a single plane. In collision detection or ray tracing, a scene may be divided up into primitives on which collision or ray intersection tests are straightforward.

Binary space partitioning arose from the need in computer graphics to rapidly draw three-dimensional scenes composed of polygons. A simple way to draw such scenes is the painter's algorithm, which produces polygons in order of distance from the viewer, back to front, painting over the background and previous polygons with each closer object. This approach has two disadvantages: the time required to sort polygons in back-to-front order, and the possibility of errors in overlapping polygons.
Fuchs and co-authors showed that constructing a BSP tree solved both of these problems by providing a rapid method of sorting polygons with respect to a given viewpoint (linear in the number of polygons in the scene) and by subdividing overlapping polygons to avoid errors that can occur with the painter's algorithm. A disadvantage of binary space partitioning is that generating a BSP tree can be time-consuming.
Typically, it is therefore performed once on static geometry, as a pre-calculation step, prior to rendering or other real-time operations on a scene. The expense of constructing a BSP tree makes it difficult and inefficient to directly place moving objects into the tree.

BSP trees are often used by 3D video games, particularly first-person shooters and those with indoor environments. Game engines utilising BSP trees include the Doom engine (probably the earliest game to use a BSP data structure was Doom), the Quake engine and its descendants. In video games, BSP trees containing the static geometry of a scene are often used together with a Z-buffer to correctly merge movable objects such as doors and characters onto the background scene. While binary space partitioning provides a convenient way to store and retrieve spatial information about polygons in a scene, it does not solve the problem of visible surface determination.
Generation

The canonical use of a BSP tree is for rendering polygons (that are double-sided, that is, without back-face culling) with the painter's algorithm. Each polygon is designated with a front side and a back side, which may be chosen arbitrarily and only affects the structure of the tree, not the required result. Such a tree is constructed from an unsorted list of all the polygons in a scene. The recursive algorithm for construction of a BSP tree from that list of polygons is:

1. Choose a polygon P from the list.
2. Make a node N in the BSP tree, and add P to the list of polygons at that node.
3. For each other polygon in the list:
   1. If that polygon is wholly in front of the plane containing P, move that polygon to the list of nodes in front of P.
   2. If that polygon is wholly behind the plane containing P, move that polygon to the list of nodes behind P.
   3. If that polygon is intersected by the plane containing P, split it into two polygons and move them to the respective lists of polygons behind and in front of P.
   4. If that polygon lies in the plane containing P, add it to the list of polygons at node N.
4. Apply this algorithm to the list of polygons in front of P.
5. Apply this algorithm to the list of polygons behind P.

The following diagram illustrates the use of this algorithm in converting a list of lines or polygons into a BSP tree. At each of the eight steps (i.–viii.), the algorithm above is applied to a list of lines, and one new node is added to the tree. Start with a list of lines (or, in 3D, polygons) making up the scene. In the tree diagrams, lists are denoted by rounded rectangles and nodes in the BSP tree by circles. In the spatial diagram of the lines, the direction chosen to be the 'front' of a line is denoted by an arrow.

i.
Following the steps of the algorithm above,
1. We choose a line, A, from the list and...
2. ...add it to a node.
3. We split the remaining lines in the list into those in front of A (i.e. B2, C2, D2), and those behind (B1, C1, D1).
4. We first process the lines in front of A (in steps ii–v),...
5. ...followed by those behind (in steps vi–vii).
ii.
We now apply the algorithm to the list of lines in front of A (containing B2, C2, D2). We choose a line, B2, add it to a node and split the rest of the list into those lines that are in front of B2 (D2), and those that are behind it (C2, D3).
iii.
Choose a line, D2, from the list of lines in front of B2. It is the only line in the list, so after adding it to a node, nothing further needs to be done.
iv.
We are done with the lines in front of B2, so consider the lines behind B2 (C2 and D3). Choose one of these (C2), add it to a node, and put the other line in the list (D3) into the list of lines in front of C2.
v.
Now look at the list of lines in front of C2. There is only one line (D3), so add this to a node and continue.
vi.
We have now added all of the lines in front of A to the BSP tree, so we now start on the list of lines behind A. Choosing a line (B1) from this list, we add B1 to a node and split the remainder of the list into lines in front of B1 (i.e. D1), and lines behind B1 (i.e. C1).
vii.
Processing first the list of lines in front of B1, D1 is the only line in this list, so add this to a node and continue.
viii. Looking next at the list of lines behind B1, the only line in this list is C1, so add this to a node, and the BSP tree is complete.
The final number of polygons or lines in a tree is often larger (sometimes much larger) than the original list, since lines or polygons that cross the partitioning plane must be split into two. It is desirable to minimize this increase, but also to maintain reasonable balance in the final tree. The choice of which polygon or line is used as a partitioning plane (in step 1 of the algorithm) is therefore important in creating an efficient BSP tree.
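The recursive construction algorithm above can be sketched in Python for the 2-D case, where "polygons" are line segments and the partitioning "plane" is the line through a segment. Everything here (the Node layout, the epsilon tolerance, the pick-the-first-segment rule) is an illustrative choice, not the article's canonical implementation:

```python
from dataclasses import dataclass

EPS = 1e-9

@dataclass
class Node:
    polygons: list            # segments lying in this node's partition line
    front: "Node" = None
    back: "Node" = None

def side(p, a, b):
    """Signed test: > 0 if point p is in front of the directed line a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def split(seg, a, b):
    """Split a straddling segment at its intersection with the line a -> b."""
    p, q = seg
    sp, sq = side(p, a, b), side(q, a, b)
    t = sp / (sp - sq)
    m = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
    return ((p, m), (m, q)) if sp > 0 else ((m, q), (p, m))  # (front, back)

def build(segs):
    """Steps 1-5 of the algorithm: pick a partition, classify, split, recurse."""
    if not segs:
        return None
    a, b = segs[0]                         # step 1: choose a segment
    node = Node(polygons=[segs[0]])        # step 2: make a node
    front, back = [], []
    for seg in segs[1:]:                   # step 3: classify the rest
        sp, sq = side(seg[0], a, b), side(seg[1], a, b)
        if sp >= -EPS and sq >= -EPS:
            if abs(sp) < EPS and abs(sq) < EPS:
                node.polygons.append(seg)  # collinear: stays at this node
            else:
                front.append(seg)
        elif sp <= EPS and sq <= EPS:
            back.append(seg)
        else:                              # straddles: split into two pieces
            f, bk = split(seg, a, b)
            front.append(f)
            back.append(bk)
    node.front, node.back = build(front), build(back)  # steps 4 and 5
    return node

def count(node):
    """Total segments stored in the tree (grows when segments are split)."""
    return 0 if node is None else len(node.polygons) + count(node.front) + count(node.back)

# Four segments; D crosses A's line and is split, so the tree holds 5 pieces.
A = ((0.0, 0.0), (4.0, 0.0))
B = ((1.0, 1.0), (3.0, 1.0))
C = ((1.0, -1.0), (3.0, -1.0))
D = ((2.0, -0.5), (2.0, 0.5))
tree = build([A, B, C, D])
print(count(tree))  # 5
```

The epsilon tolerance stands in for the "wholly in front / wholly behind / in the plane" tests of step 3; choosing the first segment as the partition is the simplest (and rarely the best) version of the partition-choice decision discussed above.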
Traversal

A BSP tree is traversed in linear time, in an order determined by the particular function of the tree. Again using the example of rendering double-sided polygons with the painter's algorithm, drawing a polygon P correctly requires that all polygons behind the plane P lies in be drawn first, then polygon P, then finally the polygons in front of P. If this drawing order is satisfied for all polygons in a scene, then the entire scene renders in the correct order. This procedure can be implemented by recursively traversing a BSP tree using the following algorithm. From a given viewing location V, to render a BSP tree:

1. If the current node is a leaf node, render the polygons at the current node.
2. Otherwise, if the viewing location V is in front of the current node:
   1. Render the child BSP tree containing polygons behind the current node
   2. Render the polygons at the current node
   3. Render the child BSP tree containing polygons in front of the current node
3. Otherwise, if the viewing location V is behind the current node:
   1. Render the child BSP tree containing polygons in front of the current node
   2. Render the polygons at the current node
   3. Render the child BSP tree containing polygons behind the current node
4. Otherwise, the viewing location V must be exactly on the plane associated with the current node. Then:
   1. Render the child BSP tree containing polygons in front of the current node
   2. Render the child BSP tree containing polygons behind the current node
Applying this algorithm recursively to the BSP tree generated above results in the following steps:
• The algorithm is first applied to the root node of the tree, node A. V is in front of node A, so we apply the algorithm first to the child BSP tree containing polygons behind A.
• This tree has root node B1. V is behind B1, so first we apply the algorithm to the child BSP tree containing polygons in front of B1:
  • This tree is just the leaf node D1, so the polygon D1 is rendered.
  • We then render the polygon B1.
  • We then apply the algorithm to the child BSP tree containing polygons behind B1:
    • This tree is just the leaf node C1, so the polygon C1 is rendered.
• We then draw the polygons of A.
• We then apply the algorithm to the child BSP tree containing polygons in front of A:
  • This tree has root node B2. V is behind B2, so first we apply the algorithm to the child BSP tree containing polygons in front of B2:
    • This tree is just the leaf node D2, so the polygon D2 is rendered.
  • We then render the polygon B2.
  • We then apply the algorithm to the child BSP tree containing polygons behind B2:
    • This tree has root node C2. V is in front of C2, so first we would apply the algorithm to the child BSP tree containing polygons behind C2. There is no such tree, however, so we continue.
    • We render the polygon C2.
    • We apply the algorithm to the child BSP tree containing polygons in front of C2:
      • This tree is just the leaf node D3, so the polygon D3 is rendered.

The tree is traversed in linear time and renders the polygons in a far-to-near ordering (D1, B1, C1, A, D2, B2, C2, D3) suitable for the painter's algorithm.
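A compact version of this traversal, for a 2-D half-plane test (the Node layout and the plane representation a*x + b*y = c are illustrative assumptions; leaves are simply nodes without children):

```python
from dataclasses import dataclass

@dataclass
class Node:
    polygons: list        # polygons lying in this node's plane
    plane: tuple          # (a, b, c) representing the line a*x + b*y = c
    front: "Node" = None
    back: "Node" = None

def side_of(plane, v):
    """> 0 if viewpoint v is in front of the plane, < 0 if behind, 0 if on it."""
    a, b, c = plane
    return a * v[0] + b * v[1] - c

def render(node, v, out):
    """Painter's-algorithm order: append polygons back-to-front for viewpoint v."""
    if node is None:
        return
    s = side_of(node.plane, v)
    if s > 0:                      # viewer in front: far (back) side first
        render(node.back, v, out)
        out.extend(node.polygons)
        render(node.front, v, out)
    elif s < 0:                    # viewer behind: front side first
        render(node.front, v, out)
        out.extend(node.polygons)
        render(node.back, v, out)
    else:                          # viewer on the plane: node's polygons edge-on
        render(node.front, v, out)
        render(node.back, v, out)

# Three planes perpendicular to the x-axis at x = -2, 0, 2; viewer at x = 5.
tree = Node(["A"], (1, 0, 0),
            front=Node(["B"], (1, 0, 2)),
            back=Node(["C"], (1, 0, -2)))
order = []
render(tree, (5.0, 0.0), order)
print(order)  # ['C', 'A', 'B'] — far-to-near, as the painter's algorithm requires
```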
Timeline

• 1969 Schumacker et al. published a report that described how carefully positioned planes in a virtual environment could be used to accelerate polygon ordering. The technique made use of depth coherence, which states that a polygon on the far side of the plane cannot, in any way, obstruct a closer polygon. This was used in flight simulators made by GE as well as Evans and Sutherland. However, creation of the polygonal data organization was performed manually by the scene designer.
• 1980 Fuchs et al. extended Schumacker's idea to the representation of 3D objects in a virtual environment by using planes that lie coincident with polygons to recursively partition the 3D space. This provided a fully automated and algorithmic generation of a hierarchical polygonal data structure known as a Binary Space Partitioning Tree (BSP tree). The process took place as an off-line preprocessing step that was performed once per environment/object. At run-time, the view-dependent visibility ordering was generated by traversing the tree.
• 1981 Naylor's Ph.D. thesis contained a full development of both BSP trees and a graph-theoretic approach using strongly connected components for pre-computing visibility, as well as the connection between the two methods. BSP trees as a dimension-independent spatial search structure were emphasized, with applications to visible surface determination. The thesis also included the first empirical data demonstrating that the size of the tree and the number of new polygons were reasonable (using a model of the Space Shuttle).
• 1983 Fuchs et al. described a micro-code implementation of the BSP tree algorithm on an Ikonas frame buffer system. This was the first demonstration of real-time visible surface determination using BSP trees.
• 1987 Thibault and Naylor described how arbitrary polyhedra may be represented using a BSP tree as opposed to the traditional b-rep (boundary representation). This provided a solid representation vs. a surface-based representation. Set operations on polyhedra were described using a tool, enabling Constructive Solid Geometry (CSG) in real-time. This was the forerunner of BSP level design using brushes, introduced in the Quake editor and picked up in the Unreal Editor.
• 1990 Naylor, Amanatides, and Thibault provided an algorithm for merging two BSP trees to form a new BSP tree from the two original trees. This provides many benefits, including: combining moving objects represented by BSP trees with a static environment (also represented by a BSP tree), very efficient CSG operations on polyhedra, exact collision detection in O(log n × log n), and proper ordering of transparent surfaces contained in two interpenetrating objects (this has been used for an x-ray vision effect).
• 1990 Teller and Séquin proposed the offline generation of potentially visible sets to accelerate visible surface determination in orthogonal 2D environments.
• 1991 Gordon and Chen [CHEN91] described an efficient method of performing front-to-back rendering from a BSP tree, rather than the traditional back-to-front approach. They utilised a special data structure to record, efficiently, parts of the screen that have been drawn and those yet to be rendered. This algorithm, together with the description of BSP trees in the standard computer graphics textbook of the day (Computer Graphics: Principles and Practice), was used by John Carmack in the making of Doom.
• 1992 Teller's Ph.D. thesis described the efficient generation of potentially visible sets as a pre-processing step to accelerate real-time visible surface determination in arbitrary 3D polygonal environments. This was used in Quake and contributed significantly to that game's performance.
• 1993 Naylor answered the question of what characterizes a good BSP tree. He used expected-case models (rather than worst-case analysis) to mathematically measure the expected cost of searching a tree and used this measure to build good BSP trees. Intuitively, the tree represents an object in a multi-resolution fashion (more exactly, as a tree of approximations). Parallels with Huffman codes and probabilistic binary search trees are drawn.
• 1993 Hayder Radha's Ph.D. thesis described (natural) image representation methods using BSP trees. This includes the development of an optimal BSP-tree construction framework for any arbitrary input image. This framework is based on a new image transform, known as the Least-Square-Error (LSE) Partitioning Line transform. Radha's thesis also developed an optimal rate-distortion (RD) image compression framework and image manipulation approaches using BSP trees.
References

Additional references
• [NAYLOR90] B. Naylor, J. Amanatides, and W. Thibault, "Merging BSP Trees Yields Polyhedral Set Operations", Computer Graphics (Siggraph '90), 24(3), 1990.
• [NAYLOR93] B. Naylor, "Constructing Good Partitioning Trees", Graphics Interface (annual Canadian CG conference), May 1993.
• [CHEN91] S. Chen and D. Gordon, "Front-to-Back Display of BSP Trees" (http://cs.haifa.ac.il/~gordon/ftb-bsp.pdf), IEEE Computer Graphics & Applications, pp. 79–85, September 1991.
• [RADHA91] H. Radha, R. Leonardi, M. Vetterli, and B. Naylor, "Binary Space Partitioning Tree Representation of Images", Journal of Visual Communications and Image Processing, vol. 2(3), 1991.
• [RADHA93] H. Radha, "Efficient Image Representation using Binary Space Partitioning Trees", Ph.D. Thesis, Columbia University, 1993.
• [RADHA96] H. Radha, M. Vetterli, and R. Leonardi, "Image Compression Using Binary Space Partitioning Trees", IEEE Transactions on Image Processing, vol. 5, no. 12, December 1996, pp. 1610–1624.
• [WINTER99] A. S. Winter, "An Investigation into Real-Time 3D Polygon Rendering Using BSP Trees", April 1999. Available online.
• Mark de Berg, Marc van Kreveld, Mark Overmars, and Otfried Schwarzkopf (2000). Computational Geometry (2nd revised ed.). Springer-Verlag. ISBN 3-540-65620-0. Section 12: Binary Space Partitions: pp. 251–265. Describes a randomized painter's algorithm.
• Christer Ericson: Real-Time Collision Detection (The Morgan Kaufmann Series in Interactive 3-D Technology). Morgan Kaufmann, pp. 349–382, 2005, ISBN 1-55860-732-3.
External links

• BSP trees presentation (http://www.cs.wpi.edu/~matt/courses/cs563/talks/bsp/bsp.html)
• Another BSP trees presentation (http://web.archive.org/web/20110719195212/http://www.cc.gatech.edu/classes/AY2004/cs4451a_fall/bsp.pdf)
• A Java applet that demonstrates the process of tree generation (http://symbolcraft.com/graphics/bsp/)
• A Master's thesis about BSP generation (http://archive.gamedev.net/archive/reference/programming/features/bsptree/bsp.pdf)
• BSP Trees: Theory and Implementation (http://www.devmaster.net/articles/bsp-trees/)
• BSP in 3D space (http://www.euclideanspace.com/threed/solidmodel/spatialdecomposition/bsp/index.htm)
Bounding interval hierarchy

A bounding interval hierarchy (BIH) is a partitioning data structure similar to bounding volume hierarchies or kd-trees. Bounding interval hierarchies can be used in high-performance (or real-time) ray tracing and may be especially useful for dynamic scenes. The BIH was first described under the name SKD-Trees[1] by Ooi et al., and as BoxTrees,[2] independently invented by Zachmann.
Overview

Bounding interval hierarchies (BIH) exhibit many of the properties of both bounding volume hierarchies (BVH) and kd-trees. Whereas the construction and storage of a BIH is comparable to that of a BVH, the traversal of a BIH resembles that of kd-trees. Furthermore, BIH are also binary trees, just like kd-trees (and in fact their superset, BSP trees). Finally, BIH are axis-aligned, as are their ancestors. Although a more general non-axis-aligned implementation of the BIH should be possible (similar to the BSP tree, which uses unaligned planes), it would almost certainly be less desirable due to decreased numerical stability and an increase in the complexity of ray traversal.

The key feature of the BIH is the storage of two planes per node (as opposed to one for the kd tree and six for an axis-aligned bounding box hierarchy), which allows for overlapping children (just like a BVH), but at the same time imposes an order on the children along one dimension/axis (as is the case for kd trees).

It is also possible to use the BIH data structure only for the construction phase but traverse the tree the way a traditional axis-aligned bounding box hierarchy does. This enables some simple speed-up optimizations for large ray bundles[3] while keeping memory/cache usage low.

Some general attributes of bounding interval hierarchies (and techniques related to BIH), as described in [4], are:
• Very fast construction times
• Low memory footprint
• Simple and fast traversal
• Very simple construction and traversal algorithms
• High numerical precision during construction and traversal
• Flatter tree structure (decreased tree depth) compared to kd-trees
Operations

Construction
To construct any space partitioning structure, some form of heuristic is commonly used. The surface area heuristic, common to many partitioning schemes, is one candidate. Another, simpler option is the "global" heuristic, which requires only an axis-aligned bounding box rather than the full set of primitives, making it much more suitable for fast construction.
The general construction scheme for a BIH:
• calculate the scene bounding box
• use a heuristic to choose one axis and a split plane candidate perpendicular to this axis
• sort the objects to the left or right child (exclusively) depending on the bounding box of the object (objects intersecting the split plane may be sorted by their overlap with the child volumes or by any other heuristic)
• calculate the maximum bounding value of all objects on the left and the minimum bounding value of those on the right for that axis (this can be combined with the previous step for some heuristics)
• store these two values, along with two bits encoding the split axis, in a new node
• continue with step 2 for the children
Potential heuristics for the split plane candidate search:
• Classical: pick the longest axis and the middle of the node bounding box on that axis
• Classical: pick the longest axis and a split plane through the median of the objects (this results in a leftist tree, which is often unfortunate for ray tracing)
• Global heuristic: pick the split plane based on a global criterion, in the form of a regular grid (this avoids unnecessary splits and keeps node volumes as cubic as possible)
• Surface area heuristic: calculate the surface area and number of objects for both children, over the set of all possible split plane candidates, then choose the candidate with the lowest cost (claimed to be optimal, although the cost function imposes unusual demands on proving the formula that cannot be fulfilled in practice; it is also an exceptionally slow heuristic to evaluate)
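The construction steps above can be sketched in Python. This is a minimal, illustrative sketch using the first "classical" heuristic (longest axis, spatial middle); the dictionary node layout, the `MAX_LEAF` threshold, and all names are assumptions for illustration, not the layout of any particular implementation:

```python
# Minimal BIH construction sketch: objects are axis-aligned boxes
# given as (mins, maxs) pairs of 3-element lists.

MAX_LEAF = 2  # illustrative leaf-size threshold

def bounds(objs):
    """Axis-aligned bounding box of a set of boxed objects."""
    mins = [min(o[0][a] for o in objs) for a in range(3)]
    maxs = [max(o[1][a] for o in objs) for a in range(3)]
    return mins, maxs

def build(objs):
    if len(objs) <= MAX_LEAF:
        return {"leaf": True, "objects": objs}
    mins, maxs = bounds(objs)
    # heuristic: longest axis, split at the spatial middle
    axis = max(range(3), key=lambda a: maxs[a] - mins[a])
    split = 0.5 * (mins[axis] + maxs[axis])
    # sort objects exclusively left/right by their box centre
    left  = [o for o in objs if 0.5 * (o[0][axis] + o[1][axis]) <= split]
    right = [o for o in objs if 0.5 * (o[0][axis] + o[1][axis]) >  split]
    if not left or not right:          # degenerate split: make a leaf
        return {"leaf": True, "objects": objs}
    return {
        "leaf": False,
        "axis": axis,
        # the two clip-plane values stored in the node:
        "left_max":  max(o[1][axis] for o in left),
        "right_min": min(o[0][axis] for o in right),
        "children": (build(left), build(right)),
    }
```

Note how each inner node stores only the left child's maximum and the right child's minimum on the split axis, exactly the two values described in the construction scheme.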
Ray traversal
The traversal phase closely resembles a kd-tree traversal: one has to distinguish four simple cases, where the ray
• just intersects the left child
• just intersects the right child
• intersects both children
• intersects neither child (the only case not possible in a kd traversal)
For the third case, depending on the ray direction (negative or positive) of the component (x, y or z) equalling the split axis of the current node, the traversal continues first with the left (positive direction) or the right (negative direction) child, and the other one is pushed onto a stack. Traversal continues until a leaf node is found. After intersecting the objects in the leaf, the next element is popped from the stack. If the stack is empty, the nearest intersection of all pierced leaves is returned.
It is also possible to add a fifth traversal case, which, however, requires a slightly more complicated construction phase. By swapping the meanings of the left and right plane of a node, it is possible to cut off empty space on both sides of a node. This requires an additional bit to be stored in the node so that this special case can be detected during traversal. Handling the case during the traversal phase is simple, as the ray
• just intersects the only child of the current node, or
• intersects nothing
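The four cases can be distinguished by comparing where the ray crosses the node's two clip planes against the ray's active segment [t_near, t_far]. The following sketch reduces the test to the single split-axis component; the function name and the assumption of a nonzero direction component are illustrative:

```python
def classify(origin, direction, t_near, t_far, left_max, right_min):
    """Classify a ray segment against a BIH node's two clip planes.

    Returns which children the segment [t_near, t_far] overlaps:
    'left', 'right', 'both', or 'none'. `origin`/`direction` are the
    ray's components on the node's split axis (direction != 0).
    """
    t_left  = (left_max  - origin) / direction  # crossing of left clip plane
    t_right = (right_min - origin) / direction  # crossing of right clip plane
    if direction > 0:
        hit_left  = t_near <= t_left   # segment starts before leaving the left volume
        hit_right = t_far  >= t_right  # segment reaches the right volume
    else:
        hit_left  = t_far  >= t_left
        hit_right = t_near <= t_right
    if hit_left and hit_right:
        return "both"
    if hit_left:
        return "left"
    if hit_right:
        return "right"
    return "none"
```

In the "both" case, the ordering rule from the text applies: with a positive direction component the left child is visited first and the right child is pushed onto the stack, and vice versa.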
Properties

Numerical stability
All operations during the hierarchy construction/sorting of the triangles are min/max operations and comparisons. Thus no triangle clipping has to be done, as is the case with kd-trees, where clipping can become a problem for triangles that only slightly intersect a node. Even in a carefully written kd-tree implementation, numerical errors can result in a non-detected intersection and thus in rendering errors (holes in the geometry) due to the missed ray–object intersection.
Extensions
Instead of using two planes per node to separate geometry, it is also possible to use any number of planes to create an n-ary BIH, or to use multiple planes in a standard binary BIH (one and four planes per node have been proposed and were properly evaluated in [5]) to achieve better object separation.
References

Papers
[1] Nam, Beomseok; Sussman, Alan. A comparative study of spatial indexing techniques for multidimensional scientific datasets (http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/9176/29111/01311209.pdf)
[2] Zachmann, Gabriel. Minimal Hierarchical Collision Detection (http://zach.in.tu-clausthal.de/papers/vrst02.html)
[3] Wald, Ingo; Boulos, Solomon; Shirley, Peter (2007). Ray Tracing Deformable Scenes using Dynamic Bounding Volume Hierarchies (http://www.sci.utah.edu/~wald/Publications/2007/BVH/download/togbvh.pdf)
[4] Wächter, Carsten; Keller, Alexander (2006). Instant Ray Tracing: The Bounding Interval Hierarchy (http://ainc.de/Research/BIH.pdf)
[5] Wächter, Carsten (2008). Quasi-Monte Carlo Light Transport Simulation by Efficient Ray Tracing (http://vts.uni-ulm.de/query/longview.meta.asp?document_id=6265)
External links • BIH implementations: Javascript (http://github.com/imbcmdth/jsBIH).
Bounding volume
For building code compliance, see Bounding.
In computer graphics and computational geometry, a bounding volume for a set of objects is a closed volume that completely contains the union of the objects in the set. Bounding volumes are used to improve the efficiency of geometrical operations by using simple volumes to contain more complex objects. Normally, simpler volumes have simpler ways to test for overlap.
A bounding volume for a set of objects is also a bounding volume for the single object consisting of their union, and the other way around. Therefore it is possible to confine the description to the case of a single object, which is assumed to be non-empty and bounded (finite).
(Figure: a three-dimensional model with its bounding box drawn in dashed lines.)
Uses of bounding volumes
Bounding volumes are most often used to accelerate certain kinds of tests. In ray tracing, bounding volumes are used in ray-intersection tests, and in many rendering algorithms they are used for viewing frustum tests. If the ray or viewing frustum does not intersect the bounding volume, it cannot intersect the object contained in the volume. These intersection tests produce a list of objects that must be displayed. Here, displayed means rendered or rasterized. In collision detection, when two bounding volumes do not intersect, the contained objects cannot collide either.
Testing against a bounding volume is typically much faster than testing against the object itself, because of the bounding volume's simpler geometry. This is because an 'object' is typically composed of polygons or data structures that are reduced to polygonal approximations. In either case, it is computationally wasteful to test each polygon against the view volume if the object is not visible. (Onscreen objects must be 'clipped' to the screen, regardless of whether their surfaces are actually visible.)
To obtain bounding volumes of complex objects, a common approach is to break the objects/scene down using a scene graph or, more specifically, bounding volume hierarchies such as OBB trees. The basic idea behind this is to organize a scene in a tree-like structure where the root comprises the whole scene and each leaf contains a smaller subpart.
Common types of bounding volume
The choice of the type of bounding volume for a given application is determined by a variety of factors: the computational cost of computing a bounding volume for an object, the cost of updating it in applications in which the objects can move or change shape or size, the cost of determining intersections, and the desired precision of the intersection test. The precision of the intersection test is related to the amount of space within the bounding volume not associated with the bounded object, called void space. Sophisticated bounding volumes generally allow for less void space but are more computationally expensive. It is common to use several types together, such as a cheap one for a quick but rough test followed by a more precise but also more expensive type.
The types treated here all give convex bounding volumes. If the object being bounded is known to be convex, this is not a restriction. If non-convex bounding volumes are required, an approach is to represent them as a union of a number of convex bounding volumes. Unfortunately, intersection tests quickly become more expensive as the
bounding volumes become more sophisticated.
A bounding box is a cuboid, or in 2-D a rectangle, containing the object. In dynamical simulation, bounding boxes are preferred to other shapes of bounding volume, such as bounding spheres or cylinders, for objects that are roughly cuboid in shape when the intersection test needs to be fairly accurate. The benefit is obvious, for example, for objects that rest upon others, such as a car resting on the ground: a bounding sphere would show the car as possibly intersecting with the ground, which would then need to be rejected by a more expensive test of the actual model of the car; a bounding box immediately shows the car as not intersecting with the ground, saving the more expensive test.
A bounding capsule is a swept sphere (i.e. the volume that a sphere takes as it moves along a straight line segment) containing the object. Capsules can be represented by the radius of the swept sphere and the segment that the sphere is swept across. It has traits similar to a cylinder, but is easier to use, because the intersection test is simpler. A capsule and another object intersect if the distance between the capsule's defining segment and some feature of the other object is smaller than the capsule's radius. For example, two capsules intersect if the distance between the capsules' segments is smaller than the sum of their radii. This holds for arbitrarily rotated capsules, which is why they are more appealing than cylinders in practice.
A bounding cylinder is a cylinder containing the object. In most applications the axis of the cylinder is aligned with the vertical direction of the scene. Cylinders are appropriate for 3-D objects that can only rotate about a vertical axis but not about other axes, and are otherwise constrained to move by translation only.
Two vertical-axis-aligned cylinders intersect when, simultaneously, their projections on the vertical axis intersect – which are two line segments – as well as their projections on the horizontal plane – two circular disks. Both are easy to test. In video games, bounding cylinders are often used as bounding volumes for people standing upright.
A bounding ellipsoid is an ellipsoid containing the object. Ellipsoids usually provide a tighter fit than a sphere. Intersections with ellipsoids are done by scaling the other object along the principal axes of the ellipsoid by an amount equal to the multiplicative inverse of the radii of the ellipsoid, thus reducing the problem to intersecting the scaled object with a unit sphere. Care should be taken to avoid problems if the applied scaling introduces skew. Skew can make the usage of ellipsoids impractical in certain cases, for example collision between two arbitrary ellipsoids.
A bounding slab is related to the AABB and is used to speed up ray tracing.[1]
A bounding sphere is a sphere containing the object. In 2-D graphics, this is a circle. Bounding spheres are represented by centre and radius. They are very quick to test for collision with each other: two spheres intersect when the distance between their centres does not exceed the sum of their radii. This makes bounding spheres appropriate for objects that can move in any number of dimensions.
In many applications the bounding box is aligned with the axes of the co-ordinate system, and it is then known as an axis-aligned bounding box (AABB). To distinguish the general case from an AABB, an arbitrary bounding box is sometimes called an oriented bounding box (OBB). AABBs are much simpler to test for intersection than OBBs, but have the disadvantage that when the model is rotated they cannot simply be rotated with it, but need to be recomputed.
A bounding triangle in 2-D is quite useful to speed up the clipping or visibility test of a B-spline curve.
See "Circle and B-Splines clipping algorithms" under the subject Clipping (computer graphics) for an example of use.
A convex hull is the smallest convex volume containing the object. If the object is the union of a finite set of points, its convex hull is a polytope.
A discrete oriented polytope (DOP) generalizes the AABB. A DOP is a convex polytope containing the object (in 2-D a polygon; in 3-D a polyhedron), constructed by taking a number of suitably oriented planes at infinity and moving them until they collide with the object. The DOP is then the convex polytope resulting from intersection of the half-spaces bounded by the planes. Popular choices for constructing DOPs in 3-D graphics include the
axis-aligned bounding box, made from 6 axis-aligned planes, and the beveled bounding box, made from 10 (if beveled only on vertical edges, say), 18 (if beveled on all edges), or 26 planes (if beveled on all edges and corners). A DOP constructed from k planes is called a k-DOP; the actual number of faces can be less than k, since some can become degenerate, shrunk to an edge or a vertex.
A minimum bounding rectangle or MBR – the least AABB in 2-D – is frequently used in the description of geographic (or "geospatial") data items, serving as a simplified proxy for a dataset's spatial extent (see geospatial metadata) for the purpose of data search (including spatial queries as applicable) and display. It is also a basic component of the R-tree method of spatial indexing.
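The sphere–sphere check described above is the simplest of these tests. A small sketch, comparing squared distances so that no square root is needed (the function name is illustrative):

```python
def spheres_intersect(c1, r1, c2, r2):
    """Two spheres intersect iff the distance between their centres
    does not exceed the sum of their radii (squared form avoids sqrt)."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return d2 <= (r1 + r2) ** 2
```

The same comparison works unchanged for circles in 2-D, since the coordinate tuples can have any length.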
Basic intersection checks
For some types of bounding volume (OBB and convex polyhedra), an effective check is that of the separating axis theorem. The idea here is that, if there exists an axis along which the objects do not overlap, then the objects do not intersect. Usually the axes checked are the basic axes of the volumes (the unit axes in the case of an AABB, or the 3 base axes from each OBB in the case of OBBs). Often, this is followed by also checking the cross products of the previous axes (one axis from each object).
In the case of an AABB, this test becomes a simple set of overlap tests in terms of the unit axes. For an AABB defined by corners M, N against one defined by O, P, they do not intersect if (Mx > Px) or (Ox > Nx) or (My > Py) or (Oy > Ny) or (Mz > Pz) or (Oz > Nz).
An AABB can also be projected along an axis. For example, if it has edges of length L and is centered at C, and is being projected along the unit axis N:
m = C·N − ½(Lx|Nx| + Ly|Ny| + Lz|Nz|), and
n = C·N + ½(Lx|Nx| + Ly|Ny| + Lz|Nz|)
where m and n are the minimum and maximum extents.
An OBB is similar in this respect, but is slightly more complicated. For an OBB with L and C as above, and with I, J, and K as the OBB's base axes:
m = C·N − ½(Lx|I·N| + Ly|J·N| + Lz|K·N|), and
n = C·N + ½(Lx|I·N| + Ly|J·N| + Lz|K·N|)
For the ranges m,n and o,p, it can be said that they do not intersect if m > p or o > n. Thus, by projecting the ranges of two OBBs along the I, J, and K axes of each OBB, and checking for non-intersection, it is possible to detect non-intersection. By additionally checking along the cross products of these axes (I0×I1, I0×J1, ...), one can be more certain that intersection is impossible.
This concept of determining non-intersection via axis projection also extends to convex polyhedra, although with the normals of each polyhedral face being used instead of the base axes, and with the extents being based on the minimum and maximum dot products of each vertex against the axes. Note that this description assumes the checks are being done in world space.
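The AABB tests above can be written out directly. In this sketch, the overlap test takes min/max corners as in the text, and `aabb_project` follows the m/n projection formula for an AABB with edge lengths L centered at C (function names are illustrative):

```python
def aabb_overlap(m, n, o, p):
    """Separating-axis test for two AABBs [m, n] and [o, p]
    (min/max corners): they are disjoint iff they are separated
    on at least one coordinate axis."""
    for axis in range(3):
        if m[axis] > p[axis] or o[axis] > n[axis]:
            return False
    return True

def aabb_project(center, lengths, axis):
    """Project an AABB (centre C, edge lengths L) onto a unit axis N:
    returns (min, max) extents C.N -/+ half the projected size."""
    c = sum(ci * ni for ci, ni in zip(center, axis))
    r = 0.5 * sum(li * abs(ni) for li, ni in zip(lengths, axis))
    return c - r, c + r
```

For two AABBs the projection form is redundant (the corner comparison above suffices); it becomes useful when mixing AABBs with OBBs or arbitrary separating axes.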
References
[1] POV-Ray Documentation (http://www.povray.org/documentation/view/3.6.1/323/)
External links • Illustration of several DOPs for the same model, from epicgames.com (http://udn.epicgames.com/Two/rsrc/ Two/CollisionTutorial/kdop_sizes.jpg)
Bounding volume hierarchy
A bounding volume hierarchy (BVH) is a tree structure on a set of geometric objects. All geometric objects are wrapped in bounding volumes that form the leaf nodes of the tree. These nodes are then grouped as small sets and enclosed within larger bounding volumes. These, in turn, are also grouped and enclosed within other larger bounding volumes in a recursive fashion, eventually resulting in a tree structure with a single bounding volume at the top of the tree. (Figure: an example of a bounding volume hierarchy using rectangles as bounding volumes.)
Bounding volume hierarchies are used to support several operations on sets of geometric objects efficiently, such as collision detection[1] or ray tracing.
Although wrapping objects in bounding volumes and performing collision tests on them before testing the object geometry itself simplifies the tests and can result in significant performance improvements, the same number of pairwise tests between bounding volumes are still being performed. By arranging the bounding volumes into a bounding volume hierarchy, the time complexity (the number of tests performed) can be reduced to logarithmic in the number of objects. With such a hierarchy in place, during collision testing, children do not have to be examined if their parent volumes are not intersected.
BVH design issues
The choice of bounding volume is determined by a trade-off between two objectives. On the one hand, we would like to use bounding volumes that have a very simple shape. Thus, we need only a few bytes to store them, and intersection tests and distance computations are simple and fast. On the other hand, we would like to have bounding volumes that fit the corresponding data objects very tightly. One of the most commonly used bounding volumes is an axis-aligned minimum bounding box. The axis-aligned minimum bounding box for a given set of data objects is easy to compute, needs only a few bytes of storage, and robust intersection tests are easy to implement and extremely fast.
There are several desired properties for a BVH that should be taken into consideration when designing one for a specific application:[2]
• The nodes contained in any given sub-tree should be near each other. The lower down the tree, the nearer the nodes should be to each other.
• Each node in the BVH should be of minimum volume.
• The sum of all bounding volumes should be minimal.
• Greater attention should be paid to nodes near the root of the BVH. Pruning a node near the root of the tree removes more objects from further consideration.
• The volume of overlap of sibling nodes should be minimal.
• The BVH should be balanced with respect to both its node structure and its content. Balancing allows as much of the BVH as possible to be pruned whenever a branch is not traversed into.
In terms of the structure of the BVH, it has to be decided what degree (number of children) and height to use in the tree representing the BVH. A tree of a low degree will be of greater height, which increases root-to-leaf traversal time. On the other hand, less work has to be expended at each visited node to check its children for overlap. The opposite holds for a high-degree tree: although the tree will be of smaller height, more work is spent at each node.
In practice, binary trees (degree = 2) are by far the most common. One of the main reasons is that binary trees are easier to build.
Construction
There are three primary categories of tree construction methods: top-down, bottom-up, and insertion methods.
Top-down methods proceed by partitioning the input set into two (or more) subsets, bounding them in the chosen bounding volume, then partitioning (and bounding) recursively until each subset consists of only a single primitive (leaf nodes are reached). Top-down methods are easy to implement, fast to construct and by far the most popular, but do not result in the best possible trees in general.
Bottom-up methods start with the input set as the leaves of the tree and then group two (or more) of them to form a new (internal) node, proceeding in the same manner until everything has been grouped under a single node (the root of the tree). Bottom-up methods are more difficult to implement, but likely to produce better trees in general. Both top-down and bottom-up methods are considered off-line methods, as they both require all primitives to be available before construction starts.
Insertion methods build the tree by inserting one object at a time, starting from an empty tree. The insertion location should be chosen so that the tree grows as little as possible, according to a cost metric. Insertion methods are considered on-line methods, since they do not require all primitives to be available before construction starts and thus allow updates to be performed at runtime.
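A minimal top-down construction in the spirit described above can be sketched as follows (object-median split on the longest axis; primitives are given directly as their AABBs, and the dictionary node layout is an illustrative assumption):

```python
def bbox_union(boxes):
    """Smallest AABB enclosing a list of (mins, maxs) AABBs."""
    return ([min(b[0][a] for b in boxes) for a in range(3)],
            [max(b[1][a] for b in boxes) for a in range(3)])

def build_bvh(prims):
    """Top-down BVH build: split at the object median on the longest
    axis of the enclosing box, recursing until one primitive per leaf."""
    box = bbox_union(prims)
    if len(prims) == 1:
        return {"box": box, "prim": prims[0]}
    # longest axis of the current node's bounding box
    axis = max(range(3), key=lambda a: box[1][a] - box[0][a])
    # order primitives by their centre on that axis, split at the median
    prims = sorted(prims, key=lambda p: p[0][axis] + p[1][axis])
    mid = len(prims) // 2
    return {"box": box,
            "children": (build_bvh(prims[:mid]), build_bvh(prims[mid:]))}
```

The median split guarantees a balanced tree but ignores the cost-based heuristics (such as the surface area heuristic) that production builders typically use.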
Usage
BVHs are often used in ray tracing to eliminate potential intersection candidates within a scene by omitting geometric objects located in bounding volumes that are not intersected by the current ray.[3]
References
[1] Herman Johannes Haverkort, Results on geometric networks and data structures, 2004. Chapter 1: Introduction, pages 9–10, 16. Chapter 1 (http://igitur-archive.library.uu.nl/dissertations/2004-0506-101707/c1.pdf)
[2] Christer Ericson, Real-Time Collision Detection, pages 236–237
[3] Johannes Günther, Stefan Popov, Hans-Peter Seidel and Philipp Slusallek, Realtime Ray Tracing on GPU with BVH-based Packet Traversal (http://www.mpi-inf.mpg.de/~guenther/BVHonGPU/)
External links • BVH implementations: Javascript (http://github.com/imbcmdth/jsBVH).
Box modeling
Box modeling is a technique in 3D modeling in which the modeler starts with a basic primitive shape (such as a box or cylinder) and forms it into a rough draft of the final model, then sculpts the final model from there. The process uses various tools and steps that are sometimes repeated again and again until the model is done. Despite the repetition, this approach lets the modeler work quickly and control the amount of detail added, slowly building the model up from a low level of detail to a high one.
Subdivision
Subdivision modeling is derived from the idea that, as a work progresses, should the artist want to make it appear less sharp, or "blocky", each face is divided up into smaller, more detailed faces (usually into sets of four). However, more experienced box modelers manage to create their model without subdividing its faces. Fundamentally, box modeling comes down to careful management of the polygons.
Quads
Quadrilateral faces, commonly named "quads", are the fundamental entity in box modeling. If an artist were to start with a cube, the artist would have six quad faces to work with before extrusion. While most applications for three-dimensional art allow faces of any size, results are often more predictable and consistent when working with quads. This is because if one were to draw an X connecting the corner vertices of a quad, the surface normals of the two resulting triangles are nearly always the same. We say nearly because, when a quad is something other than a perfect parallelogram (such as a rhombus or trapezoid), the surface normals differ. Also, a quad subdivides cleanly into two or four triangles, making it easier to prepare the model for software that can only handle triangles.
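The clean two-triangle split mentioned above can be sketched as follows (splitting along the v0–v2 diagonal; the winding order of the quad is preserved in both triangles):

```python
def quad_to_tris(quad):
    """Split a quad (v0, v1, v2, v3) into two triangles sharing the
    diagonal v0-v2; the quad's winding order is preserved."""
    v0, v1, v2, v3 = quad
    return [(v0, v1, v2), (v0, v2, v3)]
```

A four-triangle split works the same way with an added centre vertex, fanning one triangle per edge.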
Advantages and disadvantages Box modeling is a modeling method that is quick and easy to learn. It is also appreciably faster than placing each point individually. However, it is difficult to add high amounts of detail to models created using this technique without practice.
Catmull–Clark subdivision surface
The Catmull–Clark algorithm is a technique used in computer graphics to create smooth surfaces by subdivision surface modeling. It was devised by Edwin Catmull and Jim Clark in 1978 as a generalization of bi-cubic uniform B-spline surfaces to arbitrary topology. In 2005, Edwin Catmull received an Academy Award for Technical Achievement together with Tony DeRose and Jos Stam for their invention and application of subdivision surfaces.
Recursive evaluation
Catmull–Clark surfaces are defined recursively, using the following refinement scheme:
Start with a mesh of an arbitrary polyhedron. All the vertices in this mesh shall be called original points.
• For each face, add a face point.
• Set each face point to be the average of all original points for the respective face.
(Figure: first three steps of Catmull–Clark subdivision of a cube, with the subdivision surface below.)
• For each edge, add an edge point.
• Set each edge point to be the average of the two neighbouring face points and its two original endpoints.
• For each face point, add an edge for every edge of the face, connecting the face point to each edge point for the face.
• For each original point P, take the average F of all n (recently created) face points for faces touching P, and take the average R of all n edge midpoints for edges touching P, where each edge midpoint is the average of its two endpoint vertices. Move each original point to the point (F + 2R + (n − 3)P) / n. This is the barycenter of P, R and F with respective weights (n − 3), 2 and 1.
• Connect each new vertex point to the new edge points of all original edges incident on the original vertex.
• Define new faces as those enclosed by the new edges.
The new mesh will consist only of quadrilaterals, which won't in general be planar. The new mesh will generally look smoother than the old mesh. Repeated subdivision results in smoother meshes. It can be shown that the limit surface obtained by this refinement process is at least C^1 at extraordinary vertices and C^2 everywhere else (when n indicates how many derivatives are continuous, we speak of C^n continuity). After one iteration, the number of extraordinary points on the surface remains constant.
The arbitrary-looking barycenter formula was chosen by Catmull and Clark based on the aesthetic appearance of the resulting surfaces rather than on a mathematical derivation, although Catmull and Clark do go to great lengths to rigorously show that the method yields bicubic B-spline surfaces.
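The vertex-repositioning rule can be sketched directly from the step list (plain Python on 3-tuples; function names are illustrative, and F and R are as defined in the text):

```python
def average(points):
    """Component-wise average of a list of 3-tuples."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def moved_original_point(P, face_points, edge_midpoints):
    """New position (F + 2R + (n - 3)P) / n of an original point P with
    n incident faces: F = average of adjacent face points, R = average
    of midpoints of edges incident on P."""
    n = len(face_points)
    F = average(face_points)
    R = average(edge_midpoints)
    return tuple((F[i] + 2 * R[i] + (n - 3) * P[i]) / n for i in range(3))
```

For a corner of the cube spanning [-1, 1]³, n = 3, so the (n − 3)P term vanishes and the corner moves to (5/9, 5/9, 5/9), which matches the visible shrinking of the cube in the figure.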
Exact evaluation The limit surface of Catmull–Clark subdivision surfaces can also be evaluated directly, without any recursive refinement. This can be accomplished by means of the technique of Jos Stam. This method reformulates the recursive refinement process into a matrix exponential problem, which can be solved directly by means of matrix diagonalization.
Software using Catmull–Clark subdivision surfaces
• 3ds Max
• 3D-Coat
• AC3D
• Anim8or
• AutoCAD
• Blender
• Carrara
• CATIA (Imagine and Shape)
• CGAL
• Cheetah3D
• Cinema4D
• Clara.io
• DAZ Studio, 2.0
• Gelato
• Hammer
• Hexagon
• Houdini
• K-3D
• LightWave 3D, version 9
• Maya
• Metasequoia
• modo
• Mudbox
• PRMan
• Realsoft3D
• Remo 3D
• Shade
• Rhinoceros 3D – Grasshopper 3D Plugin – Weaverbird Plugin
• Silo
• SketchUp – requires a plugin
• Softimage XSI
• Strata 3D CX
• Wings 3D
• Zbrush
• TopMod
Cloth modeling
Cloth modeling is the term used for simulating cloth within a computer program, usually in the context of 3D computer graphics. The main approaches used for this may be classified into three basic types: geometric, physical, and particle/energy.
Background
Most models of cloth are based on "particles" of mass connected in some manner of mesh. Newtonian physics is used to model each particle through the use of a "black box" called a physics engine. This involves using the basic law of motion (Newton's second law):
F = m·a
In all of these models, the goal is to find the position and shape of a piece of fabric using this basic equation and several other methods.
Geometric methods
Weil pioneered the first of these, the geometric technique, in 1986.[1] His work was focused on approximating the look of cloth by treating cloth like a collection of cables and using hyperbolic cosine (catenary) curves. Because of this, it is not suitable for dynamic models but works very well for stationary or single-frame renders. This technique creates an underlying shape out of single points; then, it parses through each set of three of these points and maps a catenary curve to the set. It then takes the lowest out of each overlapping set and uses it for the render.
Physical methods
The second technique treats cloth like a grid work of particles connected to each other by springs. Whereas the geometric approach accounted for none of the inherent stretch of a woven material, this physical model accounts for stretch (tension), stiffness, and weight, summing energies of the form:
E = E_s + E_b + E_g
• s terms are elasticity (by Hooke's law)
• b terms are bending
• g terms are gravity (see acceleration due to gravity)
Now we apply the basic principle of mechanical equilibrium, in which all bodies seek the lowest-energy state, by differentiating this equation to find the minimum energy.
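A minimal mass-spring update illustrating this approach (explicit Euler integration with structural springs only; the constants, function shape, and names are illustrative assumptions, not a production integrator):

```python
def step(positions, velocities, springs, rest, mass=1.0, ks=10.0,
         g=(0.0, -9.81, 0.0), dt=0.01, pinned=()):
    """One explicit-Euler step of a particle/spring cloth.

    positions/velocities: lists of 3-vectors (lists); springs: list of
    index pairs (i, j) with rest lengths `rest`; particles in `pinned`
    do not move. Hooke's law: spring force magnitude ks * (|d| - rest).
    """
    # start from gravity on every particle
    forces = [[mass * gc for gc in g] for _ in positions]
    for (i, j), r0 in zip(springs, rest):
        d = [positions[j][a] - positions[i][a] for a in range(3)]
        length = sum(c * c for c in d) ** 0.5  # assumed nonzero
        f = ks * (length - r0)                 # >0 if stretched
        for a in range(3):
            forces[i][a] += f * d[a] / length  # pull i toward j if stretched
            forces[j][a] -= f * d[a] / length
    for p in range(len(positions)):
        if p in pinned:
            continue
        for a in range(3):
            velocities[p][a] += dt * forces[p][a] / mass
            positions[p][a] += dt * velocities[p][a]
    return positions, velocities
```

Real cloth solvers add bending and shear springs, damping, and usually an implicit integrator for stability, but the force-accumulate-then-integrate structure is the same.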
Particle/energy methods
The last method is more complex than the first two. The particle technique takes the physical technique a step further and supposes that we have a network of particles interacting directly. That is to say, rather than springs, we use the energy interactions of the particles to determine the cloth's shape. For this we use an energy equation that sums the following terms:
E = E_repel + E_stretch + E_bend + E_trellis + E_gravity
• The energy of repelling is an artificial element we add to prevent cloth from intersecting itself.
• The energy of stretching is governed by Hooke's law, as with the physical method.
• The energy of bending describes the stiffness of the fabric.
• The energy of trellising describes the shearing of the fabric (distortion within the plane of the fabric).
• The energy of gravity is based on acceleration due to gravity.
We can also add terms for energy added by any source to this equation, then differentiate to find the minima, which generalizes our model. This allows us to model cloth behavior under any circumstance, and since we are treating the cloth as a collection of particles, its behavior can be described with the dynamics provided in our physics engine.
References
• Cloth Modeling [2]
Notes
[1] Tutorial on Cloth Modeling (http://www.webcitation.org/query?url=http://www.geocities.com/SiliconValley/Heights/5445/cloth.html&date=2009-10-25+09:48:40)
[2] http://davis.wpi.edu/~matt/courses/cloth/
COLLADA
• Filename extension: .dae
• Internet media type: model/vnd.collada+xml
• Developed by: Sony Computer Entertainment, Khronos Group
• Initial release: October 2004
• Latest release: 1.5.0 / August 2008
• Type of format: 3D computer graphics
• Extended from: XML
• Website: collada.org [1]
COLLADA (from collaborative design activity) is an interchange file format for interactive 3D applications. It is managed by the nonprofit technology consortium, the Khronos Group, and has been adopted by ISO as a publicly available specification, ISO/PAS 17506. COLLADA defines an open standard XML schema for exchanging digital assets among various graphics software applications that might otherwise store their assets in incompatible file formats. COLLADA documents that describe digital assets are XML files, usually identified with a .dae (digital asset exchange) filename extension.
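Since COLLADA documents are plain XML, they can be inspected with any XML toolkit. A sketch using Python's standard library; the document string here is a hypothetical minimal fragment for illustration, not a complete schema-valid COLLADA file:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal .dae fragment (not a complete, schema-valid file)
DAE = """<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema"
         version="1.4.1">
  <asset><up_axis>Y_UP</up_axis></asset>
  <library_geometries>
    <geometry id="box" name="box"/>
  </library_geometries>
</COLLADA>"""

# COLLADA elements live in a schema namespace, so queries must qualify it
NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

root = ET.fromstring(DAE)
version = root.get("version")
geometry_ids = [g.get("id") for g in root.findall(".//c:geometry", NS)]
up_axis = root.findtext("c:asset/c:up_axis", namespaces=NS)
```

In a real file, each `<geometry>` would carry `<mesh>` children with vertex sources and polygon lists, but the same namespace-qualified queries apply.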
History
Originally created at Sony Computer Entertainment by Rémi Arnaud and Mark C. Barnes, it has since become the property of the Khronos Group, a member-funded industry consortium, which now shares the copyright with Sony. The COLLADA schema and specification are freely available from the Khronos Group. The COLLADA DOM uses the SCEA Shared Source License [2].
Several graphics companies collaborated with Sony from COLLADA's beginnings to create a tool that would be useful to the widest possible audience, and COLLADA continues to evolve through the efforts of Khronos contributors. Early collaborators included Alias Systems Corporation, Criterion Software, Autodesk, Inc., and Avid Technology. Dozens of commercial game studios and game engines have adopted the standard.
Members of the developer team:
• Lilli Thompson[3]
In March 2011, Khronos released[4] the COLLADA Conformance Test Suite (CTS). The suite allows applications that import and export COLLADA to test against a large suite of examples, ensuring that they conform properly to the specification. In July 2012, the CTS software was released on GitHub,[5] allowing for community contributions.
ISO/PAS 17506:2012, "Industrial automation systems and integration – COLLADA digital asset schema specification for 3D visualization of industrial data", was published in July 2012.
COLLADA
Software tools
COLLADA was originally intended as an intermediate format for transporting data from one digital content creation (DCC) tool to another application. Applications exist to support the usage of several DCCs, including:
• 3ds Max (ColladaMax)
• Adobe Photoshop
• ArtiosCAD
• Blender
• Bryce
• Carrara
• Cheddar Cheese Press (model processor) [6]
• Chief Architect Software
• Cinema 4D (MAXON)
• CityEngine
• CityScape
• Clara.io
• DAZ Studio
• E-on Vue 9 xStream
• EskoArtwork Studio
• FreeCAD
• FormZ
• GPure
• Houdini (Side Effects Software)
• iBooks Author
• LightWave 3D (v 9.5)
• Maya (ColladaMaya)
• MeshLab
• Mobile Model Viewer (Android) [7]
• modo
• Okino PolyTrans [8] for bidirectional COLLADA conversions
• OpenRAVE
• Poser Pro (v 7.0)
• Presagis Creator
• Robot Operating System
• SAP Visual Enterprise Author
• Shade 3D (E Frontier, Mirye)
• SketchUp (v 8.0) – a KMZ file is a zip file containing a KML file, a COLLADA file, and texture images
• Softimage|XSI
• Strata 3D
• Ürban PAD
• Vectorworks
• Visual3D Game Development Tool for COLLADA scene and model viewing, editing, and exporting
• Wings 3D
• Xcode (v 4.4)
Game engines

Although originally intended as an interchange format, many game engines now support COLLADA natively, including:

• Ardor3D
• C4 Engine
• CryEngine 2
• GLGE
• Irrlicht Engine
• Panda3D
• ShiVa
• Spring
• Torque 3D
• Turbulenz
• Unity
• Unreal Engine
• Vanda Engine[9]
• Visual3D Game Engine
• GamePlay
Applications

Some games and 3D applications have started to support COLLADA:

• ArcGIS
• Autodesk Infrastructure Modeler
• Google Earth (v 4) – users can simply drag and drop a COLLADA file on top of the virtual Earth
• Maple (software) – 3D plots can be exported as COLLADA
• Open Wonderland
• OpenSimulator
• Mac OS X 10.6's Preview
• NASA World Wind
• Second Life
• TNTmips
• SAP Visual Enterprise Author – supports import and export of .dae files
• Google SketchUp – imports .dae files
• Kerbal Space Program – .dae files for 3D model mods
Libraries

There are several libraries available to read and write COLLADA files under programmatic control:

• COLLADA DOM[10] (C++) – The COLLADA DOM is generated at compile-time from the COLLADA schema. It provides a low-level interface that eliminates the need for hand-written parsing routines, but is limited to reading and writing only one version of COLLADA, making it difficult to upgrade as new versions are released.
• FCollada[11] (C++) – A utility library available from Feeling Software. In contrast to the COLLADA DOM, Feeling Software's FCollada provides a higher-level interface. FCollada is used in ColladaMaya,[12] ColladaMax,[13] and several commercial game engines. The development of the open source part was discontinued by Feeling Software in 2008. The company continues to support its paying customers and licenses with improved versions of its software.
• OpenCOLLADA[14] (C++) – The OpenCOLLADA project provides plugins for 3ds Max and Maya and the sources of utility libraries which were developed for the plugins.
• pycollada[15] (Python) – A Python module for creating, editing and loading COLLADA. The library allows the application to load a COLLADA file and interact with it as a Python object. In addition, it supports creating a COLLADA Python object from scratch, as well as in-place editing.
• Scene Kit[16] (Objective-C) – An Objective-C framework introduced in OS X 10.8 Mountain Lion that allows reading, high-level manipulation and display of COLLADA scenes.
• GLGE (JavaScript) – a JavaScript library presenting COLLADA files in a web browser using WebGL.
• Three.js (JavaScript) – a 3D JavaScript library capable of loading COLLADA files in a web browser.
• StormEngineC (JavaScript) – a JavaScript 3D graphics library with the option of loading COLLADA files.
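Because COLLADA documents are plain XML, they can also be inspected without any of these libraries, using a generic XML parser. The fragment below is a hypothetical, heavily abbreviated .dae skeleton (the root element, namespace, and version attribute follow the 1.4 schema; a real file carries full geometry, asset, and scene data):

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal COLLADA skeleton -- a real exporter writes far more,
# but the root element, schema namespace and version attribute look like this.
DAE = """<?xml version="1.0"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <asset><up_axis>Y_UP</up_axis></asset>
  <library_geometries>
    <geometry id="box" name="box"/>
  </library_geometries>
  <scene/>
</COLLADA>"""

NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}
root = ET.fromstring(DAE)
version = root.get("version")                 # schema version of the document
geometry_ids = [g.get("id") for g in root.findall(".//c:geometry", NS)]
```

Dedicated libraries such as pycollada wrap exactly this kind of traversal in a typed object model.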
Physics

As of version 1.4, physics support was added to the COLLADA standard. The goal is to allow content creators to define various physical attributes in visual scenes. For example, one can define surface material properties such as friction. Furthermore, content creators can define the physical attributes for the objects in the scene. This is done by defining the rigid bodies that should be linked to the visual representations. More features include support for ragdolls, collision volumes, physical constraints between physical objects, and global physical properties such as gravitation.

Physics middleware products that support this standard include the Bullet Physics Library, Open Dynamics Engine, PAL and NVIDIA's PhysX. These products support the standard by reading the abstract physics description found in the COLLADA file and translating it into a form that the middleware can represent in a physical simulation. This also enables different middleware and tools to exchange physics data in a standardized manner.

The Physics Abstraction Layer (PAL) provides support for COLLADA Physics to multiple physics engines that do not natively provide COLLADA support, including JigLib, OpenTissue, the Tokamak physics engine and True Axis. PAL also provides support for COLLADA to physics engines that feature a native interface.
Versions

• 1.0: October 2004
• 1.2: February 2005
• 1.3: June 2005
• 1.4.0: January 2006; added features such as character skinning and morph targets, rigid body dynamics, support for OpenGL ES materials, and shader effects for multiple shading languages including the Cg programming language, GLSL, and HLSL. First release through Khronos.
• 1.4.1: July 2006; primarily a patch release.
• 1.5.0: August 2008; added kinematics and B-rep as well as some FX redesign and OpenGL ES support. Formalised as ISO/PAS 17506:2012.
References

[1] http://collada.org/
[2] http://research.scea.com/scea_shared_source_license.html
[3] Building Game Development Tools with App Engine, GWT, and WebGL (http://www.google.com/events/io/2011/sessions/building-game-development-tools-with-app-engine-gwt-and-webgl.html), Google I/O 2011, Lilli Thompson.
[4] http://www.khronos.org/news/press/khronos-group-releases-free-collada-conformance-test-suite
[5] http://www.khronos.org/news/permalink/opencollada-and-collada-cts-now-on-github
[6] http://www.cheddarcheesepress.com/
[7] http://www.mobilemodelviewer.com/
[8] http://www.okino.com/conv/exp_collada.htm
[9] http://www.vandaengine.com
[10] http://collada.org/mediawiki/index.php/COLLADA_DOM
[11] http://collada.org/mediawiki/index.php/FCollada
[12] http://collada.org/mediawiki/index.php/ColladaMaya
[13] http://collada.org/mediawiki/index.php/ColladaMax
[14] https://github.com/khronosGroup/OpenCOLLADA
[15] http://pycollada.github.com/
[16] http://developer.apple.com/library/mac/documentation/3DDrawing/Conceptual/SceneKit_PG/Introduction/Introduction.html
External links

• Official homepage (http://www.khronos.org/collada/)
• COLLADA website (http://collada.org/)
• COLLADA DOM (http://sourceforge.net/projects/collada-dom/)
• OpenCOLLADA Project (https://github.com/khronosGroup/OpenCOLLADA)
• pycollada (http://pycollada.github.com/)
• GLC_Player (http://sourceforge.net/projects/glc-player/)
• Media Grid: News: "Create Once, Experience Everywhere" Format Unveiled for Immersive Education (http://mediagrid.org/news/2010-11_iED_Create_Once_Experience_Everywhere.html)
Computed Corpuscle Sectioning

Computed Corpuscle Sectioning is a general method for determining the volume, profile area, and perimeter of a slab section of any computer-modeled three-dimensional object, in any orientation and at any position. It was originally developed as a model of cell nuclei in a tissue section, in conjunction with the Reference Curve Method for correcting ploidy measurements by image analysis in a tissue section, but it is useful for evaluating any algorithm that corrects ploidy measurements for the effect of sectioning. Computed Corpuscle Sectioning has obvious pertinence to stereology, but has not been exploited in that field. The patents on this method (U.S. Patent numbers 5,918,038, 6,035,258, and 6,603,869) are no longer in force.
References

• Freed JA. Possibility of correcting image cytometric DNA (ploidy) measurements in tissue sections: Insights from computed corpuscle sectioning and the reference curve method. Analyt Quant Cytol Histol 19(5):376-386, 1997.[1]
• Freed JA. Improved correction of quantitative nuclear DNA (ploidy) measurements in tissue sections. Analyt Quant Cytol Histol 21(2):103-112, 1999.[2]
• Freed JA. Conceptual comparison of two computer models of corpuscle sectioning and of two algorithms for correction of ploidy measurements in tissue sections. Analyt Quant Cytol Histol 22(1):17-25, 2000.[3]
• "A general method for determining the volume and profile area of a sectioned corpuscle", U.S. Pat. No. 5,918,038, issued 6/29/99 to Jeffrey A. Freed.[4]
• "Method for correction of quantitative DNA measurements in a tissue section", U.S. Pat. No. 6,035,258, issued 3/7/00 to Jeffrey A. Freed.[5]
• "Use of perimeter measurements to improve ploidy measurements in a tissue section", U.S. Pat. No. 6,603,869, issued 8/5/03 to Jeffrey A. Freed.[6]
External links

• Computed Corpuscle Sectioning and the Reference Curve Method[7]
References

[1] http://www.aqch.com/toc/auto_abstract.php?id=3135
[2] http://www.aqch.com/toc/auto_abstract.php?id=11847
[3] http://www.aqch.com/toc/auto_abstract.php?id=14161
[4] http://www.google.com/patents?q=5918038
[5] http://www.google.com/patents?q=6035258
[6] http://www.google.com/patents?q=6603869
[7] http://www.fortunecity.com/skyscraper/terabyte/562/jfccs.htm
Computer representation of surfaces

In technical applications of 3D computer graphics (CAx) such as computer-aided design and computer-aided manufacturing, surfaces are one way of representing objects. The other ways are wireframe (lines and curves) and solids. Point clouds are also sometimes used as temporary ways to represent an object, with the goal of using the points to create one or more of the three permanent representations.
Open and closed surfaces

If one considers a local parametrization of a surface:

p(u, v) = (x(u, v), y(u, v), z(u, v))

then the curves obtained by varying u while keeping v fixed are sometimes called the u flow lines. The curves obtained by varying v while u is fixed are called the v flow lines. These are generalizations of the x and y lines in the plane and of the meridians and circles of latitude on a sphere.

[Figure: an open surface with u- and v-flow lines and Z-contours shown]

Open surfaces are not closed in either direction. This means moving in any direction along the surface will cause an observer to hit the edge of the surface. The top of a car hood is an example of a surface open in both directions. Surfaces closed in one direction include a cylinder, cone, and hemisphere. Depending on the direction of travel, an observer on the surface may hit a boundary on such a surface or travel forever. Surfaces closed in both directions include a sphere and a torus. Moving in any direction on such surfaces will cause the observer to travel forever without hitting an edge.

Places where two boundaries overlap (except at a point) are called a seam. For example, if one imagines a cylinder made from a sheet of paper rolled up and taped together at the edges, the boundaries where it is taped together are called the seam.
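As a concrete illustration, take the familiar parametrization of a cylinder of radius r, p(u, v) = (r cos u, r sin u, v): holding v fixed and varying u traces a circular u flow line, while holding u fixed traces a straight v flow line. A minimal sketch, not tied to any particular CAD system:

```python
import math

def cylinder(u, v, r=1.0):
    """Parametrization of a cylinder: u is the angle around the axis, v the height."""
    return (r * math.cos(u), r * math.sin(u), v)

# u flow line: v fixed, u varies -> a horizontal circle on the cylinder.
u_line = [cylinder(2 * math.pi * i / 8, 0.5) for i in range(9)]

# v flow line: u fixed, v varies -> a straight vertical ruling.
v_line = [cylinder(0.0, i / 4) for i in range(5)]
```

Every point of the u flow line sits at the same height with x² + y² = r², and every point of the v flow line shares the same (x, y), which is exactly the flow-line behavior described above.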
Flattening a surface

Some open surfaces and surfaces closed in one direction may be flattened into a plane without deformation of the surface. For example, a cylinder can be flattened into a rectangular area without distorting the surface distance between surface features (except for those distances across the split created by opening up the cylinder). A cone may also be so flattened. Such surfaces are linear in one direction and curved in the other (surfaces linear in both directions were flat to begin with). Sheet metal surfaces which have flat patterns can be manufactured by stamping a flat version, then bending them into the proper shape, such as with rollers. This is a relatively inexpensive process.

Other open surfaces and surfaces closed in one direction, and all surfaces closed in both directions, can't be flattened without deformation. A hemisphere or sphere, for example, can't. Such surfaces are curved in both directions. This is why maps of the Earth are distorted. The larger the area the map represents, the greater the distortion. Sheet metal surfaces which lack a flat pattern must be manufactured by stamping using 3D dies (sometimes requiring multiple dies with different draw depths and/or draw directions), which tend to be more expensive.
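The cylinder case can be made concrete: unrolling maps the surface point (θ, z) to the plane point (rθ, z), and distances measured along the surface are unchanged by the flattening. A small sketch (the geodesic formula assumes the shortest path between the two points does not cross the seam):

```python
import math

R = 2.0  # example cylinder radius

def unroll(theta, z, r=R):
    """Flatten a cylinder point into the plane: arc length becomes the x coordinate."""
    return (r * theta, z)

def surface_distance(p, q, r=R):
    """Geodesic distance between two (theta, z) points, measured along the surface."""
    dtheta = abs(p[0] - q[0])  # assumes the shortest path avoids the seam
    return math.hypot(r * dtheta, p[1] - q[1])

a, b = (0.1, 0.0), (1.2, 3.0)
fa, fb = unroll(*a), unroll(*b)
flat_distance = math.hypot(fa[0] - fb[0], fa[1] - fb[1])
# flat_distance equals surface_distance(a, b): the flattening is isometric
```

No such distance-preserving map exists for a sphere, which is precisely why the doubly curved surfaces above cannot be flattened without deformation.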
Surface patches

A surface may be composed of one or more patches, where each patch has its own U-V coordinate system. These surface patches are analogous to the multiple polynomial arcs used to build a spline. They allow more complex surfaces to be represented by a series of relatively simple equation sets rather than a single set of complex equations. Thus, the complexity of operations such as surface intersections can be reduced to a series of patch intersections. Surfaces closed in one or two directions frequently must also be broken into two or more surface patches by the software.
Faces

Surfaces and surface patches can only be trimmed at U and V flow lines. To overcome this severe limitation, surface faces allow a surface to be limited to a series of boundaries projected onto the surface in any orientation, so long as those boundaries are collectively closed. For example, trimming a cylinder at an angle would require such a surface face. A single surface face may span multiple surface patches on a single surface, but can't span multiple surfaces. Planar faces are similar to surface faces, but are limited by a collectively closed series of boundaries projected onto an infinite plane, instead of a surface.
Skins and volumes

As with surfaces, surface faces closed in one or two directions frequently must also be broken into two or more surface faces by the software. To combine them back into a single entity, a skin or volume is created. A skin is an open collection of faces and a volume is a closed set. The constituent faces may have the same support surface or face or may have different supports.
Transition to solids

Volumes can be filled in to build a solid model (possibly with other volumes subtracted from the interior). Skins and faces can also be offset to create solids of uniform thickness.
Types of continuity

A surface's patches and the faces built on that surface typically have point continuity (no gaps) and tangent continuity (no sharp angles). Curvature continuity (no sharp radius changes) may or may not be maintained. Skins and volumes, however, typically only have point continuity. Sharp angles between faces built on different supports (planes or surfaces) are common.
Surface visualization / display

Surfaces may be displayed in many ways:

• Wireframe mode. In this representation the surface is drawn as a series of lines and curves, without hidden line removal. The boundaries and flow lines (isoparametric curves) may each be shown as solid or dashed curves. The advantage of this representation is that a great deal of geometry may be displayed and rotated on the screen with no delay needed for graphics processing.
[Figures: wireframe with hidden edges; wireframe with u-v isolines]
• Faceted mode. In this mode each surface is drawn as a series of planar regions, usually rectangles. Hidden line removal is typically used with such a representation. Static hidden line removal does not update which lines are hidden during rotation, but only once the screen is refreshed. Dynamic hidden line removal continuously updates which curves are hidden during rotations.
[Figures: facet wireframe; facet shaded]
• Shaded mode. Shading can then be added to the facets, possibly with blending between the regions for a smoother display. Shading can also be static or dynamic. A lower quality of shading is typically used for dynamic shading, while high quality shading, with multiple light sources, textures, etc., requires a delay for rendering.
[Figures: shaded; reflection lines; reflected image]
CAD/CAM representation of a surface

CAD/CAM systems use primarily two types of surfaces:

• Regular (or canonical) surfaces include surfaces of revolution such as cylinders, cones, spheres, and tori, and ruled surfaces (linear in one direction) such as surfaces of extrusion.
• Freeform surfaces (usually NURBS) allow more complex shapes to be represented via freeform surface modeling.

Other surface forms such as facet and voxel are also used in a few specific applications.
CAE/FEA representation of a surface

In computer-aided engineering and finite element analysis, an object may be represented by a surface mesh of node points connected by triangles or quadrilaterals (polygon mesh). More accurate, but also far more CPU-intensive, results can be obtained by using a solid mesh. The process of creating a mesh is called tessellation. Once tessellated, the mesh can be subjected to simulated stresses, strains, temperature differences, etc., to see how those changes propagate from node point to node point throughout the mesh.
VR/computer animation representation of a surface

In virtual reality and computer animation, an object may also be represented by a surface mesh of node points connected by triangles or quadrilaterals. If the goal is only to represent the visible portion of an object (and not show changes to the object), a solid mesh serves no purpose for this application. The triangles or quadrilaterals can each be shaded differently depending on their orientation toward the light sources and/or viewer. This will give a rather faceted appearance, so an additional step is frequently added where the shading of adjacent regions is blended to provide smooth shading. There are several methods for performing this blending.
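One common blending approach averages the normals of the faces that meet at each vertex and interpolates shading from those vertex normals. The sketch below (an illustration of the idea, not any particular renderer's code) computes such averaged vertex normals for a tiny triangle mesh:

```python
import math

def face_normal(a, b, c):
    """Unit normal of triangle (a, b, c), from the cross product of two edge vectors."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

def vertex_normals(points, triangles):
    """Sum the normals of all faces meeting at each vertex, then renormalize."""
    acc = [[0.0, 0.0, 0.0] for _ in points]
    for tri in triangles:
        n = face_normal(*(points[i] for i in tri))
        for i in tri:
            for k in range(3):
                acc[i][k] += n[k]
    normals = []
    for n in acc:
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        normals.append([x / length for x in n])
    return normals

# Two triangles sharing an edge: the shared vertices receive a blended normal.
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1)]
tris = [(0, 1, 2), (1, 3, 2)]
normals = vertex_normals(pts, tris)
```

Shading interpolated from these per-vertex normals, rather than from each face's own normal, is what removes the faceted look across the shared edge.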
External links

• 3D-XplorMath: Program to visualize many kinds of surfaces in wireframe, patch and anaglyph mode.[1]

References

[1] http://3d-xplormath.org
Constructive solid geometry

Constructive solid geometry (CSG) is a technique used in solid modeling. Constructive solid geometry allows a modeler to create a complex surface or object by using Boolean operators to combine objects. Often CSG presents a model or surface that appears visually complex, but is actually little more than cleverly combined or decombined objects. In 3D computer graphics and CAD, CSG is often used in procedural modeling. CSG can also be performed on polygonal meshes, and may or may not be procedural and/or parametric.
Workings of CSG

The simplest solid objects used for the representation are called primitives. Typically they are objects of simple shape: cuboids, cylinders, prisms, pyramids, spheres, cones. The set of allowable primitives is limited by each software package. Some software packages allow CSG on curved objects while other packages do not.

[Figure: Venn diagram created with CSG]

An object is said to be constructed from primitives by means of allowable operations, which are typically Boolean operations on sets: union, intersection and difference. A primitive can typically be described by a procedure which accepts some number of parameters; for example, a sphere may be described by the coordinates of its center point, along with a radius value. These primitives can be combined into compound objects using operations like these:
• Union: merger of two objects into one
• Difference: subtraction of one object from another
• Intersection: portion common to both objects
Combining these elementary operations, it is possible to build up objects with high complexity starting from simple ones.
Applications of CSG

Constructive solid geometry has a number of practical uses. It is used in cases where simple geometric objects are desired, or where mathematical accuracy is important. The Quake engine and Unreal engine both use this system, as does Hammer (the native Source engine level editor), and Torque Game Engine/Torque Game Engine Advanced. CSG is popular because a modeler can use a set of relatively simple objects to create very complicated geometry. When CSG is procedural or parametric, the user can revise their complex geometry by changing the position of objects or by changing the Boolean operation used to combine those objects.

[Figure: CSG objects can be represented by binary trees, where leaves represent primitives, and nodes represent operations.]

One of the advantages of CSG is that it can easily assure that objects are "solid" or water-tight if all of the primitive shapes are water-tight. This can be important for some manufacturing or engineering computation applications. By comparison, when creating geometry based upon boundary representations, additional topological data is required, or consistency checks must be performed to assure that the given boundary description specifies a valid solid object.

A convenient property of CSG shapes is that it is easy to classify arbitrary points as being either inside or outside the shape created by CSG. The point is simply classified against all the underlying primitives and the resulting Boolean expression is evaluated. This is a desirable quality for some applications such as collision detection.
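The point-classification property is easy to sketch if primitives are represented as membership predicates (an illustrative implicit-function sketch, not how production modelers store their geometry):

```python
def sphere(cx, cy, cz, r):
    """A primitive as a membership test: is the point inside this sphere?"""
    return lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2 <= r * r

def box(lo, hi):
    """Axis-aligned box primitive between corner points lo and hi."""
    return lambda p: all(lo[i] <= p[i] <= hi[i] for i in range(3))

# Boolean operations combine membership tests, mirroring a CSG tree node.
def union(a, b):        return lambda p: a(p) or b(p)
def intersection(a, b): return lambda p: a(p) and b(p)
def difference(a, b):   return lambda p: a(p) and not b(p)

# A cube with a spherical bite taken out of one corner.
shape = difference(box((0, 0, 0), (2, 2, 2)), sphere(2, 2, 2, 1))

inside = shape((0.5, 0.5, 0.5))  # deep inside the cube
bitten = shape((1.9, 1.9, 1.9))  # inside the subtracted sphere
```

Evaluating the composed predicate at a point walks the CSG tree exactly as described: classify against each primitive, then evaluate the Boolean expression.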
Applications with CSG support

• 3Delight
• Blender (provides meta objects)
• BRL-CAD
• Clara.io
• NETGEN[1] – an automatic 3D tetrahedral mesh generator. It accepts input from constructive solid geometry (CSG) or boundary representation (BRep)
• Feature Manipulation Engine
• FreeCAD
• GtkRadiant
• HyperFun
• OpenSCAD
• PhotoRealistic RenderMan
• PLaSM – Programming Language of Solid Modeling
• POV-Ray
• SimpleGeo[2] – solid modeling for particle transport Monte Carlo simulations
• SolidWorks mechanical CAD suite
• UnBBoolean[3] – a Java3D implementation
• Vectorworks
• GiDES++[4] – a gesture-based CSG CAD
Gaming

• 3D World Studio
• UnrealEd
• Valve Hammer Editor
• Leadwerks[5]
Libraries

• Carve CSG[6] – a fast and robust constructive solid geometry library
• CSG.js[7] – a JavaScript implementation using WebGL
• GTS[8] – an open source free software library intended to provide a set of useful functions to deal with 3D surfaces meshed with interconnected triangles
• sgCore C++/C# library[9]
External links

• Leadwerks Software, 'What is Constructive Solid Geometry?'[10] – explanation of CSG definitions, equations, techniques, and uses.
References

[1] http://sourceforge.net/projects/netgen-mesher
[2] http://www.cern.ch/theis/simplegeo
[3] http://unbboolean.sourceforge.net/
[4] http://www.inevo.pt/portfolio/gides/
[5] http://www.leadwerks.com/
[6] http://code.google.com/p/carve/
[7] http://evanw.github.com/csg.js/
[8] http://gts.sourceforge.net/index.html
[9] http://www.geometros.com
[10] http://www.leadwerks.com/files/csg.pdf
Conversion between quaternions and Euler angles

Spatial rotations in three dimensions can be parametrized using both Euler angles and unit quaternions. This article explains how to convert between the two representations. Actually, this simple use of "quaternions" was first presented by Euler, some seventy years before Hamilton, to solve the problem of magic squares. For this reason the dynamics community commonly refers to quaternions in this application as "Euler parameters".
Definition

A unit quaternion can be described as:

q = q0 + q1 i + q2 j + q3 k,   with q0² + q1² + q2² + q3² = 1.

We can associate a quaternion with a rotation around an axis by the following expression:

q0 = cos(α/2)
q1 = sin(α/2) cos(βx)
q2 = sin(α/2) cos(βy)
q3 = sin(α/2) cos(βz)

where α is a simple rotation angle (the value in radians of the angle of rotation) and cos(βx), cos(βy) and cos(βz) are the "direction cosines" locating the axis of rotation (Euler's Theorem).
Rotation matrices

The orthogonal matrix (post-multiplying a column vector) corresponding to a clockwise/left-handed rotation by the unit quaternion q = q0 + q1 i + q2 j + q3 k is given by the inhomogeneous expression:

    [ 1 - 2(q2² + q3²)    2(q1 q2 + q0 q3)    2(q1 q3 - q0 q2) ]
    [ 2(q1 q2 - q0 q3)    1 - 2(q1² + q3²)    2(q2 q3 + q0 q1) ]
    [ 2(q1 q3 + q0 q2)    2(q2 q3 - q0 q1)    1 - 2(q1² + q2²) ]

or equivalently, by the homogeneous expression:

    [ q0² + q1² - q2² - q3²    2(q1 q2 + q0 q3)         2(q1 q3 - q0 q2) ]
    [ 2(q1 q2 - q0 q3)         q0² - q1² + q2² - q3²    2(q2 q3 + q0 q1) ]
    [ 2(q1 q3 + q0 q2)         2(q2 q3 - q0 q1)         q0² - q1² - q2² + q3² ]

If q is not a unit quaternion then the homogeneous form is still a scalar multiple of a rotation matrix, while the inhomogeneous form is in general no longer an orthogonal matrix. This is why in numerical work the homogeneous form is to be preferred if distortion is to be avoided.

[Figure: Euler angles – the xyz (fixed) system is shown in blue, the XYZ (rotated) system is shown in red. The line of nodes, labelled N, is shown in green.]

The direction cosine matrix corresponding to a Body 3-2-1 sequence with Euler angles (ψ, θ, φ) is given by:

    [ cos θ cos ψ                              cos θ sin ψ                              -sin θ      ]
    [ sin φ sin θ cos ψ - cos φ sin ψ          sin φ sin θ sin ψ + cos φ cos ψ          sin φ cos θ ]
    [ cos φ sin θ cos ψ + sin φ sin ψ          cos φ sin θ sin ψ - sin φ cos ψ          cos φ cos θ ]
Conversion

By combining the quaternion representations of the Euler rotations we get, for the Body 3-2-1 sequence, where the airplane first does a yaw (body-z) turn during taxiing on the runway, then pitches (body-y) during take-off, and finally rolls (body-x) in the air. The resulting orientation of the Body 3-2-1 sequence is equivalent to that of the Lab 1-2-3 sequence, where the airplane is rolled first (lab-X axis), then nosed up around the horizontal lab-Y axis, and finally rotated around the vertical Lab-Z axis:

q0 = cos(φ/2) cos(θ/2) cos(ψ/2) + sin(φ/2) sin(θ/2) sin(ψ/2)
q1 = sin(φ/2) cos(θ/2) cos(ψ/2) - cos(φ/2) sin(θ/2) sin(ψ/2)
q2 = cos(φ/2) sin(θ/2) cos(ψ/2) + sin(φ/2) cos(θ/2) sin(ψ/2)
q3 = cos(φ/2) cos(θ/2) sin(ψ/2) - sin(φ/2) sin(θ/2) cos(ψ/2)

Other rotation sequences use different conventions.
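A direct transcription of the Body 3-2-1 sequence described above, together with the inverse mapping read off the direction cosine matrix elements, can be sketched as follows (the asin argument is clamped to guard against the ±90° pitch singularity):

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Body 3-2-1 Euler angles (roll phi, pitch theta, yaw psi) -> (q0, q1, q2, q3)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)

def quaternion_to_euler(q0, q1, q2, q3):
    """Inverse mapping; clamping asin's argument avoids NaNs near pitch = +/-90 deg."""
    roll = math.atan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (q0 * q2 - q3 * q1))))
    yaw = math.atan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3))
    return (roll, pitch, yaw)
```

Round-tripping euler_to_quaternion followed by quaternion_to_euler recovers the original angles away from the pitch singularity.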
Relationship with Tait–Bryan angles

Similarly for Euler angles, we use the Tait–Bryan angles (in terms of flight dynamics):

• Roll – φ: rotation about the X-axis
• Pitch – θ: rotation about the Y-axis
• Yaw – ψ: rotation about the Z-axis

where the X-axis points forward, the Y-axis to the right and the Z-axis downward, and in the example to follow the rotation occurs in the order yaw, pitch, roll (about body-fixed axes).
Singularities

One must be aware of singularities in the Euler angle parametrization when the pitch approaches ±90° (north/south pole). These cases must be handled specially. The common name for this situation is gimbal lock.

[Figure: Tait–Bryan angles for an aircraft]

Code to handle the singularities is derived on this site: www.euclideanspace.com[1]
External links

• Q60. How do I convert Euler rotation angles to a quaternion?[2] and related questions at The Matrix and Quaternions FAQ
References

[1] http://www.euclideanspace.com/maths/geometry/rotations/conversions/quaternionToEuler/
[2] http://www.j3d.org/matrix_faq/matrfaq_latest.html#Q60
Crowd simulation
Crowd simulation is the process of simulating the movement of a large number of entities or characters, now often appearing in 3D computer graphics for film. While simulating these crowds, observed human behavior interaction is taken into account to replicate the collective behavior. It is a method of creating virtual cinematography.

The need for crowd simulation arises when a scene calls for more characters than can be practically animated using conventional systems, such as skeletons/bones. Simulating crowds offers the advantages of being cost-effective as well as allowing for total control of each simulated character or agent.

Animators typically create a library of motions, either for the entire character or for individual body parts. To simplify processing, these animations are sometimes baked as morphs. Alternatively, the motions can be generated procedurally - i.e. choreographed automatically by software.

The actual movement and interactions of the crowd is typically done in one of two ways:
Particle Motion

The characters are attached to point particles, which are then animated by simulating wind, gravity, attractions, and collisions. The particle method is usually inexpensive to implement, and can be done in most 3D software packages. However, the method is not very realistic because it is difficult to direct individual entities when necessary, and because motion is generally limited to a flat surface.
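A minimal sketch of the particle approach (illustrative constants, flat 2D ground plane): each character is a point particle pulled toward a goal and pushed away from nearby neighbors, then integrated forward one time step:

```python
import math

def step(positions, goals, dt=0.1, repel_radius=1.0, repel_gain=0.5):
    """Advance every agent one time step: goal attraction plus pairwise repulsion."""
    updated = []
    for i, (x, y) in enumerate(positions):
        gx, gy = goals[i]
        vx, vy = gx - x, gy - y                  # attraction toward the goal
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d = math.hypot(dx, dy)
            if 0 < d < repel_radius:             # short-range collision avoidance
                vx += repel_gain * dx / (d * d)
                vy += repel_gain * dy / (d * d)
        updated.append((x + dt * vx, y + dt * vy))
    return updated

# Two agents heading for the same doorway drift apart instead of overlapping.
crowd = [(0.0, 0.1), (0.0, -0.1)]
goals = [(5.0, 0.0), (5.0, 0.0)]
for _ in range(50):
    crowd = step(crowd, goals)
```

The limitation described above is visible even here: the particles have no individual intent, only forces, so directing a specific character requires changing its force field rather than issuing it a command.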
Crowd AI

The entities - also called agents - are given artificial intelligence, which guides the entities based on one or more functions, such as sight, hearing, basic emotion, energy level, aggressiveness level, etc. The entities are given goals and then interact with each other as members of a real crowd would. They are often programmed to respond to changes in environment, enabling them to climb hills, jump over holes, scale ladders, etc. This system is much more realistic than particle motion, but is very expensive to program and implement.

The most notable examples of AI simulation can be seen in New Line Cinema's The Lord of the Rings films, where AI armies of many thousands battle each other. The crowd simulation was done using Weta Digital's Massive software.
Sociology

Crowd simulation can also refer to simulations based on group dynamics and crowd psychology, often in public safety planning. In this case, the focus is just the behavior of the crowd, and not the visual realism of the simulation. Crowds have been studied as a scientific interest since the end of the 19th century. Much research has focused on the collective social behavior of people at social gatherings, assemblies, protests, rebellions, concerts, sporting events and religious ceremonies. Gaining insight into natural human behavior under varying types of stressful situations will allow better models to be created which can be used to develop crowd controlling strategies.

Emergency response teams such as police officers, the National Guard, the military and even volunteers must undergo some type of crowd control training. Using researched principles of human behavior in crowds can give disaster training designers more elements to incorporate to create realistic simulated disasters. Crowd behavior can be observed during both panic and non-panic conditions. When natural and unnatural events toss social ideals into a twisting chaotic bind, such as the events of 9/11 and Hurricane Katrina, humanity's social capabilities are truly put to the test. Military programs are looking more toward simulated training, involving emergency responses, due to its cost-effective technology as well as how effectively the learning can be transferred to the real world.[citation needed]

Many events that may start out controlled can have a twisting event that turns them into catastrophic situations, where decisions need to be made on the spot. It is these situations in which an understanding of crowd dynamics would play a vital role in reducing the potential for anarchy. Modeling techniques of crowds vary from holistic or network approaches to understanding the individualistic or behavioral aspects of each agent.
For example, the Social Force Model describes a need for individuals to find a balance between social interaction and physical interaction. An approach that incorporates both aspects, and is able to adapt depending on the situation, would better describe natural human behavior, always incorporating some measure of unpredictability. With the use of multi-agent models, understanding these complex behaviors has become a much more comprehensible task. With the use of this type of software, systems can now be tested under extreme conditions, and can simulate conditions over long periods of time in a matter of seconds.
External links

• CrowdManagementSimulation.com[1]
• CrowdSimulation.org[2] – open discussion forum on crowd simulations
• CSG[3] – crowd simulation research
• UNC GAMMA Group[4] – crowd simulation research at the University of North Carolina at Chapel Hill
• SteerSuite[5] – an open-source framework for developing and evaluating crowd simulation algorithms
• Crowd Tracking[6] – crowd tracking research in computer vision
References

[1] http://crowdmanagementsimulation.com
[2] http://crowdsimulation.org/forum
[3] http://www.crowdsimulationgroup.co.uk
[4] http://gamma.cs.unc.edu/research/crowds/
[5] http://steersuite.cse.yorku.ca/
[6] http://www.di.ens.fr/~rodrigue/crowd_tracking.html
Cutaway drawing
[Figure: a cutaway drawing of a 1942 Nash Ambassador]
A cutaway drawing, also called a cutaway diagram, is a 3D graphics drawing, diagram or illustration in which surface elements of a three-dimensional model are selectively removed to make internal features visible, but without sacrificing the outer context entirely.
Overview

According to Diepstraten et al. (2003), "the purpose of a cutaway drawing is to allow the viewer to have a look into an otherwise solid opaque object. Instead of letting the inner object shine through the surrounding surface, parts of the outside object are simply removed. This produces a visual appearance as if someone had cut out a piece of the object or sliced it into parts. Cutaway illustrations avoid ambiguities with respect to spatial ordering, provide a sharp contrast between foreground and background objects, and facilitate a good understanding of spatial ordering".[2]

Though cutaway drawings are not dimensioned manufacturing blueprints, they are meticulously drawn by a handful of devoted artists who either had access to manufacturing details or deduced them by observing the visible evidence of the hidden skeleton (e.g. rivet lines, etc.). The goal of these drawings in studies can be to identify common design patterns for particular vehicle classes. Thus, the accuracy of most of these drawings, while not 100 percent, is certainly high enough for this purpose.[3]

The technique is used extensively in computer-aided design; see the first image. It has also been incorporated into the user interface of some video games. In The Sims, for instance, users can select through a control panel whether to view the house they are building with no walls, cutaway walls, or full walls.
History

The cutaway view and the exploded view were minor graphic inventions of the Renaissance that also clarified pictorial representation. The cutaway view originates in the early fifteenth-century notebooks of Mariano Taccola (1382-1453). In the 16th century, cutaway views in definite form were used in Georgius Agricola's (1494-1555) mining book De Re Metallica to illustrate underground operations.[4] The 1556 book is a complete and systematic treatise on mining and extractive metallurgy, illustrated with many fine and interesting woodcuts which illustrate every conceivable process to extract ores from the ground and metal from the ore, and more besides. It shows the many watermills used in mining, such as the machine for lifting men and material into and out of a mine shaft (see image).
An engraving by Georgius Agricola illustrating the mining practice of fire-setting
The term "cutaway drawing" was already in use in the 19th century but became popular in the 1930s.
Technique

The location and shape of the cut into the outside object depend on many different factors, for example:
• the sizes and shapes of the inside and outside objects,
• the semantics of the objects,
• personal taste, etc.

These factors, according to Diepstraten et al. (2003), "can seldom be formalized in a simple algorithm", but the properties of a cutaway allow two classes of cutaway drawing to be distinguished:
• cutout: an illustration where the cutaway is restricted to very simple, regularly shaped, and often only a small number of planar slices into the outside object.
• breakaway: a cutaway realized by a single hole in the outside object.
Examples Some more examples of cutaway drawings, from products and systems to architectural building.
A dynamic loudspeaker
Mercury spacecraft.
16"/50 caliber Mark 7 gun
Lake Washington Ship Canal Fish Ladder
Cutaway of an inkjet printer
Cutaway of a hybrid car
References
[1] http://en.wikipedia.org/w/index.php?title=Template:Views&action=edit
[2] J. Diepstraten, D. Weiskopf & T. Ertl (2003). "Interactive Cutaway Illustrations" (http://www.vis.uni-stuttgart.de/~weiskopf/publications/eg2003.pdf). In: Eurographics 2003. P. Brunet and D. Fellner (eds). Vol. 22 (2003), Nr. 3.
[3] Mark D. Sensmeier and Jamshid A. Samareh (2003). "A Study of Vehicle Structural Layouts in Post-WWII Aircraft" (http://140.116.81.56/FS/NASA-aiaa-2004-1624.pdf). Paper, American Institute of Aeronautics and Astronautics.
[4] Eugene S. Ferguson (1999). Engineering and the Mind's Eye. p. 82.
Demoparty
A demoparty is an event that gathers demosceners[2] and other computer enthusiasts to take part in competitions.[3] A typical demoparty is a non-stop event lasting over a weekend, giving the visitors plenty of time to socialize. The competing works, at least those in the most important competitions, are usually shown at night, using a video projector and big loudspeakers. The most important competition is usually the demo compo.[4]
Concept

The visitors of a demoparty often bring their own computers to compete and show off their works. To this end, most parties provide a large hall with tables, electricity and usually a local area network connected to the Internet. In this respect, many demoparties resemble LAN parties, and many of the largest events also gather gamers and other computer enthusiasts in addition to demosceners. A major difference between a true demoparty and a LAN party is that demosceners typically spend more time socializing (often outside the actual party hall) than in front of their computers.[5]

Large parties have often tried to come up with alternative terms to describe the concept to the general public. While the events have always been known as "demoparties", "copyparties" or just "parties" by the subculture itself, they are often referred to as "computer conferences", "computer fairs", "computer festivals", "computer art festivals", "youngsters' computer events" or even "geek gatherings" or "nerd festivals" by the mass media and the general public.

Demoscene events are most frequent in continental Europe, with around fifty parties every year. In comparison, there have been only a dozen or so demoparties in the United States in total. Most events are local, gathering demomakers mostly from a single country, while the largest international parties (such as Breakpoint and Assembly) attract visitors from all over the globe.[6]

Most demoparties are relatively small, with the number of visitors varying from dozens to a few hundred. The largest events typically gather thousands of visitors, although most of them have little or no connection to the demoscene. In this respect, the scene distinguishes "pure" parties (which avoid activities and promotion unrelated to the scene) from "crossover" parties.
History

Demoparties started to appear in the 1980s in the form of copyparties, where software pirates and demomakers gathered to meet each other and share their software. Competitions did not become a major aspect of the events until the early 1990s. Copyparties mainly pertained to the Amiga and C64 scenes. As PC compatibles started to take over the market, making impressive demos and intros became more difficult. Along with increased police crackdowns on the copying of pirated software, the "underground" copyparties were gradually replaced by slightly higher-profile events that came to be known as demoparties. However, some "old-school" demosceners still prefer the word copyparty even for today's demoparties.
Breakpoint 2005: The real party is outside.
During the 1990s, the focus of the events shifted away from illegal activities toward demomaking and competitions. The copying of copyrighted material was often explicitly prohibited by the organizers, and many events also forbade the consumption of alcohol. However, illegal copying and "boozing" still continued to take place, although in a less public form.

Three well-known and appreciated large-scale demoparties were established in the early 1990s: The Party in Denmark, Assembly in Finland and The Gathering in Norway. Taking place every year and gathering thousands of visitors, these parties used to be the leading demoscene events of this period. Assembly still retains this status today. The Gathering continues to be organized yearly as a generic "computer party", but most demosceners now prefer Breakpoint in Germany, which takes place at the same time.

Assembly 2004 - a combination of a demoparty and a LAN party

The emergence of high-profile demoparties gave rise to phenomena that were not always welcomed by the scene. The events started to attract unaffiliated computer enthusiasts, often referred to as "lamers" by the original attendants. A particularly visible group at the large gatherings since the mid-1990s have been the LAN gamers, who often have very little interest in the demoscene and mainly use the party facilities for playing multi-player computer games. However, many of today's demosceners first became interested in demos and demomaking through a visit to a large demoparty.
Common properties

Parties usually last from two to four days, most often from Friday to Sunday, to ensure that sceners who work or study are also able to attend. Small parties (under 100 attendees) usually take place in cultural centres or schools, whereas larger parties (over 400-500 people) typically take place in sports halls. Entrance fees are usually between €10 and €40, depending on the size and location of the party. It is still a common practice in many countries to allow women to enter the party for free (mostly due to the low concentration of female attendees, which is usually under 20%), although most parties enforce an "only vote with ticket" rule, meaning that an attendee who got in for free can only vote in the competitions with a paid ticket.
Evoke 2002: Spectators at one of the demoshow rooms watch computer animations in 3D.
Attendees are allowed to bring their own desktop computer, but this is by no means a necessity and is usually skipped by most sceners, especially those who travel a long distance. Those who have computer-related jobs may even regard a demoparty as a well-deserved break from sitting in front of a computer. Among those who do bring a computer, it is becoming increasingly common to bring a laptop or some sort of handheld device rather than a complete desktop PC.

Partygoers often bring various senseless gadgets to parties to make their desk space look unique; this can be anything from a disco ball or a plasma lamp to a large LED display panel complete with a scrolling message about how "elite" its owner is. Many visitors also bring large loudspeakers for playing music. This kind of activity is particularly common among new partygoers, while the more experienced attendees tend to prefer a quieter and more relaxed atmosphere.

Those who need housing during the party are often offered a separate "sleeping room", usually an isolated empty room with some sort of carpet or mats, where attendees can sleep, separated from the noise. Most sceners prefer bringing sleeping bags for this, as well as inflatable mattresses or polyfoam rolls. Parties that do not offer a sleeping room generally allow sceners to sleep under the tables.

Party places are often decorated by visitors with flyers and banners. These serve promotional purposes, in most cases to advertise a certain group, but sometimes to promote a given demoscene production, such as a demo or a diskmag, possibly to be released later at the party.

A major portion of the events at a demoparty often takes place outdoors. Demosceners usually spend considerable time outside to have a beer and talk, or to engage in some sort of open-air activity such as barbecuing or sports, such as hardware throwing or soccer. It is also a common tradition to gather around a bonfire during the night, usually after the compos.
In recent years, many parties have been available to spectators through the Internet. This tradition was started by the live team of demoscene.tv, who broadcast from events live or created footage for a post-mortem video report. This has since largely been taken over by the SceneSat radio crew, who provide live streaming radio shows from parties, and larger parties now offer their own dedicated streaming video solutions.
References
[1] http://en.wikipedia.org/w/index.php?title=Template:Demoscene&action=edit
[2] demoscene study (http://www.scheib.net/play/demos/what/borzyskowski/index.html)
[3] demoparty (http://catb.org/jargon/html/D/demoparty.html)
[4] -demos- (http://www.scheib.net/play/demos/what/index.html)
[5] Breakpoint 2008 - Digital Garden // Bingen am Rhein, Germany, Easter Weekend 2008 (http://breakpoint.untergrund.net/newvisitors.php)
[6] demoscene.info - the portal on the demoscene (http://www.demoscene.info/)
External links
• Demoparty.net (http://www.demoparty.net) - collective demoscene party database
• Slengpung (http://www.slengpung.com) - demoscene party pictures, videos and party reports
• Assembly07 (http://uk.youtube.com/watch?v=mjf5dAeeHs0&feature=related) TV report
• breakpoint05 (http://video.google.com/videoplay?docid=-5626513091120110384) report on German TV (English subtitles)
• faq (http://tomaes.32x.de/text/faq.php) about the demoscene
Depth map

In 3D computer graphics a depth map is an image or image channel that contains information relating to the distance of the surfaces of scene objects from a viewpoint. The term is related to and may be analogous to depth buffer, Z-buffer, Z-buffering and Z-depth.[1] The "Z" in these latter terms relates to a convention that the central axis of view of a camera is in the direction of the camera's Z axis, and not to the absolute Z axis of a scene.
Examples
Cubic Structure
Depth Map: Nearer is darker
Depth Map: Nearer the Focal Plane is darker
Two different depth maps can be seen here, together with the original model from which they are derived. The first depth map shows luminance in proportion to the distance from the camera. Nearer surfaces are darker; further surfaces are lighter. The second depth map shows luminance in relation to the distance from a nominal focal plane. Surfaces closer to the focal plane are darker; surfaces further from the focal plane are lighter (both closer to and further away from the viewpoint).
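The two mappings described above are simple transfer functions from distance to luminance. A minimal sketch (the function names and the 8-bit grayscale output convention are illustrative assumptions, not from any particular package):

```python
import numpy as np

def depth_to_gray(z, near, far):
    """First mapping: nearer is darker (0), farther is lighter (255)."""
    t = np.clip((z - near) / (far - near), 0.0, 1.0)
    return (t * 255).astype(np.uint8)

def focal_depth_to_gray(z, focal, max_defocus):
    """Second mapping: darkness encodes distance from a nominal focal
    plane, on either side of it (both nearer and farther get lighter)."""
    t = np.clip(np.abs(z - focal) / max_defocus, 0.0, 1.0)
    return (t * 255).astype(np.uint8)
```

Applied to a per-pixel array of camera-space distances, either function produces a grayscale image like the ones shown.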
Uses

Depth maps have a number of uses, including:
• Simulating the effect of uniformly dense semi-transparent media within a scene, such as fog, smoke or large volumes of water.
• Simulating shallow depths of field, where some parts of a scene appear to be out of focus. Depth maps can be used to selectively blur an image to varying degrees. A shallow depth of field can be a characteristic of macro photography, and so the technique may form a part of the process of miniature faking.
• Z-buffering and z-culling, techniques which can be used to make the rendering of 3D scenes more efficient. They can be used to identify objects hidden from view which may therefore be ignored for some rendering purposes. This is particularly important in real-time applications such as computer games, where a fast succession of completed renders must be available in time to be displayed at a regular and fixed rate.
Fog effect
Shallow depth of field effect
• Shadow mapping - part of one process used to create shadows cast by illumination in 3D computer graphics. In this use, the depth maps are calculated from the perspective of the lights, not the viewer.
• To provide the distance information needed to create and generate autostereograms, and in other related applications intended to create the illusion of 3D viewing through stereoscopy.
• Subsurface scattering - can be used as part of a process for adding realism by simulating the semi-transparent properties of translucent materials such as human skin.
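The fog use listed above is commonly implemented by blending each pixel's color toward a fog color by a factor that decays with the depth stored in the depth map. A minimal sketch, assuming an exponential falloff of the kind used in real-time rendering (the function name and `density` parameter are illustrative):

```python
import numpy as np

def apply_fog(color, depth, fog_color, density):
    """Blend scene colors toward a fog color using per-pixel depth.

    color: (H, W, 3) float image; depth: (H, W) distances from the viewpoint.
    """
    # Fog factor: 1 = no fog (depth 0), approaching 0 = fully fogged (far away).
    f = np.exp(-density * depth)[..., None]
    return f * color + (1.0 - f) * np.asarray(fog_color)
```

The same per-pixel blend, with the fog factor derived from |z - focal| instead of z, is the basis of the depth-of-field blur mentioned in the second bullet.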
Limitations

• Single channel depth maps record the first surface seen, and so cannot display information about surfaces seen through or refracted by transparent objects, or reflected in mirrors. This can limit their use in accurately simulating depth of field or fog effects.
• Single channel depth maps cannot convey multiple distances where they occur within the view of a single pixel. This may occur where more than one object occupies the location of that pixel. This could be the case, for example, with models featuring hair, fur or grass. More generally, edges of objects may be ambiguously described where they partially cover a pixel.
• Depending on the intended use of a depth map, it may be useful or necessary to encode the map at higher bit depths. For example, an 8-bit depth map can only represent a range of up to 256 different distances.
• Depending on how they are generated, depth maps may represent the perpendicular distance between an object and the plane of the scene camera. For example, a scene camera pointing directly at - and perpendicular to - a flat surface may record a uniform distance for the whole surface. In this case, geometrically, the actual distances from the camera to the areas of the plane surface seen in the corners of the image are greater than the distances to the central area. For many applications, however, this discrepancy is not a significant issue.
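The bit-depth limitation above is just uniform quantization: an n-bit map rounds every distance to one of 2^n levels, so the worst-case error is half a level. A hypothetical helper illustrating the effect (not from any particular library):

```python
def quantize_depth(z, near, far, bits):
    """Round a distance to the nearest of 2**bits uniformly spaced levels
    over [near, far], as an n-bit depth map would store it."""
    levels = (1 << bits) - 1
    t = min(max((z - near) / (far - near), 0.0), 1.0)
    code = round(t * levels)          # the integer the depth map stores
    return near + (code / levels) * (far - near)
```

Over a 255-unit range, an 8-bit map can be off by up to half a unit, while a 16-bit map reduces the error by a factor of about 256.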
References
[1] Computer Arts / 3D World Glossary (ftp://ftp.futurenet.co.uk/pub/arts/Glossary.pdf), document retrieved 26 January 2011.
Digital puppetry

Digital puppetry is the manipulation and performance of digitally animated 2D or 3D figures and objects in a virtual environment that are rendered in real time by computers. It is most commonly used in filmmaking and television production, but has also been utilized in interactive theme park attractions and live theatre. The exact definition of what is and is not digital puppetry is subject to debate among puppeteers and computer graphics designers, but it is generally agreed that digital puppetry differs from conventional computer animation in that it involves performing characters in real time, rather than animating them frame by frame. Digital puppetry is closely associated with motion capture technologies and 3D animation, as well as skeletal animation. Digital puppetry is also known as virtual puppetry, performance animation, living animation, live animation and real-time animation (although the latter also refers to animation generated by computer game engines). Machinima is another form of digital puppetry, and Machinima performers are increasingly being identified as puppeteers.
History and usage

Early experiments

One of the earliest pioneers of digital puppetry was Lee Harrison III. He conducted experiments in the early 1960s that animated figures using analog circuits and a cathode ray tube. Harrison rigged up a body suit with potentiometers and created the first working motion capture rig, animating 3D figures in real time on his CRT screen. He made several short films with this system, which he called ANIMAC.[1]
Waldo C. Graphic

Perhaps the first truly commercially successful example of a digitally animated figure being performed and rendered in real time is Waldo C. Graphic, a character created in 1988 by Jim Henson and Pacific Data Images for the Muppet television series The Jim Henson Hour. Henson had been trying to create computer-generated puppets as early as 1985[2] and Waldo grew out of experiments Henson conducted to create a computer-generated version of his character Kermit the Frog.[3] Waldo's strength as a computer-generated puppet was that he could be controlled by a single puppeteer (Steve Whitmire[4]) in real time, in concert with conventional puppets. The computer image of Waldo was mixed with the video feed of the camera focused on the physical puppets so that all of the puppeteers in a scene could perform together. (It was already standard Muppeteering practice to use monitors while performing, so the use of a virtual puppet did not significantly increase the complexity of the system.) Afterwards, in post-production, PDI re-rendered Waldo in full resolution, adding a few dynamic elements on top of the performed motion.[5]
Waldo C. Graphic can be seen today in Jim Henson's Muppet*Vision 3D at the Disney's Hollywood Studios and Disney California Adventure Park theme parks.
Mike Normal

Another significant development in digital puppetry in 1988 was Mike Normal, which Brad deGraf and partner Michael Wahrman developed to show off the real-time capabilities of Silicon Graphics' then-new 4D series workstations. Unveiled at the 1988 SIGGRAPH convention, it was the first live performance of a digital character. Mike was a sophisticated talking head driven by a specially built controller that allowed a single puppeteer to control many parameters of the character's face, including mouth, eyes, expression, and head position.[6] The system developed by deGraf/Wahrman to perform Mike Normal was later used to create a representation of the villain Cain in the motion picture RoboCop 2, believed to be the first example of digital puppetry used to create a character in a full-length motion picture. Trey Stokes was the puppeteer for both Mike Normal's SIGGRAPH debut and RoboCop 2.
Sesame Street: Elmo's World

One of the most widely seen successful examples of digital puppetry in a TV series is Sesame Street's "Elmo's World" segment. A set of furniture characters was created with CGI to perform simultaneously with Elmo and other real puppets. They were performed in real time on set, simultaneously with the live puppet performances. As with the example of Henson's Waldo C. Graphic above, the digital puppets' video feed was seen live by both the digital and physical puppet performers, allowing the digital and physical characters to interact.[7]
Cave Troll and Gollum in The Lord of the Rings: The Fellowship of the Ring (2001)

In 2000, Ramon Rivero was the first person to perform a digital puppet using optical motion capture against pre-recorded action footage of a feature film. The character was the Cave Troll in the first film of The Lord of the Rings trilogy. The motion capture technology was created by Biomechanics Inc in Atlanta (now Giant Studios); Rivero's ideas contributed to enhancements to the technology, directly related to marker systems, virtual feedback of footage and computerized versions of the film sets, as well as the retargeting software called CharMapper (short for Character Mapper). Although the final footage was made with keyframe animation, a few seconds of Rivero's original performance can still be seen in the film. The character Gollum, tested by Rivero but performed by Andy Serkis, was also made with the same technology and is still considered the epitome of a virtual character in the film industry. Unlike the Cave Troll, most of the animation of Gollum made it to the final footage using the original motion-captured performance.
Bugs Live

"Bugs Live" was a digital puppet of Bugs Bunny created by Phillip Reay for Warner Brothers Pictures. The puppet was created using hand-drawn frames of animation that were puppeteered by Bruce Lanoil and David Barclay. The Bugs Live puppet was used to create nearly 900 minutes of live, fully interactive interviews of the 2D animated Bugs character about his role in the movie Looney Tunes: Back in Action, in English and Spanish. Bugs Live also appeared at the 2004 SIGGRAPH Digital Puppetry Special Session with the Muppet puppet Gonzo.
Disney theme parks

Walt Disney Imagineering has also been an important innovator in the field of digital puppetry, developing new technologies as part of its "Living Character Initiative" in Disney theme parks. In 2004 it used digital puppetry techniques to create the Turtle Talk with Crush attractions at Epcot and Disney California Adventure Park. In the attraction, a hidden puppeteer performs and voices a digital puppet of Crush, the laid-back sea turtle from Finding Nemo, on a large rear-projection screen. To the audience, Crush appears to be swimming inside an aquarium and engages in unscripted, real-time conversations with theme park guests.

Disney Imagineering continued its use of digital puppetry with the Monsters, Inc. Laugh Floor, an attraction in Tomorrowland at Walt Disney World's Magic Kingdom, which opened in the spring of 2007. Guests temporarily enter the "monster world" introduced in Disney and Pixar's 2001 film, Monsters, Inc., where they are entertained by Mike Wazowski and other monster comedians who are attempting to capture laughter, which they convert to energy. Much like Turtle Talk, the puppeteers interact with guests in real time, just as a real-life comedian would interact with his or her audience.

Disney also uses digital puppetry techniques in Stitch Encounter, which opened in 2006 at the Hong Kong Disneyland park. Disney has another version of the same attraction at Disneyland Resort Paris called Stitch Live!
Types of digital puppetry

Waldo puppetry

A digital puppet is controlled onscreen by a puppeteer who uses a telemetric input device connected to the computer. The X-Y-Z axis movement of the input device causes the digital puppet to move correspondingly. A keyboard, mouse or joystick-like device is sometimes used in place of a telemetric control. Software for this purpose has been developed by Reallusion, GoAnimate and others.
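The axis-to-motion mapping described above is, at its core, a direct transfer of input-device deltas onto the puppet's position each frame. A deliberately minimal sketch (an invented helper, not software from any actual puppetry system):

```python
def waldo_update(puppet_pos, dx, dy, dz, gain=1.0):
    """Apply one frame of telemetric input to a puppet's 3D position.

    puppet_pos: (x, y, z) tuple; dx, dy, dz: input-device movement deltas.
    gain scales device motion into puppet-space motion.
    """
    x, y, z = puppet_pos
    return (x + gain * dx, y + gain * dy, z + gain * dz)
```

A real system would map many more channels (mouth, eyes, expression) the same way, one input axis per puppet parameter, sampled every frame so the puppet responds in real time.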
Motion capture puppetry (mocap puppetry) or performance animation

An object (puppet) or human body is used as a physical representation of a digital puppet and manipulated by a puppeteer. The movements of the object or body are matched correspondingly by the digital puppet in real time.
Machinima

A production technique that can be used to perform digital puppets. Machinima involves creating computer-generated imagery (CGI) using the low-end 3D engines in video games. Players act out scenes in real time using characters and settings within a game, and the resulting footage is recorded and later edited into a finished film.
References
[1] A Critical History of Computer Graphics and Animation: Analog approaches, non-linear editing, and compositing (http://accad.osu.edu/~waynec/history/lesson12.html), accessed April 28, 2007
[2] Sturman, David J. A Brief History of Motion Capture for Computer Character Animation (http://www.siggraph.org/education/materials/HyperGraph/animation/character_animation/motion_capture/history1.htm), accessed February 9, 2007
[3] Finch, Christopher. Jim Henson: The Works (New York: Random House, 1993)
[4] Henson.com Featured Creature: Waldo C. Graphic (archive.org) (http://web.archive.org/web/20030222193241/http://henson.com/fun/fcreature/waldo_fcbts.html), accessed February 9, 2007
[5] Walters, Graham. The story of Waldo C. Graphic. Course Notes: 3D Character Animation by Computer, ACM SIGGRAPH '89, Boston, July 1989, pp. 65-79
[6] Barbara Robertson, Mike, the talking head. Computer Graphics World, July 1988, pp. 15-17
[7] Yilmaz, Emre. Elmo's World: Digital Puppetry on Sesame Street. Conference Abstracts and Applications, SIGGRAPH 2001, Los Angeles, August 2001, p. 178
External links
• The Henson Digital Puppetry Wiki - wiki for Henson Digital Puppetry projects, people, characters, and technology.
• Animata (http://animata.kibu.hu) - free, open source real-time animation software commonly used to create digital puppets.
• Mike the talking head (http://mambo.ucsc.edu/psl/mike.html) - web page about Mike Normal, one of the earliest examples of digital puppetry.
Dilution of precision (computer graphics)

Dilution of precision is an algorithmic trick used to handle difficult problems in hidden line removal, caused when horizontal and vertical edges lie on top of each other due to numerical instability. Numerically, the severity escalates when a CAD model is viewed along the principal axes or when a geometric form is viewed end-on. The trick is to alter the view vector by a small amount, thereby hiding the flaws. Unfortunately, this mathematical modification introduces new issues of its own, namely that the exact nature of the original problem has been destroyed, and visible artifacts of this kludge will continue to haunt the algorithm. One such issue is that edges that were well defined and hidden will now be problematic. Another common issue is that the bottom edges of circles viewed end-on will often become visible and propagate their visibility throughout the problem.
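The "alter the view vector by a small amount" trick described above might be sketched as follows; this is a minimal illustration, and the size and direction of the nudge are arbitrary assumptions (any perturbation that breaks exact axis alignment serves the purpose):

```python
import math

def jitter_view(view, epsilon=1e-4):
    """Perturb a 3D view vector slightly so it no longer aligns exactly
    with a principal axis, then renormalize to unit length."""
    x, y, z = view
    # Unequal nudges per component so no coordinate stays exactly zero.
    x += epsilon
    y += 2 * epsilon
    z += 3 * epsilon
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)
```

The resulting vector is visually indistinguishable from the original, but exact coincidences between projected horizontal and vertical edges are broken, which is the whole point of the trick (and the source of the residual artifacts the text mentions).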
External links • http://wheger.tripod.com/vhl/vhl.htm
Doo–Sabin subdivision surface

In computer graphics, a Doo–Sabin subdivision surface is a type of subdivision surface based on a generalization of bi-quadratic uniform B-splines. It was developed in 1978 by Daniel Doo and Malcolm Sabin.[1][2] The process generates one new face at each original vertex, n new faces along each original edge, and n × n new faces at each original face. A primary characteristic of the Doo–Sabin subdivision method is the creation of four faces around every vertex. A drawback is that the faces created at the vertices are not necessarily coplanar.
Evaluation
Simple Doo–Sabin subdivision surface. The figure shows the limit surface, as well as the control point wireframe mesh.
Doo–Sabin surfaces are defined recursively. Each refinement iteration replaces the current mesh with a smoother, more refined mesh. After many iterations, the surface gradually converges to a smooth limit surface. The figure shows the effect of two refinement iterations on a T-shaped quadrilateral mesh.
Just as for Catmull–Clark surfaces, Doo–Sabin limit surfaces can also be evaluated directly, without any recursive refinement, by means of the technique of Jos Stam.[3] The solution is, however, not as computationally efficient as for Catmull–Clark surfaces, because the Doo–Sabin subdivision matrices are not in general diagonalizable.
References
[1] D. Doo: A subdivision algorithm for smoothing down irregularly shaped polyhedrons, Proceedings on Interactive Techniques in Computer Aided Design, pp. 157-165, 1978 (pdf (http://trac2.assembla.com/DooSabinSurfaces/export/12/trunk/docs/Doo 1978 Subdivision algorithm.pdf))
[2] D. Doo and M. Sabin: Behavior of recursive division surfaces near extraordinary points, Computer-Aided Design, 10 (6) 356-360 (1978) (doi (http://dx.doi.org/10.1016/0010-4485(78)90111-2), pdf (http://www.cs.caltech.edu/~cs175/cs175-02/resources/DS.pdf))
[3] Jos Stam: Exact Evaluation of Catmull–Clark Subdivision Surfaces at Arbitrary Parameter Values, Proceedings of SIGGRAPH '98. In Computer Graphics Proceedings, ACM SIGGRAPH, 1998, 395-404 (pdf (http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/sig98.pdf), downloadable eigenstructures (http://www.dgp.toronto.edu/~stam/reality/Research/SubdivEval/index.html))

External links
• Doo–Sabin surfaces (http://graphics.cs.ucdavis.edu/education/CAGDNotes/Doo-Sabin/Doo-Sabin.html)
Draw distance

Draw distance is a computer graphics term, defined as the maximum distance at which objects in a three-dimensional scene are drawn by the rendering engine. Polygons that lie beyond the draw distance are not drawn to the screen. As the draw distance increases, more distant polygons need to be drawn that would otherwise be clipped. This requires more computing power; the graphical quality and realism of the scene increase with draw distance, but overall performance (frames per second) decreases. Many games and applications allow users to set the draw distance manually to balance performance and visuals.
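The culling described above reduces to a per-object distance test against the camera. A minimal sketch (names and the data layout are illustrative; real engines cull whole spatial regions rather than individual objects, and compare squared distances to avoid a square root):

```python
def cull_by_draw_distance(objects, camera, draw_distance):
    """Return the names of objects within draw_distance of the camera.

    objects: list of (name, (x, y, z)) tuples; camera: (x, y, z).
    Uses a squared-distance comparison so no sqrt is needed.
    """
    dd2 = draw_distance * draw_distance
    cx, cy, cz = camera
    visible = []
    for name, (x, y, z) in objects:
        d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
        if d2 <= dd2:
            visible.append(name)
    return visible
```

Raising `draw_distance` admits more objects into the visible set each frame, which is exactly the quality-versus-performance trade-off the text describes.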
Problems in older games

Older games had far shorter draw distances, most noticeable in vast, open scenes. Racing arcade games were particularly infamous, as their open highways and roads often led to "pop-up graphics", or "pop-in": an effect where distant objects suddenly appear without warning as the camera gets closer to them. This is a hallmark of poor draw distance, and it still plagues large, open-ended games like the Grand Theft Auto series and Second Life.[citation needed] Formula 1 97 offered a setting so the player could choose between a fixed draw distance (with variable frame rate) or a fixed frame rate (with variable draw distance).
Alternatives
A common trick used in games to disguise a short draw distance is to obscure the area with a distance fog. Alternative methods have been developed to sidestep the problem altogether using level-of-detail manipulation. Black & White was one of the earlier games to use adaptive level of detail, decreasing the number of polygons in objects as they moved away from the camera, which allowed it to have a massive draw distance while maintaining detail in close-up views.
The Legend of Zelda: The Wind Waker uses a variant of the level-of-detail technique mentioned above. The game's overworld is divided into 49 squares. Each square contains an island, with considerable distance between the island and the borders of its square. Everything within a square is loaded when it is entered, including all models used in close-up views and animations; using the telescope item, one can see just how detailed even faraway areas are. However, textures are not displayed at a distance; they are faded in as one approaches the square's island (this may actually be an aesthetic effect rather than a means of freeing system resources). Islands outside of the current square are less detailed, but these faraway island models do not degenerate any further than that, even though some of them can be seen from everywhere else in the overworld. There is no distance fog in either indoor or outdoor areas, although "distance" fog is used in some areas as an atmospheric effect. As a consequence of the developers' excessive attention to detail, however, some areas of the game have lower frame rates due to the large number of enemies on screen.
Grand Theft Auto III made particular use of fogging; however, this made the game less playable when driving or flying at high speed, as objects would pop up out of the fog and cause the player to crash into them. Halo 3 is claimed by its creators at Bungie Studios to have a draw distance upwards of 14 miles.
This is an example of the vastly improved draw distances made possible by more recent game consoles. In addition, Crysis is said to have a draw distance of up to 16 kilometers (9.9 mi), while Cube 2: Sauerbraten has a potentially unlimited draw distance, possibly due to its larger map size. Grand Theft Auto V was praised for its seemingly infinite draw distance despite having a large, detailed map.
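The distance-fog trick mentioned above is typically implemented as a blend between the object's colour and the fog colour, driven by distance. Below is a minimal sketch of the classic fixed-function linear fog equation; the colours and fog range used in the example are illustrative.

```python
def linear_fog_factor(distance, fog_start, fog_end):
    """Classic linear fog factor: 1.0 = fully visible, 0.0 = fully fogged."""
    f = (fog_end - distance) / (fog_end - fog_start)
    return max(0.0, min(1.0, f))

def apply_fog(object_color, fog_color, distance, fog_start, fog_end):
    """Blend an RGB colour towards the fog colour with distance."""
    f = linear_fog_factor(distance, fog_start, fog_end)
    return tuple(f * o + (1.0 - f) * c for o, c in zip(object_color, fog_color))

# halfway into the fog band, a red object is half blended into grey fog
print(apply_fog((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), 150.0, 100.0, 200.0))
# (0.75, 0.25, 0.25)
```

Objects at or beyond `fog_end` take the fog colour entirely, which is why popping objects can be hidden: they materialize while still fully fogged.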
External links
• Draw distance/fog problem - Beyond3D Forum (http://forum.beyond3d.com/showthread.php?t=42599)
• How to: Optimize Your Frame Rates - Features at GameSpot (http://www.gamespot.com/features/6168650/index.html?cpage=9)
Edge loop
An edge loop, in computer graphics, can loosely be defined as a set of connected edges across a surface. Usually the last edge meets the first edge again, thus forming a loop. The set or string of edges can, for example, be the outer edges of a flat surface or the edges surrounding a 'hole' in a surface.
In a stricter sense, an edge loop is defined as a set of edges in which the loop follows the middle edge at every 'four-way junction'.[1] The loop ends when it encounters another type of junction (three- or five-way, for example). Take, for example, an edge on a mesh surface that meets three other edges at one end, making a four-way junction: by following the middle 'road' each time, you would either end up with a completed loop or the edge loop would end at another type of junction.
Edge loops are especially practical in organic models which need to be animated. In organic modeling, edge loops play a vital role in proper deformation of the mesh.[2] A properly modeled mesh will take into careful consideration the placement and termination of these edge loops. Generally, edge loops follow the structure and contour of the muscles that they mimic. For example, in modeling a human face, edge loops should follow the orbicularis oculi muscle around the eyes and the orbicularis oris muscle around the mouth. The hope is that by mimicking the way the muscles are formed, they also aid in the way the muscles are deformed by way of contractions and expansions. An edge loop closely mimics how real muscles work and, if built correctly, will give you control over contour and silhouette in any position.
An important part of developing proper edge loops is understanding poles.[3] The E(5) pole and the N(3) pole are the two most important poles in developing both proper edge loops and a clean topology on your model. The E(5) pole is derived from an extruded face.
When this face is extruded, four 4-sided polygons are formed in addition to the original face. Each lower corner of these four polygons forms a five-way junction. Each one of these five-way junctions is an E-pole. An N(3) Pole is formed when 3 edges meet at one point creating a three-way junction. The N(3) Pole is important in that it redirects the direction of an edge loop.
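The "follow the middle edge at every four-way junction" rule can be sketched directly in code. In this hypothetical representation, each vertex maps to its neighbouring vertices in cyclic order, so the middle (opposite) edge of a four-way junction is simply two steps around the ring; the small quad-ring mesh in the example is invented for illustration.

```python
def walk_edge_loop(edges_in_cyclic_order, start, first_step):
    """Follow an edge loop: at every four-way junction continue along the
    'middle' (opposite) edge; stop when the loop closes or another kind
    of junction (three- or five-way) is reached."""
    loop = [start]
    prev, cur = start, first_step
    while cur != start:
        loop.append(cur)
        ring = edges_in_cyclic_order[cur]
        if len(ring) != 4:                  # not a four-way junction: loop ends
            return loop
        i = ring.index(prev)
        prev, cur = cur, ring[(i + 2) % 4]  # opposite edge = two steps around
    return loop

# a ring of four vertices, each with two off-loop neighbours ("uN"/"dN")
quad_ring = {
    0: [3, "u0", 1, "d0"],
    1: [0, "u1", 2, "d1"],
    2: [1, "u2", 3, "d2"],
    3: [2, "u3", 0, "d3"],
}
print(walk_edge_loop(quad_ring, 0, 1))  # [0, 1, 2, 3]
```

The off-loop neighbours never need entries of their own here, since the walk only ever passes through four-way junctions on the loop itself.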
References
[1] Edge Loop (http://wiki.cgsociety.org/index.php/Edge_Loop), CG Society
[2] Modeling With Edge Loops (http://zoomy.net/2008/04/02/modeling-with-edge-loops/), Zoomy.net
[3] "The pole" (http://www.subdivisionmodeling.com/forums/showthread.php?t=907), SubdivisionModeling.com
External links • Edge Loop (http://wiki.cgsociety.org/index.php/Edge_Loop), CG Society
Euler operator
In mathematics, Euler operators may refer to:
• Euler–Lagrange differential operator d/dx, see Lagrangian system
• Cauchy–Euler operators, e.g. x·d/dx
• quantum white noise conservation, or QWN-Euler operator
Euler operators (Euler operations)
In solid modeling and computer-aided design, the Euler operators modify the graph of connections to add or remove details of a mesh while preserving its topology. They were named by Baumgart [1] after the Euler–Poincaré characteristic. He chose a set of operators sufficient to create useful meshes; some lose information and so are not invertible.
The boundary representation for a solid object, its surface, is a polygon mesh of vertices, edges and faces. Its topology is captured by the graph of the connections between faces. A given mesh may actually contain multiple unconnected shells (or bodies); each body may be partitioned into multiple connected components, each defined by its edge loop boundary. To represent a hollow object, the inside and outside surfaces are separate shells.
Let the number of vertices be V, edges be E, faces be F, components H, shells S, and let the genus be G (S and G correspond to the b0 and b2 Betti numbers respectively). Then, to denote a meaningful geometric object, the mesh must satisfy the generalized Euler–Poincaré formula

V – E + F = H + 2 * (S – G)

The Euler operators preserve this characteristic. The Eastman paper lists the following basic operators and their effects on the various terms (n denotes the number of edges and faces removed by the two "kill" operators):

Name      Description                                   ΔV   ΔE   ΔF   ΔH   ΔS   ΔG
MBFLV     Make Body-Face-Loop-Vertex                     1    0    1    0    1    0
MEV       Make Edge-Vertex                               1    1    0    0    0    0
MEFL      Make Edge-Face-Loop                            0    1    1    0    0    0
MEKL      Make Edge, Kill Loop                           0    1    0   -1    0    0
KFLEVB    Kill Faces-Loops-Edges-Vertices-Body          -2   -n   -n    0   -1    0
KFLEVMG   Kill Faces-Loops-Edges-Vertices, Make Genus   -2   -n   -n    0    0    1
Geometry
Euler operators modify the mesh's graph, creating or removing faces, edges and vertices according to simple rules while preserving the overall topology, thus maintaining a valid boundary (i.e. not introducing holes). The operators themselves don't define how geometric or graphical attributes (e.g. position, gradient, UV texture coordinates) map to the new graph; this will depend on the particular implementation.
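The invariant can be checked mechanically: an operator preserves the Euler–Poincaré formula exactly when its deltas satisfy the same linear relation. A small Python sketch, using the operator deltas from the table above (with n chosen arbitrarily for the two "kill" operators):

```python
def preserves_euler_poincare(dv, de, df, dh, ds, dg):
    # V - E + F = H + 2(S - G) stays satisfied exactly when the
    # operator's deltas obey the same linear relation.
    return dv - de + df == dh + 2 * (ds - dg)

n = 5  # arbitrary positive count for the "kill" operators
operators = {
    "MBFLV":   (1, 0, 1, 0, 1, 0),    # Make Body-Face-Loop-Vertex
    "MEV":     (1, 1, 0, 0, 0, 0),    # Make Edge-Vertex
    "MEFL":    (0, 1, 1, 0, 0, 0),    # Make Edge-Face-Loop
    "MEKL":    (0, 1, 0, -1, 0, 0),   # Make Edge, Kill Loop
    "KFLEVB":  (-2, -n, -n, 0, -1, 0),
    "KFLEVMG": (-2, -n, -n, 0, 0, 1),
}
print(all(preserves_euler_poincare(*deltas) for deltas in operators.values()))  # True
```

Any candidate operator failing this check (for instance, adding a vertex without an edge or face) would break the boundary representation.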
References
[1] Baumgart, B.G., "Winged edge polyhedron representation", Stanford Artificial Intelligence Report No. CS-320, October 1972.
• (see also Winged edge#External links)
• Eastman, Charles M. and Weiler, Kevin J., "Geometric modeling using the Euler operators" (1979). Computer Science Department. Paper 1587. http://repository.cmu.edu/compsci/1587. Unfortunately this typo-ridden (OCR'd?) paper can be quite hard to read.
• Easier-to-read reference (http://solidmodel.me.ntu.edu.tw/lessoninfo/file/Chapter03.pdf), from a solid-modelling course at NTU.
• Another reference (http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/model/euler-op.html) that uses a slightly different definition of terms.
• Sven Havemann, Generative Mesh Modeling (http://www.eg.org/EG/DL/dissonline/doc/havemann.pdf), PhD thesis, Braunschweig University, Germany, 2005.
• Martti Mäntylä, An Introduction to Solid Modeling, Computer Science Press, Rockville MD, 1988. ISBN 0-88175-108-1.
Explicit modeling
With explicit modeling, designers quickly and easily create 3D CAD designs, which they then modify through direct, on-the-fly interactions with the model geometry.
Advantages
The explicit approach is flexible and easy to use, so it's ideal for companies that create one-off or highly customized products: products that simply don't require all the extra effort of up-front planning and the embedding of information within models. With an explicit approach to 3D design, the interaction is with the model geometry and not with an intricate sequence of design features. That makes initial training on the software easier. But it also means designers working with an explicit 3D CAD system can easily pick up a design where others left off, much like anyone can open up and immediately continue working on a Microsoft Word document. Thus explicit modeling appeals to a variety of audiences: companies with flexible staff, infrequent users of 3D CAD, and anyone who is concurrently involved in a large number of design projects.
Use in repurposing
When designers repurpose a model, they take an existing 3D CAD design and radically transform it by cutting/copying/pasting geometry to derive a new model that has no relationship to the original model. With an explicit approach, companies have demonstrated accelerated product development by repurposing existing designs into new and completely different products. This unique characteristic of an explicit approach can shave weeks or even months from project schedules. Even with direct modeling capabilities, the parametric approach is still designed to leverage embedded product information. The explicit approach, on the other hand, intentionally limits the amount of information captured as part of the model definition in order to provide a genuinely lightweight and flexible product design process.
Parametric vs. explicit approach
With a parametric approach, data files include parameters, dimensions, features, and relationships that capture intended behavior. An explicit approach, however, reduces data files to the 3D geometry only, dramatically reducing the design data of each individual part, so large and complex designs don't overwhelm hardware or software. Smaller file sizes mean designers can load and store data files faster, reload and update parts to new revisions instantly, and make better overall use of their computer memory.
Use in data management
When combined with a data management system, an explicit 3D CAD system can also help manage complex relationships associated with large assemblies. For example, an integrated data management system automates revisioning and encourages true concurrent team design because all designers have access to the most up-to-date design data. When all design data is centralized in a common database, companies can ensure that no one works on the wrong revision of a component or changes a component reserved by someone else.
Explicit 3D CAD systems excel at importing and modifying multisource CAD data, which benefits companies working across an extended supply chain for procured components or subcontracted design. STEP and IGES are essentially native 3D design data formats in an explicit approach, because explicit 3D CAD systems interact intelligently and on-the-fly with geometry, and geometry is the only common element across all CAD systems. The explicit approach to 3D design, with its lower overhead and flexibility, offers a better solution, especially for companies that rely on the ability to radically adapt and change to new and shifting design requirements.
Use in product development
Companies that develop new-to-market and one-off product designs often face changing customer and product requirements throughout the development cycle. An explicit approach is always open to change, so companies can keep the window for new product information and major product changes open longer. Unlike other 3D design approaches, including hybrids, explicit modeling can offer true flexibility because it doesn't require any upfront planning or the embedding of design information within models.
False radiosity
False radiosity is a 3D computer graphics technique used to create texture mapping for objects that emulates patch interaction algorithms in radiosity rendering. Though practiced in some form since the late 1990s, this term was coined only around 2002 by architect Andrew Hartness, then head of 3D and real-time design at Ateliers Jean Nouvel.
During the period of nascent commercial enthusiasm for radiosity-enhanced imagery, but prior to the democratization of powerful computational hardware, architects and graphic artists experimented with time-saving 3D rendering techniques. By darkening areas of texture maps corresponding to corners, joints and recesses, and applying the maps via self-illumination or diffuse mapping in a 3D program, a radiosity-like effect of patch interaction could be created with a standard scan-line renderer. Successful emulation of radiosity required a theoretical understanding and graphic application of patch view factors, path tracing and global illumination algorithms. Texture maps were usually produced with image-editing software, such as Adobe Photoshop. The advantages of this method are decreased rendering time and easily modifiable overall lighting strategies.
Another common approach similar to false radiosity is the manual placement of standard omni-type lights with limited attenuation at places in the 3D scene where the artist would expect radiosity reflections to occur. This method uses many lights and can require an advanced light-grouping system, depending on what assigned materials/objects are illuminated, how many surfaces require false-radiosity treatment, and to what extent it is anticipated that lighting strategies be set up for frequent changes.
References
[1] Autodesk interview with Hartness about False Radiosity and real-time design (http://usa.autodesk.com/adsk/servlet/item?siteID=123112&id=5549510&linkID=10371177)
Fiducial marker
A fiducial marker or fiducial is an object placed in the field of view of an imaging system which appears in the image produced, for use as a point of reference or a measure. It may be either something placed into or on the imaging subject, or a mark or set of marks in the reticle of an optical instrument.
Accuracy
In high-resolution optical microscopy, fiducials can be used to actively stabilize the field of view. Stabilization to better than 0.1 nm is achievable (Carter et al., Applied Optics, 2007).
Applications
Physics
In physics, 3D computer graphics, and photography, fiducials are reference points: fixed points or lines within a scene to which other objects can be related or against which objects can be measured. Cameras outfitted with reseau plates produce these reference marks (also called reseau crosses) and are commonly used by NASA. Such marks are closely related to the timing marks used in optical mark recognition.
Geographical Survey
Airborne geophysical surveys also use the term "fiducial" as a sequential reference number in the measurement of various geophysical instruments during a survey flight. This application of the term evolved from the air-photo frame numbers that were originally used to locate geophysical survey lines in the early days of airborne geophysical surveying. This method of positioning has since been replaced by GPS, but the term "fiducial" continues to be used as the time reference for data measured during flights.
Virtual Reality
In applications of augmented reality or virtual reality, fiducials are often manually applied to objects in a scene so that the objects can be recognized in images of the scene. For example, to track some object, a light-emitting diode can be applied to it. With knowledge of the color of the emitted light, the object can easily be identified in the picture.
The appearance of markers in images may act as a reference for image scaling, or may allow the image and physical object, or multiple independent images, to be correlated. By placing fiducial markers at known locations in a subject, the relative scale in the produced image may be determined by comparing the locations of the markers in the image and in the subject.
In applications such as photogrammetry, the fiducial marks of a surveying camera may be set so that they define the principal point, in a process called "collimation" (a creative use of the term as it is conventionally understood).
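As a sketch of the scaling idea described above: given the pixel positions of two fiducials and their known physical separation, the image scale follows directly. The positions and distances here are invented for illustration.

```python
import math

def image_scale(marker_a_px, marker_b_px, physical_distance):
    """Physical units per pixel, from two fiducials with a known
    real-world separation."""
    pixel_distance = math.hypot(marker_b_px[0] - marker_a_px[0],
                                marker_b_px[1] - marker_a_px[1])
    return physical_distance / pixel_distance

# markers 50 mm apart in the subject that appear 400 px apart in the image
print(image_scale((100, 100), (500, 100), 50.0))  # 0.125 mm per pixel
```

Any pixel measurement in the image can then be multiplied by this factor to recover a physical length, assuming the markers and the measured feature lie in roughly the same plane.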
Medical Imaging
Fiducial markers are used in a wide range of medical imaging applications. Images of the same subject produced with two different imaging systems may be correlated by placing a fiducial marker in the area imaged by both systems. In this case, a marker which is visible in the images produced by both imaging modalities must be used. By this method, functional information from SPECT or positron emission tomography can be related to anatomical information provided by magnetic resonance imaging (MRI).[1] Similarly, fiducial points established during MRI can be correlated with brain images generated by magnetoencephalography to localize the source of brain activity. Such fiducial points or markers are often created in tomographic images, such as magnetic resonance and computed tomography images, using a device known as the N-localizer.
ECG
In electrocardiography, fiducial points are landmarks on the ECG complex such as the isoelectric line (PQ junction) and the onset of individual waves such as PQRST.
Cell Biology
In processes that involve following a labelled molecule as it is incorporated into some larger polymer, such markers can be used to follow the dynamics of growth/shrinkage of the polymer, as well as its movement. Commonly used fiducial markers are fluorescently labelled monomers of bio-polymers. The task of measuring and quantifying what happens to these is borrowed from methods in physics and computational imaging, such as speckle imaging.
Radio Therapy
In radiotherapy and radiosurgical systems such as the CyberKnife, fiducial points are landmarks in the tumour that facilitate correct targeting for treatment. In neuronavigation, a "fiducial spatial coordinate system" is used as a reference, for use in neurosurgery, to describe the position of specific structures within the head or elsewhere in the body. Such fiducial points or landmarks are often created in magnetic resonance imaging and computed tomography images by using the N-localizer.
PCB
In printed circuit board (PCB) design, fiducial marks, also known as circuit pattern recognition marks or simply "fids", allow automated assembly equipment to accurately locate and place parts on boards. These marks locate the circuit pattern by providing common measurable points. They are usually made by leaving a circular area of the board bare of solder-stop coating (similar to clearcoat), in which a filled copper circle is placed. This central metallic disc can be solder-coated, gold-plated or otherwise treated, although bare copper is most common, as it is not a current-carrying contact.
[Image: fiducial marker for a chip to the right and the whole PCB beneath]
Most placement devices are fed boards for assembly by a rail conveyor, with the board being clamped down in the assembly area of the machine. Each board will clamp slightly differently than the others, and the variance, which will generally be only tenths of a millimeter, is sufficient to ruin a board without proper calibration. Consequently, a typical PCB will have three fids to allow placement robots to precisely determine the board's orientation. By measuring the location of the fids relative to the board plan stored in the machine's memory, the machine can reliably compute the degree to which parts must be moved relative to the plan, called offset, to ensure accurate placement. Using three fiducials enables the machine to determine offset in both the X and Y axes, as well as to determine whether the board has rotated during clamping, allowing the machine to rotate parts to be placed to match. Parts requiring a very
high degree of placement precision, such as integrated circuit chip packages with many fine leads, may have subsidiary fiducial marks near the package placement area of the board to further fine-tune the targeting. Conversely, low-end, low-precision boards may have only two fiducials, or use fiducials applied as part of the screen printing that most circuit boards receive. Some very low-end boards may use the plated mounting screw holes as ersatz fiducials, although this yields very low accuracy. For prototyping and small-batch production runs, the use of a fiducial camera can greatly improve the process of board fabrication. By automatically locating fiducial markers, the camera automates board alignment. This helps with front-to-back and multilayer applications, eliminating the need for set pins.[2]
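The offset computation described above can be sketched as a simplified rigid fit: the rotation comes from the angle between corresponding fiducial pairs, and the translation from the centroids. This is an illustrative reduction (a production machine would typically use a least-squares fit over all fiducials); all names and coordinates are invented.

```python
import math

def board_offset(nominal, measured):
    """Estimate the rotation (radians) and translation of a clamped board
    from two or more fiducial positions: nominal from the board plan,
    measured by the placement machine's camera."""
    (nx0, ny0), (nx1, ny1) = nominal[0], nominal[-1]
    (mx0, my0), (mx1, my1) = measured[0], measured[-1]
    # rotation: angle between the nominal and measured fiducial baselines
    theta = (math.atan2(my1 - my0, mx1 - mx0)
             - math.atan2(ny1 - ny0, nx1 - nx0))
    # translation: what moves the rotated nominal centroid onto the measured one
    ncx = sum(p[0] for p in nominal) / len(nominal)
    ncy = sum(p[1] for p in nominal) / len(nominal)
    mcx = sum(p[0] for p in measured) / len(measured)
    mcy = sum(p[1] for p in measured) / len(measured)
    c, s = math.cos(theta), math.sin(theta)
    tx = mcx - (c * ncx - s * ncy)
    ty = mcy - (s * ncx + c * ncy)
    return theta, (tx, ty)

# board shifted by (1, 2) mm with no rotation during clamping
theta, t = board_offset([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
                        [(1.0, 2.0), (11.0, 2.0), (1.0, 12.0)])
print(round(theta, 6), [round(v, 6) for v in t])  # 0.0 [1.0, 2.0]
```

Applying the inverse of this rotation and translation to every planned placement position compensates for the clamping variance.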
Printing
In color printing, fiducials, also called "registration black", are used at the edge of the cyan, magenta, yellow and black (CMYK) printing plates so that they can be correctly aligned with each other.
References
[1] Correlation of single photon emission CT with MR image data using fiduciary markers (http://www.ajnr.org/cgi/content/abstract/14/3/713). BJ Erickson and CR Jack Jr., American Journal of Neuroradiology, Vol 14, Issue 3, 713-720 (1993).
[2] http://www.youtube.com/watch?v=-tVZ-sdxG2o
Fluid simulation
[Image: example of fluid simulation]
Fluid simulation is an increasingly popular tool in computer graphics for generating realistic animations of water, smoke, explosions, and related phenomena. Given some input configuration of fluid and scene geometry, a fluid simulator evolves the motion of the fluid forward in time, making use of the (possibly heavily simplified) Navier-Stokes equations which describe the physics of fluids. In computer graphics, such simulations range in complexity from extremely time-consuming, high-quality animations for film and visual effects to simple real-time particle systems used in modern games.
Approaches
There are several competing techniques for liquid simulation, with a variety of trade-offs. The most common are Eulerian grid-based methods, smoothed particle hydrodynamics (SPH) methods, vorticity-based methods, and Lattice Boltzmann methods. These methods originated in the computational fluid dynamics community and have steadily been adopted by graphics practitioners. The key difference in the graphics setting is that the results need only be plausible: if a human observer is unable to identify by inspection whether a given animation is physically correct, the results are sufficient, whereas in physics, engineering, or mathematics, more rigorous error metrics are necessary.
Development
In computer graphics, the earliest attempts to solve the Navier-Stokes equations in full 3D came in 1996 from Nick Foster and Dimitris Metaxas, who based their work primarily on a classic CFD paper from 1965 by Harlow & Welch. Prior to this, many methods were built on ad-hoc particle systems, lower-dimensional techniques such as 2D shallow-water models, and semi-random turbulent noise fields. In 1999, Jos Stam published the so-called Stable Fluids method at SIGGRAPH, which exploited a semi-Lagrangian advection technique and implicit integration of viscosity to provide unconditionally stable behaviour. This allowed for much larger time steps and, in general, faster simulations. This general technique was extended by Fedkiw and collaborators to handle complex 3D water simulations using the level set method in papers in 2001 and 2002. Some notable academic researchers in this area include James F. O'Brien, Ron Fedkiw, Mark Carlson, Greg Turk, Robert Bridson, Ken Museth and Jos Stam.
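The semi-Lagrangian advection at the heart of Stam's Stable Fluids can be illustrated in one dimension: each grid sample traces backwards along the velocity field and interpolates the advected quantity at the departure point, which is what makes the method stable for any time step. A minimal sketch (uniform grid, linear interpolation, clamped boundaries):

```python
def advect_1d(field, velocity, dt, dx):
    """Semi-Lagrangian advection: trace each sample backwards along the
    velocity, then linearly interpolate the field there.  The departure
    point is clamped to the grid, so any dt remains stable."""
    n = len(field)
    out = []
    for i in range(n):
        x = i - dt * velocity[i] / dx   # departure point, in grid units
        x = max(0.0, min(n - 1.0, x))
        i0 = int(x)
        i1 = min(i0 + 1, n - 1)
        t = x - i0
        out.append((1.0 - t) * field[i0] + t * field[i1])
    return out

# a ramp carried one cell to the right by a uniform velocity
print(advect_1d([0.0, 1.0, 2.0, 3.0], [1.0, 1.0, 1.0, 1.0], 1.0, 1.0))
# [0.0, 0.0, 1.0, 2.0]
```

A full solver alternates this advection step with pressure projection and (implicitly integrated) diffusion, but the unconditional stability comes from this backtracing-and-interpolating step alone.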
Software
Several options are available for fluid simulation in off-the-shelf 3D packages. A popular open-source package is Blender 3D, with a stable Lattice Boltzmann method implemented in addition to two distinct SPH approaches. Another option is Glu3d, a plugin for 3ds Max very similar to Blender's fluid capability. Other options are RealFlow, FumeFX and AfterBurn for Max; Dynamite for LightWave 3D; ICE SPH Fluids and Mootzoid's emFluid4 for Softimage; and Turbulence.4D, PhyFluids3D and DPIT for Cinema 4D. Houdini and Maya support fluids natively; however, plugins can be bought to improve the simulations.
External links
• Fusion CI Studios, Fluid FX Specialists (http://www.fusioncis.com/)
• Flowline Homepage (http://www.flowlines.info/)
• Glu3d Homepage (http://3daliens.com/glu3D/index.htm)
• ICE SPH Fluids Homepage (http://groups.google.com/group/ICE_SPH)
• Mootzoid's emFluid4 Webpage (http://www.mootzoid.com/wb/pages/softimagexsi/emfluid4.php)
• RealFlow Homepage (http://www.realflow.com/)
• Blender Homepage (http://www.blender3d.com)
• AfterBurn Homepage (http://www.afterworks.com/)
• DPIT Nature Spirit Homepage (http://www.dpit2.de/)
• Ron Fedkiw's Homepage (http://graphics.stanford.edu/~fedkiw/)
• Berkeley Computer Animation Homepage (http://www.cs.berkeley.edu/b-cam/)
• Fluid Simulation for Video Games (http://software.intel.com/en-us/articles/fluid-simulation-for-video-games-part-1/)
Forward kinematic animation
Forward kinematic animation is a method in 3D computer graphics for animating models. The essential concept of forward kinematic animation is that the positions of particular parts of the model at a specified time are calculated from the position and orientation of the object, together with any information on the joints of an articulated model. For example, if the object to be animated is an arm with the shoulder remaining at a fixed location, the location of the tip of the thumb would be calculated from the angles of the shoulder, elbow, wrist, thumb and knuckle joints. Three of these joints (the shoulder, wrist and the base of the thumb) have more than one degree of freedom, all of which must be taken into account. If the model were an entire human figure, then the location of the shoulder would also have to be calculated from other properties of the model.
Forward kinematic animation can be distinguished from inverse kinematic animation by this means of calculation: in inverse kinematics, the orientation of articulated parts is calculated from the desired position of certain points on the model. It is also distinguished from other animation systems by the fact that the motion of the model is defined directly by the animator; no account is taken of any physical laws that might be in effect on the model, such as gravity or collision with other models.
Forward kinematics
Forward kinematics refers to the use of the kinematic equations of a robot to compute the position of the end-effector from specified values for the joint parameters. The kinematics equations of the robot are used in robotics, computer games, and animation. The reverse process, which computes the joint parameters that achieve a specified position of the end-effector, is known as inverse kinematics.
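For a planar serial chain, forward kinematics reduces to accumulating joint angles and summing link vectors. A minimal sketch; the two-link arm in the example is a hypothetical illustration, with each joint angle measured relative to the previous link.

```python
import math

def forward_kinematics_2d(link_lengths, joint_angles):
    """End-effector position of a planar serial chain; each joint angle
    is relative to the previous link (radians)."""
    x = y = 0.0
    heading = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        heading += angle          # accumulate relative joint angles
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# two unit links, both joints at 90 degrees: up one unit, then back along -x,
# so the end-effector lands near (-1, 1)
print(forward_kinematics_2d([1.0, 1.0], [math.pi / 2, math.pi / 2]))
```

The general spatial case replaces these angle sums with products of rigid transformation matrices, as developed in the following sections.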
Kinematics equations
[Image: an articulated six-DOF robotic arm uses forward kinematics to position the gripper]
The kinematics equations for the series chain of a robot are obtained using a rigid transformation [Z] to characterize the relative movement allowed at each joint and a separate rigid transformation [X] to define the dimensions of each link. The result is a sequence of rigid transformations alternating joint and link transformations from the base of the chain to its end link, which is equated to the specified position for the end link:

[T] = [Z1][X1][Z2][X2]...[X(n-1)][Zn]

where [T] is the transformation locating the end-link. These equations are called the kinematics equations of the serial chain.[1]
Link transformations
In 1955, Jacques Denavit and Richard Hartenberg introduced a convention for the definition of the joint matrices [Z] and link matrices [X] to standardize the coordinate frame for spatial linkages.[2][3] This convention positions the joint frame so that it consists of a screw displacement along the Z-axis,

[Zi] = Trans(di) Rot(θi),

and it positions the link frame so it consists of a screw displacement along the X-axis,

[Xi] = Trans(a(i,i+1)) Rot(α(i,i+1)).

Using this notation, each link transformation along a serial chain robot can be described by the coordinate transformation

[(i-1)Ti] = [Zi][Xi],

where θi, di, α(i,i+1) and a(i,i+1) are known as the Denavit-Hartenberg parameters.
[Image: the forward kinematics equations define the trajectory of the end-effector of a PUMA robot reaching for parts]
Kinematics equations revisited
The kinematics equations of a serial chain of n links, with joint parameters θi, are given by

[T] = [0T1][1T2]...[(n-1)Tn]

where [(i-1)Ti] is the transformation matrix from the frame of link i-1 to link i. In robotics, these are conventionally described by Denavit–Hartenberg parameters.
Denavit-Hartenberg matrix
The matrices associated with these operations are:

Trans(di) Rot(θi) =
[ cos θi  -sin θi   0   0  ]
[ sin θi   cos θi   0   0  ]
[ 0        0        1   di ]
[ 0        0        0   1  ]

Similarly,

Trans(a(i,i+1)) Rot(α(i,i+1)) =
[ 1   0               0              a(i,i+1) ]
[ 0   cos α(i,i+1)   -sin α(i,i+1)   0        ]
[ 0   sin α(i,i+1)    cos α(i,i+1)   0        ]
[ 0   0               0              1        ]

The use of the Denavit-Hartenberg convention yields the link transformation matrix [(i-1)Ti] as

[ cos θi  -sin θi cos α(i,i+1)   sin θi sin α(i,i+1)   a(i,i+1) cos θi ]
[ sin θi   cos θi cos α(i,i+1)  -cos θi sin α(i,i+1)   a(i,i+1) sin θi ]
[ 0        sin α(i,i+1)          cos α(i,i+1)          di              ]
[ 0        0                     0                     1               ]
known as the Denavit-Hartenberg matrix.
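The Denavit-Hartenberg link matrix and its composition along a chain can be written out directly. Below is a minimal Python sketch using plain nested lists (no matrix library); the two-link planar arm at the end is an illustrative assumption, not an example from the text.

```python
import math

def dh_matrix(theta, d, a, alpha):
    """4x4 Denavit-Hartenberg link transformation [i-1 T i], i.e. the
    product Rot_z(theta) Trans_z(d) Trans_x(a) Rot_x(alpha)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,       d],
            [0.0,     0.0,      0.0,     1.0]]

def mat_mul(A, B):
    """Product of two 4x4 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def chain(dh_params):
    """Compose the link matrices of a serial chain into [T]."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, d, a, alpha in dh_params:
        T = mat_mul(T, dh_matrix(theta, d, a, alpha))
    return T

# hypothetical two-link planar arm (d = alpha = 0), both joints at 90 degrees:
# the end-effector lands near (-1, 1)
T = chain([(math.pi / 2, 0.0, 1.0, 0.0), (math.pi / 2, 0.0, 1.0, 0.0)])
print(round(T[0][3], 6), round(T[1][3], 6))  # -1.0 1.0
```

The last column of [T] gives the end-effector position, and the upper-left 3x3 block gives its orientation.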
References
[1] J. M. McCarthy, 1990, Introduction to Theoretical Kinematics, MIT Press, Cambridge, MA.
[2] J. Denavit and R.S. Hartenberg, 1955, "A kinematic notation for lower-pair mechanisms based on matrices." Trans ASME J. Appl. Mech, 23:215–221.
[3] Hartenberg, R. S., and J. Denavit. Kinematic Synthesis of Linkages. New York: McGraw-Hill, 1964; on-line through KMODDL (http://ebooks.library.cornell.edu/k/kmoddl/toc_hartenberg1.html)
Freeform surface modelling
Freeform surface modelling is the art of engineering freeform surfaces with a CAD or CAID system. The technology encompasses two main fields: creating aesthetic surfaces (class A surfaces) that also perform a function, for example car bodies and consumer product outer forms; and creating technical surfaces for components such as gas turbine blades and other fluid-dynamic engineering components.
CAD software packages use two basic methods for the creation of surfaces. The first begins with construction curves (splines), from which the 3D surface is then swept (section along guide rail) or meshed (lofted). The second method is direct creation of the surface by manipulation of the surface poles/control points.
[Image: a surface being created from curves]
From these initially created surfaces, other surfaces are constructed either using derived methods, such as offset or angled extensions from surfaces, or via bridging and blending between groups of surfaces.
[Image: surface edit by poles]
Surfaces
[Image: variable smooth blend between surfaces]
Freeform surface, or freeform surfacing, is used in CAD and other computer graphics software to describe the skin of a 3D geometric element. Freeform surfaces do not have rigid radial dimensions, unlike regular surfaces such as planes, cylinders and conic surfaces. They are used to describe forms such as turbine blades, car bodies and boat hulls. Initially developed for the automotive and aerospace industries, freeform surfacing is now widely used in all engineering design disciplines, from consumer goods products to ships. Most systems today use nonuniform rational B-spline (NURBS) mathematics to describe the surface forms; however, there are other methods, such as Gordon surfaces or Coons surfaces.
The forms of freeform surfaces (and curves) are not stored or defined in CAD software in terms of polynomial equations, but by their poles, degree, and number of patches (segments with spline curves). The degree of a surface determines its mathematical properties, and can be seen as representing the shape by a polynomial with variables to the power of the degree value. For example, a surface with a degree of 1 would be a flat cross-section surface. A surface with degree 2 would be curved in one direction, while a degree 3 surface could (but does not necessarily) change once from concave to convex curvature. Some CAD systems use the term order instead of degree. The order of a polynomial is one greater than the degree, and gives the number of coefficients rather than the greatest exponent.
The poles (sometimes known as control points) of a surface define its shape. The natural surface edges are defined by the positions of the first and last poles. (Note that a surface can have trimmed boundaries.) The intermediate poles act like magnets drawing the surface in their direction; the surface does not, however, pass through these points.
The second and third poles, as well as defining shape, respectively determine the start and tangent angles and the curvature. In a single-patch surface (a Bézier surface), there is one more pole than the degree value of the surface in each direction. Surface patches can be merged into a single NURBS surface; at these junctions are knot lines. The number of knots determines the influence of the poles on either side and how smooth the transition is. The smoothness between patches, known as continuity, is often referred to in terms of a C value:
• C0: just touching, could have a nick
• C1: tangent, but could have a sudden change in curvature
• C2: the patches are curvature continuous to one another
Two more important aspects are the U and V parameters. These are values on the surface ranging from 0 to 1, used in the mathematical definition of the surface and for defining paths on the surface: for example, a trimmed boundary edge. Note that they are not proportionally spaced along the surface. A curve of constant U or constant V is known as an isoparametric curve, or U (V) line. In CAD systems, surfaces are often displayed with their poles of constant U or constant V values connected together by lines; these are known as control polygons.
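To make the role of degree and poles concrete, here is a minimal Python sketch (the function names and the 4×4 pole layout are illustrative, not taken from any CAD system) that evaluates a single bicubic Bézier patch from its poles using the Bernstein basis. It shows the surface starting exactly at a corner pole, while an interior pole only pulls the surface toward itself:

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_patch(poles, u, v):
    """Evaluate a single-patch Bezier surface at (u, v) in [0, 1]^2.

    poles: array of shape (n+1, m+1, 3); the degree is (n, m), so there is
    one more pole than the degree value in each direction."""
    n, m = poles.shape[0] - 1, poles.shape[1] - 1
    point = np.zeros(3)
    for i in range(n + 1):
        for j in range(m + 1):
            point += bernstein(n, i, u) * bernstein(m, j, v) * poles[i, j]
    return point

# A bicubic (degree 3 x 3) patch: 4x4 poles on a flat grid...
poles = np.array([[(i, j, 0.0) for j in range(4)] for i in range(4)], dtype=float)
poles[1, 1, 2] = 1.0          # ...with one interior pole pulled up

corner = bezier_patch(poles, 0.0, 0.0)   # natural edge corner: equals pole (0, 0)
inner = bezier_patch(poles, 1/3, 1/3)    # drawn toward the raised pole
print(corner, inner[2])
```

The corner evaluates exactly to the first pole, while the height near the raised pole lies strictly between 0 and 1: the pole acts like a magnet, but the surface does not pass through it.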
Modelling
When defining a form, an important factor is the continuity between surfaces: how smoothly they connect to one another. One example of where surfacing excels is automotive body panels. Merely blending two curved areas of a panel with different radii of curvature together, maintaining tangential continuity (meaning that the blended surface doesn't change direction suddenly, but smoothly), is not enough. The panels need a continuous rate of curvature change between the two sections, or else their reflections will appear disconnected. The continuity is defined using the terms:
• G0 – position (touching)
• G1 – tangent (angle)
• G2 – curvature (radius)
• G3 – acceleration (rate of change of curvature)
To achieve a high-quality NURBS or Bézier surface, degrees of 5 or greater are generally used. Depending on the product and production process, different levels of accuracy are used, but tolerances usually range from 0.02 mm to 0.001 mm (for example, in the fairing of BIW concept surfaces to production surfaces). For shipbuilding the tolerance need not be so tight, but for precision gears and medical devices it is much finer.
History of terms
The term lofting originally came from the shipbuilding industry, where loftsmen worked in "barn loft" type structures to create the keel and bulkhead forms out of wood. The practice was then passed on to the aircraft and later automotive industries, which also required streamlined shapes. The term spline also has nautical origins, coming from an East Anglian dialect word for a long thin strip of wood (probably from the Old English and Germanic word splint).
Freeform surface modelling software
• CATIA
• Cobalt (Ashlar-Vellum [1])
• form•Z [2]
• PowerSHAPE [3]
• Solidworks
• SolidThinking
• ICEM Surf
• Imageware
• ProEngineer ISDX [4]
• NX (Unigraphics)
• ProEngineer
• Rhinoceros 3D
• VSR Shape Modeling [5]
• FreeForm Modeling Plus from SensAble Technologies [6], now part of Geomagic Design
• Autodesk Inventor
• Alias StudioTools
• FreeSHIP [7] (link broken as of Jan 2014)
• GenesisIOD [8]
• OmniCAD [9]
• Thinkdesign [10]
• MicroStation (Bentley Systems Inc. [11])
• Shark FX (Punch! [12])
• Moi (Moment of Inspiration), 3D modeling for designers and artists [13]
• Blender, free 3D modelling software from the Blender Foundation
References
[1] http://www.ashlar.com/sections/products/cobalt/cobalt.html
[2] http://www.formz.com
[3] http://www.powershape.com/
[4] http://www.ptc.com/products/creo/interactive-surface-design-extension
[5] http://www.virtualshape.com/en/products/shape-modeling
[6] http://www.sensable.com/
[7] http://www.freeship.org
[8] http://www.right-toolbox.com.ar/genesis/index.html
[9] http://www.omnicad.com
[10] http://www.superfici3d.com
[11] http://www.bentley.com
[12] http://punchcad.com/index_pro.htm
[13] http://moi3d.com/
Geometry instancing
In real-time computer graphics, geometry instancing is the practice of rendering multiple copies of the same mesh in a scene at once. This technique is primarily used for objects such as trees, grass, or buildings which can be represented as repeated geometry without appearing unduly repetitive, but may also be used for characters. Although vertex data is duplicated across all instanced meshes, each instance may have other differentiating parameters (such as color, or skeletal animation pose) changed in order to reduce the appearance of repetition.
API support for geometry instancing
Starting in Direct3D version 9, Microsoft included support for geometry instancing. This method improves the potential runtime performance of rendering instanced geometry by explicitly allowing multiple copies of a mesh to be rendered sequentially by specifying the differentiating parameters for each in a separate stream. The same functionality is available in the OpenGL core in versions 3.1 and up, and may be accessed in some earlier implementations using the EXT_draw_instanced extension.
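The idea of a single shared mesh plus a separate per-instance parameter stream can be sketched on the CPU in Python. This is a conceptual illustration only, not the Direct3D or OpenGL API; the names and data here are made up:

```python
import numpy as np

# Shared vertex data for one mesh (a single triangle), stored once.
mesh = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])

# Per-instance parameter "stream": a translation and a uniform scale per copy.
instances = [
    {"offset": np.array([0.0, 0.0, 0.0]), "scale": 1.0},
    {"offset": np.array([5.0, 0.0, 0.0]), "scale": 2.0},
    {"offset": np.array([0.0, 5.0, 0.0]), "scale": 0.5},
]

def draw_instanced(mesh, instances):
    """Transform the shared mesh once per instance; a real renderer would
    issue one draw call covering all instances rather than one per copy."""
    return [inst["scale"] * mesh + inst["offset"] for inst in instances]

batches = draw_instanced(mesh, instances)
```

The vertex data exists once; only the small per-instance records differ, which is what keeps instanced rendering cheap in both memory and draw-call overhead.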
Geometry instancing in offline rendering
Geometry instancing in Houdini, Maya or other 3D packages usually involves mapping a static or pre-animated object or piece of geometry to particles or arbitrary points in space, which can then be rendered by almost any offline renderer. Geometry instancing in offline rendering is useful for creating things like swarms of insects, in which each one can be detailed but still behaves in a realistic way that does not have to be determined by the animator. Most packages allow variation of the material or material parameters on a per-instance basis, which helps ensure that instances do not appear to be exact copies of each other. In Houdini, many object-level attributes (such as scale) can also be varied on a per-instance basis. Because instancing geometry in most 3D packages only references the original object, file sizes are kept very small and changing the original changes all of the instances. In many offline renderers, such as Pixar's PhotoRealistic RenderMan, instancing is achieved by using delayed-load render procedurals to load geometry only when the bucket containing the instance is actually being rendered. This means that the geometry for all the instances does not have to be in memory at once.
Video cards that support geometry instancing
• GeForce 6000 and up (NV40 GPU or later)
• ATI Radeon 9500 and up (R300 GPU or later)
External links
• EXT_draw_instanced documentation [1]
• A quick overview on D3D9 instancing on MSDN [2]
References
[1] http://www.opengl.org/registry/specs/EXT/draw_instanced.txt
[2] http://msdn.microsoft.com/en-us/library/bb173349(VS.85).aspx
Geometry pipelines
Geometric manipulation of modeling primitives, such as that performed by a geometry pipeline, is the first stage in computer graphics systems which perform image generation based on geometric models. While geometry pipelines were originally implemented in software, they have become highly amenable to hardware implementation, particularly since the advent of very-large-scale integration (VLSI) in the early 1980s. A device called the Geometry Engine, developed by Jim Clark and Marc Hannah at Stanford University in about 1981, was the watershed for what has since become an increasingly commoditized function in contemporary image-synthetic raster display systems. Geometric transformations are applied to the vertices of polygons, or other geometric objects used as modelling primitives, as part of the first stage in a classical geometry-based graphic image rendering pipeline. Geometric computations may also be applied to transform polygon or patch surface normals, and then to perform the lighting and shading computations used in their subsequent rendering.
History
Hardware implementations of the geometry pipeline were introduced in the early Evans & Sutherland Picture System, but perhaps received broader recognition when later applied in the broad range of graphics systems products introduced by Silicon Graphics (SGI). Initially the SGI geometry hardware performed simple model space to screen space viewing transformations with all the lighting and shading handled by a separate hardware implementation stage, but in later, much higher performance applications such as the RealityEngine, it began to be applied to perform part of the rendering support as well. More recently, perhaps dating from the late 1990s, the hardware support required to perform the manipulation and rendering of quite complex scenes has become accessible to the consumer market. Companies such as Nvidia and AMD Graphics (formerly ATI) are two current leading representatives of hardware vendors in this space. The GeForce line of graphics cards from Nvidia was the first to support full OpenGL and Direct3D hardware geometry processing in the consumer PC market, while some earlier products such as Rendition Verite incorporated hardware geometry processing through proprietary programming interfaces. On the whole, earlier graphics accelerators by 3Dfx, Matrox and others relied on the CPU for geometry processing. This subject matter is part of the technical foundation for modern computer graphics, and is a comprehensive topic taught at both the undergraduate and graduate levels as part of a computer science education.
Geometry processing
Geometry processing, or mesh processing, is a fast-growing[citation needed] area of research that uses concepts from applied mathematics, computer science and engineering to design efficient algorithms for the acquisition, reconstruction, analysis, manipulation, simulation and transmission of complex 3D models. Applications of geometry processing algorithms already cover a wide range of areas from multimedia, entertainment and classical computer-aided design, to biomedical computing, reverse engineering and scientific computing.[citation needed]
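As a small, concrete example of the kind of mesh-analysis pass this definition covers, the following Python sketch (made-up data, no particular library) computes area-weighted vertex normals for a triangle mesh, a routine building block of geometry-processing pipelines:

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted vertex normals for a triangle mesh."""
    normals = np.zeros_like(vertices)
    for a, b, c in faces:
        # The cross product of two edges has length equal to twice the
        # triangle's area, so summing it weights each face by area.
        fn = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        for idx in (a, b, c):
            normals[idx] += fn
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)

# Two triangles forming a flat square in the z = 0 plane.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = [(0, 1, 2), (0, 2, 3)]
normals = vertex_normals(verts, faces)
```

For this flat mesh every vertex normal points along +z; on a curved mesh the same pass blends the surrounding face orientations at each vertex.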
External links
• Siggraph 2001 Course on Digital Geometry Processing (http://www.multires.caltech.edu/pubs/DGPCourse/), by Peter Schroder and Wim Sweldens
• Symposium on Geometry Processing (http://www.geometryprocessing.org/)
• Multi-Res Modeling Group (http://www.multires.caltech.edu/), Caltech
• Mathematical Geometry Processing Group (http://geom.mi.fu-berlin.de/index.html), Free University of Berlin
• Computer Graphics Group (http://www.graphics.rwth-aachen.de), RWTH Aachen University
• Polygonal Mesh Processing Book (http://www.pmp-book.org/)
Gimbal lock
Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space. The word lock is misleading: no gimbal is restrained. All three gimbals can still rotate freely about their respective axes of suspension. Nevertheless, because of the parallel orientation of two of the gimbals' axes, there is no gimbal available to accommodate rotation about one axis.
Gimbals
A gimbal is a ring suspended so that it can rotate about an axis. Gimbals typically nest one within another to accommodate rotation about multiple axes.
Gimbal with 3 axes of rotation. A set of three gimbals mounted together to allow three degrees of freedom: roll, pitch and yaw. When two gimbals rotate around the same axis, the system loses one degree of freedom.
They appear in gyroscopes and in inertial measurement units to allow the inner gimbal's orientation to remain fixed while the outer gimbal suspension assumes any orientation. In compasses and flywheel energy storage mechanisms they allow objects to remain upright. They are used to orient thrusters on rockets. Some coordinate systems in mathematics behave as if real gimbals were used to measure the angles, notably Euler angles. For cases of three or fewer nested gimbals, gimbal lock inevitably occurs at some point in the system due to properties of covering spaces (described below).
Gimbal lock in mechanical engineering

Gimbal lock in two dimensions
Adding a fourth rotational axis can solve the problem of gimbal lock, but it requires the outermost ring to be actively driven so that it stays 90 degrees out of alignment with the innermost axis (the flywheel shaft). Without active driving of the outermost ring, all four axes can become aligned in a plane as shown above, again leading to gimbal lock and inability to roll.
Gimbal lock can occur in gimbal systems with two degrees of freedom, such as a theodolite with rotations about an azimuth and elevation in two dimensions. These systems can gimbal lock at zenith and nadir, because at those points azimuth is not well-defined, and rotation in the azimuth direction does not change the direction the theodolite is pointing.
Consider tracking a helicopter flying towards the theodolite from the horizon. The theodolite is a telescope mounted on a tripod so that it can move in azimuth and elevation to track the helicopter. The helicopter flies towards the theodolite and is tracked by the telescope in elevation and azimuth. The helicopter passes immediately above the tripod (i.e. it is at zenith) when it changes direction and flies at 90 degrees to its previous course. The telescope cannot track this maneuver without a discontinuous jump in one or both of the gimbal orientations; there is no continuous motion that allows it to follow the target. It is in gimbal lock. Thus there is an infinity of directions around zenith for which the telescope cannot continuously track all movements of a target. Even if the helicopter does not pass through zenith, but only near it, so that gimbal lock does not occur, the system must still move exceptionally rapidly to track it as it rapidly passes from one bearing to the other. The closer to zenith the nearest point is, the faster this must be done, and if the target actually passes through zenith, the limit of these "increasingly rapid" movements becomes infinitely fast, namely discontinuous.
To recover from gimbal lock the user has to go around the zenith: explicitly, reduce the elevation, change the azimuth to match the azimuth of the target, then change the elevation to match the target. Mathematically, this corresponds to the fact that spherical coordinates do not define a coordinate chart on the sphere at zenith and nadir. Alternatively, the corresponding map T2→S2 from the torus T2 to the sphere S2 (given by the point with given azimuth and elevation) is not a covering map at these points.
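The "increasingly rapid" azimuth motion near zenith can be checked numerically. The following Python sketch (the speed, sampling density and function name are illustrative assumptions) estimates the peak azimuth rate for a straight, level fly-by that misses the point directly over the theodolite by a horizontal distance d:

```python
import numpy as np

def max_azimuth_rate(d, v=50.0, n=20001):
    """Peak azimuth rate (rad/s) seen by a theodolite for a target on a
    straight, level track that passes horizontal distance d from zenith."""
    t = np.linspace(-10.0, 10.0, n)        # seconds around closest approach
    x, y = v * t, np.full_like(t, d)       # ground track offset d from zenith
    az = np.unwrap(np.arctan2(y, x))       # azimuth angle over time
    return np.max(np.abs(np.gradient(az, t)))

# The closer the pass comes to zenith, the faster the azimuth axis must slew;
# in the limit d -> 0 the required motion becomes a discontinuous 180-degree jump.
print(max_azimuth_rate(10.0), max_azimuth_rate(1.0), max_azimuth_rate(0.1))
```

Analytically the peak rate is v/d at closest approach, so halving the miss distance doubles the required slew rate, diverging as the track approaches zenith.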
Gimbal lock in three dimensions
Consider a case of a level-sensing platform on an aircraft flying due north with its three gimbal axes mutually perpendicular (i.e., roll, pitch and yaw angles each zero). If the aircraft pitches up 90 degrees, the aircraft and platform's yaw-axis gimbal becomes parallel to the roll-axis gimbal, and changes in yaw can no longer be compensated for.
Solutions
This problem may be overcome by use of a fourth gimbal, intelligently driven by a motor so as to maintain a large angle between roll and yaw gimbal axes. Another solution is to rotate one or more of the gimbals to an arbitrary position when gimbal lock is detected and thus reset the device. Modern practice is to avoid the use of gimbals entirely. In the context of inertial navigation systems, that can be done by mounting the inertial sensors directly to the body of the vehicle (this is called a strapdown system) and integrating sensed rotation and acceleration digitally using quaternion methods to derive vehicle orientation and velocity. Another way to replace gimbals is to use fluid bearings or a flotation chamber.
Gimbal locked airplane. When the pitch (green) and yaw (magenta) gimbals become aligned, changes to roll (blue) and yaw apply the same rotation to the airplane.
Gimbal lock on Apollo 11
A well-known gimbal lock incident happened in the Apollo 11 Moon mission. On this spacecraft, a set of gimbals was used on an inertial measurement unit (IMU). The engineers were aware of the gimbal lock problem but had declined to use a fourth gimbal. Some of the reasoning behind this decision is apparent from the following quote:
"The advantages of the redundant gimbal seem to be outweighed by the equipment simplicity, size advantages, and corresponding implied reliability of the direct three degree of freedom unit."
—David Hoag, Apollo Lunar Surface Journal
Normal situation: the three gimbals are independent
They preferred an alternate solution using an indicator that would be triggered when near to 85 degrees pitch.
"Near that point, in a closed stabilization loop, the torque motors could theoretically be commanded to flip the gimbal 180 degrees instantaneously. Instead, in the LM, the computer flashed a 'gimbal lock' warning at 70 degrees and froze the IMU at 85 degrees" —Paul Fjeld, Apollo Lunar Surface Journal Rather than try to drive the gimbals faster than they could go, the system simply gave up and froze the platform. From this point, the spacecraft would have to be manually moved away from the gimbal lock position, and the platform would have to be manually realigned using the stars as a reference.
Gimbal lock: two out of the three gimbals are in the same plane, one degree of freedom is lost
After the Lunar Module had landed, Mike Collins aboard the Command Module joked "How about sending me a fourth gimbal for Christmas?"
Robotics
In robotics, gimbal lock is commonly referred to as "wrist flip", due to the use of a "triple-roll wrist" in robotic arms, where three axes of the wrist, controlling yaw, pitch, and roll, all pass through a common point. An example of a wrist flip, also called a wrist singularity, is when the path through which the robot is traveling causes the first and third axes of the robot's wrist to line up. The second wrist axis then attempts to spin 180° in zero time to maintain the orientation of the end effector. The result of a singularity can be quite dramatic and can have adverse effects on the robot arm, the end effector, and the process. The importance of non-singularities in robotics has led the American National Standard for Industrial Robots and Robot Systems — Safety Requirements to define it as "a condition caused by the collinear alignment of two or more robot axes resulting in unpredictable robot motion and velocities".[1]
Gimbal lock in applied mathematics
The problem of gimbal lock appears when one uses Euler angles in applied mathematics; developers of 3D computer programs, such as 3D modeling, embedded navigation systems, and video games must take care to avoid it. In formal language, gimbal lock occurs because the map from Euler angles to rotations (topologically, from the 3-torus T3 to the real projective space RP3) is not a covering map – it is not a local homeomorphism at every point, and thus at some points the rank (degrees of freedom) must drop below 3, at which point gimbal lock occurs. Euler angles provide a means for giving a numerical description of any rotation in three-dimensional space using three numbers, but not only is this description not unique, but there are some points where not every change in the target space (rotations) can be realized by a change in the source space (Euler angles). This is a topological constraint – there is no covering map from the 3-torus to the 3-dimensional real projective space; the only (non-trivial) covering map is from the 3-sphere, as in the use of quaternions.
To make a comparison, all the translations can be described using three numbers $x$, $y$ and $z$, as the succession of three consecutive linear movements along three perpendicular axes $X$, $Y$ and $Z$. The same holds for rotations: all the rotations can be described using three numbers $\alpha$, $\beta$ and $\gamma$, as the succession of three rotational movements around three axes that are perpendicular one to the next. This similarity between linear coordinates and angular coordinates makes Euler angles very intuitive, but unfortunately they suffer from the gimbal lock problem.

Loss of a degree of freedom with Euler angles
A rotation in 3D space can be represented numerically with matrices in several ways. One of these representations is:

$$R = \begin{pmatrix}\cos\alpha & -\sin\alpha & 0\\ \sin\alpha & \cos\alpha & 0\\ 0 & 0 & 1\end{pmatrix} \begin{pmatrix}1 & 0 & 0\\ 0 & \cos\beta & -\sin\beta\\ 0 & \sin\beta & \cos\beta\end{pmatrix} \begin{pmatrix}\cos\gamma & -\sin\gamma & 0\\ \sin\gamma & \cos\gamma & 0\\ 0 & 0 & 1\end{pmatrix}$$

with $\alpha$ and $\gamma$ constrained in the interval $[-\pi, \pi]$ and $\beta$ constrained in the interval $[0, \pi]$.

Let's examine, for example, what happens when $\beta = 0$. Knowing that $\cos 0 = 1$ and $\sin 0 = 0$, the above expression becomes equal to:

$$R = \begin{pmatrix}\cos\alpha & -\sin\alpha & 0\\ \sin\alpha & \cos\alpha & 0\\ 0 & 0 & 1\end{pmatrix} \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix} \begin{pmatrix}\cos\gamma & -\sin\gamma & 0\\ \sin\gamma & \cos\gamma & 0\\ 0 & 0 & 1\end{pmatrix}$$

The second matrix is the identity matrix and has no effect on the product. Carrying out the matrix multiplication of the first and third matrices:

$$R = \begin{pmatrix}\cos\alpha\cos\gamma - \sin\alpha\sin\gamma & -\cos\alpha\sin\gamma - \sin\alpha\cos\gamma & 0\\ \sin\alpha\cos\gamma + \cos\alpha\sin\gamma & -\sin\alpha\sin\gamma + \cos\alpha\cos\gamma & 0\\ 0 & 0 & 1\end{pmatrix}$$

And finally, using the trigonometric angle-sum formulas:

$$R = \begin{pmatrix}\cos(\alpha+\gamma) & -\sin(\alpha+\gamma) & 0\\ \sin(\alpha+\gamma) & \cos(\alpha+\gamma) & 0\\ 0 & 0 & 1\end{pmatrix}$$

Changing the values of $\alpha$ and $\gamma$ in the above matrix has the same effect: the rotation angle $\alpha + \gamma$ changes, but the rotation axis remains in the $Z$ direction: the last column and the last row in the matrix won't change. Only one degree of freedom (corresponding to $\alpha + \gamma$) remains; one other (corresponding to $\alpha - \gamma$) has been lost (the third degree of freedom corresponds to the choice $\beta = 0$). The only way for $\alpha$ and $\gamma$ to recover different roles is to change $\beta$ to some value other than 0. A similar problem appears when $\beta = \pi$. One can choose another convention for representing a rotation with a matrix using Euler angles than the Z-X-Z convention above, and also choose other variation intervals for the angles, but in the end there is always at least one value for which a degree of freedom is lost. Note that the gimbal lock problem does not make Euler angles "invalid" (they always serve as a well-defined coordinate system), but it makes them unsuited for some practical applications.
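The collapse of α and γ into the single parameter α + γ at β = 0 can be verified numerically. A small Python check, using the same Z-X-Z convention as above (numpy only, helper names chosen for illustration):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def zxz(alpha, beta, gamma):
    """Z-X-Z Euler angle rotation matrix."""
    return rot_z(alpha) @ rot_x(beta) @ rot_z(gamma)

# With beta = 0, only the sum alpha + gamma matters: these two distinct
# Euler-angle triples produce exactly the same rotation matrix.
R1 = zxz(0.3, 0.0, 0.4)
R2 = zxz(0.5, 0.0, 0.2)
print(np.allclose(R1, R2))   # the alpha - gamma degree of freedom is lost

# Away from beta = 0, the same two (alpha, gamma) pairs give different rotations.
R3 = zxz(0.3, 0.1, 0.4)
R4 = zxz(0.5, 0.1, 0.2)
print(np.allclose(R3, R4))
```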
Alternate orientation representation
The cause of gimbal lock is representing an orientation as three axial rotations with Euler angles. A potential solution, therefore, is to represent the orientation in some other way. This could be a rotation matrix, a quaternion, or a similar orientation representation that treats the orientation as a value rather than three separate and related values. Given such a representation, the user stores the orientation as a value. To apply angular changes, the orientation is modified by a delta angle/axis rotation. The resulting orientation must be re-normalized to prevent floating-point error from successive transformations from accumulating. For matrices, re-normalizing the result requires converting the matrix into its nearest orthonormal representation. For quaternions, re-normalization requires performing quaternion normalization.
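As a sketch of the quaternion approach (hand-rolled helpers for illustration; a real system would use a tested math library), the orientation is stored as a single quaternion, updated by small delta rotations, and re-normalized each step so floating-point error cannot accumulate:

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` about `axis`."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    half = angle / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def quat_mul(q, r):
    """Hamilton product: composes the rotation r followed by q."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Store the orientation as one value and apply many small delta rotations.
orientation = np.array([1.0, 0.0, 0.0, 0.0])              # identity
delta = quat_from_axis_angle([0.0, 0.0, 1.0], 0.001)      # 1 mrad about z
for _ in range(10000):
    orientation = quat_mul(delta, orientation)
    orientation /= np.linalg.norm(orientation)            # re-normalize
```

After 10,000 steps the accumulated rotation is exactly 10 radians about z, and the quaternion stays unit-length regardless of how many updates are applied; there is no configuration at which a degree of freedom disappears.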
References
[1] ANSI/RIA R15.06-1999
External links
• Euler angles and gimbal lock (video): Part 1 (http://guerrillacg.org/home/3d-rigging/the-rotation-problem), Part 2 (http://guerrillacg.org/home/3d-rigging/euler-rotations-explained)
• Gimble Lock - Explained (http://www.youtube.com/watch?v=rrUCBOlJdt4) at YouTube
Glide API
Glide API Glide Original author(s) 3dfx Interactive Stable release
3.10 / April 3, 2013
Written in
Assembly, C
Operating system
Cross-platform
Type
3D graphics API
License
GNU General Public License
Website
glide.sourceforge.net
Glide is a 3D graphics API developed by 3dfx Interactive for their Voodoo Graphics 3D accelerator cards. Although it originally started as a proprietary API, it was later open sourced by 3dfx.[2] It was dedicated to gaming performance, supporting geometry and texture mapping primarily, in data formats identical to those used internally in their cards. Wide adoption of 3Dfx led to Glide being extensively used in the late 1990s[citation needed], but further refinement of Microsoft's Direct3D and the appearance of full OpenGL implementations from other graphics card vendors, in addition to growing diversity in 3D hardware, eventually caused it to become superfluous.[citation needed]
Unreal utilizing its Glide renderer on Voodoo Graphics hardware
API
Glide is based on the basic geometry and "world view" of OpenGL. OpenGL is a large graphics library with 336 calls[citation needed] in the API, many of which are of limited use. Glide was an effort to select primarily features that were useful for real-time rendering of 3D games. The result was an API that was small enough to be implemented entirely in late-1990s hardware. However, this focus led to various limitations in Glide, such as a 16-bit color depth limit in the display buffer.[3]
Use in games
The combination of the hardware performance of Voodoo Graphics (Voodoo 1) and Glide's easy-to-use API resulted in Voodoo cards generally dominating the gaming market during the latter half of the 1990s. The name Glide was chosen to be indicative of the GL underpinnings, while being different enough to avoid trademark problems.[citation needed]
Glide wrappers and emulators
Glide emulator development has been in progress since the late 1990s. During 3dfx's lifetime, the company was aggressive at trying to stop these attempts to emulate their proprietary API, shutting down early emulation projects with legal threats.[4] However, just before it ceased operations and had its assets purchased by Nvidia, 3dfx released the Glide API, along with the Voodoo 2 and Voodoo 3 specifications, under an open source license,[5] which later evolved into an open source project.[6] Although no games released after 1999 depend on Glide for 3D acceleration (Direct3D and OpenGL are used instead), Glide emulation is still needed to run older games in hardware accelerated mode. With the specifications and code now open source, there are several capable emulators and wrappers available allowing older games that make use of the Glide API to run on non-Voodoo hardware. Other projects like Glidos allow even older games to use Glide.
References
[1] http://glide.sourceforge.net/
[2] The 3DFX GLIDE Source Code General Public License (http://www.ohloh.net/licenses/3DFX GLIDE Source Code General Public License)
[3] GLIDE programming manual (http://www.gamers.org/dEngine/xf3D/glide/glidepgm.htm)
[4] 3dfx wraps up wrapper Web sites (http://www.theregister.co.uk/1999/04/08/3dfx_wraps_up_wrapper_web/), The Register, April 8, 1999.
[5] http://www.theregister.co.uk/1999/12/07/3dfx_open_sources_glide_voodoo/
[6] http://sourceforge.net/projects/glide/
External links
• Glide Sourceforge Project (http://glide.sourceforge.net/)
• GLIDE programming manual (http://www.gamers.org/dEngine/xf3D/glide/glidepgm.htm)
• Glide Wrappers List (http://www.sierrahelp.com/Utilities/DisplayUtilities/GlideWrappers.html)
• OpenGL Documentation (http://www.opengl.org/documentation)
GloriaFX
Gloria FX
Type: Incorporated
Industry: VFX, Animation, Design
Founded: 2008
Headquarters: Dnepropetrovsk, Ukraine
Key people: Tomash Kuzmitskiy
Owner(s): Tomash Kuzmitskiy (Creative Director)
Website: http://gloriafx.com/
Gloria FX is a Ukrainian visual effects company based in Dnipropetrovsk, Ukraine. The company is known for creating high-quality visual effects for feature films, music videos and commercials.[1] It was founded in 2008 by Tomash Kuzmitskiy. The company has more than 45 creative artists: VFX supervisors, animators, modelers, FX TDs, matte painters, compositors, rotoscopers and matchmove artists.[2] In 2013 the company opened the Gloria FX School, which trains professionals to the level its projects require. The company collaborates with major U.S. and European production companies such as Riveting Entertainment, London Alley Entertainment, RockHard, DNA, Iconoclast, the Masses, Saatchi & Saatchi, BHC Films, Friendly Films AS, NE Derection, Ramble West Productions, Aggressive Group, Doomsday Entertainment and Star Media. It has also worked with directors such as Colin Tilley, Ray Kay, Nabil and Chris Marrs Piliero. Gloria FX has completed many successful effects projects for music artists, including Chris Brown,[3] Lil Wayne,[4] Wiz Khalifa,[5] Rick Ross,[6] Kelly Clarkson, Nicki Minaj,[7] Daft Punk, Tyga, Justin Bieber,[8] Foals,[9] Madcon, Cher, Ciara, Busta Rhymes, Hurts, Snoop Dogg and many others.
Awards
Year | Project | Ceremony | Category | Result
2013 | Austin Mahone - "What About Love" | MTV Video Music Awards | Artist to Watch | Won
2013 | Lil Wayne - "Love Me" (Explicit) ft. Drake, Future | Music Video Production Association | Best Hip Hop Video in 2013 | Won
2012 | Chris Brown - "Turn Up the Music" | MTV | Best Male Video and Best Dance Video | Won
2012 | Gloria FX reel | CG Event 2012 (Moscow) | Commercials & Motion Design | Won
2011 | Chris Brown ft. Lil Wayne & Busta Rhymes - "Look at Me Now" | BET | Video of the Year[10] | Won
2011 | Gloria FX reel | CG Event 2011 (Moscow) | CG Event Reel 2011 | Won
Music videography

2013
• Rick Ross - "No Games" ft. Future
• Enrique Iglesias - "Heart Attack"
• French Montana - "Gifted" ft. The Weeknd
• James Blake - "Life Round Here" ft. Chance The Rapper
• Cris Cab - "Liar Liar"
• Paris Hilton - "Good Time" ft. Lil Wayne
• Tyga - "Hijack" ft. 2 Chainz
• Zendaya - "Replay"
• DJ Smash - "Stop The Time"
• Mike WILL Made It ft. Miley, Wiz Khalifa, Juicy J - "23"
• Vali - "Dimes" ft. Wiz Khalifa
• Chris Brown - "Love More" (Explicit) ft. Nicki Minaj
• Cher - "Woman's World"
• Arctic Monkeys - "Why'd You Only Call Me When You're High?"
• Just Blaze x Baauer x Jay-Z - "Higher"
• Krewella - "Live for the Night"
• Jay Sean - "Mars" ft. Rick Ross
• Demi Lovato - "Made in the USA"
• Jason Derulo - "Marry Me"
• Masspike Miles - "Flatline" ft. Wiz Khalifa
• SKINNY - "Talk 4 Me"
• Jason Derulo - "Talk Dirty" feat. 2 Chainz
• Nelly ft. Nicki Minaj & Pharrell Williams - "Get Like Me"
• Fifth Harmony - "Miss Movin' On"
• Ciara ft. Nicki Minaj - "I'm Out"
• Snoop Dogg - "Let The Bass Go"
• Hurts - "Somebody to Die For"
• DJ Khaled f/ Drake, Rick Ross, Lil Wayne - "No New Friends"
• Austin Mahone - "What About Love"
• Tyga ft. Chris Brown - "Fuck For The Road"
• Jason Derulo - "The Other Side"
• Rich Gang - "Tapout" (feat. Lil Wayne, Birdman, Mack Maine, Nicki Minaj & Future)
• Sean Kingston - "Beat It" ft. Chris Brown, Wiz Khalifa
• Ray J - "I Hit It First" ft. Bobby Brackins
• Tyga - "Molly" (Explicit)
• Jay Sean - "Where You Are"
• Lil Poopy - "I Think I'm French Montana I Think I Am Rick Ross"
• Kelly Rowland - "Kisses Down Low"
• Lil Wayne - "Love Me" (Explicit) ft. Drake, Future
• Cher Lloyd - "With Ur Love"
• Madcon - "In My Head"
• Mirami - "Amour"
• Tyga - "Dope" (Explicit) ft. Rick Ross
• David Guetta - "Just One Last Time" ft. Taped Rai
• Juicy J ft. Lil Wayne - "Bands A Make Her Dance"
• Sky Blu ft. Reek Rude, Sensato and Wilmer Valderrama - "Salud"
• Lil Wayne - "God Bless Amerika" (Ex. Producer: Andrew Listermann)
• Chris Brown feat. Aaliyah - "Don't Think They Know"
• Foals - "Bad Habit"
• U.G.L.Y. - "REDD" (feat. Maad Scientist)
• POLICA - "Tiff" (feat. Justin Vernon)
• Kelly Clarkson - "People Like Us"
• Daft Punk - new video (teaser)
• James Blake - "Overgrown"
• Chris Brown - "Fine China"
• Lil Wayne ft. 2 Chainz - "Rich As Fuck"
• Mack Maine - "Celebrate" (Explicit) ft. Talib Kweli, Lil Wayne
• Sabrina Antoinette - "I Know You're Out There"
• Amex Sync On Twitter - American Express commercial
• Sevyn Streeter - "I Like
2012 • • • • • • • • • • •
LL Cool J - Take It ft. Joe Nicki Minaj Pink Friday - Official Fragrance Commercial ТИМАТИ feat. Craig David - Sex In The Bathroom Nicki Minaj, Cassie - The Boys Colin Tilley Chris Brown "Dont Judge Me" Taio Cruz - Fast Car Machine Gun Kelly — «Stereo» Conor Maynard - Turn Around ft. Ne-Yo Dappy - Good Intentions Tulisa ft. Tyga - Live it Up
• Nicki Minaj — »I am your leader» • DJ SMASH feat T-MOOR RODRIGUEZ – JUM
92
GloriaFX • • • • • • • • • • • • • • • • •
Director Parris Stewart Lil Wayne feat. Big Sean "My Homies Still" Director - Colin Tilley David Guetta - I Can Only Imagine ft. Chris Brown, Lil Wayne Melanie Fiona - This Time ft. J. Cole Chris Brown - Don't Wake Me Up Chris Brown ft. Big Sean & Wiz Khalifa - Till I Die Chris Brown - Sweet Love Kevin McCall - Naked ft. Big Sean Birdman "Dark Shades" Ft Lil Wayne and Mack Maine DJ KHALED "Take It to the Head" ft. Chris Brown . Rick Ross . Nicki Minaj . Lil Wayne Chris Brown - Sweet Love Rick Ross, Meek Mill & T-Pain - Bag of Money Mercedes-Benz : Stars Clara Louise feat. Fabian Buch - Happy Birthday (Once Again) Mary J. Blige feat Rick Ross – Why Justin Bieber – Boyfriend
• Tyga feat. Big Sean - I'm Gone
• Tyga feat. Lil Wayne - Faded
• Platnum - SoulRSystem
• Timati - RockStar
• J. Cole feat. Missy Elliott - Nobody's Perfect
• Aura Dione feat. Rock Mafia - Friends
• Timati feat. Sergey Lazarev and DJ M.E.G. - Moscow to California
• Kem - You're On My Mind
• Labrinth - Last Time
• Chris Brown - Turn Up the Music
• French Montana ft. Rick Ross, Diddy & Charlie Rock - Shot Caller (Remix)
• Wale feat. Big Sean - Slight Work
• Shanell - My Buttons
• Тимати и Григорий Лепс - «Реквием по любви»
• Lil Wayne ft. Bruno Mars – «Mirror»
• Mike Posner – «She Looks Like Sex»
• Cher Lloyd ft. Astro – «Want U Back»
2011
• Jawan Harris feat. Chris Brown – «Another Planet»
• Lara Scandar – «Chains»
• Kevin Cossom ft. Fabolous & Diddy – «Baby I Like It»
• Diddy / Dirty Money feat. Chris Brown – «Yesterday»
• Mary J. Blige – «Someone To Love Me (Naked)» feat. Diddy & Lil Wayne
• Romeo Santos – «You»
• Melanie Fiona – «Never Coming Back»
• Diddy Dirty Money feat. Trey Songz and Rick Ross – «Your Love»
• Chipmunk feat. Keri Hilson – «In The Air»
• Lil Wayne – «John» (Explicit) ft. Rick Ross
• Chris Brown – «She Ain't You»
• Chris Brown ft. Busta Rhymes & Lil Wayne – «Look At Me Now»
• Keri Hilson – «Lose Control» ft. Nelly
• New Boyz ft. Chris Brown – «Better With The Lights Off»
• Chris Brown feat. Justin Bieber – «Next 2 You»
• JLS feat. Dev – «She Makes Me Wanna»
• Young Jeezy ft. Lil Wayne – «Ballin»
• Katy B – «Witches Brew»
• Wiz Khalifa – «No Sleep»
• Sean Garrett (feat. Rick Ross) – «In Da Box»
• Fabian Buch – «Turn Off The Lights»
• Melanie Fiona – «4 AM»
• Loick Essien – «Me Without You»
• Jason Derulo – «Breathing»
• Kristian Valen – «Letting Go»
• Chris Brown ft. Kevin "K-MAC" McCall – «Strip»
• FreeSol ft. Justin Timberlake & Timbaland – «Fascinated»
• Jessie J – «Domino»
Hemicube (computer graphics)
See also: Hemicube (geometry)
A hemicube is a concept used in 3D computer graphics rendering. A hemicube is one way to represent a 180° view from a surface or point in space.
Shape
Unfolding a hemicube
Although the name implies any half of a cube, a hemicube is usually half of a cube cut through a plane parallel to one of its faces. It therefore consists of one full square face and four half-square faces: when unfolded, these appear as two rectangles with a 2:1 aspect ratio and two with a 1:2 aspect ratio.
Uses
The hemicube may be used in the radiosity algorithm or other light transport algorithms to determine the amount of light arriving at a particular point on a surface. In some cases, a hemicube may be used for environment mapping or reflection mapping.
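As a sketch of how a hemicube is applied in radiosity, the standard delta form-factor weights for cells on a unit hemicube (receiving point at the origin, top face at height 1) can be tabulated and summed; the function names and grid resolutions below are arbitrary illustrative choices:

```python
import math

def top_delta_ff(x, y, dA):
    """Delta form factor of a top-face cell at (x, y, 1), with |x|, |y| <= 1."""
    return dA / (math.pi * (x * x + y * y + 1.0) ** 2)

def side_delta_ff(y, z, dA):
    """Delta form factor of a side-face cell at (1, y, z), with 0 <= z <= 1."""
    return z * dA / (math.pi * (y * y + z * z + 1.0) ** 2)

def total_form_factor(n=100):
    """Sum the delta form factors over the whole hemicube; the total tends to 1,
    since the five faces together cover the full hemisphere of directions."""
    step = 2.0 / n
    total = 0.0
    # Top face: a grid of n x n cells over [-1, 1] x [-1, 1].
    for i in range(n):
        x = -1.0 + (i + 0.5) * step
        for j in range(n):
            y = -1.0 + (j + 0.5) * step
            total += top_delta_ff(x, y, step * step)
    # Four identical side half-faces: n x (n/2) cells each, z in [0, 1].
    m = n // 2
    dz = 1.0 / m
    for i in range(n):
        y = -1.0 + (i + 0.5) * step
        for k in range(m):
            z = (k + 0.5) * dz
            total += 4.0 * side_delta_ff(y, z, step * dz)
    return total
```

In a radiosity pass each patch visible through a cell contributes its delta form factor, so the fact that the weights sum to 1 is what makes the hemicube a discrete stand-in for the hemisphere.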
Image plane In 3D computer graphics, the image plane is that plane in the world which is identified with the plane of the monitor. If one makes the analogy of taking a photograph to rendering a 3D image, the surface of the film is the image plane. In this case, the viewing transformation is a projection that maps the world onto the image plane. A rectangular region of this plane, called the viewing window or viewport, maps to the monitor. This establishes the mapping between pixels on the monitor and points (or rather, rays) in the 3D world. In optics, the image plane is the plane that contains the object's projected image, and lies beyond the back focal plane.
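The mapping described above (perspective projection onto the image plane, then a viewport-to-pixel transform) can be sketched as follows; the focal distance, viewport bounds, and resolution are illustrative assumptions, with the camera at the origin looking along +z:

```python
def world_to_pixel(p, focal=1.0, viewport=(-1.0, 1.0, -1.0, 1.0),
                   resolution=(640, 480)):
    """Project a 3D point (camera coordinates) onto the image plane at
    z = focal, then map the viewing window (viewport) to pixel coordinates."""
    x, y, z = p
    # Perspective projection onto the image plane.
    u = focal * x / z
    v = focal * y / z
    # Map the viewport rectangle (l, r, b, t) to the monitor.
    l, r, b, t = viewport
    w, h = resolution
    px = (u - l) / (r - l) * w
    py = (t - v) / (t - b) * h  # flipped: screen y grows downward
    return (px, py)
```

A point on the optical axis, e.g. `world_to_pixel((0.0, 0.0, 2.0))`, lands at the centre of the 640x480 window, (320.0, 240.0).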
Image-based meshing
Image-based meshing is the automated process of creating computer models for computational fluid dynamics (CFD) and finite element analysis (FEA) from 3D image data (such as magnetic resonance imaging (MRI), computed tomography (CT) or microtomography). Although a wide range of mesh generation techniques are currently available, these were usually developed to generate models from computer-aided design (CAD), and therefore have difficulty meshing from 3D imaging data.
Mesh generation from 3D imaging data Meshing from 3D imaging data presents a number of challenges but also unique opportunities for presenting a more realistic and accurate geometrical description of the computational domain. There are generally two ways of meshing from 3D imaging data:
CAD-based approach The majority of approaches used to date still follow the traditional CAD route by using an intermediary step of surface reconstruction which is then followed by a traditional CAD-based meshing algorithm.[1] CAD-based approaches use the scan data to define the surface of the domain and then create elements within this defined boundary. Although reasonably robust algorithms are now available, these techniques are often time consuming, and virtually intractable for the complex topologies typical of image data. They also do not easily allow for more than one domain to be meshed, as multiple surfaces are often non-conforming with gaps or overlaps at interfaces where one or more structures meet.[2]
Image-based approach This approach is the more direct way, as it combines the geometric detection and mesh creation stages in one process, which offers a more robust and accurate result than meshing from surface data. Voxel conversion techniques providing meshes with brick elements[3] and with tetrahedral elements[4] have been proposed. Another approach generates 3D hexahedral or tetrahedral elements throughout the volume of the domain, thus creating the mesh directly with conforming multipart surfaces.[5]
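As a minimal sketch of the voxel conversion idea, the following turns a binary voxel mask directly into a hexahedral ("brick") mesh, one element per occupied voxel with corner nodes shared between neighbours; the function name and array layout are assumptions for illustration:

```python
import numpy as np

def voxels_to_hex_mesh(mask, spacing=1.0):
    """One eight-node hexahedral element per occupied voxel; corner nodes are
    shared between neighbouring voxels so the resulting mesh is conforming."""
    nodes = {}       # grid corner (i, j, k) -> node index
    elements = []

    def node_id(corner):
        if corner not in nodes:
            nodes[corner] = len(nodes)
        return nodes[corner]

    for i, j, k in zip(*np.nonzero(mask)):
        # The eight corners of voxel (i, j, k).
        corners = [(i + a, j + b, k + c)
                   for c in (0, 1) for b in (0, 1) for a in (0, 1)]
        elements.append([node_id(c) for c in corners])

    # Node coordinates in insertion (node index) order, scaled by voxel spacing.
    coords = np.array(sorted(nodes, key=nodes.get), dtype=float) * spacing
    return coords, np.array(elements)
```

Two voxels sharing a face yield two elements over 12 distinct nodes rather than 16, which is exactly the conforming-interface property the text describes.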
Generating a model The steps involved in the generation of models based on 3D imaging data are:
Scan and image processing An extensive range of image processing tools can be used to generate highly accurate models based on data from 3D imaging modalities, e.g. MRI, CT, MicroCT (XMT), and Ultrasound. Features of particular interest include:
• Segmentation tools (e.g. thresholding, floodfill, level set methods)
• Filters and smoothing tools (e.g. volume- and topology-preserving smoothing).
Volume and surface mesh generation The image-based meshing technique allows the straightforward generation of meshes from segmented 3D data. Features of particular interest include:
• Multi-part meshing (mesh any number of structures simultaneously)
• Mapping functions to apply material properties based on signal strength (e.g. Young's modulus to Hounsfield scale)
• Smoothing of meshes (e.g. topological preservation of data to ensure preservation of connectivity, and volume-neutral smoothing to prevent shrinkage of convex hulls)
• Export to FEA and CFD codes for analysis (e.g. nodes, elements, material properties, contact surfaces)
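A mapping function of the kind mentioned above can be sketched as a simple binning of Hounsfield values into materials, each with a Young's modulus; the bin edges and moduli below are placeholders for illustration, not clinical or validated values:

```python
import numpy as np

def assign_moduli(hu_values,
                  bins=(200.0, 700.0),
                  moduli=(0.0, 1.0e9, 15.0e9)):
    """Bin per-element Hounsfield values and look up a Young's modulus (Pa)
    for each bin: below 200 HU -> soft (0), 200-700 HU -> intermediate,
    above 700 HU -> stiff. All numbers here are illustrative placeholders."""
    idx = np.digitize(hu_values, bins)   # 0, 1 or 2 for each element
    return np.asarray(moduli)[idx]
```

In a real pipeline the per-element HU would be averaged from the scan over each element's volume before being mapped.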
Typical use
• Biomechanics and design of medical and dental implants
• Food science
• Forensic science
• Materials science (composites and foams)
• Nondestructive testing (NDT)
• Paleontology and functional morphology
• Reverse engineering
• Soil science and petrology
References [1] Viceconti et al, 1998. TRI2SOLID: an application of reverse engineering methods to the creation of CAD models of bone segments. Computer Methods and Programs in Biomedicine, 56, 211–220. [2] Young et al, 2008. An efficient approach to converting 3D image data into highly accurate computational models. Philosophical Transactions of the Royal Society A, 366, 3155–3173. [3] Fyhrie et al, 1993. The probability distribution of trabecular level strains for vertebral cancellous bone. Transactions of the 39th Annual Meeting of the Orthopaedic Research Society, San Francisco. [4] Frey et al, 1994. Fully automatic mesh generation for 3-D domains based upon voxel sets. International Journal of Methods in Engineering, 37, 2735–2753. [5] Young et al, 2008. An efficient approach to converting 3D image data into highly accurate computational models. Philosophical Transactions of the Royal Society A, 366, 3155–3173.
External links
• ScanIP commercial image-based meshing software: www.simpleware.com (http://www.simpleware.com)
• Mimics 3D image-based engineering software for FEA and CFD on anatomical data: Mimics website (http://www.materialise.com/mimics)
• Google group on image-based modelling (http://groups.google.co.uk/group/image-based-modelling)
• Avizo Software's 3D image-based meshing tools for CFD and FEA
• iso2mesh: a free 3D surface and volumetric mesh generator for matlab/octave (http://iso2mesh.sourceforge.net/)
Inflatable icons
Inflatable Icons refers to a technique that turns 2D icons into 3D models. There are many applications for this technique, such as rapid prototyping, simulations and presentations, where non-professional computer users could benefit from the ability to create simple 3D models. Existing tools are geared towards the creation of production quality 3D models by professional users with sufficient background, time and motivation to overcome steep learning curves.
References • Repenning, A. 2005. Inflatable Icons: Diffusion-based Interactive Extrusion of 2D Images into 3D Models [1]. The Journal of Graphical Tools, 10(1): 1-15
References [1] http://jgt.akpeters.com/papers/Repenning05/
Interactive Digital Centre Asia Interactive Digital Centre Asia (IDCA) is a strategic partnership among Temasek Polytechnic, IM Innovations Pte Ltd and global technology companies, focusing on 3D Interactive Digital Media (IDM) solutions and services for various industries. IDCA aims to be a leading virtual and physical hub for connectivity, innovation and collaboration amongst the IDM community, especially in Singapore and the Asia-Pacific region.
History IDCA is an expansion of the original strategic partnership between Temasek Polytechnic and IM Innovations Pte Ltd which started with the establishment of the 3D Media Studio in 2005 to promote the pervasive use of 3D visualisation and digital media solutions and services. With the success of 3D Media Studio, Temasek Polytechnic, IM Innovations and EON Reality Inc. formally launched IDCA on 1st November 2007, supported by the Infocomm Development Authority of Singapore (IDA).
3D Research & Development
• Development of 3D interactive applications
• Creation of 3D digital contents, animation and visual effects
• Training to use various 3D software tools
• Professional, consulting and system integration services
• Innovative proof-of-concept projects
• Use-inspired applied research
External links
• Interactive Digital Centre Asia (IDCA) [1]
• Temasek Polytechnic [2]
• IM Innovations [3]
• Infocomm Development Authority of Singapore (IDA) [4]
• EON Reality [5]
References
[1] http://www.idc-asia.com.sg
[2] http://www.tp.edu.sg
[3] http://www.im-innovations.com
[4] http://www.ida.gov.sg
[5] http://www.eonreality.com
Interactive skeleton-driven simulation Interactive skeleton-driven simulation (or Interactive skeleton-driven dynamic deformations) is a scientific computer simulation technique used to approximate realistic physical deformations of dynamic bodies in real-time. It involves using elastic dynamics and mathematical optimizations to decide the body-shapes during motion and interaction with forces. It has various applications within realistic simulations for medicine, 3D computer animation and virtual reality.
Background
Methods for simulating deformation (changes of shape) of dynamic bodies involve intensive calculations, and several models have been developed. Some of these are known as free-form deformation, skeleton-driven deformation, dynamic deformation and anatomical modelling. Skeletal animation is well known in computer animation and 3D character simulation. Because of the computational intensity of the simulation, few interactive systems are available which can realistically simulate dynamic bodies in real time. Being able to interact with such a realistic 3D model means that calculations have to be performed within the constraints of a frame rate acceptable via a user interface. Recent research has built on previously developed models and methods to provide sufficiently efficient and realistic simulations. Applications of the technique range from mimicking human facial expressions, so that a simulated actor is perceived as human in real time, to modelling other cellular organisms. Using skeletal constraints and parameterized forces to calculate deformations also has the benefit of matching how a single cell has a shaping skeleton, just as a larger living organism might have an internal bone skeleton, such as the vertebrae. Generalized external body force simulation makes elasticity calculations more efficient, which means real-time interaction is possible.
Basic theory
There are several components to such a simulation system:
• a polygon mesh defining the body shape of the model
• a coarse volumetric mesh using finite element methods to ensure complete integration over the model
• line constraints corresponding to the internal skeleton and instrumented to the model
• linearizing of equations of motion to achieve interactive rates
• hierarchical regions of the mesh associated with skeletal lines
• blending of locally linearized simulations
• a control lattice through subdivision fitting the model by surrounding and covering it
• a hierarchical basis containing functions which will provide values for deformation of each lattice domain, with calculations of these hierarchical functions similar to that of lazy wavelets
Rather than fitting the object to the skeleton, as is common, the skeleton is used to set constraints for deformation. Also the hierarchical basis means that detail levels can be introduced or removed when needed - for example, observing from a distance or hidden surfaces. Pre-calculated poses are used to be able to interpolate between shapes and achieve realistic deformations throughout motions. This means traditional keyframes are avoided. There are performance-tuning similarities between this technique and procedural generation, wavelet and data compression methods.
Algorithmic considerations To achieve interactivity there are several optimizations necessary which are implementation specific. Start
by
defining
the
object
you .
wish
to
animate
as
a
set
(i.e.
define
all
the
points):
non-wobble
point):
Then get a handle on it. Let Then
you
need
to
define
the
rest
state
of
the
object
(the
Projects
Projects are taking place to further develop this technique, with results presented at SIGGRAPH (see the references for details). Academic institutions and commercial enterprises such as Alias Systems Corporation (the makers of the Maya 3D animation software), Intel and Electronic Arts are among the known proponents of this work. There are also videos available showcasing the techniques, with editors showing interactivity in real time with realistic results. The computer game Spore has also showcased similar techniques.
References • Interactive Character Animation Using Dynamic Elastic Simulation [1], 2004, Steve Capell Ph.D. dissertation. • Interactive Skeleton-Driven Dynamic Deformations [2], 2002 SIGGRAPH. Authors: Steve Capell, Seth Green, Brian Curless, Tom Duchamp and Zoran Popović. • A Multiresolution Framework for Dynamic Deformations [3], 2002 SIGGRAPH.Authors: Steve Capell, Seth Green, Brian Curless, Tom Duchamp and Zoran Popović. • Physically Based Rigging for Deformable Characters [4], 2005 SIGGRAPH. Authors: Steve Capell, Matthew Burkhart, Brian Curless, Tom Duchamp and Zoran Popović.
• Skeleton-driven Deformation - lecture on physically-based modelling, simulation and animation [5], 2005, Ming C. Lin, University of North Carolina, USA.
External links • Video of an interactive skeletal and model editor with introduction to the basic theory [6], University of Washington, USA. • Deformable Objects and Characters project [7], University of Washington, USA. Has example videos of the techniques. • Motion Libraries for Character Animation project [8], University of Washington, USA. Has example videos of the techniques.
References
[1] http://grail.cs.washington.edu/theses/CapellPhd.pdf
[2] http://grail.cs.washington.edu/pub/papers/Capell-2002-ISD.pdf
[3] http://grail.cs.washington.edu/pub/papers/Capell-2002-MFD.pdf
[4] http://grail.cs.washington.edu/pub/papers/Capell-2005-PBR.pdf
[5] http://www.cs.unc.edu/~lin/COMP259-S05/LEC/24.ppt
[6] http://grail.cs.washington.edu/projects/deformation/Capell-2002-ISD-divx.avi
[7] http://grail.cs.washington.edu/projects/deformation/
[8] http://grail.cs.washington.edu/projects/charanim/
Inverse kinematics Inverse kinematics refers to the use of the kinematics equations of a robot to determine the joint parameters that provide a desired position of the end-effector. Specification of the movement of a robot so that its end-effector achieves a desired task is known as motion planning. Inverse kinematics transforms the motion plan into joint actuator trajectories for the robot. The movement of a kinematic chain whether it is a robot or an animated character is modeled by the kinematics equations of the chain. These equations define the configuration of the chain in terms of its joint parameters. Forward kinematics uses the joint parameters to compute the configuration of the chain, and inverse kinematics reverses this calculation to determine the joint parameters that achieves a desired configuration.[1][2][3] For example, inverse kinematics formulas allow calculation of the joint parameters that position a robot arm to pick up a part. Similar formulas determine the positions of the skeleton of an animated character that is to move in a particular way.
An industrial robot performing arc welding. Inverse kinematics computes the joint trajectories needed for the robot to guide the welding tip along the part.
Kinematic analysis Kinematic analysis is one of the first steps in the design of most industrial robots. Kinematic analysis allows the designer to obtain information on the position of each component within the mechanical system. This information is necessary for subsequent dynamic analysis along with control paths. Inverse kinematics is an example of the kinematic analysis of a constrained system of rigid bodies, or kinematic chain. The kinematic equations of a robot can be used to define the loop equations of a complex articulated system. These loop equations are non-linear constraints on the configuration parameters of the system. The independent parameters in these equations are known as the degrees of freedom of the system. While analytical solutions to the inverse kinematics problem exist for a wide range of kinematic chains, computer modeling and animation tools often use Newton's method to solve the non-linear kinematics equations. Other applications of inverse kinematic algorithms include interactive manipulation, animation control and collision avoidance.
Inverse kinematics and 3D animation Inverse kinematics is important to game programming and 3D animation, where it is used to connect game characters physically to the world, such as feet landing firmly on top of terrain.
A model of the human skeleton as a kinematic chain allows positioning using inverse kinematics.
An animated figure is modeled with a skeleton of rigid segments connected with joints, called a kinematic chain. The kinematics equations of the figure define the relationship between the joint angles of the figure and its pose or configuration. The forward kinematic animation problem uses the kinematics equations to determine the pose given the joint angles. The inverse kinematics problem computes the joint angles for a desired pose of the figure. It is often easier for computer-based designers, artists and animators to define the spatial configuration of an assembly or figure by moving parts, or arms and legs, rather than directly manipulating joint angles. Therefore, inverse kinematics is used in computer-aided design systems to animate assemblies and by computer-based artists and animators to position figures and characters. The assembly is modeled as rigid links connected by joints that are defined as mates, or geometric constraints. Movement of one element requires the computation of the joint angles for the other elements to maintain the joint constraints. For example, inverse kinematics allows an artist to move the hand of a 3D human model to a desired position and orientation and have an algorithm select the proper angles of the wrist, elbow, and shoulder joints. Successful implementation of computer animation usually also requires that the figure move within reasonable anthropomorphic limits.
Approximating solutions to IK systems There are many methods of modelling and solving inverse kinematics problems. The most flexible of these methods typically rely on iterative optimization to seek out an approximate solution, due to the difficulty of inverting the forward kinematics equation and the possibility of an empty solution space. The core idea behind several of these methods is to model the forward kinematics equation using a Taylor series expansion, which can be simpler to invert and solve than the original system.
The Jacobian inverse technique
The Jacobian inverse technique is a simple yet effective way of implementing inverse kinematics. Let x be the vector of m variables that govern the forward-kinematics equation, i.e. the position function p(x). These variables may be joint angles, lengths, or other arbitrary real values. If the IK system lives in a 3-dimensional space, the position function can be viewed as a mapping p(x): R^m → R^3. Let p(x0) give the initial position of the system, and let p(x0 + Δx) be the goal position of the system. The Jacobian inverse technique iteratively computes an estimate of Δx that minimizes the error given by ||p(x0 + Δx_estimate) − p(x0 + Δx)||. Each intermediate estimate can be added to x0 and evaluated by the position function to animate the system.
For small Δx-vectors, the series expansion of the position function gives

  p(x0 + Δx) ≈ p(x0) + J(x0) Δx

where J(x0) is the (3 × m) Jacobian matrix of the position function at x0.
Note that the (i, k)-th entry of the Jacobian matrix can be determined numerically:

  J_ik ≈ (p_i(x0 + h e_k) − p_i(x0)) / h

where p_i gives the i-th component of the position function, x0 + h e_k is simply x0 with a small delta h added to its k-th component, and h is a reasonably small positive value. Taking the Moore–Penrose pseudoinverse of the Jacobian and re-arranging terms results in

  Δx ≈ J⁺(x0) Δp, where Δp = p(x0 + Δx) − p(x0)

It is possible to use a singular value decomposition to obtain the pseudoinverse of the Jacobian.
Applying the inverse Jacobian method once will result in a very rough estimate of the desired Δx-vector. A line search should be used to scale this Δx to an acceptable value. The estimate for Δx can be improved via the following iteration (known as the Newton–Raphson method):

  Δx_{k+1} = J⁺(x_k) Δp_k

Once some Δx-vector has caused the error to drop close to zero, the algorithm should terminate. Existing methods based on the Hessian matrix of the system have been reported to converge to desired Δx values using fewer iterations, though, in some cases, more computational resources.
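A minimal sketch of the Jacobian inverse technique for a planar two-link arm, using a numerically estimated Jacobian and NumPy's pseudoinverse; the fixed damping factor stands in for a proper line search, and the link lengths and start pose are illustrative assumptions:

```python
import numpy as np

def fk(theta, lengths=(1.0, 1.0)):
    """Forward kinematics (position function) of a planar two-link arm."""
    a1, a2 = theta
    l1, l2 = lengths
    return np.array([l1 * np.cos(a1) + l2 * np.cos(a1 + a2),
                     l1 * np.sin(a1) + l2 * np.sin(a1 + a2)])

def numerical_jacobian(theta, h=1e-6):
    """Entry (i, k) is approximated by (p_i(theta + h e_k) - p_i(theta)) / h."""
    p0 = fk(theta)
    J = np.zeros((p0.size, theta.size))
    for k in range(theta.size):
        dt = theta.copy()
        dt[k] += h
        J[:, k] = (fk(dt) - p0) / h
    return J

def solve_ik(target, theta0=(0.3, 0.3), iters=100, step=0.5):
    """Iterate theta += step * pinv(J) @ error until the error is near zero.
    The constant 'step' is a crude substitute for a line search."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        err = target - fk(theta)
        if np.linalg.norm(err) < 1e-9:
            break
        theta = theta + step * np.linalg.pinv(numerical_jacobian(theta)) @ err
    return theta
```

For a reachable target such as (1.2, 0.7) the iteration converges to joint angles whose forward kinematics reproduce the target position.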
References
[1] J. M. McCarthy, 1990, Introduction to Theoretical Kinematics, MIT Press, Cambridge, MA.
[2] J. J. Uicker, G. R. Pennock, and J. E. Shigley, 2003, Theory of Machines and Mechanisms, Oxford University Press, New York.
[3] J. M. McCarthy and G. S. Soh, 2010, Geometric Design of Linkages (http://books.google.com/books?id=jv9mQyjRIw4C), Springer, New York.
External links • Robotics and 3D Animation in FreeBasic (http://sites.google.com/site/proyectosroboticos/ cinematica-inversa-iii) (Spanish) • Analytical Inverse Kinematics Solver (http://openrave.programmingvision.com/index. php?title=Component:Ikfast) - Given an OpenRAVE robot kinematics description, generates a C++ file that analytically solves for the complete IK. • Inverse Kinematics algorithms (http://freespace.virgin.net/hugo.elias/models/m_ik2.htm) • Robot Inverse solution for a common robot geometry (http://www.learnaboutrobots.com/inverseKinematics. htm) • HowStuffWorks.com article How do the characters in video games move so fluidly? (http://entertainment. howstuffworks.com/question538.htm) with an explanation of inverse kinematics • 3D Theory Kinematics (http://www.euclideanspace.com/physics/kinematics/joints/index.htm) • Protein Inverse Kinematics (http://cnx.org/content/m11613/latest/) • Simple Inverse Kinematics example with source code using Jacobian (http://diegopark.googlepages.com/ computergraphics) • Detailed description of Jacobian and CCD solutions for inverse kinematics (http://billbaxter.com/courses/290/ html/index.htm)
Isosurface
An isosurface is a three-dimensional analog of an isoline. It is a surface that represents points of a constant value (e.g. pressure, temperature, velocity, density) within a volume of space; in other words, it is a level set of a continuous function whose domain is 3D-space. Isosurfaces are normally displayed using computer graphics, and are used as data visualization methods in computational fluid dynamics (CFD), allowing engineers to study features of a fluid flow (gas or liquid) around objects, such as aircraft wings. An isosurface may represent an individual shock wave in supersonic flight, or several isosurfaces may be generated showing a sequence of pressure values in the air flowing around a wing. Isosurfaces tend to be a popular form of visualization for volume datasets since they can be rendered by a simple polygonal model, which can be drawn on the screen very quickly.
Zirconocene with an isosurface showing areas of the molecule susceptible to electrophilic attack.
In medical imaging, isosurfaces may be used to represent regions of a particular density in a three-dimensional CT scan, allowing the visualization of internal organs, bones, or other structures. Numerous other disciplines that are interested in three-dimensional data often use isosurfaces to obtain information about pharmacology, chemistry, geophysics and meteorology. A popular method of constructing an isosurface from a data volume is the marching cubes algorithm, and another, very similar method is the marching tetrahedrons algorithm. Yet another is called the asymptotic decider. Examples of isosurfaces are 'Metaballs' or 'blobby objects' used in 3D visualisation. A more general way to construct an isosurface is to use the function representation.
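The sign-change test at the heart of marching cubes, classifying which grid cells the isosurface passes through, can be sketched as follows using an implicit sphere as the function representation; the grid size and radius are arbitrary illustrative values:

```python
import numpy as np

def surface_cells(f_values, level=0.0):
    """Mark grid cells whose corner samples straddle the iso-level: exactly
    the cells in which marching cubes would generate isosurface polygons."""
    inside = f_values < level
    n0, n1, n2 = (d - 1 for d in inside.shape)
    base = inside[:n0, :n1, :n2]
    crossing = np.zeros((n0, n1, n2), dtype=bool)
    # A cell crosses the surface if any of its 8 corners disagrees with corner 0.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                crossing |= inside[dx:dx + n0, dy:dy + n1, dz:dz + n2] != base
    return crossing

# Sample an implicit sphere F(x, y, z) = x^2 + y^2 + z^2 - r^2 on a grid.
axis = np.linspace(-1.0, 1.0, 32)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
cells = surface_cells(X**2 + Y**2 + Z**2 - 0.8**2)
```

A full marching cubes implementation then triangulates each crossing cell from a lookup table of the 256 corner sign patterns; here only the classification step is shown.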
Isosurface of vorticity trailed from a propeller blade
References • Charles D. Hansen; Chris R. Johnson (2004). Visualization Handbook [1]. Academic Press. pp. 7–11. ISBN 978-0-12-387582-2.
External links • Isosurface Polygonization [2]
References
[1] http://books.google.com/books?id=ZFrlULckWdAC&pg=PA7
[2] http://www2.imm.dtu.dk/~jab/gallery/polygonization.html
Joint constraints
Joint constraints are rotational constraints on the joints of an artificial bone system. They are used in an inverse kinematics chain, for such things as 3D animation or robotics. Joint constraints can be implemented in a number of ways, but the most common method is to limit rotation about the X, Y and Z axes independently. An elbow, for instance, could be represented by limiting rotation about the Y and Z axes to 0 degrees and constraining the X-axis rotation to 130 degrees. To simulate joint constraints more accurately, dot products can be used with an independent axis to repulse the child bone's orientation from the unreachable axis. Limiting the orientation of the child bone to a border of vectors tangent to the surface of the joint, repulsing the child bone away from the border, can also be useful in the precise restriction of shoulder movement.
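A minimal sketch of the per-axis method: clamp each Euler rotation to the joint's allowed range. The limit values follow the elbow example in the text (X free up to 130 degrees, Y and Z locked); the names are illustrative:

```python
# Hypothetical per-axis limits for an elbow-like joint, in degrees.
ELBOW_LIMITS = {"x": (0.0, 130.0), "y": (0.0, 0.0), "z": (0.0, 0.0)}

def constrain(rotation, limits):
    """Clamp a per-axis Euler rotation (degrees) to the joint's allowed ranges."""
    return {axis: min(max(rotation[axis], lo), hi)
            for axis, (lo, hi) in limits.items()}
```

An IK solver would apply such a clamp after every update step, so that each joint in the chain stays inside its anatomically plausible range.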
Kinematic chain refers to an assembly of rigid bodies connected by joints that is the mathematical model for a mechanical system.[1] As in the familiar use of the word chain, the rigid bodies, or links, are constrained by their connections to other links. An example is the simple open chain formed by links connected in series, like the usual chain, which is the kinematic model for a typical robot manipulator.[2]
Mathematical models of the connections, or joints, between two links are termed kinematic pairs. Kinematic pairs model the hinged and sliding joints fundamental to robotics, often called lower pairs, and the surface contact joints critical to cams and gearing, called higher pairs. These joints are generally modeled as holonomic constraints. A kinematic diagram is a schematic of the mechanical system that shows the kinematic chain.
The JPL mobile robot ATHLETE is a platform with six serial chain legs ending in wheels.
The modern use of kinematic chains includes compliance that arises from flexure joints in precision mechanisms, link compliance in compliant mechanisms and micro-electro-mechanical systems, and cable compliance in cable robotic and tensegrity systems.[3] [4]
Mobility formula
The degrees of freedom, or mobility, of a kinematic chain is the number of parameters that define the configuration of the chain.[5] A system of n rigid bodies moving in space has 6n degrees of freedom measured relative to a fixed frame. This frame is included in the count of bodies, so that mobility does not depend on the link that forms the fixed frame. This means the degree of freedom of this system is M = 6(N − 1), where N = n + 1 is the number of moving bodies plus the fixed body.
The arms, fingers and head of the JSC Robonaut are modeled as kinematic chains.
Joints that connect bodies impose constraints. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 − f. In the case of a hinge or slider, which are one-degree-of-freedom joints, f = 1 and therefore c = 6 − 1 = 5. The result is that the mobility of a kinematic chain formed from n moving links and j joints each with freedom fi, i = 1, ..., j, is given by

  M = 6(N − 1 − j) + Σ fi   (sum over i = 1, ..., j)

Recall that N includes the fixed link.
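The mobility count described above can be sketched as a one-liner (spatial case; N includes the fixed frame), with a hinged door and a six-revolute serial arm as worked examples:

```python
def mobility(moving_links, joint_freedoms):
    """Spatial mobility M = 6(N - 1 - j) + sum(f_i), where N = moving_links + 1
    counts the fixed frame and j = len(joint_freedoms)."""
    N = moving_links + 1
    j = len(joint_freedoms)
    return 6 * (N - 1 - j) + sum(joint_freedoms)
```

A door on one hinge gives mobility(1, [1]) = 1, and a serial arm of six moving links joined by six revolute joints gives mobility(6, [1]*6) = 6, matching the six degrees of freedom needed to place its end-effector.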
Analysis of kinematic chains The constraint equations of a kinematic chain couple the range of movement
The movement of the Boulton & Watt steam engine is studied as a system of rigid bodies connected by joints forming a kinematic chain.
Kinematic chain
108
allowed at each joint to the dimensions of the links in the chain, and form algebraic equations that are solved to determine the configuration of the chain associated with specific values of input parameters, called degrees of freedom. The constraint equations for a kinematic chain are obtained using rigid transformations [Z] to characterize the relative movement allowed at each joint and separate rigid transformations [X] to define the dimensions of each link. In the case of a serial open chain, the result is a sequence of rigid transformations alternating joint and link transformations from the base of the chain to its end link, which is equated to the specified position for the end link. A chain of n links connected in series has the kinematic equations,
where [T] is the transformation locating the end link; notice that the chain includes a "zeroth" link consisting of the ground frame to which it is attached. These equations are called the forward kinematics equations of the serial chain.[6] Kinematic chains of a wide range of complexity are analyzed by equating the kinematics equations of serial chains that form loops within the kinematic chain. These equations are often called loop equations. The complexity (in terms of calculating the forward and inverse kinematics) of the chain is determined by the following factors: • Its topology: a serial chain, a parallel manipulator, a tree structure, or a graph.
A model of the human skeleton as a kinematic chain allows positioning using forward and inverse kinematics.
• Its geometrical form: how are neighbouring joints spatially connected to each other? Explanation: Two or more rigid bodies in space are collectively called a rigid body system. We can hinder the motion of these independent rigid bodies with kinematic constraints. Kinematic constraints are constraints between rigid bodies that result in a decrease of the degrees of freedom of the rigid body system.
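The alternating joint/link transformations described above can be sketched for a planar two-link serial chain. The 3×3 homogeneous-matrix representation and all function names here are my illustrative choices, not from the article: [Z] becomes a rotation about the joint axis and [X] a translation along the link.

```python
import math

def rot(theta):
    # Joint transform [Z]: rotation by theta about the joint axis (planar case).
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def link(a):
    # Link transform [X]: translation along the link of length a.
    return [[1.0, 0.0, a], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def forward_kinematics(thetas, lengths):
    # [T] = [Z1][X1][Z2][X2]...: alternate joint and link transforms.
    T = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for theta, a in zip(thetas, lengths):
        T = matmul(T, matmul(rot(theta), link(a)))
    return T

# Two unit-length links, both joints at 90 degrees: the end link
# reaches up one unit and then doubles back one unit.
T = forward_kinematics([math.pi / 2, math.pi / 2], [1.0, 1.0])
print(round(T[0][2], 6), round(T[1][2], 6))  # → -1.0 1.0
```

Solving such equations in reverse, for the joint angles that reach a given [T], is the inverse kinematics problem mentioned above.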
Synthesis of kinematic chains The constraint equations of a kinematic chain can be used in reverse to determine the dimensions of the links from a specification of the desired movement of the system. This is termed kinematic synthesis.[7] Perhaps the most developed formulation of kinematic synthesis is for four-bar linkages, which is known as Burmester theory.[8][9][10] Ferdinand Freudenstein is often called the father of modern kinematics for his contributions to the kinematic synthesis of linkages beginning in the 1950s. His use of the newly developed computer to solve Freudenstein's equation became the prototype of computer-aided design systems. This work has been generalized to the synthesis of spherical and spatial mechanisms.
References
[1] Reuleaux, F., 1876, The Kinematics of Machinery (trans. and annotated by A. B. W. Kennedy), reprinted by Dover, New York (1963). (http://books.google.com/books?id=WUZVAAAAMAAJ)
[2] J. M. McCarthy and G. S. Soh, 2010, Geometric Design of Linkages, Springer, New York. (http://books.google.com/books?id=jv9mQyjRIw4C)
[3] Larry L. Howell, 2001, Compliant Mechanisms, John Wiley & Sons. (http://books.google.com/books/about/Compliant_mechanisms.html?id=tiiSOuhsIfgC)
[4] Alexander Slocum, 1992, Precision Machine Design, SME. (http://books.google.com/books?id=uG7aqgal65YC)
[5] J. J. Uicker, G. R. Pennock, and J. E. Shigley, 2003, Theory of Machines and Mechanisms, Oxford University Press, New York.
[6] J. M. McCarthy, 1990, Introduction to Theoretical Kinematics, MIT Press, Cambridge, MA.
[7] R. S. Hartenberg and J. Denavit, 1964, Kinematic Synthesis of Linkages, McGraw-Hill, New York.
[8] Suh, C. H., and Radcliffe, C. W., 1978, Kinematics and Mechanism Design, John Wiley and Sons, New York.
[9] Sandor, G. N., and Erdman, A. G., 1984, Advanced Mechanism Design: Analysis and Synthesis, Vol. 2, Prentice-Hall, Englewood Cliffs, NJ.
[10] Hunt, K. H., 1979, Kinematic Geometry of Mechanisms, Oxford Engineering Science Series.
Lambert's cosine law In optics, Lambert's cosine law says that the radiant intensity or luminous intensity observed from an ideal diffusely reflecting surface or ideal diffuse radiator is directly proportional to the cosine of the angle θ between the observer's line of sight and the surface normal.[1][2] The law is also known as the cosine emission law or Lambert's emission law. It is named after Johann Heinrich Lambert, from his Photometria, published in 1760. A surface which obeys Lambert's law is said to be Lambertian, and exhibits Lambertian reflectance. Such a surface has the same radiance when viewed from any angle. This means, for example, that to the human eye it has the same apparent brightness (or luminance). It has the same radiance because, although the emitted power from a given area element is reduced by the cosine of the emission angle, the apparent size (solid angle) of the observed area, as seen by a viewer, is decreased by a corresponding amount. Therefore, its radiance (power per unit solid angle per unit projected source area) is the same.
Lambertian scatterers and radiators
When an area element is radiating as a result of being illuminated by an external source, the irradiance (energy or photons/time/area) landing on that area element will be proportional to the cosine of the angle between the illuminating source and the normal. A Lambertian scatterer will then scatter this light according to the same cosine law as a Lambertian emitter. This means that although the radiance of the surface depends on the angle from the normal to the illuminating source, it will not depend on the angle from the normal to the observer. For example, if the moon were a Lambertian scatterer, one would expect to see its scattered brightness appreciably diminish towards the terminator due to the increased angle at which sunlight hits the surface. The fact that it does not diminish illustrates that the moon is not a Lambertian scatterer, and in fact tends to scatter more light into the oblique angles than a Lambertian scatterer would. The emission of a Lambertian radiator does not depend upon the amount of incident radiation, but rather depends on radiation originating in the emitting body itself. For example, if the sun were a Lambertian radiator, one would expect to see a constant brightness across the entire solar disc. The fact that the sun exhibits limb darkening in the visible region illustrates that it is not a Lambertian radiator. A black body is an example of a Lambertian radiator.
Details of equal brightness effect The situation for a Lambertian surface (emitting or scattering) is illustrated in Figures 1 and 2. For conceptual clarity we will think in terms of photons rather than energy or luminous energy. The wedges in the circle each represent an equal angle dΩ, and for a Lambertian surface, the number of photons per second emitted into each wedge is proportional to the area of the wedge. It can be seen that the length of each wedge is the product of the diameter of the circle and cos(θ). It can also be seen that the maximum rate of photon emission per unit solid angle is along the normal and diminishes to zero for θ = 90°. In mathematical terms, the radiance along the normal is I photons/(s·cm2·sr) and the number of photons per second emitted into the vertical wedge is I dΩ dA. The number of photons per second emitted into the wedge at angle θ is I cos(θ) dΩ dA.
Figure 1: Emission rate (photons/s) in a normal and off-normal direction. The number of photons/sec directed into any wedge is proportional to the area of the wedge.
Figure 2 represents what an observer sees. The observer directly above the area element will be seeing the scene through an aperture of area dA0 and the area element dA will subtend a (solid) angle of dΩ0. We can assume without loss of generality that the aperture happens to subtend solid angle dΩ when "viewed" from the emitting area element. This normal observer will then be recording I dΩ dA photons per second and so will be measuring a radiance of

I0 = I dΩ dA / (dΩ0 dA0) photons/(s·cm²·sr).

The observer at angle θ to the normal will be seeing the scene through the same aperture of area dA0 and the area element dA will subtend a (solid) angle of dΩ0 cos(θ). This observer will be recording I cos(θ) dΩ dA photons per second, and so will be measuring a radiance of
Figure 2: Observed intensity (photons/(s·cm2·sr)) for a normal and off-normal observer; dA0 is the area of the observing aperture and dΩ is the solid angle subtended by the aperture from the viewpoint of the emitting area element.
I0 = I cos(θ) dΩ dA / (dΩ0 cos(θ) dA0) = I dΩ dA / (dΩ0 dA0) photons/(s·cm²·sr), which is the same as the normal observer.
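The cancellation of the cosine factors can be checked numerically. All of the specific values below are arbitrary illustrative assumptions; only the ratios matter.

```python
import math

# Assumed illustrative values:
I = 100.0        # radiance along the normal, photons/(s·cm²·sr)
dA = 1e-4        # emitting area element (cm²)
dA0 = 1.0        # observer aperture area (cm²)
dOmega = 1e-3    # solid angle the aperture subtends from the emitter (sr)
dOmega0 = 1e-4   # solid angle dA subtends for the normal observer (sr)

radiances = []
for theta_deg in (0, 30, 60):
    t = math.radians(theta_deg)
    photons_per_s = I * math.cos(t) * dOmega * dA    # emitted into the aperture
    apparent_solid_angle = dOmega0 * math.cos(t)     # foreshortened source
    radiances.append(photons_per_s / (dA0 * apparent_solid_angle))

# The measured radiance is the same at every viewing angle:
print(all(abs(r - radiances[0]) < 1e-12 for r in radiances))  # → True
```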
Relating peak luminous intensity and luminous flux In general, the luminous intensity of a point on a surface varies by direction; for a Lambertian surface, that distribution is defined by the cosine law, with peak luminous intensity in the normal direction. Thus when the Lambertian assumption holds, we can calculate the total luminous flux, , from the peak luminous intensity, , by integrating the cosine law:
F_tot = ∫ I_max cos θ dΩ = I_max ∫₀^{2π} ∫₀^{π/2} cos θ sin θ dθ dφ

and so

F_tot = π sr · I_max,

where sin θ is the determinant of the Jacobian matrix for the unit sphere, and realizing that I_max is luminous flux per steradian.[3] Similarly, the peak intensity will be 1/π of the total radiated luminous flux. For Lambertian surfaces, the same factor of 1/π relates luminance to luminous emittance, radiant intensity to radiant flux, and radiance to radiant emittance.[citation needed] Radians and steradians are, of course, dimensionless and so "rad" and "sr" are included only for clarity. Example: A surface with a luminance of say 100 cd/m² (= 100 nits, typical PC monitor) will, if it is a perfect Lambert emitter, have a luminous emittance of π·100 ≈ 314 lm/m². If its area is 0.1 m² (~19" monitor) then the total light emitted, or luminous flux, would thus be 31.4 lm.
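The monitor example works out as follows; this is a trivial numeric sketch of the π factor, with a helper name of my own choosing.

```python
import math

def luminous_emittance(luminance):
    # For a perfect Lambertian emitter: M_v = pi * L_v (lm/m² from cd/m²).
    return math.pi * luminance

L = 100.0                      # cd/m², a typical PC monitor
M = luminous_emittance(L)      # ≈ 314.16 lm/m²
flux = M * 0.1                 # 0.1 m² screen area → total luminous flux
print(round(M, 1), round(flux, 2))  # → 314.2 31.42
```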
Uses
Lambert's cosine law in its reversed form (Lambertian reflection) implies that the apparent brightness of a Lambertian surface is proportional to the cosine of the angle between the surface normal and the direction of the incident light. This phenomenon is used, among other things, when creating mouldings, which are a means of applying light- and dark-shaded stripes to a structure or object without having to change the material or apply pigment. The contrast of dark and light areas gives definition to the object. Mouldings are strips of material with various cross-sections used to cover transitions between surfaces or for decoration.
References [1] RCA Electro-Optics Handbook, p.18 ff [2] Modern Optical Engineering, Warren J. Smith, McGraw-Hill, p.228, 256 [3] Incropera and DeWitt, Fundamentals of Heat and Mass Transfer, 5th ed., p.710.
Light stage
A light stage or light cage is an instrumentation set-up used for reflectance, texture and motion capture, often with structured light and a multi-camera setup.
Reflectance capture
The reflectance field over a human face was first captured in 2000 by Paul Debevec et al. The method they used to find the light that travels under the skin was based on the existing scientific knowledge that light reflecting off the air-to-oil interface retains its polarization, while light that travels under the skin loses its polarization. Using this information, Debevec et al. built the simplest, yet most revolutionary to date, light stage, consisting of:
1. a movable digital camera
2. a movable simple light source (full rotation with adjustable radius and height)
3. two polarizers set at various angles in front of the light and the camera
4. a computer with relatively simple programs doing relatively simple tasks
BSSRDF: BRDF + Subsurface scattering
The setup enabled the team to find the subsurface scattering component of the BSDF over the human face, which was required for fully virtual cinematography with ultra-photorealistic digital look-alikes as seen in The Matrix Reloaded and The Matrix Revolutions and numerous other movies since the early 2000s. Following this scientific success, Debevec et al. constructed five newer, more elaborate versions of the light stage at the University of Southern California Institute for Creative Technologies, and Ghosh et al. built the USC light stage X, the seventh version.
See
• Digital Emily [1], presented at the SIGGRAPH convention in 2008, for which the reflectance field of actress Emily O'Brien was captured using the USC light stage 5;[2] the prerendered digital look-alike was made in association with Image Metrics. The video includes USC light stage 5 and USC light stage 6.
BSDF: BRDF + BTDF
• Digital Ira [3], presented at SIGGRAPH 2013 in association with Activision, uses precomputation but also renders fairly convincingly in real time. Digital Emily, shown in 2008, was a precomputed simulation, whereas Digital Ira runs in real time and looks fairly realistic even in real-time rendering of animation. The field is rapidly moving from movies to computer games and leisure applications. The video includes
USC light stage X.
References
[1] http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html
[2] Paul Debevec animates a photo-real digital face - Digital Emily (http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html), 2008
[3] http://gl.ict.usc.edu/Research/DigitalIra
Light transport theory Light transport theory deals with the mathematics behind calculating the energy transfers between media that affect visibility. This article is currently specific to light transport in rendering processes such as global illumination and HDRI.
Light Transport
The amount of light transported is measured by flux density, that is, flux per unit area. See [1] for a PDF explaining A Theory of Inverse Light Transport.
Models
Hemisphere
Given a surface S, a hemisphere H can be projected onto S to calculate the amount of incoming and outgoing light. If a point P is selected at random on the surface S, the amount of light incoming and outgoing can be calculated by its projection onto the hemisphere.
Hemicube
The hemicube model works in a similar way to the hemisphere model, except that a hemicube is projected instead of a hemisphere. The similarity is only conceptual; the actual calculation, done by integration, has a different form factor.
Equations
Rendering
Rendering converts a model into an image either by simulating light transport to get physically based photorealistic images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light).
References [1] http:/ / www. cs. toronto. edu/ ~kyros/ pubs/ 05. iccv. interreflect. pdf
Loop subdivision surface
In computer graphics, the Loop subdivision surface is a subdivision scheme developed by Charles Loop in 1987 for triangular meshes. Each triangle is divided into four subtriangles, adding new vertices in the middle of each edge.
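The new mid-edge vertices are not plain midpoints; Loop's scheme places each interior edge vertex as a weighted average of the edge's two endpoints and the two opposite vertices of the adjacent triangles, with weights 3/8, 3/8, 1/8, 1/8. A minimal sketch of that one rule (function name mine, boundary and vertex-smoothing rules omitted):

```python
def loop_edge_point(v0, v1, vl, vr):
    """New vertex on the interior edge (v0, v1); vl and vr are the
    opposite vertices of the two triangles sharing the edge.
    Loop's weights: 3/8 for each endpoint, 1/8 for each opposite vertex."""
    return tuple(0.375 * (a + b) + 0.125 * (c + d)
                 for a, b, c, d in zip(v0, v1, vl, vr))

# Symmetric neighbours pull equally, so here the result is the midpoint:
print(loop_edge_point((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                      (0.5, 1.0, 0.0), (0.5, -1.0, 0.0)))  # → (0.5, 0.0, 0.0)
```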
External links • Charles Loop: Smooth Subdivision Surfaces Based on Triangles, M.S. Mathematics thesis, University of Utah, 1987 (pdf [1]). • Homepage of Charles Loop [2]. • Jos Stam: Evaluation of Loop Subdivision Surfaces, Computer Graphics Proceedings ACM SIGGRAPH 1998, (pdf [3], downloadable eigenstructures [4] ).
Loop Subdivision of an icosahedron (top) after one and after two refinement steps
References
[1] http://research.microsoft.com/~cloop/thesis.pdf
[2] http://research.microsoft.com/~cloop/
[3] http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/loop.pdf
[4] http://www.dgp.toronto.edu/~stam/reality/Research/SubdivEval/index.html
Low poly
Low poly describes a polygon mesh in 3D computer graphics that has a relatively small number of polygons. Low poly meshes occur in real-time applications (e.g. games) and contrast with the high poly meshes of animated movies and special effects of the same era. The term low poly is used in both a technical and a descriptive sense; the number of polygons in a mesh is an important factor to optimize for performance, but too few can give an undesirable appearance to the resulting graphics.
This polygon mesh representing a dolphin would be considered low poly by modern (2013) standards.
Motivation for low poly meshes Polygon meshes are one of the major methods of modelling a 3D object for display by a computer. Polygons can, in theory, have any number of sides but are commonly broken down into triangles for display. In general the more triangles in a mesh the more detailed the object is, but the more computationally intensive it is to display. In order to decrease render times (i.e. increase frame rate) the number of triangles in the scene must be reduced, by using low poly meshes.
Polygon budget
A combination of the game engine or rendering method and the computer being used defines the polygon budget: the number of polygons which can appear in a scene and still be rendered at an acceptable frame rate. Therefore the use of low poly meshes is mostly confined to computer games and other software in which a user must manipulate 3D objects in real time, because processing power is limited to that of a typical personal computer or games console and the frame rate must be high. Computer-generated imagery, for example for films or still images, has a higher polygon budget because rendering does not need to be done in real time and so does not require high frame rates. In addition, computer processing power in these situations is typically less limited, often using a large network of computers or what is known as a render farm. Each frame can take hours to create, despite the enormous computer power involved. A common example of the difference this makes is the full motion video sequences in computer games which, because they can be pre-rendered, look much smoother than the games themselves.
Appearance of low poly meshes Objects that are said to be low poly often appear blocky (such as square heads) and lacking in detail (such as no individual fingers). Objects that are supposed to be circular or spherical are most obviously low poly as the number of triangles needed to make a curve appear smooth is high and polygons are restricted to straight edges. Low poly meshes do not necessarily look bad, for example a flat sheet of paper represented by one polygon looks extremely accurate. As computer graphics are getting more powerful, low poly graphics may be used to achieve a certain retro
style, conceptually similar to pixel art and oriented toward 'classic' video games. Computer graphics techniques such as normal and bump mapping have been designed to make a low poly object appear to contain more polygons than it does. This is done by altering the shading of polygons to contain internal detail which is not in the mesh.
Low poly as a relative term There is no defined threshold for a mesh to be low poly; low poly is always a relative term and depends on (amongst other factors):
An example of normal mapping used to add detail to a low poly (500 triangle) mesh.
• The time the meshes were designed and for what hardware • The detail required in the final mesh • The shape and properties of the object in question As computing power inevitably increases, the number of polygons that can be used increases too. For example, Super Mario 64 would be considered low poly today, but was considered a stunning achievement when it was released in 1996. Similarly, in 2009, using hundreds of polygons on a leaf in the background of a scene would be considered high poly, but using that many polygons on the main character would be considered low poly.
Low poly meshes in physics engines
Physics engines have presented a new role for low poly meshes. Whilst the display of computer graphics has become very efficient, allowing (as of 2009) the display of tens to hundreds of thousands of polygons at 25 frames per second on a desktop computer, the calculation of physical interactions is still slow. A low poly simplified version of the mesh is often used to simplify the calculation of collisions with other meshes; in some cases this is as simple as a six-polygon bounding box.
Marching cubes
Marching cubes is a computer graphics algorithm, published in the 1987 SIGGRAPH proceedings by Lorensen and Cline,[1] for extracting a polygonal mesh of an isosurface from a three-dimensional scalar field (sometimes called voxels). This paper has been one of the most cited papers in the computer graphics field. The applications of this algorithm are mainly concerned with medical visualizations such as CT and MRI scan data images, and with special effects or 3D modelling using what are usually called metaballs or other metasurfaces. An analogous two-dimensional method is called the marching squares algorithm.
History
The algorithm was developed by William E. Lorensen and Harvey E. Cline as a result of their research for General Electric, where they worked on a way to efficiently visualize data from CT and MRI devices.
Head and cerebral structures (hidden) extracted from 150 MRI slices using marching-cubes (about 150,000 triangles)
Their first published version exploited rotational and reflective symmetry and also sign changes to build the table with 15 unique cases. However, in meshing the faces there are possibly ambiguous cases. These ambiguous cases can lead to meshings with holes. Topologically correct isosurfaces can still be constructed with extra effort. The problem was that for cases with "rippling" signs, there are at least two correct choices for where the correct contour should pass. The actual choice does not matter, but it has to be topologically consistent. The original cases made consistent choices, but the sign change could lead to mistakes. An extended table shows 33 configurations. The ambiguities were addressed in later algorithms, such as the 1991 asymptotic decider of Nielson and Hamann, which corrected these mistakes. Several other analyses of ambiguities and related improvements have been proposed since then; see the 2005 survey of Lopes and Brodlie for instance.
The originally published 15 cube configurations
Algorithm
The algorithm proceeds through the scalar field, taking eight neighbor locations at a time (thus forming an imaginary cube), then determining the polygon(s) needed to represent the part of the isosurface that passes through this cube. The individual polygons are then fused into the desired surface. This is done by creating an index to a precalculated array of 256 possible polygon configurations (2^8 = 256) within the cube, by treating each of the 8 scalar values as a bit in an 8-bit integer. If the scalar's value is higher than the iso-value (i.e., it is inside the surface) then the appropriate bit is set to one, while if it is lower (outside), it is set to zero. The final value after all 8 scalars are checked is the actual index into the polygon indices array. Finally, each vertex of the generated polygons is placed on the appropriate position along the cube's edge by linearly interpolating the two scalar values that are connected by that edge. The gradient of the scalar field at each grid point is also the normal vector of a hypothetical isosurface passing through that point. Therefore, we may interpolate these normals along the edges of each cube to find the normals of the generated vertices, which are essential for shading the resulting mesh with some illumination model.
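The two core steps above, building the 8-bit table index and interpolating edge vertices, can be sketched in isolation. This is not the full algorithm (the 256-entry triangle table is omitted); the function names are mine.

```python
def cube_index(corner_values, iso):
    # Treat each of the 8 corner samples as one bit of the table index:
    # bit i is set when corner i is inside (value above the iso-value).
    index = 0
    for i, v in enumerate(corner_values):
        if v > iso:
            index |= 1 << i
    return index

def interp_vertex(p1, p2, v1, v2, iso):
    # Place the vertex on the edge by linearly interpolating the two
    # scalar values connected by that edge.
    t = (iso - v1) / (v2 - v1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

values = [0.2, 0.9, 0.4, 0.1, 0.3, 0.8, 0.2, 0.0]
# Corners 1 and 5 exceed the iso-value 0.5, so bits 1 and 5 are set:
print(cube_index(values, 0.5))  # → 34 (0b100010)
# The isosurface crosses the edge from corner 0 to corner 1 at t = 3/7:
print(interp_vertex((0, 0, 0), (1, 0, 0), 0.2, 0.9, 0.5))
```

In the full algorithm, the index selects the polygon configuration from the precalculated 256-entry array, and `interp_vertex` positions each of that configuration's vertices.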
Patent issues
The marching cubes algorithm is claimed by anti-software-patent advocates as a prime example in the graphics field of the woes of patenting software[citation needed]. An implementation was patented (United States Patent 4,710,876) despite being, they claim, a relatively obvious solution to the surface-generation problem. Another similar algorithm, marching tetrahedra, was developed in order to circumvent the patent as well as to solve a minor ambiguity problem of marching cubes with some cube configurations. The patent expired in 2005, and it is now legal for the graphics community to use marching cubes without royalties, since more than 17 years have passed since its issue date (December 1, 1987).
Sources [1] William E. Lorensen, Harvey E. Cline: Marching Cubes: A high resolution 3D surface construction algorithm. In: Computer Graphics, Vol. 21, Nr. 4, July 1987
External links • Lorensen, W. E.; Cline, Harvey E. (1987). "Marching cubes: A high resolution 3d surface construction algorithm". ACM Computer Graphics 21 (4): 163–169. doi: 10.1145/37402.37422 (http://dx.doi.org/10.1145/ 37402.37422). • Nielson, G. M.; Hamann, Bernd (1991). "The asymptotic decider: resolving the ambiguity in marching cubes" (http://dl.acm.org/citation.cfm?id=949621). Proc. 2nd conference on Visualization (VIS' 91): 83–91. • Montani, Claudio; Scateni, Riccardo; Scopigno, Roberto (1994). "A modified look-up table for implicit disambiguation of Marching cubes". The Visual Computer 10 (6): 353–355. doi: 10.1007/BF01900830 (http:// dx.doi.org/10.1007/BF01900830). • Nielson, G. M.; Sung, Junwon (1997). "Interval volume tetrahedrization". 8th IEEE Visualization (VIS'97). doi: 10.1109/VISUAL.1997.663886 (http://dx.doi.org/10.1109/VISUAL.1997.663886). • Paul Bourke. "Overview and source code" (http://paulbourke.net/geometry/polygonise/). • Matthew Ward. "GameDev overview" (http://www.gamedev.net/page/resources/_/technical/ math-and-physics/overview-of-marching-cubes-algorithm-r424). • "Introductory description with additional graphics" (http://users.polytech.unice.fr/~lingrand/MarchingCubes/ algo.html). • "Marching Cubes" (http://www.marchingcubes.org/index.php/Marching_Cubes).. Some of the early history of Marching Cubes.
• Newman, Timothy S.; Yi, Hong (2006). "A survey of the marching cubes algorithm". Computers and Graphics 30 (5): 854–879. doi: 10.1016/j.cag.2006.07.021 (http://dx.doi.org/10.1016/j.cag.2006.07.021). • Stephan Diehl. "Specializing visualization algorithms" (http://extras.springer.com/2003/978-1-4020-7259-8/media/diehl/diehl.pdf).
Mesh parameterization Given two surfaces with the same topology, a bijective mapping between them exists. On triangular mesh surfaces, the problem of computing this mapping is called mesh parameterization. The parameter domain is the surface that the mesh is mapped onto. Parameterization was mainly used for mapping textures to surfaces. Recently, it has become a powerful tool for many applications in mesh processing.[citation needed] Various techniques are developed for different types of parameter domains with different parameterization properties.
Applications
• Texture mapping
• Normal mapping
• Detail transfer
• Morphing
• Mesh completion
• Mesh editing
• Mesh databases
• Remeshing
• Surface fitting
Techniques • Barycentric Mappings • Differential Geometry Primer • Non-Linear Methods
Implementations • A fast and simple stretch-minimizing mesh parameterization [1] • Graphite [2]: ABF, ABF++, DPBF, LSCM, HLSCM, Barycentric, mean-value coordinates, L2 stretch, spectral conformal, Periodic Global Parameterization, Constrained texture mapping, texture atlas generation • Linear discrete conformal parameterization [3] • Discrete Exponential Map [4]
External links "Mesh Parameterization: theory and practice" [5]
References
[1] http://www.riken.jp/brict/Yoshizawa/Research/Param.html
[2] http://alice.loria.fr/index.php/software/3-platform/22-graphite.html
[3] http://www.cs.caltech.edu/~keenan/project_dgp.html
[4] http://www.dgp.toronto.edu/~rms/software/expmapdemo.html
[5] http://www.inf.usi.ch/hormann/parameterization/index.html
Metaballs
Metaballs are, in computer graphics, organic-looking n-dimensional objects. The technique for rendering metaballs was invented by Jim Blinn in the early 1980s. Each metaball is defined as a function in n dimensions (i.e., for three dimensions, f(x, y, z); three-dimensional metaballs tend to be most common, with two-dimensional implementations popular as well). A thresholding value is also chosen, to define a solid volume. Then,

Σᵢ metaballᵢ(x, y, z) ≤ threshold

represents whether the volume enclosed by the surface defined by the metaballs is filled at (x, y, z) or not. A typical function chosen for metaballs is

f(x, y, z) = 1 / ((x − x₀)² + (y − y₀)² + (z − z₀)²),

where (x₀, y₀, z₀) is the center of the metaball. However, due to the division, it is computationally expensive. For this reason, approximate polynomial functions are typically used.[citation needed] When seeking a more efficient falloff function, several qualities are desired:
1: The influence of 2 positive metaballs on each other. 2: The influence of a negative metaball on a positive metaball by creating an indentation in the positive metaball's surface.
• Finite support. A function with finite support goes to zero at a maximum radius. When evaluating the metaball field, any points beyond their maximum radius from the sample point can be ignored. A hierarchical culling system can thus ensure only the closest metaballs will need to be evaluated regardless of the total number in the field.
• Smoothness. Because the isosurface is the result of adding the fields together, its smoothness is dependent on the smoothness of the falloff curves.
The simplest falloff curve that satisfies these criteria is

f(r) = (1 − r²)³ for 0 ≤ r ≤ 1, and f(r) = 0 for r > 1,

where r is the distance to the point.
This formulation avoids expensive square root calls. More complicated models use a Gaussian potential constrained to a finite radius or a mixture of polynomials to achieve smoothness. The Soft Object model by the Wyvill brothers provides higher degree of smoothness and still avoids square roots. A simple generalization of metaballs is to apply the falloff curve to distance-from-lines or distance-from-surfaces.
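A minimal 2D field evaluation can illustrate the additive blending and the finite-support falloff described above. The centers, radii, and threshold below are arbitrary illustrative values, and the function names are mine.

```python
def falloff(r2):
    # Finite-support, smooth falloff evaluated on the *squared* normalized
    # distance, so no square root is needed: f(r) = (1 - r^2)^3 for r <= 1.
    return (1.0 - r2) ** 3 if r2 < 1.0 else 0.0

def field(point, balls):
    # Sum every metaball's contribution; balls = [(center, radius), ...].
    px, py = point
    total = 0.0
    for (cx, cy), radius in balls:
        r2 = ((px - cx) ** 2 + (py - cy) ** 2) / radius ** 2
        total += falloff(r2)
    return total

balls = [((0.0, 0.0), 1.0), ((1.2, 0.0), 1.0)]
threshold = 0.3
# A point midway between the two balls lies inside the combined blob:
print(field((0.6, 0.0), balls) >= threshold)  # → True
# The same distance from a single isolated ball would fall outside:
print(falloff(0.36) >= threshold)             # → False
```

This is the additive behavior shown in the figures: two nearby positive metaballs merge into one larger surface, because their fields sum above the threshold in the region between them.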
There are a number of ways to render the metaballs to the screen. In the case of three dimensional metaballs, the two most common are brute force raycasting and the marching cubes algorithm. 2D metaballs were a very common demo effect in the 1990s. The effect is also available as an XScreensaver module.
Further reading
The interaction between two differently coloured 3D positive metaballs, created in Bryce. Note that the two smaller metaballs combine to create one larger object.
• Blinn, J. F. (July 1982). "A Generalization of Algebraic Surface Drawing". ACM Transactions on Graphics 1 (3): 235–256. doi:10.1145/357306.357310 [1].
External links • Implicit Surfaces article [2] by Paul Bourke
• Meta Objects article [3] from the Blender wiki
• Metaballs article [4] from the SIGGRAPH website
• Exploring Metaballs and Isosurfaces in 2D [5] by Stephen Whitmore (gamedev article)
• Simulating 2D Metaball Blobbies with Photoshop [6]
References • Intro to Metaballs [7]
References
[1] http://dx.doi.org/10.1145%2F357306.357310
[2] http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/implicitsurf/
[3] http://wiki.blender.org/index.php/Manual/Meta_Objects
[4] http://www.siggraph.org/education/materials/HyperGraph/modeling/metaballs/metaballs.htm
[5] http://www.gamedev.net/page/resources/_//feature/fprogramming/exploring-metaballs-and-isosurfaces-in-2d-r2556
[6] http://www.digitalartform.com/archives/2009/06/simulating_2d_m.html
[7] http://steve.hollasch.net/cgindex/misc/metaballs.html
Micropolygon
In 3D computer graphics, a micropolygon (or µ-polygon) is a polygon that is very small relative to the image being rendered. Commonly, the size of a micropolygon is close to or even less than the area of a pixel. Micropolygons allow a renderer to create a highly detailed image. The concept of micropolygons was developed within the Reyes algorithm, in which geometric primitives are tessellated at render time into a rectangular grid of tiny, four-sided polygons. A shader might fill each micropolygon with a single color or assign colors on a per-vertex basis. Shaders that operate on micropolygons can process an entire grid of them at once in SIMD fashion. This often leads to faster shader execution, and allows shaders to compute spatial derivatives (e.g. for texture filtering) by comparing values at neighboring micropolygon vertices. Furthermore, a renderer using micropolygons can support displacement mapping simply by perturbing micropolygon vertices during shading. This displacement is usually not limited to the local surface normal but can be given an arbitrary direction.
Further reading
• Robert L. Cook, Loren Carpenter, and Edwin Catmull. "The Reyes image rendering architecture." Computer Graphics (SIGGRAPH '87 Proceedings), pp. 95–102.
• Anthony A. Apodaca, Larry Gritz: Advanced RenderMan: Creating CGI for Motion Pictures, Morgan Kaufmann Publishers, ISBN 1-55860-618-1
Morph target animation

Morph target animation, per-vertex animation, shape interpolation, or blend shapes is a method of 3D computer animation used together with techniques such as skeletal animation. In morph target animation, a "deformed" version of a mesh is stored as a series of vertex positions. In each key frame of an animation, the vertices are then interpolated between these stored positions.
Technique
The "morph target" is a deformed version of a shape. When applied to a human face, for example, the head is first modelled with a neutral expression and a "target deformation" is then created for each other expression. When the face is being animated, the animator can then smoothly morph (or "blend") between the base shape and one or several morph targets.

In this example from the open source project Sintel, four facial expressions have been defined as deformations of the face geometry. The mouth is then animated by morphing between these deformations. Dozens of similar controllers are used to animate the rest of the face.

Typical examples of morph targets used in facial animation are a smiling mouth, a closed eye, and a raised eyebrow, but the technique can also be used to morph between, for example, Dr Jekyll and Mr Hyde. When used for facial animation, these morph targets are often referred to as "key poses". The interpolations between key poses when an animation is being rendered are typically small and simple transformations of movement, rotation, and scale performed by the 3D software.

Not all morph target animation has to be done by editing vertex positions directly. It is also possible to take vertex positions found in skeletal animation and then render those as morph target animation.
An arbitrary object deformed by morphing between defined vertex positions.

An animation composed in one 3D application suite sometimes needs to be transferred to another, for example for rendering. Because different 3D applications tend to implement bones and other special effects differently, the morph target technique is sometimes used to transfer animations between 3D applications to avoid export issues.
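The interpolation at the heart of the technique can be sketched as a weighted sum of per-target vertex offsets added to the base mesh; the tiny four-vertex "face" and its single "smile" target below are purely hypothetical:

```python
import numpy as np

def blend(base, targets, weights):
    """Morph target (blend shape) interpolation: the deformed mesh is
    the base vertex positions plus a weighted sum of each target's
    offset from the base."""
    out = base.astype(float).copy()
    for t, w in zip(targets, weights):
        out += w * (t - base)
    return out

# hypothetical 4-vertex mesh: a base shape plus a "smile" target that
# raises the two upper vertices by 0.2 units
base  = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
smile = base + np.array([[0, 0, 0], [0, 0, 0], [0, 0.2, 0], [0, 0.2, 0]])

halfway = blend(base, [smile], [0.5])  # 50% of the way to the target
```

In each key frame, an animator effectively chooses the weight vector; the software interpolates weights between key frames to produce the in-between poses.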
Benefits and drawbacks There are advantages to using morph target animation over skeletal animation. The artist has more control over the movements because he or she can define the individual positions of the vertices within a keyframe, rather than being constrained by skeletons. This can be useful for animating cloth, skin, and facial expressions because it can be difficult to conform those things to the bones that are required for skeletal animation. However, there are also disadvantages. Vertex animation is usually a lot more labour-intensive than skeletal animation because every vertex position must be manually manipulated and, for this reason, the number of pre-made target morphs is typically limited. Also, in methods of rendering where vertices move from position to position during in-between frames, a distortion is created that does not happen when using skeletal animation. This is described by critics of the technique as looking "shaky". On the other hand, this distortion may be part of the desired "look".
External links
• Morph target example using C# and Microsoft XNA (http://mvinetwork.co.uk/2011/02/02/xna-morph-targets/)
Motion capture
Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for validation of computer vision[1] and robotics. In filmmaking and video game development, it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking more usually refers to match moving.

Motion capture of two pianists' fingers playing the same piece (slow motion, no sound).

In motion capture sessions, movements of one or more actors are sampled many times per second. Whereas early techniques used images from multiple cameras to calculate 3D positions, the purpose of motion capture is often to record only the movements of the actor, not his or her visual appearance. This animation data is mapped to a 3D model so that the model performs the same actions as the actor. This process may be contrasted with the older technique of rotoscoping, as in Ralph Bakshi's animated films The Lord of the Rings (1978) and American Pop (1981), where the motion of an actor was filmed and the footage then used as a frame-by-frame guide for the motion of a hand-drawn animated character.

Camera movements can also be motion captured, so that a virtual camera in the scene will pan, tilt, or dolly around the stage driven by a camera operator while the actor is performing; the motion capture system can capture the camera and props as well as the actor's performance. This allows the computer-generated characters, images, and sets to have the same perspective as the video images from the camera.
A computer processes the data and displays the movements of the actor, providing the desired camera positions in terms of objects in the set. Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking.
Advantages
Motion capture offers several advantages over traditional computer animation of a 3D model:
• More rapid, even real-time results can be obtained. In entertainment applications this can reduce the costs of keyframe-based animation. The Hand Over technique is an example of this.
• The amount of work does not vary with the complexity or length of the performance to the same degree as when using traditional techniques. This allows many tests to be done with different styles or deliveries, giving a different personality only limited by the talent of the actor.
• Complex movement and realistic physical interactions such as secondary motions, weight, and exchange of forces can be easily recreated in a physically accurate manner.
• The amount of animation data that can be produced within a given time is extremely large when compared to traditional animation techniques. This contributes to both cost effectiveness and meeting production deadlines.
• Potential for free software and third-party solutions reducing its costs.
Disadvantages
• Specific hardware and special software programs are required to obtain and process the data.
• The cost of the software, equipment, and personnel required can be prohibitive for small productions.
• The capture system may have specific requirements for the space it is operated in, depending on camera field of view or magnetic distortion.
• When problems occur, it is often easier to reshoot the scene than to try to manipulate the data. Only a few systems allow real-time viewing of the data to decide if the take needs to be redone.
• The initial results are limited to what can be performed within the capture volume without extra editing of the data.
• Movement that does not follow the laws of physics cannot be captured.
• Traditional animation techniques, such as added emphasis on anticipation and follow-through, secondary motion, or manipulating the shape of the character, as with squash and stretch animation techniques, must be added later.
• If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, over-sized hands, these may intersect the character's body if the human performer is not careful with their physical motion.
Applications
Video games often use motion capture to animate athletes, martial artists, and other in-game characters.[2] This has been done since the Atari Jaguar CD-based game Highlander: The Last of the MacLeods, released in 1995.

Snow White and the Seven Dwarfs used an early form of motion capture technology: actors and actresses would act out scenes and be filmed, and the animators would then use the individual frames as a guide for their drawings.

Movies use motion capture for CG effects, in some cases replacing traditional cel animation, and for completely computer-generated creatures, such as Gollum, The Mummy, King Kong, Davy Jones from Pirates of the Caribbean, the Na'vi from the film Avatar, and Clu from Tron: Legacy. The Great Goblin, the three Stone-trolls, many of the orcs and goblins in the 2012 film The Hobbit: An Unexpected Journey, and Smaug were created using motion capture.

Motion capture performers at Centroid, Pinewood Studios

Sinbad: Beyond the Veil of Mists was the first movie made primarily with motion capture, although many character animators also worked on the film, which had a very limited release. Final Fantasy: The Spirits Within was the first widely released movie made primarily with motion capture. The Lord of the Rings: The Two Towers was the first feature film to utilize a real-time motion capture system. This method streamed the actions of actor Andy Serkis into the computer-generated skin of Gollum / Smeagol as it was being performed.
Out of the three nominees for the 2006 Academy Award for Best Animated Feature, two (Monster House and the winner Happy Feet) used motion capture; only Disney·Pixar's Cars was animated without it. In the ending credits of Pixar's film Ratatouille, a stamp appears labelling the film as "100% Pure Animation – No Motion Capture!"

Motion capture has begun to be used extensively to produce films which attempt to simulate or approximate the look of live-action cinema, with nearly photorealistic digital character models. The Polar Express used motion capture to allow Tom Hanks to perform as several distinct digital characters (for which he also provided the voices). The 2007 adaptation of the saga Beowulf animated digital characters whose appearances were based in part on the actors who provided their motions and voices. James Cameron's Avatar used this technique to create the Na'vi that inhabit Pandora. The Walt Disney Company produced Robert Zemeckis's A Christmas Carol using this technique. In 2007, Disney acquired Zemeckis's ImageMovers Digital (which produced motion capture films), but closed it in 2011 after a string of failures.

Television series produced entirely with motion capture animation include Laflaque in Canada, Sprookjesboom and Cafe de Wereld in the Netherlands, and Headcases in the UK.

Virtual reality and augmented reality allow users to interact with digital content in real time. This can be useful for training simulations, visual perception tests, or performing virtual walk-throughs in a 3D environment. Motion capture technology is frequently used in digital puppetry systems to drive computer-generated characters in real time.

Gait analysis is the major application of motion capture in clinical medicine. Techniques allow clinicians to evaluate human motion across several biometric factors, often while streaming this information live into analytical software.
During the filming of James Cameron's Avatar all of the scenes involving this process were directed in realtime using Autodesk Motion Builder software to render a screen image which allowed the director and the actor to see what they would look like in the movie, making it easier to direct the movie as it would be seen by the viewer. This method allowed views and angles not possible from a pre-rendered animation. Cameron was so proud of his results that he even invited Steven Spielberg and George Lucas on set to view the system in action. In Marvel's The Avengers, Mark Ruffalo used motion capture so he could play his character the Hulk, rather than have him be only CGI like previous films.
Methods and systems
Motion tracking or motion capture started as a photogrammetric analysis tool in biomechanics research in the 1970s and 1980s, and expanded into education, training, sports, and more recently computer animation for television, cinema, and video games as the technology matured. A performer wears markers near each joint to identify the motion by the positions or angles between the markers. Acoustic, inertial, LED, magnetic, or reflective markers, or combinations of any of these, are tracked, ideally at a sample rate of at least twice the frequency of the desired motion, to submillimeter positions. Both the spatial and the temporal resolution of the system are important, as motion blur causes almost the same problems as low resolution.
Reflective markers attached to skin to identify bony landmarks and the 3D motion of body segments
Optical systems
Optical systems utilize data captured from image sensors to triangulate the 3D position of a subject between two or more cameras calibrated to provide overlapping projections. Data acquisition is traditionally implemented using special markers attached to an actor; however, more recent systems are able to generate accurate data by tracking surface features identified dynamically for each particular subject. Tracking a large number of performers or expanding the capture area is accomplished by adding more cameras. These systems produce data with three degrees of freedom for each marker, and rotational information must be inferred from the relative orientation of three or more markers; for instance, shoulder, elbow, and wrist markers providing the angle of the elbow. Newer hybrid systems combine inertial sensors with optical sensors to reduce occlusion, increase the number of users, and improve the ability to track without having to manually clean up data.
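The inference of joint rotation from positional markers can be sketched as follows: given three 3-DOF marker positions (e.g. shoulder, elbow, wrist), the elbow angle is the angle between the two limb vectors meeting at the middle marker. The marker coordinates below are hypothetical:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at marker b (in degrees) formed by markers a-b-c.
    With 3-DOF optical markers, joint rotation must be inferred from
    several marker positions rather than measured directly."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against tiny floating-point excursions outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# hypothetical marker positions for a fully extended arm
shoulder, elbow, wrist = [0.0, 0, 0], [0.3, 0, 0], [0.6, 0, 0]
angle = joint_angle(shoulder, elbow, wrist)  # straight arm -> 180 degrees
```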
Passive markers
Passive optical systems use markers coated with a retroreflective material to reflect light that is generated near the camera's lens. The camera's threshold can be adjusted so that only the bright reflective markers are sampled, ignoring skin and fabric. The centroid of the marker is estimated as a position within the two-dimensional image that is captured. The grayscale value of each pixel can be used to provide sub-pixel accuracy by finding the centroid of the Gaussian. An object with markers attached at known positions is used to calibrate the cameras and obtain their positions, and the lens distortion of each camera is measured. If two calibrated cameras see a marker, a three-dimensional fix can be obtained. Typically a system will consist of around 2 to 48 cameras; systems of over three hundred cameras exist to try to reduce marker swap. Extra cameras are required for full coverage around the capture subject and for multiple subjects.
A dancer wearing a suit used in an optical motion capture system
Vendors have constraint software to reduce the problem of marker swapping, since all passive markers appear identical. Unlike active marker systems and magnetic systems, passive systems do not require the user to wear wires or electronic equipment. Instead, hundreds of rubber balls are attached with reflective tape, which needs to be replaced periodically. The markers are usually attached directly to the skin (as in biomechanics), or they are velcroed to a performer wearing a full-body spandex/lycra suit designed specifically for motion capture. This type of system can capture large numbers of markers at frame rates usually around 120 to 160 fps, although by lowering the resolution and tracking a smaller region of interest they can track as high as 10,000 fps.

Several markers are placed at specific points on an actor's face during facial optical motion capture.
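The sub-pixel localization described above can be illustrated with an intensity-weighted centroid over the thresholded marker blob; the tiny image below is a hypothetical example, and production systems typically fit a Gaussian rather than a plain weighted mean:

```python
import numpy as np

def subpixel_centroid(img):
    """Intensity-weighted centroid of a marker blob, giving a sub-pixel
    2D position from the grayscale values of the captured image."""
    img = np.asarray(img, float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

# a hypothetical 3x3 marker image: a bright pixel with a dimmer neighbour
blob = np.array([[0, 0, 0],
                 [0, 4, 2],
                 [0, 0, 0]])
cx, cy = subpixel_centroid(blob)  # centroid pulled toward the dim pixel
```

With two calibrated cameras reporting such 2D centroids for the same marker, the 3D fix follows by triangulation.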
Active marker
Active optical systems triangulate positions by illuminating one LED at a time very quickly, or multiple LEDs with software to identify them by their relative positions, somewhat akin to celestial navigation. Rather than reflecting back light that is generated externally, the markers themselves are powered to emit their own light. Since the inverse-square law provides one quarter of the power at twice the distance, this can increase the distances and volume available for capture. The TV series Stargate SG-1 produced episodes using an active optical system for the VFX, allowing the actor to walk around props that would make motion capture difficult for other non-active optical systems. ILM used active markers in Van Helsing to allow capture of Dracula's flying brides on very large sets, similar to Weta's use of active markers in Rise of the Planet of the Apes. The power to each marker can be provided sequentially in phase with the capture system, providing a unique identification of each marker for a given capture frame at a cost to the resultant frame rate. The ability to identify each marker in this manner is useful in real-time applications. The alternative method of identifying markers is to do it algorithmically, requiring extra processing of the data.
Time modulated active marker
Active marker systems can further be refined by strobing one marker on at a time, or by tracking multiple markers over time and modulating the amplitude or pulse width to provide a marker ID. 12-megapixel spatial resolution modulated systems show more subtle movements than 4-megapixel optical systems by having both higher spatial and temporal resolution. Directors can see the actor's performance in real time and watch the results on the motion-capture-driven CG character. The unique marker IDs reduce turnaround by eliminating marker swapping and providing much cleaner data than other technologies. LEDs with onboard processing and radio synchronization allow motion capture outdoors in direct sunlight, while capturing at 120 to 960 frames per second thanks to a high-speed electronic shutter. Computer processing of modulated IDs allows less hand cleanup or filtered results for lower operational costs. This higher accuracy and resolution requires more processing than passive technologies, but the additional processing is done at the camera to improve resolution via subpixel or centroid processing, providing both high resolution and high speed. These motion capture systems are typically around $20,000 for an eight-camera, 12-megapixel spatial resolution, 120 hertz system with one actor.

A high-resolution active marker system with 3,600 × 3,600 resolution at 480 hertz providing real-time submillimeter positions.
Semi-passive imperceptible marker
One can reverse the traditional approach based on high-speed cameras. Systems such as Prakash use inexpensive multi-LED high-speed projectors. The specially built multi-LED IR projectors optically encode the space. Instead of retro-reflective or active light-emitting diode (LED) markers, the system uses photosensitive marker tags to decode the optical signals. By attaching tags with photo sensors to scene points, the tags can compute not only their own locations, but also their own orientation, incident illumination, and reflectance.

IR sensors can compute their location when lit by mobile multi-LED emitters, e.g. in a moving car. With an ID per marker, these sensor tags can be worn under clothing and tracked at 500 Hz in broad daylight.

Microsoft's Kinect system, released for the Xbox 360, projects an invisible infrared pattern for depth-recovery motion acquisition. These tracking tags work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. The system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Since the system eliminates a high-speed camera and the corresponding high-speed image stream, it requires significantly lower data bandwidth. The tags also provide incident illumination data, which can be used to match scene lighting when inserting synthetic elements. The technique appears ideal for on-set motion capture or real-time broadcasting of virtual sets, but has yet to be proven.
Underwater motion capture system
Motion capture technology has been available to researchers and scientists for a few decades, and has given new insight into many fields.

Underwater cameras
The vital part of the system, the underwater camera, has a waterproof housing. The housing has a finish that withstands corrosion and chlorine, which makes it suitable for use in basins and swimming pools. The underwater cameras come with a cyan light strobe instead of the typical IR light, for minimum falloff under water. Since the index of refraction of water differs from that of air, special internal and external calibrations have been implemented.

Measurement volume
An underwater camera is typically able to measure 15–20 meters, depending on the water quality and the type of marker used. Unsurprisingly, the best range is achieved when the water is clear, and as always, the measurement volume also depends on the number of cameras. A range of underwater markers are available for different circumstances.

Tailored
Different pools require different mountings and fixtures. Therefore all underwater motion capture systems are uniquely tailored to suit each specific pool installation. For cameras placed in the center of the pool, specially designed tripods using suction cups are provided.
Markerless
Emerging techniques and research in computer vision are leading to the rapid development of the markerless approach to motion capture. Markerless systems, such as those developed at Stanford University, the University of Maryland, MIT, and the Max Planck Institute, do not require subjects to wear special equipment for tracking. Special computer algorithms are designed to allow the system to analyze multiple streams of optical input and identify human forms, breaking them down into constituent parts for tracking. Applications of this technology extend deeply into popular imagination about the future of computing technology. Several commercial solutions for markerless motion capture have also been introduced, including systems by Organic Motion[3] and Xsens.[4] ESC Entertainment, a subsidiary of Warner Brothers Pictures created specially to enable virtual cinematography, including photorealistic digital look-alikes, for filming The Matrix Reloaded and The Matrix Revolutions, used a technique called Universal Capture that utilized a seven-camera setup and tracked the optical flow of all pixels over all of the cameras' 2D planes for motion, gesture, and facial expression capture, leading to photorealistic results.

Traditional systems
Traditionally, markerless optical motion tracking is used to keep track of various objects, including airplanes, launch vehicles, missiles, and satellites. Many such optical motion tracking applications occur outdoors, requiring differing lens and camera configurations. High-resolution images of the target being tracked can thereby provide more information than just motion data. The image obtained from NASA's long-range tracking system on space shuttle Challenger's fatal launch provided crucial evidence about the cause of the accident.
Optical tracking systems are also used to identify known spacecraft and space debris, despite the disadvantage compared with radar that the objects must reflect or emit sufficient light.

An optical tracking system typically consists of three subsystems: the optical imaging system, the mechanical tracking platform, and the tracking computer. The optical imaging system is responsible for converting the light from the target area into a digital image that the tracking computer can process. Depending on the design of the optical tracking system, the optical imaging system can vary from as simple as a standard digital camera to as specialized as an astronomical telescope on the top of a mountain. The specification of the optical imaging system determines the upper limit of the effective range of the tracking system.

The mechanical tracking platform holds the optical imaging system and is responsible for manipulating it so that it always points at the target being tracked. The dynamics of the mechanical tracking platform combined with the optical imaging system determine the tracking system's ability to keep a lock on a target that changes speed rapidly.

The tracking computer is responsible for capturing the images from the optical imaging system, analyzing each image to extract the target position, and controlling the mechanical tracking platform to follow the target. There are several challenges. First, the tracking computer has to be able to capture the image at a relatively high frame rate; this places a requirement on the bandwidth of the image-capturing hardware. The second challenge is that the image processing software has to be able to extract the target image from its background and calculate its position. Several textbook image processing algorithms are designed for this task, but each has its own limitations.
This problem can be simplified if the tracking system can expect certain characteristics that are common to all the targets it will track. The next problem down the line is to control the tracking platform to follow the target. This is a typical control system design problem rather than a challenge, involving modeling the system dynamics and designing controllers to control it. It will, however, become a challenge if the tracking platform the system has to work with is not designed for real-time and highly dynamic applications, in which case the tracking software has to compensate for the mechanical and software imperfections of the tracking platform.

Traditionally, optical tracking systems involve highly customized optical and electrical subsystems. The software that runs such systems is also customized for the corresponding hardware components. Because of the
real-time nature of the application and the limited size of the market, commercializing optical tracking software poses a big challenge. One example of such software is OpticTracker, which controls computerized telescopes to track moving objects at great distances, such as planes and satellites.
Non-optical systems

Inertial systems
Inertial motion capture technology is based on miniature inertial sensors, biomechanical models, and sensor fusion algorithms. The motion data of the inertial sensors (inertial guidance system) is often transmitted wirelessly to a computer, where the motion is recorded or viewed. Most inertial systems use gyroscopes to measure rotational rates. These rotations are translated to a skeleton in the software. Much like optical markers, the more gyros, the more natural the data. No external cameras, emitters, or markers are needed for relative motions, although they are required to give the absolute position of the user if desired. Inertial motion capture systems capture the full six degrees of freedom of a human's body motion in real time and can give limited direction information if they include a magnetic bearing sensor, although this is much lower resolution and susceptible to electromagnetic noise. Benefits of using inertial systems include: no solving, portability, and large capture areas. Disadvantages include 'floating', where the user looks like a marionette on strings, lower positional accuracy, and positional drift, which can compound over time. These systems are similar to the Wii controllers but are more sensitive and have greater resolution and update rates. They can accurately measure the direction to the ground to within a degree. The popularity of inertial systems is rising among independent game developers, mainly because of the quick and easy setup resulting in a fast pipeline. A range of suits are now available from various manufacturers, with base prices ranging from $5,000 to $80,000 USD. Ironically, the $5,000 systems use newer chips and sensors and are wireless, taking advantage of the next generation of inertial sensors and wireless devices.
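The positional drift mentioned above follows directly from dead reckoning: orientation is obtained by integrating angular rate, so any constant sensor bias accumulates over time. A minimal sketch, with a hypothetical 0.05 deg/s gyro bias on a stationary sensor:

```python
import numpy as np

def integrate_gyro(rates, dt):
    """Dead-reckon a single-axis orientation by integrating angular
    rate samples. Any constant bias in the rates compounds into an
    ever-growing angle error (drift)."""
    return np.cumsum(np.asarray(rates, float) * dt)

dt = 0.01                    # 100 Hz sample rate
true_rate = np.zeros(1000)   # the sensor is actually stationary
bias = 0.05                  # hypothetical gyro bias, degrees per second

angle = integrate_gyro(true_rate + bias, dt)
# after 10 s of integration the estimated angle has drifted by 0.5 degrees
```

Real systems suppress this with sensor fusion (e.g. combining gyros with accelerometers and magnetometers), which is why fusion algorithms are listed as a core part of the technology.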
Mechanical motion
Mechanical motion capture systems directly track body joint angles and are often referred to as exoskeleton motion capture systems, due to the way the sensors are attached to the body. A performer attaches the skeletal-like structure to his or her body, and as they move, so do the articulated mechanical parts, measuring the performer's relative motion. Mechanical motion capture systems are real-time, relatively low-cost, free of occlusion, and wireless (untethered) systems that have unlimited capture volume. Typically, they are rigid structures of jointed, straight metal or plastic rods linked together with potentiometers that articulate at the joints of the body. These suits tend to be in the $25,000 to $75,000 range, plus an external absolute positioning system. Some suits provide limited force feedback or haptic input.
Magnetic systems
Magnetic systems calculate position and orientation from the relative magnetic flux of three orthogonal coils on both the transmitter and each receiver. The relative intensity of the voltage or current of the three coils allows these systems to calculate both range and orientation by meticulously mapping the tracking volume. The sensor output is 6DOF, which provides useful results with two-thirds the number of markers required in optical systems: one on the upper arm and one on the lower arm suffice for elbow position and angle. The markers are not occluded by nonmetallic objects but are susceptible to magnetic and electrical interference from metal objects in the environment, like rebar (steel reinforcing bars in concrete) or wiring, which affect the magnetic field, and from electrical sources such as monitors, lights, cables, and computers. The sensor response is nonlinear, especially toward the edges of the capture area. The wiring from the sensors tends to preclude extreme performance movements. The capture volumes for magnetic systems are dramatically smaller than for optical systems. With magnetic systems, there is a distinction between "AC" and "DC" systems: one uses square pulses, the other uses sine wave pulses.
Related techniques

Facial motion capture
Most traditional motion capture hardware vendors provide some type of low-resolution facial capture utilizing anywhere from 32 to 300 markers with either an active or passive marker system. All of these solutions are limited by the time it takes to apply the markers, calibrate the positions, and process the data. Ultimately the technology also limits their resolution and raw output quality levels.

High-fidelity facial motion capture, also known as performance capture, is the next generation of fidelity and is utilized to record the more complex movements in a human face in order to capture higher degrees of emotion. Facial capture is currently arranging itself in several distinct camps, including traditional motion capture data, blend-shape-based solutions, capturing the actual topology of an actor's face, and proprietary systems.

The two main techniques are stationary systems with an array of cameras capturing the facial expressions from multiple angles and using software such as the stereo mesh solver from OpenCV to create a 3D surface mesh, or using light arrays to calculate the surface normals from the variance in brightness as the light source, camera position, or both are changed. These techniques tend to be limited in feature resolution only by the camera resolution, apparent object size, and number of cameras. If the user's face occupies 50 percent of the working area of the camera and the camera has megapixel resolution, then sub-millimeter facial motions can be detected by comparing frames. Recent work is focusing on increasing the frame rates and doing optical flow to allow the motions to be retargeted to other computer-generated faces, rather than just making a 3D mesh of the actor and their expressions.
RF positioning
RF (radio frequency) positioning systems are becoming more viable as higher-frequency RF devices allow greater precision than older RF technologies such as traditional radar. The speed of light is 30 centimeters per nanosecond (billionth of a second), so a 10 gigahertz (billion cycles per second) RF signal enables an accuracy of about 3 centimeters. By measuring amplitude to a quarter wavelength, it is possible to improve the resolution down to about 8 mm. To achieve the resolution of optical systems, frequencies of 50 gigahertz or higher are needed, which are almost as line-of-sight limited and as easy to block as optical systems. Multipath and reradiation of the signal are likely to cause additional problems, but these technologies will be ideal for tracking larger volumes with reasonable accuracy, since the required resolution at 100 meter distances is not likely to be as high. Many RF scientists believe that radio frequency will never produce the accuracy required for motion capture.
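The wavelength arithmetic above can be reproduced directly:

```python
# Wavelength of a 10 GHz signal and the quarter-wavelength resolution limit.
c = 3.0e8            # speed of light, m/s (30 cm per nanosecond)
f = 10.0e9           # 10 GHz carrier
wavelength = c / f
print(wavelength * 100)        # 3.0 cm: the cycle-level accuracy
print(wavelength / 4 * 1000)   # 7.5 mm: quarter-wavelength resolution, i.e. ~8 mm
```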
Non-traditional systems
An alternative approach was developed in which the actor is given an unlimited walking area through the use of a rotating sphere, similar to a hamster ball, which contains internal sensors recording the angular movements, removing the need for external cameras and other equipment. Even though this technology could potentially lead to much lower costs for motion capture, the basic sphere is only capable of recording a single continuous direction; additional sensors worn on the person would be needed to record anything more. Another alternative is using a 6DOF (six degrees of freedom) motion platform with an integrated omni-directional treadmill and high-resolution optical motion capture to achieve the same effect. The captured person can walk in an unlimited area, negotiating different uneven terrains. Applications include medical rehabilitation for balance training, biomechanical research and virtual reality.
References
[1] David Noonan, Peter Mountney, Daniel Elson, Ara Darzi and Guang-Zhong Yang. A Stereoscopic Fibroscope for Camera Motion and 3D Depth Recovery During Minimally Invasive Surgery. In Proc. ICRA 2009, pp. 4463–4468.
[2] Jon Radoff, Anatomy of an MMORPG, http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/
[3] http://www.newsweek.com/video/2007/03/06/videogames-organic-motion.html
[4] http://venturebeat.com/2009/08/04/xsens-technologies-captures-every-human-motion-with-body-suit/
Newell's algorithm
Newell's algorithm is a 3D computer graphics procedure for eliminating polygon cycles in the depth sorting required for hidden surface removal. It was proposed in 1972 by brothers Martin Newell and Dick Newell, and Tom Sancha, while all three were working at CADCentre.
In the depth sorting phase of hidden surface removal, if two polygons have no overlapping extents (extreme minimum and maximum values) in the x, y, and z directions, then they can be easily sorted. If two polygons, Q and P, do have overlapping extents in the Z direction, then it is possible that cutting is necessary. In that case Newell's algorithm tests the following:
1. Test for Z overlap; implied in the selection of the face Q from the sort list
2. The extreme coordinate values in X of the two faces do not overlap (minimax test in X)
3. The extreme coordinate values in Y of the two faces do not overlap (minimax test in Y)
4. All vertices of P lie deeper than the plane of Q
5. All vertices of Q lie closer to the viewpoint than the plane of P
6. The rasterisations of P and Q do not overlap
Cyclic polygons must be eliminated to correctly sort them by depth
Note that the tests are given in order of increasing computational difficulty. Note also that the polygons must be planar. If the tests are all false, then the polygons must be split. Splitting is accomplished by selecting one polygon and cutting it along the line of intersection with the other polygon. The above tests are again performed, and the algorithm continues until all polygons pass the above tests.
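The ordered-test structure can be sketched as a short-circuiting chain. The predicates below are hypothetical stand-ins for tests 2 and 3 (the real implementation would also examine plane equations and rasterisations), assumed here only to illustrate the cheap-tests-first design:

```python
def depth_order_ok(P, Q, tests):
    """Return True as soon as one of Newell's ordered tests passes,
    meaning P can safely be drawn before Q; cheap tests run first.
    If every test fails, the caller must split one polygon and retry."""
    return any(test(P, Q) for test in tests)

# Hypothetical minimax predicates on axis-aligned extents:
tests = [
    lambda P, Q: P["x_max"] < Q["x_min"] or Q["x_max"] < P["x_min"],  # minimax in X
    lambda P, Q: P["y_max"] < Q["y_min"] or Q["y_max"] < P["y_min"],  # minimax in Y
]

P = {"x_min": 0, "x_max": 1, "y_min": 0, "y_max": 1}
Q = {"x_min": 2, "x_max": 3, "y_min": 0, "y_max": 1}
print(depth_order_ok(P, Q, tests))  # True: X extents do not overlap, no split needed
```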
References • Sutherland, Ivan E.; Sproull, Robert F.; Schumacker, Robert A. (1974), "A characterization of ten hidden-surface algorithms", Computing Surveys 6 (1): 1–55, doi:10.1145/356625.356626 [1]. • Newell, M. E.; Newell, R. G.; Sancha, T. L. (1972), "A new approach to the shaded picture problem", Proc. ACM National Conference, pp. 443–450.
References
[1] http://dx.doi.org/10.1145%2F356625.356626
Non-uniform rational B-spline Non-uniform rational basis spline (NURBS) is a mathematical model commonly used in computer graphics for generating and representing curves and surfaces. It offers great flexibility and precision for handling both analytic (surfaces defined by common mathematical formulae) and modeled shapes.
History Development of NURBS began in the 1950s by engineers who were in need of a mathematically precise representation of freeform surfaces like those used for ship hulls, aerospace exterior surfaces, and car bodies, which could be exactly reproduced whenever technically needed. Prior representations of this kind of surface only existed as a single physical model created by a designer.
Three-dimensional NURBS surfaces can have complex, organic shapes. Control points influence the directions the surface takes. The outermost square below delineates the X/Y extents of the surface.
A NURBS curve.
The pioneers of this development were Pierre Bézier, who worked as an engineer at Renault, and Paul de Casteljau, who worked at Citroën, both in France. Bézier worked nearly in parallel with de Casteljau, neither knowing about the work of the other. But because Bézier published the results of his work, the average computer graphics user today recognizes splines, which are represented with control points lying off the curve itself, as Bézier splines, while de Casteljau's name is only known and used for the algorithms he developed to evaluate parametric surfaces. In the 1960s it became clear that non-uniform, rational B-splines are a generalization of Bézier splines, which can be regarded as uniform, non-rational B-splines. At first NURBS were only used in the proprietary CAD packages of car companies. Later they became part of standard computer graphics packages.
Non-uniform rational B-spline Real-time, interactive rendering of NURBS curves and surfaces was first made available on Silicon Graphics workstations in 1989. In 1993, the first interactive NURBS modeller for PCs, called NöRBS, was developed by CAS Berlin, a small startup company cooperating with the Technical University of Berlin. Today most professional computer graphics applications available for desktop use offer NURBS technology, which is most often realized by integrating a NURBS engine from a specialized company.
Use
NURBS are commonly used in computer-aided design (CAD), manufacturing (CAM), and engineering (CAE) and are part of numerous industry-wide standards, such as IGES, STEP, ACIS, and PHIGS. NURBS tools are also found in various 3D modelling and animation software packages. They can be efficiently handled by computer programs and yet allow for easy human interaction. NURBS surfaces are functions of two parameters mapping to a surface in three-dimensional space. The shape of the surface is determined by control points. NURBS surfaces can represent simple geometrical shapes in a compact form. T-splines and subdivision surfaces are more suitable for complex organic shapes because they reduce the number of control points twofold in comparison with NURBS surfaces.
Motoryacht design.
In general, editing NURBS curves and surfaces is highly intuitive and predictable. Control points are always either connected directly to the curve/surface or act as if they were connected by a rubber band. Depending on the type of user interface, editing can be realized via an element's control points, which are most obvious and common for Bézier curves, or via higher-level tools such as spline modeling or hierarchical editing. A surface under construction, e.g. the hull of a motor yacht, is usually composed of several NURBS surfaces known as patches. These patches should be fitted together in such a way that the boundaries are invisible. This is mathematically expressed by the concept of geometric continuity. Higher-level tools exist which benefit from the ability of NURBS to create and establish geometric continuity of different levels:
Positional continuity (G0) holds whenever the end positions of two curves or surfaces are coincident. The curves or surfaces may still meet at an angle, giving rise to a sharp corner or edge and causing broken highlights.
Tangential continuity (G1) requires the end vectors of the curves or surfaces to be parallel and pointing the same way, ruling out sharp edges. Because highlights falling on a tangentially continuous edge are always continuous and thus look natural, this level of continuity can often be sufficient. Curvature continuity (G2) further requires the end vectors to be of the same length and rate of length change. Highlights falling on a curvature-continuous edge do not display any change, causing the two surfaces to appear as one. This can be visually recognized as “perfectly smooth”. This level of continuity is very useful in the creation of models that require many bi-cubic patches composing one continuous surface. Geometric continuity mainly refers to the shape of the resulting surface; since NURBS surfaces are functions, it is also possible to discuss the derivatives of the surface with respect to the parameters. This is known as parametric
continuity. Parametric continuity of a given degree implies geometric continuity of that degree. First- and second-level parametric continuity (C0 and C1) are for practical purposes identical to positional and tangential (G0 and G1) continuity. Third-level parametric continuity (C2), however, differs from curvature continuity in that its parameterization is also continuous. In practice, C2 continuity is easier to achieve if uniform B-splines are used. The definition of Cn continuity requires that the nth derivatives of the curve/surface, \(d^n C(u)/du^n\), are equal at a joint.[1] Note that the (partial) derivatives of curves and surfaces are vectors that have a direction and a magnitude; both should be equal.
Highlights and reflections can reveal the perfect smoothing, which is otherwise practically impossible to achieve without NURBS surfaces that have at least G2 continuity. This same principle is used as one of the surface evaluation methods whereby a ray-traced or reflection-mapped image of a surface with white stripes reflecting on it will show even the smallest deviations on a surface or set of surfaces. This method is derived from car prototyping, wherein surface quality is inspected by checking the quality of reflections of a neon-light ceiling on the car surface. This method is also known as "Zebra analysis".
Technical specifications
A NURBS curve is defined by its order, a set of weighted control points, and a knot vector. NURBS curves and surfaces are generalizations of both B-splines and Bézier curves and surfaces, the primary difference being the weighting of the control points, which makes NURBS curves rational (non-rational B-splines are a special case of rational B-splines). Whereas Bézier curves evolve along only one parametric direction, usually called s or u, NURBS surfaces evolve along two parametric directions, called s and t or u and v. By evaluating a Bézier or a NURBS curve at various values of the parameter, the curve can be represented in Cartesian two- or three-dimensional space. Likewise, by evaluating a NURBS surface at various values of the two parameters, the surface can be represented in Cartesian space. NURBS curves and surfaces are useful for a number of reasons:
• They are invariant under affine transformations:[2] operations like rotations and translations can be applied to NURBS curves and surfaces by applying them to their control points.
• They offer one common mathematical form for both standard analytical shapes (e.g., conics) and free-form shapes.
• They provide the flexibility to design a large variety of shapes.
• They reduce the memory consumption when storing shapes (compared to simpler methods).
• They can be evaluated reasonably quickly by numerically stable and accurate algorithms.
In the next sections, NURBS is discussed in one dimension (curves); all of it can be generalized to two or even more dimensions.
Non-uniform rational B-spline
Control points The control points determine the shape of the curve.[3] Typically, each point of the curve is computed by taking a weighted sum of a number of control points. The weight of each point varies according to the governing parameter. For a curve of degree d, the weight of any control point is only nonzero in d+1 intervals of the parameter space. Within those intervals, the weight changes according to a polynomial function (basis functions) of degree d. At the boundaries of the intervals, the basis functions go smoothly to zero, the smoothness being determined by the degree of the polynomial. As an example, the basis function of degree one is a triangle function. It rises from zero to one, then falls to zero again. While it rises, the basis function of the previous control point falls. In that way, the curve interpolates between the two points, and the resulting curve is a polygon, which is continuous, but not differentiable at the interval boundaries, or knots. Higher degree polynomials have correspondingly more continuous derivatives. Note that within the interval the polynomial nature of the basis functions and the linearity of the construction make the curve perfectly smooth, so it is only at the knots that discontinuity can arise. The fact that a single control point only influences those intervals where it is active is a highly desirable property, known as local support. In modeling, it allows the changing of one part of a surface while keeping other parts equal. Adding more control points allows better approximation to a given curve, although only a certain class of curves can be represented exactly with a finite number of control points. NURBS curves also feature a scalar weight for each control point. This allows for more control over the shape of the curve without unduly raising the number of control points. In particular, it adds conic sections like circles and ellipses to the set of curves that can be represented exactly. 
The term rational in NURBS refers to these weights. The control points can have any dimensionality. One-dimensional points just define a scalar function of the parameter. These are typically used in image processing programs to tune the brightness and color curves. Three-dimensional control points are used abundantly in 3D modeling, where they are used in the everyday meaning of the word 'point', a location in 3D space. Multi-dimensional points might be used to control sets of time-driven values, e.g. the different positional and rotational settings of a robot arm. NURBS surfaces are just an application of this. Each control 'point' is actually a full vector of control points, defining a curve. These curves share their degree and the number of control points, and span one dimension of the parameter space. By interpolating these control vectors over the other dimension of the parameter space, a continuous set of curves is obtained, defining the surface.
Knot vector
The knot vector is a sequence of parameter values that determines where and how the control points affect the NURBS curve. The number of knots is always equal to the number of control points plus curve degree plus one (i.e. number of control points plus curve order). The knot vector divides the parametric space into the intervals mentioned before, usually referred to as knot spans. Each time the parameter value enters a new knot span, a new control point becomes active, while an old control point is discarded. It follows that the values in the knot vector should be in nondecreasing order, so (0, 0, 1, 2, 3, 3) is valid while (0, 0, 2, 1, 3, 3) is not.
Consecutive knots can have the same value. This defines a knot span of zero length, which implies that two control points are activated at the same time (and of course two control points become deactivated). This has an impact on the continuity of the resulting curve or its higher derivatives; for instance, it allows the creation of corners in an otherwise smooth NURBS curve. A number of coinciding knots is sometimes referred to as a knot with a certain multiplicity. Knots with multiplicity two or three are known as double or triple knots. The multiplicity of a knot is limited to the degree of the curve, since a higher multiplicity would split the curve into disjoint parts and leave control points unused. For first-degree NURBS, each knot is paired with a control point.
The knot vector usually starts with a knot that has multiplicity equal to the order. This makes sense, since this activates the control points that have influence on the first knot span. Similarly, the knot vector usually ends with a
knot of that multiplicity. Curves with such knot vectors start and end in a control point.
The individual knot values are not meaningful by themselves; only the ratios of the differences between the knot values matter. Hence, the knot vectors (0, 0, 1, 2, 3, 3) and (0, 0, 2, 4, 6, 6) produce the same curve. The positions of the knot values influence the mapping of parameter space to curve space. Rendering a NURBS curve is usually done by stepping with a fixed stride through the parameter range. By changing the knot span lengths, more sample points can be used in regions where the curvature is high. Another use is in situations where the parameter value has some physical significance, for instance if the parameter is time and the curve describes the motion of a robot arm. The knot span lengths then translate into velocity and acceleration, which are essential to get right to prevent damage to the robot arm or its environment. This flexibility in the mapping is what the phrase non-uniform in NURBS refers to.
Necessary only for internal calculations, knots are usually not helpful to the users of modeling software. Therefore, many modeling applications do not make the knots editable or even visible. It is usually possible to establish reasonable knot vectors by looking at the variation in the control points. More recent versions of NURBS software (e.g., Autodesk Maya and Rhinoceros 3D) allow for interactive editing of knot positions, but this is significantly less intuitive than the editing of control points.
Comparison of Knots and Control Points
A common misconception is that each knot is paired with a control point. This is true only for degree-1 NURBS (polylines). For higher-degree NURBS, there are groups of 2 × degree knots that correspond to groups of (degree + 1) control points. For example, suppose we have a degree-3 NURBS with 7 control points and knots 0,0,0,1,2,5,8,8,8. The first four control points are grouped with the first six knots. The second through fifth control points are grouped with the knots 0,0,1,2,5,8. The third through sixth control points are grouped with the knots 0,1,2,5,8,8. The last four control points are grouped with the last six knots. Some modelers that use older algorithms for NURBS evaluation require two extra knot values, for a total of (degree + N + 1) knots. When Rhino is exporting and importing NURBS geometry, it automatically adds and removes these two superfluous knots as the situation requires.
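The grouping in the worked example can be reproduced mechanically: control-point group i pairs with the slice of 2 × degree knots starting at index i.

```python
# Degree 3, 7 control points, 9 knots (the degree + N - 1 convention used above).
degree = 3
knots = [0, 0, 0, 1, 2, 5, 8, 8, 8]
n_ctrl = 7

# One group of 2*degree knots per group of (degree + 1) control points.
groups = [knots[i:i + 2 * degree] for i in range(n_ctrl - degree)]
for g in groups:
    print(g)
# [0, 0, 0, 1, 2, 5]
# [0, 0, 1, 2, 5, 8]
# [0, 1, 2, 5, 8, 8]
# [1, 2, 5, 8, 8, 8]
```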
Order The order of a NURBS curve defines the number of nearby control points that influence any given point on the curve. The curve is represented mathematically by a polynomial of degree one less than the order of the curve. Hence, second-order curves (which are represented by linear polynomials) are called linear curves, third-order curves are called quadratic curves, and fourth-order curves are called cubic curves. The number of control points must be greater than or equal to the order of the curve. In practice, cubic curves are the ones most commonly used. Fifth- and sixth-order curves are sometimes useful, especially for obtaining continuous higher order derivatives, but curves of higher orders are practically never used because they lead to internal numerical problems and tend to require disproportionately large calculation times.
Construction of the basis functions
The B-spline basis functions used in the construction of NURBS curves are usually denoted as \(N_{i,n}(u)\), in which \(i\) corresponds to the \(i\)-th control point and \(n\) corresponds with the degree of the basis function. The parameter dependence is frequently left out, so we can write \(N_{i,n}\). The definition of these basis functions is recursive in \(n\). The degree-0 functions \(N_{i,0}\) are piecewise constant functions: they are one on the corresponding knot span and zero everywhere else. Effectively, \(N_{i,n}\) is a linear interpolation of \(N_{i,n-1}\) and \(N_{i+1,n-1}\). The latter two functions are non-zero for \(n\) knot spans, overlapping for \(n-1\) knot spans. The function \(N_{i,n}\) is computed as[4]

\[ N_{i,n} = f_{i,n} N_{i,n-1} + g_{i+1,n} N_{i+1,n-1} \]

\(f_{i,n}\) rises linearly from zero to one on the interval where \(N_{i,n-1}\) is non-zero, while \(g_{i+1,n}\) falls from one to zero on the interval where \(N_{i+1,n-1}\) is non-zero. As mentioned before, \(N_{i,1}\) is a triangular function, nonzero over two knot spans, rising from zero to one on the first and falling to zero on the second knot span. Higher-order basis functions are non-zero over correspondingly more knot spans and have correspondingly higher degree. If \(u\) is the parameter and \(k_i\) is the \(i\)-th knot, we can write the functions \(f\) and \(g\) as

\[ f_{i,n}(u) = \frac{u - k_i}{k_{i+n} - k_i} \qquad \text{and} \qquad g_{i,n}(u) = \frac{k_{i+n} - u}{k_{i+n} - k_i} \]

From bottom to top: linear basis functions (blue and green), their weight functions, and the resulting quadratic basis function. The knots are 0, 1, 2 and 2.5.
The functions \(f\) and \(g\) are positive when the corresponding lower-order basis functions are non-zero. By induction on \(n\) it follows that the basis functions are non-negative for all values of \(n\) and \(u\). This makes the computation of the basis functions numerically stable. Again by induction, it can be proved that the sum of the basis functions for a particular value of the parameter is unity. This is known as the partition of unity property of the basis functions.
The figures show the linear and the quadratic basis functions for the knots {..., 0, 1, 2, 3, 4, 4.1, 5.1, 6.1, 7.1, ...}. One knot span is considerably shorter than the others. On that knot span, the peak in the quadratic basis function is more distinct, reaching almost one. Conversely, the adjoining basis functions fall to zero more quickly. In the geometrical interpretation, this means that the curve approaches the corresponding control point closely. In the case of a double knot, the length of the knot span becomes zero and the peak reaches one exactly. The basis function is no longer differentiable at that point. The curve will have a sharp corner if the neighbouring control points are not collinear.
Linear basis functions
Quadratic basis functions
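The recursive construction described above (the Cox–de Boor recursion) can be sketched directly, with the convention that a term with a zero denominator contributes zero:

```python
def basis(i, n, u, k):
    """Cox-de Boor recursion for the B-spline basis function N_{i,n}(u)
    over knot vector k. Degree-0 functions are 1 on their knot span and
    0 elsewhere; higher degrees blend two lower-degree functions with
    the weight functions f and g."""
    if n == 0:
        return 1.0 if k[i] <= u < k[i + 1] else 0.0
    f = 0.0 if k[i + n] == k[i] else (u - k[i]) / (k[i + n] - k[i])
    g = 0.0 if k[i + n + 1] == k[i + 1] else (k[i + n + 1] - u) / (k[i + n + 1] - k[i + 1])
    return f * basis(i, n - 1, u, k) + g * basis(i + 1, n - 1, u, k)

# Partition of unity on the knots from the figures: inside the fully
# supported parameter range, the degree-2 basis functions sum to one.
k = [0, 1, 2, 3, 4, 4.1, 5.1, 6.1, 7.1]
total = sum(basis(i, 2, 3.5, k) for i in range(len(k) - 3))
print(round(total, 9))  # 1.0
```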
General form of a NURBS curve
Using the definitions of the basis functions \(N_{i,n}\) from the previous paragraph, a NURBS curve takes the following form:[5]

\[ C(u) = \sum_{i=1}^{k} \frac{N_{i,n}(u)\, w_i}{\sum_{j=1}^{k} N_{j,n}(u)\, w_j} \mathbf{P}_i \]

In this, \(k\) is the number of control points \(\mathbf{P}_i\) and \(w_i\) are the corresponding weights. The denominator is a normalizing factor that evaluates to one if all weights are one. This can be seen from the partition of unity property of the basis functions. It is customary to write this as

\[ C(u) = \sum_{i=1}^{k} R_{i,n}(u)\, \mathbf{P}_i \]

in which the functions

\[ R_{i,n}(u) = \frac{N_{i,n}(u)\, w_i}{\sum_{j=1}^{k} N_{j,n}(u)\, w_j} \]

are known as the rational basis functions.
General form of a NURBS surface
A NURBS surface is obtained as the tensor product of two NURBS curves, thus using two independent parameters \(u\) and \(v\) (with indices \(i\) and \(j\) respectively):[6]

\[ S(u, v) = \sum_{i=1}^{k} \sum_{j=1}^{l} R_{i,j}(u, v)\, \mathbf{P}_{i,j} \]

with

\[ R_{i,j}(u, v) = \frac{N_{i,n}(u)\, N_{j,m}(v)\, w_{i,j}}{\sum_{p=1}^{k} \sum_{q=1}^{l} N_{p,n}(u)\, N_{q,m}(v)\, w_{p,q}} \]

as rational basis functions.
Manipulating NURBS objects A number of transformations can be applied to a NURBS object. For instance, if some curve is defined using a certain degree and N control points, the same curve can be expressed using the same degree and N+1 control points. In the process a number of control points change position and a knot is inserted in the knot vector. These manipulations are used extensively during interactive design. When adding a control point, the shape of the curve should stay the same, forming the starting point for further adjustments. A number of these operations are discussed below.[7][8]
Knot insertion
As the term suggests, knot insertion inserts a knot into the knot vector. If the degree of the curve is \(n\), then \(n - 1\) control points are replaced by \(n\) new ones. The shape of the curve stays the same.
A knot can be inserted multiple times, up to the maximum multiplicity of the knot. This is sometimes referred to as knot refinement and can be achieved by an algorithm that is more efficient than repeated knot insertion.
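A minimal sketch of single knot insertion for a non-rational B-spline, using Boehm's rule that each new control point is an affine combination of two old ones (a NURBS would apply the same formula in homogeneous coordinates); this is an illustrative implementation, not a production one:

```python
def insert_knot(t, knots, ctrl, p):
    """Insert parameter value t once into a degree-p B-spline.

    Boehm's rule: the control points around the affected span are
    replaced, each new one an affine combination of two old ones;
    the curve shape is unchanged."""
    # Find the span k with knots[k] <= t < knots[k+1].
    k = max(i for i in range(len(knots) - 1) if knots[i] <= t)
    new_ctrl = []
    for i in range(len(ctrl) + 1):
        if i <= k - p:
            new_ctrl.append(ctrl[i])                # unaffected, before the span
        elif i <= k:
            a = (t - knots[i]) / (knots[i + p] - knots[i])
            new_ctrl.append(tuple((1 - a) * ctrl[i - 1][d] + a * ctrl[i][d]
                                  for d in range(len(ctrl[0]))))
        else:
            new_ctrl.append(ctrl[i - 1])            # unaffected, after the span
    return knots[:k + 1] + [t] + knots[k + 1:], new_ctrl

# Inserting t = 0.5 into a quadratic Bezier-like B-spline subdivides it
# without changing its shape.
knots, ctrl = insert_knot(0.5, [0, 0, 0, 1, 1, 1], [(0, 0), (1, 2), (2, 0)], 2)
print(ctrl)   # [(0, 0), (0.5, 1.0), (1.5, 1.0), (2, 0)]
print(knots)  # [0, 0, 0, 0.5, 1, 1, 1]
```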
Knot removal Knot removal is the reverse of knot insertion. Its purpose is to remove knots and the associated control points in order to get a more compact representation. Obviously, this is not always possible while retaining the exact shape of the curve. In practice, a tolerance in the accuracy is used to determine whether a knot can be removed. The process is used to clean up after an interactive session in which control points may have been added manually, or after importing a curve from a different representation, where a straightforward conversion process leads to redundant control points.
Degree elevation A NURBS curve of a particular degree can always be represented by a NURBS curve of higher degree. This is frequently used when combining separate NURBS curves, e.g. when creating a NURBS surface interpolating between a set of NURBS curves or when unifying adjacent curves. In the process, the different curves should be brought to the same degree, usually the maximum degree of the set of curves. The process is known as degree elevation.
Curvature
The most important property in differential geometry is the curvature \(\kappa\). It describes the local properties (edges, corners, etc.) and relations between the first and second derivative, and thus, the precise curve shape. Having determined the derivatives it is easy to compute

\[ \kappa = \frac{|r' \times r''|}{|r'|^3} \]

or, approximated as the arc length from the second derivative, \(\kappa = |r''(s_0)|\). The direct computation of the curvature \(\kappa\) with these equations is the big advantage of parameterized curves against their polygonal representations.
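As a sanity check of the curvature formula, a unit circle \(r(t) = (\cos t, \sin t)\) has \(r' = (-\sin t, \cos t)\) and \(r'' = (-\cos t, -\sin t)\), giving \(\kappa = 1\) everywhere:

```python
import math

def curvature(r1, r2):
    """kappa = |r' x r''| / |r'|^3 for a plane curve
    (the cross product reduces to its z component)."""
    cross = r1[0] * r2[1] - r1[1] * r2[0]
    return abs(cross) / (r1[0] ** 2 + r1[1] ** 2) ** 1.5

t = 0.7
kappa = curvature((-math.sin(t), math.cos(t)), (-math.cos(t), -math.sin(t)))
print(round(kappa, 9))  # 1.0: a unit circle has constant curvature 1
```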
Example: a circle
Non-rational splines or Bézier curves may approximate a circle, but they cannot represent it exactly. Rational splines can represent any conic section, including the circle, exactly. This representation is not unique, but one possibility appears below:

 x    y    z    weight
 1    0    0    1
 1    1    0    √2/2
 0    1    0    1
−1    1    0    √2/2
−1    0    0    1
−1   −1    0    √2/2
 0   −1    0    1
 1   −1    0    √2/2
 1    0    0    1

The order is three, since a circle is a quadratic curve and the spline's order is one more than the degree of its piecewise polynomial segments. The knot vector is \(\{0, 0, 0, \pi/2, \pi/2, \pi, \pi, 3\pi/2, 3\pi/2, 2\pi, 2\pi, 2\pi\}\). The circle is composed of four quarter circles, tied together with double knots. Although double knots in a third order NURBS curve would normally result in loss of continuity in the first derivative, the control points are positioned in such a way that the first derivative is continuous. In fact, the curve is infinitely differentiable everywhere, as it must be if it exactly represents a circle.
The curve represents a circle exactly, but it is not exactly parametrized in the circle's arc length. This means, for example, that the point at \(t\) does not lie at \((\cos t, \sin t)\) (except for the start, middle and end point of each quarter circle, since the representation is symmetrical). This would be impossible, since the x coordinate of the circle would provide an exact rational polynomial expression for \(\cos t\), which is impossible. The circle does make one full revolution as its parameter \(t\) goes from 0 to \(2\pi\), but this is only because the knot vector was arbitrarily chosen as multiples of \(\pi/2\).
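Using the nine control points with weights alternating between 1 and √2/2, and a clamped knot vector with doubled interior knots (rescaled here to 0..4, since only the ratios of knot differences matter), a sketch of direct evaluation confirms that every sampled point lies on the unit circle:

```python
import math

def basis(i, n, u, k):
    """Cox-de Boor recursion for the B-spline basis function N_{i,n}(u)."""
    if n == 0:
        return 1.0 if k[i] <= u < k[i + 1] else 0.0
    f = 0.0 if k[i + n] == k[i] else (u - k[i]) / (k[i + n] - k[i])
    g = 0.0 if k[i + n + 1] == k[i + 1] else (k[i + n + 1] - u) / (k[i + n + 1] - k[i + 1])
    return f * basis(i, n - 1, u, k) + g * basis(i + 1, n - 1, u, k)

def nurbs_point(u, ctrl, w, knots, degree):
    """Evaluate the rational curve C(u) = sum R_i(u) P_i at parameter u."""
    num_x = num_y = den = 0.0
    for i in range(len(ctrl)):
        b = basis(i, degree, u, knots) * w[i]
        num_x += b * ctrl[i][0]
        num_y += b * ctrl[i][1]
        den += b
    return num_x / den, num_y / den

s = math.sqrt(2) / 2
ctrl = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0)]
w = [1, s, 1, s, 1, s, 1, s, 1]
knots = [0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 4]  # rescaled from multiples of pi/2

for u in [0.0, 0.5, 1.0, 1.75, 2.5, 3.25, 3.9]:
    x, y = nurbs_point(u, ctrl, w, knots, 2)
    print(round(x * x + y * y, 9))  # 1.0 every time: the point is on the circle
```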
References
• Les Piegl & Wayne Tiller: The NURBS Book, Springer-Verlag 1995–1997 (2nd ed.). The main reference for Bézier, B-spline and NURBS; chapters on mathematical representation and construction of curves and surfaces, interpolation, shape modification, programming concepts.
• Dr. Thomas Sederberg, BYU NURBS, http://cagd.cs.byu.edu/~557/text/ch6.pdf
• Dr. Lyle Ramshaw. Blossoming: A connect-the-dots approach to splines, Research Report 19, Compaq Systems Research Center, Palo Alto, CA, June 1987, http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-19.pdf
• David F. Rogers: An Introduction to NURBS with Historical Perspective, Morgan Kaufmann Publishers 2001. Good elementary book for NURBS and related issues.
• Gershenfeld, Neil A. The Nature of Mathematical Modeling. Cambridge University Press, 1999.
Notes
[1] Foley, van Dam, Feiner & Hughes: Computer Graphics: Principles and Practice, section 11.2, Addison-Wesley 1996 (2nd ed.).
[2] David F. Rogers: An Introduction to NURBS with Historical Perspective, section 7.1
[3] Gershenfeld: The Nature of Mathematical Modeling, page 141, Cambridge University Press 1999
[4] Les Piegl & Wayne Tiller: The NURBS Book, chapter 2, sec. 2
[5] Les Piegl & Wayne Tiller: The NURBS Book, chapter 4, sec. 2
[6] Les Piegl & Wayne Tiller: The NURBS Book, chapter 4, sec. 4
[7] Les Piegl & Wayne Tiller: The NURBS Book, chapter 5
[8] L. Piegl, Modifying the shape of rational B-splines. Part 1: curves, Computer-Aided Design, Volume 21, Issue 8, October 1989, Pages 509–518, ISSN 0010-4485, http://dx.doi.org/10.1016/0010-4485(89)90059-6.
External links
• Clear explanation of NURBS for non-experts (http://www.rw-designer.com/NURBS)
• Interactive NURBS demo (http://geometrie.foretnik.net/files/NURBS-en.swf)
• About Nonuniform Rational B-Splines - NURBS (http://www.cs.wpi.edu/~matt/courses/cs563/talks/nurbs.html)
• An Interactive Introduction to Splines (http://ibiblio.org/e-notes/Splines/Intro.htm)
• http://www.cs.bris.ac.uk/Teaching/Resources/COMS30115/all.pdf
• http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0405/DONAVANIK/bezier.html
• http://mathcs.holycross.edu/~croyden/csci343/notes.html (Lecture 33: Bézier Curves, Splines)
• http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/notes.html
• A free software package for handling NURBS curves, surfaces and volumes (http://octave.sourceforge.net/nurbs) in Octave and Matlab
Nonobtuse mesh
A nonobtuse triangle mesh is composed of a set of triangles in which every angle is less than or equal to 90°; we call these triangles nonobtuse triangles. If each (triangle) face angle is strictly less than 90°, then the triangle mesh is said to be acute. The immediate benefits of having a nonobtuse or acute mesh include more efficient and more accurate geodesic computation on meshes using fast marching, and guaranteed validity for planar mesh embeddings via discrete harmonic maps. The first guaranteed nonobtuse mesh generation in 3D was introduced in the Eurographics Symposium on Geometry Processing [1] 2006 by Li [2] and Zhang [3].
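The defining property is easy to test per triangle: an interior angle exceeds 90° exactly when the dot product of the two edge vectors leaving that vertex is negative. A minimal check:

```python
def is_nonobtuse(tri):
    """Check that every interior angle of a planar triangle is <= 90 degrees.

    At each vertex, the angle is obtuse iff the dot product of the two
    edge vectors leaving that vertex is negative."""
    for i in range(3):
        a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        u = (b[0] - a[0], b[1] - a[1])
        v = (c[0] - a[0], c[1] - a[1])
        if u[0] * v[0] + u[1] * v[1] < 0:  # angle at vertex a exceeds 90 degrees
            return False
    return True

print(is_nonobtuse([(0, 0), (1, 0), (0, 1)]))  # True: a right triangle is nonobtuse
print(is_nonobtuse([(0, 0), (4, 0), (3, 1)]))  # False: the angle at (3, 1) is obtuse
```

A whole mesh is nonobtuse when this holds for every face.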
References
• Nonobtuse Remeshing and Mesh Decimation [4]
• Guaranteed Nonobtuse Meshes via Constrained Optimizations [5]
References
[1] http://www.geometryprocessing.org/
[2] http://www.cs.sfu.ca/~ysl/personal/
[3] http://www.cs.sfu.ca/~haoz/
[4] http://www.cs.sfu.ca/~ysl/personal/publication/sgp06_electronic.pdf
[5] http://www.cs.sfu.ca/~ysl/personal/publication/TR-CMPT2006-13.pdf
Normal (geometry) In geometry, a normal is an object such as a line or vector that is perpendicular to a given object. For example, in the two-dimensional case, the normal line to a curve at a given point is the line perpendicular to the tangent line to the curve at the point. In the three-dimensional case a surface normal, or simply normal, to a surface at a point P is a vector that is perpendicular to the tangent plane to that surface at P. The word "normal" is also used as an adjective: a line normal to a plane, the normal component of a force, the normal vector, etc. The concept of normality generalizes to orthogonality. The concept has been generalized to differentiable manifolds of arbitrary dimension embedded in a Euclidean space. The normal vector space or normal space of a manifold at a point P is the set of the vectors which are orthogonal to the tangent space at P. In the case of differential curves, the curvature vector is a normal vector of special interest.
A polygon and two of its normal vectors
The normal is often used in computer graphics to determine a surface's orientation toward a light source for flat shading, or the orientation of each of the corners (vertices) to mimic a curved surface with Phong shading.
Normal to surfaces in 3D space
Calculating a surface normal
For a convex polygon (such as a triangle), a surface normal can be calculated as the vector cross product of two (non-parallel) edges of the polygon.
For a plane given by the equation ax + by + cz + d = 0, the vector n = (a, b, c) is a normal.
For a plane given by the parametric equation r = a + αb + βc, where a is a point on the plane and b and c are (non-parallel) vectors lying on the plane, the normal to the plane is a vector normal to both b and c, which can be found as the cross product b × c.
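The edge cross-product construction can be sketched in a few lines of Python (the vertex ordering determines which side of the triangle the normal points to):

```python
def surface_normal(p0, p1, p2):
    """Unit normal of a triangle via the cross product of two edges.

    Counter-clockwise vertex order (seen from 'outside') gives an
    outward-pointing normal by the right-hand rule.
    """
    u = [p1[i] - p0[i] for i in range(3)]   # edge p0 -> p1
    v = [p2[i] - p0[i] for i in range(3)]   # edge p0 -> p2
    n = [u[1] * v[2] - u[2] * v[1],         # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(c * c for c in n) ** 0.5   # zero only for degenerate triangles
    return [c / length for c in n]
```

For a triangle in the xy-plane with counter-clockwise vertices, this yields the +z unit vector.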
A normal to a surface at a point is the same as a normal to the tangent plane to that surface at that point.
For a hyperplane in n+1 dimensions, given by the equation r = a0 + α1a1 + ... + αnan, where a0 is a point on the hyperplane and ai for i = 1, ..., n are non-parallel vectors lying on the hyperplane, a normal to the hyperplane is any vector in the null space of the matrix A = (a1, ..., an). That is, any vector orthogonal to all in-plane vectors is by definition a surface normal.
If a (possibly non-flat) surface S is parameterized by a system of curvilinear coordinates x(s, t), with s and t real variables, then a normal is given by the cross product of the partial derivatives: n = ∂x/∂s × ∂x/∂t.
If a surface S is given implicitly as the set of points (x, y, z) satisfying F(x, y, z) = 0, then a normal at a point (x, y, z) on the surface is given by the gradient n = ∇F(x, y, z), since the gradient at any point is perpendicular to the level set, and S (the surface) is a level set of F.
For a surface S given explicitly as a function f(x, y) of the independent variables x and y (e.g., z = f(x, y)), its normal can be found in at least two equivalent ways. The first is to obtain its implicit form F(x, y, z) = z − f(x, y) = 0, from which the normal follows readily as the gradient n = ∇F = (−∂f/∂x, −∂f/∂y, 1). (Notice that the implicit form could be defined alternatively as F(x, y, z) = f(x, y) − z; these two forms correspond to the interpretation of the surface being oriented upwards or downwards, respectively, as a consequence of the difference in the sign of the partial derivative ∂F/∂z.) The second way of obtaining the normal follows directly from the gradient of the explicit form, ∇f = (∂f/∂x, ∂f/∂y); by inspection, n = ∇f − k, where k is the upward unit vector. Note that this is equal to n = (∂f/∂x) i + (∂f/∂y) j − k, where i and j are the x and y unit vectors.
If a surface does not have a tangent plane at a point, it does not have a normal at that point either. For example, a cone does not have a normal at its tip, nor does it have a normal along the edge of its base. However, the normal to the cone is defined almost everywhere. In general, it is possible to define a normal almost everywhere for a surface that is Lipschitz continuous.
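As an illustrative sketch (Python, using central-difference approximations for the partial derivatives of a caller-supplied function f), the upward-oriented normal of z = f(x, y) can be computed from the gradient of F(x, y, z) = z − f(x, y):

```python
def explicit_surface_normal(f, x, y, h=1e-6):
    """Unit normal to the surface z = f(x, y) at (x, y).

    Uses n = (-df/dx, -df/dy, 1), the gradient of F = z - f(x, y),
    with the partial derivatives estimated by central differences.
    """
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)   # df/dx
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)   # df/dy
    n = (-fx, -fy, 1.0)                          # upward-oriented normal
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)
```

At the bottom of the paraboloid f(x, y) = x² + y², for instance, both partial derivatives vanish and the normal is the +z unit vector.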
Uniqueness of the normal
A normal to a surface does not have a unique direction; the vector pointing in the opposite direction of a surface normal is also a surface normal. For a surface which is the topological boundary of a set in three dimensions, one can distinguish between the inward-pointing normal and the outward-pointing normal, which can help define the normal in a unique way. For an oriented surface, the surface normal is usually determined by the right-hand rule. If the normal is constructed as the cross product of tangent vectors (as described in the text above), it is a pseudovector.
A vector field of normals to a surface
Transforming normals
When applying a transform to a surface it is sometimes convenient to derive normals for the resulting surface from the original normals. All points P on the tangent plane are transformed to P′. We want to find n′ perpendicular to the transformed tangent plane. Let t be a vector on the tangent plane and Ml be the upper 3×3 part of the transformation matrix (the translation part of a transformation does not apply to normal or tangent vectors). Since n is perpendicular to t, we have nᵀt = 0. Writing the transformed tangent as t′ = Ml t and requiring n′ᵀt′ = 0 gives n′ = (Ml⁻¹)ᵀ n, since then n′ᵀt′ = ((Ml⁻¹)ᵀ n)ᵀ(Ml t) = nᵀMl⁻¹Ml t = nᵀt = 0.
So use the inverse transpose of the linear transformation (the upper 3x3 matrix) when transforming surface normals. Also note that the inverse transpose is equal to the original matrix if the matrix is orthonormal, i.e. purely rotational with no scaling or shearing.
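A minimal sketch of this rule in pure Python: multiplying the normal by the cofactor matrix of M is equivalent to using the inverse transpose, because the cofactor matrix equals det(M)·(M⁻¹)ᵀ and the determinant factor drops out when the result is renormalised.

```python
def transform_normal(M, n):
    """Transform normal n by the inverse transpose of the 3x3 matrix M.

    Implemented via the cofactor matrix C of M, which satisfies
    C = det(M) * inverse-transpose(M); the det(M) scale disappears
    after renormalisation.
    """
    C = [[M[1][1]*M[2][2] - M[1][2]*M[2][1],
          M[1][2]*M[2][0] - M[1][0]*M[2][2],
          M[1][0]*M[2][1] - M[1][1]*M[2][0]],
         [M[0][2]*M[2][1] - M[0][1]*M[2][2],
          M[0][0]*M[2][2] - M[0][2]*M[2][0],
          M[0][1]*M[2][0] - M[0][0]*M[2][1]],
         [M[0][1]*M[1][2] - M[0][2]*M[1][1],
          M[0][2]*M[1][0] - M[0][0]*M[1][2],
          M[0][0]*M[1][1] - M[0][1]*M[1][0]]]
    m = [sum(C[i][j] * n[j] for j in range(3)) for i in range(3)]
    length = sum(x * x for x in m) ** 0.5
    return [x / length for x in m]
```

With a non-uniform scale such as diag(2, 1, 1), the transformed normal stays perpendicular to the transformed tangent vectors, which a plain multiplication by M would not guarantee.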
Hypersurfaces in n-dimensional space
The definition of a normal to a surface in three-dimensional space can be extended to (n − 1)-dimensional hypersurfaces in an n-dimensional space. A hypersurface may be locally defined implicitly as the set of points (x1, ..., xn) satisfying an equation F(x1, ..., xn) = 0, where F is a given scalar function. If F is continuously differentiable then the hypersurface is a differentiable manifold in the neighbourhood of the points where the gradient is not null. At these points the normal vector space has dimension one and is generated by the gradient ∇F(x1, ..., xn).
The normal line at a point of the hypersurface is defined only if the gradient is not null. It is the line passing through the point and having the gradient as direction.
Varieties defined by implicit equations in n-dimensional space
A differential variety defined by implicit equations in n-dimensional space is the set of common zeros of a finite set of differentiable functions in n variables: f1(x1, ..., xn) = 0, ..., fk(x1, ..., xn) = 0.
The Jacobian matrix of the variety is the k×n matrix whose i-th row is the gradient of fi. By the implicit function theorem, the variety is a manifold in the neighborhood of any point where the Jacobian matrix has rank k. At such a point P, the normal vector space is the vector space generated by the values at P of the gradient vectors of the fi. In other words, a variety is defined as the intersection of k hypersurfaces, and the normal vector space at a point is the vector space generated by the normal vectors of the hypersurfaces at the point. The normal (affine) space at a point P of the variety is the affine subspace passing through P and generated by the normal vector space at P. These definitions may be extended verbatim to the points where the variety is not a manifold.
Example
Let V be the variety defined in 3-dimensional space by the equations z = 0 and x y = 0.
This variety is the union of the x-axis and the y-axis. At a point (a, 0, 0) where a≠0, the rows of the Jacobian matrix are (0, 0, 1) and (0, a, 0). Thus the normal affine space is the plane of equation x=a. Similarly, if b≠0, the normal plane at (0, b, 0) is the plane of equation y=b. At the point (0, 0, 0) the rows of the Jacobian matrix are (0, 0, 1) and (0,0,0). Thus the normal vector space and the normal affine space have dimension 1 and the normal affine space is the z-axis.
Uses
• Surface normals are essential in defining surface integrals of vector fields.
• Surface normals are commonly used in 3D computer graphics for lighting calculations; see Lambert's cosine law.
• Surface normals are often adjusted in 3D computer graphics by normal mapping.
• Render layers containing surface normal information may be used in digital compositing to change the apparent lighting of rendered elements.
Normal in geometric optics The normal is the line perpendicular to the surface of an optical medium. In reflection of light, the angle of incidence and the angle of reflection are respectively the angle between the normal and the incident ray and the angle between the normal and the reflected ray.
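For illustration, the reflected ray direction follows from the normal by the standard identity r = d − 2(d·n)n, for an incident direction d and unit normal n; this makes the angle of reflection equal the angle of incidence. A Python sketch:

```python
def reflect(incident, normal):
    """Reflect direction `incident` about the unit surface `normal`.

    Implements r = d - 2(d . n)n, so the angle between r and n equals
    the angle between -d and n (angle of incidence = angle of reflection).
    """
    d = sum(i * nc for i, nc in zip(incident, normal))   # d . n
    return tuple(i - 2 * d * nc for i, nc in zip(incident, normal))
```

A ray travelling down at 45° onto a floor whose normal points up bounces to travel up at 45°: reflect((1, -1, 0), (0, 1, 0)) gives (1, 1, 0).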
External links
• An explanation of normal vectors (http://msdn.microsoft.com/en-us/library/bb324491(VS.85).aspx) from Microsoft's MSDN
• Clear pseudocode for calculating a surface normal (http://www.opengl.org/wiki/Calculating_a_Surface_Normal) from either a triangle or polygon.
Diagram of specular reflection
Painter's algorithm
The painter's algorithm, also known as a priority fill, is one of the simplest solutions to the visibility problem in 3D computer graphics. When projecting a 3D scene onto a 2D plane, it is necessary at some point to decide which polygons are visible and which are hidden. The name "painter's algorithm" refers to the technique employed by many painters of painting distant parts of a scene before parts which are nearer, thereby covering some areas of the distant parts. The painter's algorithm sorts all the polygons in a scene by their depth and then paints them in this order, farthest to closest. It will paint over the parts that are normally not visible — thus solving the visibility problem — at the cost of having painted invisible areas of distant objects.
The distant mountains are painted first, followed by the closer meadows; finally, the closest objects in this scene, the trees, are painted.
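The sort-then-paint procedure can be sketched in a few lines of Python (a toy scene with per-polygon average depths; real implementations sort and rasterise actual polygon geometry):

```python
def painters_algorithm(polygons):
    """Return polygons in back-to-front paint order.

    Each polygon is a dict with a 'depth' (distance from the viewer,
    larger = farther) and a 'color'. Painting in the returned order lets
    later (nearer) polygons overdraw earlier (farther) ones.
    """
    return sorted(polygons, key=lambda poly: poly["depth"], reverse=True)

# Toy scene matching the mountains/meadows/trees example above.
scene = [
    {"color": "tree",     "depth": 1.0},
    {"color": "mountain", "depth": 100.0},
    {"color": "meadow",   "depth": 10.0},
]
order = [p["color"] for p in painters_algorithm(scene)]
# order == ["mountain", "meadow", "tree"]
```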
The algorithm can fail in some cases, including cyclic overlap or piercing polygons. In the case of cyclic overlap, as shown in the figure to the right, polygons A, B, and C overlap each other in such a way that it is impossible to determine which polygon is above the others. In this case, the offending polygons must be cut to allow sorting. Newell's algorithm, proposed in 1972, provides a method for cutting such polygons. Numerous methods have also been proposed in the field of computational geometry. The case of piercing polygons arises when one polygon intersects another. As with cyclic overlap, this problem may be resolved by cutting the offending polygons.
Overlapping polygons can cause the algorithm to fail
In basic implementations, the painter's algorithm can be inefficient. It forces the system to render each point on every polygon in the visible set, even if that polygon is occluded in the finished scene. This means that, for detailed scenes, the painter's algorithm can overly tax the computer hardware. A reverse painter's algorithm is sometimes used, in which objects nearest to the viewer are painted first — with the rule that paint must never be applied to parts of the image that are already painted. In a computer graphics system, this can be very efficient, since it is not necessary to calculate the colors (using lighting, texturing and such) for parts of the more distant scene that are hidden by nearby objects. However, the reverse algorithm suffers from many of the same problems as the standard version. These and other flaws with the algorithm led to the development of Z-buffer techniques, which can be viewed as a development of the painter's algorithm that resolves depth conflicts on a pixel-by-pixel basis, reducing the need for a depth-based rendering order. Even in such systems, a variant of the painter's algorithm is sometimes employed.
As Z-buffer implementations generally rely on fixed-precision depth-buffer registers implemented in hardware, there is scope for visibility problems due to rounding error. These are overlaps or gaps at joins between polygons. To avoid this, some graphics engine implementations "overrender"[citation needed], drawing the affected edges of both polygons in the order given by painter's algorithm. This means that some pixels are actually drawn twice (as in the full painter's algorithm) but this happens on only small parts of the image and has a negligible performance effect.
References • Foley, James; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1990). Computer Graphics: Principles and Practice. Reading, MA, USA: Addison-Wesley. p. 1174. ISBN 0-201-12110-7.
Parallax barrier A parallax barrier is a device placed in front of an image source, such as a liquid crystal display, to allow it to show a stereoscopic image or multiscopic image without the need for the viewer to wear 3D glasses. Placed in front of the normal LCD, it consists of a layer of material with a series of precision slits, allowing each eye to see a different set of pixels, so creating a sense of depth through parallax in an effect similar to what lenticular printing produces for printed products and lenticular lenses for other displays. A disadvantage of the technology is that the viewer must be positioned in a well-defined spot to experience the 3D effect. Another disadvantage is that the effective horizontal pixel count viewable for each eye is reduced by one half; however, there is research attempting to improve these limitations.
History
The principle of the parallax barrier was independently invented by Auguste Berthier, who published first but produced no practical results,[1] and by Frederic E. Ives, who made and exhibited the first known functional autostereoscopic image in 1901. About two years later, Ives began selling specimen images as novelties, the first known commercial use. Nearly a century later, Sharp developed the electronic flat-panel application of this old technology to commercialization, briefly selling two laptops with the world's only 3D LCD screens. These displays are no longer available from Sharp but are still being manufactured and further developed by other companies such as Tridelity and SpatialView. Similarly, Hitachi has released the first 3D mobile phone for the Japanese market, distributed by KDDI. In 2009, Fujifilm released the Fujifilm FinePix Real 3D W1 digital camera, which features a built-in autostereoscopic LCD display measuring 2.8" diagonally. Nintendo has also implemented this technology on its latest portable gaming console, the Nintendo 3DS.
Comparison of parallax-barrier and lenticular autostereoscopic displays. Note: The figure is not to scale. Lenticules can be modified and more pixels can be used to make automultiscopic displays.
Applications In addition to films and computer games, the technique has found uses in areas such as molecular modelling[citation needed] and airport security. It is also being used for the navigation system in the 2010-model Range Rover, allowing the driver to view (for example) GPS directions, while a passenger watches a movie. It is also used in the Nintendo 3DS hand-held game console and LG's Optimus 3D and Thrill smartphones, HTC's EVO 3D[2] as well as Sharp's Galapagos Android SmartPhone series. The technology is harder to apply for 3D television sets, because of the requirement for a wide range of possible viewing angles. A Toshiba 21-inch 3D display uses parallax barrier technology with 9 pairs of images, to cover a viewing angle of 30 degrees.
Parallax barrier design
The slits in the parallax barrier allow the viewer to see only left image pixels from the position of their left eye, and only right image pixels from the right eye. When choosing the geometry of the parallax barrier, the important parameters that need to be optimised are: the pixel–barrier separation d, the parallax barrier pitch f, the pixel aperture a, and the parallax barrier slit width b.
Parallax barrier – pixel separation
The closer the parallax barrier is to the pixels, the wider the angle of separation between the left and right images. For a stereoscopic display the left and right images must hit the left and right eyes, which means the views must be separated by only a few degrees. The pixel–barrier separation d for this case can be derived as follows.
From Snell's law: n sin θ₁ = sin θ₂, where θ₁ is the ray angle inside the display glass (refractive index n) and θ₂ is the angle in air.
For small angles: sin θ₁ ≈ tan θ₁ = p/d and sin θ₂ ≈ tan θ₂ = e/z, where p is the pixel pitch, e is the eye separation, and z is the viewing distance.
Therefore: d = n p z / e.
For a typical auto-stereoscopic display of pixel pitch 65 micrometers, eye separation 63 mm, viewing distance 30 cm, and refractive index 1.52, the pixel–barrier separation needs to be about 470 micrometers.
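The worked example can be checked directly (a Python sketch of the small-angle result d = n·p·z/e, with the quantities from the paragraph above expressed in metres):

```python
def barrier_pixel_separation(pixel_pitch, eye_separation,
                             viewing_distance, refractive_index):
    """Pixel-barrier separation d = n * p * z / e (small-angle result).

    All lengths in metres: p = pixel_pitch, e = eye_separation,
    z = viewing_distance, n = refractive_index of the display glass.
    """
    return (refractive_index * pixel_pitch * viewing_distance
            / eye_separation)

# Typical display: p = 65 um, e = 63 mm, z = 30 cm, n = 1.52
d = barrier_pixel_separation(65e-6, 63e-3, 0.30, 1.52)
# d is about 4.7e-4 m, i.e. roughly 470 micrometers, as in the text
```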
A cross sectional diagram of a parallax barrier, with all its important dimensions labelled.
Parallax barrier pitch
The pitch of a parallax barrier should ideally be roughly twice the pitch of the pixels; however, the optimum design should be slightly less than this. This perturbation to the barrier pitch compensates for the fact that the edges of a display are viewed at a different angle to that of the centre; it enables the left and right images to target the eyes appropriately from all positions on the screen.
Optimum pixel aperture and barrier slit width
In a parallax barrier system for a high resolution display, the performance (brightness and crosstalk) can be simulated by Fresnel diffraction theory. From these simulations, the following can be deduced. If the slit width is small, light passing the slits is diffracted heavily, causing crosstalk; the brightness of the display is also reduced. If the slit width is large, light passing the slit does not diffract as much, but the wider slits create crosstalk due to geometric ray paths, so the design still suffers crosstalk; the brightness of the display is increased. Therefore the best slit width is given by a trade-off between crosstalk and brightness.
Barrier position Note that the parallax barrier may also be placed behind the LCD pixels. In this case, light from a slit passes the left image pixel in the left direction, and vice versa. This produces the same basic effect as a front parallax barrier.
Techniques for switching
a). If the parallax barrier had exactly twice the pitch of the pixels, it would be aligned in synchronisation with the pixels across the whole of the display. The left and right views would be emitted at the same angles all across the display, and it can be seen that the viewer's left eye does not receive the left image from all points on the screen: the display does not work well. b). If the barrier pitch is modified, the views can be made to converge, such that the viewer sees the correct images from all points on the screen. c). Shows the calculation which determines the pitch of the barrier that is needed; p is the pixel pitch, d is the pixel–barrier separation, f is the barrier pitch.
In a parallax barrier system, the left eye sees only half the pixels (that is to say, the left image pixels) and the same is true for the right eye. Therefore the resolution of the display is reduced, and so it can be advantageous to make a parallax barrier that can be switched on when 3D is needed and off when a 2D image is required. One method of switching the parallax barrier on and off is to form it from a liquid crystal material; the parallax barrier can then be created in a way similar to how an image is formed in a liquid crystal display.
Time multiplexing to increase resolution
Time multiplexing provides a means of increasing the resolution of a parallax barrier system. In the design shown, each eye is able to see the full resolution of the panel.
An autostereoscopic display that is switchable between 2D and 3D. In 3D mode the parallax barrier is formed with an LC cell, in a similar way to how an image is created on an LCD. In 2D mode the LC cell is switched into a transparent state so that no parallax barrier exists. In this case the light from the LCD pixels can go in any direction and the display acts like a normal 2D LCD.
The design requires a display that can switch fast enough to avoid image flicker as the images swap each frame.
Tracking barriers for increased viewing freedom
In a standard parallax barrier system the viewer must position themselves in an appropriate position so that the left and right eye views can be seen by their left and right eyes respectively. In a "tracked 3D system" the viewing freedom can be increased considerably by tracking the position of the user and adjusting the parallax barrier so that the left and right views are always directed to the user's eyes correctly. Identification of the user's viewing angle can be done by using a forward-facing camera above the display and image-processing software that can recognise the position of the user's face. Adjustment of the angle at which the left and right views are projected can be done by shifting the parallax barrier with respect to the pixels (for example, mechanically or electronically).
Crosstalk in a parallax barrier system
A diagram showing how 3D can be created using a time multiplexed parallax barrier. In the first time cycle, the slits in the barrier are arranged in a conventional way for a 3D display, and the left and right eyes see the left and right eye pixels. In the next time cycle, the positions of the slits are changed (possible because each slit is formed with an LC shutter). In the new barrier position, the right eye can see the pixels that were hidden in the previous time cycle. These uncovered pixels are set to show the right image (rather than the left image which they showed in the previous time cycle). The same is true for the left eye. This cycling between the two positions of the barrier, and the interlacing pattern, enables both eyes to see the correct image from half the pixels in the first time cycle, and the correct image from the other half of the pixels in the other time cycle. The cycle repeats every 1/50th of a second, so that the switching is not noticeable to the user; instead, the user has the impression that each eye is seeing an image from all the pixels. Consequently the display appears to have full resolution.
Crosstalk is the interference that exists between the left and right views in a 3D display. In a display with high crosstalk, the left eye would be able to see the right-eye image faintly in the background. The perception of crosstalk in stereoscopic displays has been studied widely. It is widely acknowledged that the presence of high levels of crosstalk in a stereoscopic display is detrimental. The effects of crosstalk in an image include: ghosting and loss of contrast, loss of 3D effect and depth resolution, and viewer discomfort. The visibility of crosstalk (ghosting) increases with increasing contrast and increasing binocular parallax of the image. For example, a stereoscopic image with high contrast will exhibit more ghosting on a particular stereoscopic display than will an image with low contrast.
Measurement A technique to quantify the level of crosstalk from a 3D display involves measuring the percentage of light that deviates from one view to the other. The crosstalk in a typical parallax-barrier based 3D system at the best eye position might be 3%. Results of subjective tests carried out to determine the image quality of 3D images conclude that for high quality 3D, crosstalk should be 'no greater than around 1 to 2%'.
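A sketch of such a measurement calculation (Python; the three luminance readings are hypothetical example values, and the display's imperfect black level is subtracted out so it does not inflate the crosstalk ratio, as described in the figure caption):

```python
def crosstalk_percent(leak_lum, signal_lum, black_lum):
    """Crosstalk as the percentage of light leaking from the other view.

    leak_lum:   luminance at the left-eye position with only the right image lit
    signal_lum: luminance at the left-eye position with only the left image lit
    black_lum:  luminance with both views showing black (display black level)
    The black level is subtracted from both terms before taking the ratio.
    """
    return 100.0 * (leak_lum - black_lum) / (signal_lum - black_lum)

# Hypothetical readings: 3.5 cd/m^2 leakage, 101 cd/m^2 signal,
# 1 cd/m^2 black level -> crosstalk_percent(3.5, 101.0, 1.0) gives 2.5 (%)
```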
Causes of crosstalk and counter measures
Diffraction can be a major cause of crosstalk. Theoretical simulations of diffraction have been found to be a good predictor of experimental crosstalk measurements in emulsion parallax barrier systems. These simulations predict that the amount of crosstalk caused by the parallax barrier will be highly dependent on the sharpness of the edges of the slits. For example, if the transmission of the barrier goes from opaque to transparent sharply as it moves from barrier to slit, then this produces a wide diffraction pattern and consequently more crosstalk. If the transition is smoother, then the diffraction will not spread so widely and less crosstalk will be produced. This prediction is consistent with experimental results for a slightly soft-edged barrier (whose pitch was 182 micrometers, slit width was 48 micrometers, and transition between opaque and transmissive occurred over a region of about 3 micrometers). The slightly soft-edged barrier has a crosstalk of 2.3%, which is slightly lower than the crosstalk from a harder-edged barrier, which was about 2.7%. The diffraction simulations also suggest that if the parallax barrier slit edges had a transmission that decreases over a 10 micrometer region, then crosstalk could become as low as 0.1%. Image processing is an alternative crosstalk counter measure.
Measurement of crosstalk in 3D displays. Crosstalk is the percentage of light from one view leaking to the other view. The measurements and calculations above show how crosstalk is defined when measuring crosstalk in the left image. Diagrams a) sketch the intensity measurements that need to be made for different outputs from the 3D display. Table b) describes their purpose. Equation c) is used to derive the crosstalk. It is the ratio of the light leakage from the right image into the left image, but note that the imperfect black level of the LCD is subtracted out from the result so that it does not change the crosstalk ratio.
The figure shows the principle behind crosstalk correction.
There are three main types of autostereoscopic displays with a parallax barrier:
• Early experimental prototypes simply put a series of precision slits on a regular LCD screen to see if the approach had any potential.
• Pros: easily attachable
• Cons: lowest image quality
• The first fully developed parallax barrier displays had precision slits, as one of their optical components, over the pixels. This blocks the image from one eye and shows it to the other.
• Pros: cheaper for mass production
• Cons: least efficient with backlight; needs twice as much backlight as normal displays; small viewing angles
The principle of crosstalk correction.
• The newest and most convenient design, used in commercial products like the Nintendo 3DS, HTC Evo 3D, and LG Optimus 3D, does not have the physical parallax barrier in front of the pixels but behind the pixels, in front of the backlight. It thus sends not different images to the two eyes but different light to each. This allows the two channels of light to pass through the pixels, giving the best image quality.
• Pros: clear image; largest viewing angle
• Cons: more expensive for mass production; uses 20–25% more backlight than normal displays
References
[1] Berthier, Auguste. (May 16 and 23, 1896). "Images stéréoscopiques de grand format" (in French). Cosmos 34 (590, 591): 205–210, 227–233 (see 229–231)
[2] HTC EVO 3D (http://www.gsmarena.com/htc_evo_3d-3895.php), from GSMArena
External links • Video explaining how the parallax barrier works (http://vimeo.com/44261419) • Principle of autostereo display (http://mrl.nyu.edu/~perlin/experiments/autostereo/) - Java applet illustrating the idea
Parallel rendering Parallel rendering (or Distributed rendering) is the application of parallel programming to the computational domain of computer graphics. Rendering graphics can require massive computational resources for complex scenes that arise in scientific visualization, medical visualization, CAD applications, and virtual reality. Rendering is an embarrassingly parallel workload in multiple domains (e.g., pixels, objects, frames) and thus has been the subject of much research.
Workload Distribution There are two, often competing, reasons for using parallel rendering. Performance scaling allows frames to be rendered more quickly while data scaling allows larger data sets to be visualized. Different methods of distributing the workload tend to favor one type of scaling over the other. There can also be other advantages and disadvantages such as latency and load balancing issues. The three main options for primitives to distribute are entire frames, pixels, or objects (e.g. triangle meshes).
Frame distribution Each processing unit can render an entire frame from a different point of view or moment in time. The frames rendered from different points of view can improve image quality with anti-aliasing or add effects like depth-of-field and three dimensional display output. This approach allows for good performance scaling but no data scaling. When rendering sequential frames in parallel there will be a lag for interactive sessions. The lag between user input and the action being displayed is proportional to the number of sequential frames being rendered in parallel.
Pixel distribution Sets of pixels in the screen space can be distributed among processing units in what is often referred to as sort first rendering.[1] Distributing interlaced lines of pixels gives good load balancing but makes data scaling impossible. Distributing contiguous 2D tiles of pixels allows for data scaling by culling data with the view frustum. However, there is a data overhead from objects on frustum boundaries being replicated and data has to be loaded dynamically as the view point changes. Dynamic load balancing is also needed to maintain performance scaling.
Object distribution Distributing objects among processing units is often referred to as sort last rendering.[2] It provides good data scaling and can provide good performance scaling, but it requires the intermediate images from processing nodes to be alpha composited to create the final image. As the image resolution grows, the alpha compositing overhead also grows. A load balancing scheme is also needed to maintain performance regardless of the viewing conditions. This can be achieved by over partitioning the object space and assigning multiple pieces to each processing unit in a random fashion, however this increases the number of alpha compositing stages required to create the final image. Another option is to assign a contiguous block to each processing unit and update it dynamically, but this requires dynamic data loading.
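A toy sketch of the sort-last compositing step (Python; it assumes premultiplied-RGBA partial images and a single sequential near-to-far pass with the "over" operator, whereas real systems use parallel schemes such as binary swap):

```python
def composite_over(front, back):
    """'Over' operator for premultiplied RGBA pixels (r, g, b, a)."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    inv = 1.0 - fa                      # how much of the back layer shows
    return (fr + br * inv, fg + bg * inv, fb + bb * inv, fa + ba * inv)

def sort_last_composite(partial_images, width):
    """Merge per-node partial renderings into one final scanline.

    partial_images: list of (depth, pixels) pairs, one per rendering node,
    where pixels is a list of `width` premultiplied-RGBA tuples. Layers
    are accumulated nearest-first, so nearer nodes occlude farther ones.
    """
    ordered = sorted(partial_images, key=lambda item: item[0])  # near to far
    result = [(0.0, 0.0, 0.0, 0.0)] * width                     # empty image
    for _, pixels in ordered:
        result = [composite_over(result[i], pixels[i]) for i in range(width)]
    return result
```

In this sketch an opaque pixel from a near node hides the far node's pixel, while pixels the near node left empty show the far node through.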
Hybrid distribution The different types of distributions can be combined in a number of fashions. A couple of sequential frames can be rendered in parallel while also rendering each of those individual frames in parallel using a pixel or object distribution. Object distributions can try to minimize their overlap in screen space in order to reduce alpha compositing costs, or even use a pixel distribution to render portions of the object space.
Open source applications
The open source software package Chromium (http://chromium.sourceforge.net) provides a parallel rendering mechanism for existing applications. It intercepts the OpenGL calls and processes them, typically to send them to multiple rendering units driving a display wall.
Equalizer (http://www.equalizergraphics.com) is an open source rendering framework and resource management system for multipipe applications. Equalizer provides an API to write parallel, scalable visualization applications which are configured at run-time by a resource server.
OpenSG (http://www.opensg.org) is an open source scenegraph system that provides parallel rendering capabilities, especially on clusters. It hides the complexity of parallel multi-threaded and clustered applications and supports sort-first as well as sort-last rendering.
References
[1] Molnar, S., M. Cox, D. Ellsworth, and H. Fuchs. "A Sorting Classification of Parallel Rendering." IEEE Computer Graphics and Applications, pages 23–32, July 1994.
[2] Molnar, S., M. Cox, D. Ellsworth, and H. Fuchs. "A Sorting Classification of Parallel Rendering." IEEE Computer Graphics and Applications, pages 23–32, July 1994.
External links • Cluster Rendering at Princeton University (http://www.cs.princeton.edu/~rudro/cluster-rendering/)
Particle system
The term particle system refers to a computer graphics technique that uses a large number of very small sprites or other graphic objects to simulate certain kinds of "fuzzy" phenomena, which are otherwise very hard to reproduce with conventional rendering techniques — usually highly chaotic systems, natural phenomena, and/or processes caused by chemical reactions. Examples of such phenomena which are commonly replicated using particle systems include fire, explosions, smoke, moving water (such as a waterfall), sparks, falling leaves, clouds, fog, snow, dust, meteor tails, stars and galaxies, or abstract visual effects like glowing trails, magic spells, etc. — these use particles that fade out quickly and are then re-emitted from the effect's source. Another technique can be used for things that contain many strands — such as fur, hair, and grass — involving rendering an entire particle's lifetime at once, which can then be drawn and manipulated as a single strand of the material in question.
A particle system used to simulate a fire, created in 3dengfx.
Particle systems may be two-dimensional or three-dimensional.
Typical implementation
Typically a particle system's position and motion in 3D space are controlled by what is referred to as an emitter. The emitter acts as the source of the particles, and its location in 3D space determines where they are generated and whence they proceed. A regular 3D mesh object, such as a cube or a plane, can be used as an emitter. The emitter has attached to it a set of particle behavior parameters. These parameters can include the spawning rate (how many particles are generated per unit of time), the particles' initial velocity vector (the direction they are emitted upon creation), particle lifetime (the length of time each individual particle exists before disappearing), particle color, and many more. It is common for all or most of these parameters to be "fuzzy" — instead of a precise numeric value, the artist specifies
Ad hoc particle system used to simulate a galaxy, created in 3dengfx.
a central value and the degree of randomness allowable on either side of the center (e.g. the average particle's lifetime might be 50 frames ±20%). When using a mesh object as an emitter, the initial velocity vector is often set to be normal to the individual face(s) of the object, making the particles appear to "spray" directly from each face. A typical particle system's update loop (which is performed for each frame of animation) can be separated into two distinct stages: the parameter update/simulation stage and the rendering stage.
A particle system used to simulate a bomb explosion, created in particleIllusion.
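As a sketch of such a "fuzzy" parameter (a minimal Python example with hypothetical names), a value like the particle lifetime can be sampled as a central value plus a uniform random deviation:

```python
import random

def sample_fuzzy(center, spread_fraction):
    """Sample a 'fuzzy' particle parameter: a central value plus or minus
    a random deviation, e.g. a lifetime of 50 frames +/- 20%."""
    deviation = center * spread_fraction
    return random.uniform(center - deviation, center + deviation)

# A lifetime of 50 frames with 20% allowable randomness on either side:
lifetime = sample_fuzzy(50.0, 0.20)  # somewhere in [40, 60]
```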
Simulation stage During the simulation stage, the number of new particles that must be created is calculated from the spawning rate and the interval between updates, and each new particle is spawned at a specific position in 3D space based on the emitter's position and the spawning area specified. Each particle's parameters (velocity, color, etc.) are initialized according to the emitter's parameters. At each update, all existing particles are checked to see whether they have exceeded their lifetime, in which case they are removed from the simulation. Otherwise, the particles' positions and other characteristics are advanced based on a physical simulation, which can be as simple as translating their current position, or as complicated as performing physically accurate trajectory calculations that take into account external forces (gravity, friction, wind, etc.). It is common to perform collision detection between particles and specified 3D objects in the scene to make the particles bounce off or otherwise interact with obstacles in the environment. Collisions between particles are rarely used, as they are computationally expensive and not visually relevant for most simulations.
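The three steps above (spawn, cull expired particles, advance the survivors) can be sketched as follows. The emitter behavior, spawn rate, lifetimes, and force values are illustrative assumptions, and the integration is a plain Euler step rather than a physically accurate trajectory calculation:

```python
import random

GRAVITY = -9.8   # "gravitational" force in the negative Y direction
DT = 1.0 / 30.0  # update interval: one frame at 30 fps

class Particle:
    def __init__(self, position, velocity, lifetime):
        self.position = list(position)
        self.velocity = list(velocity)
        self.lifetime = lifetime  # seconds remaining before removal

def simulate(particles, emitter_pos, spawn_rate, dt=DT):
    """One simulation-stage update for a point emitter."""
    # 1. Spawn: particle count derives from spawning rate x update interval
    #    (fractional remainders are ignored in this sketch).
    for _ in range(round(spawn_rate * dt)):
        velocity = [random.uniform(-1, 1), random.uniform(2, 5), random.uniform(-1, 1)]
        particles.append(Particle(emitter_pos, velocity, lifetime=random.uniform(1, 3)))
    # 2. Cull: remove particles that have exceeded their lifetime.
    particles[:] = [p for p in particles if p.lifetime > 0]
    # 3. Advance: simple Euler integration under gravity.
    for p in particles:
        p.velocity[1] += GRAVITY * dt
        for axis in range(3):
            p.position[axis] += p.velocity[axis] * dt
        p.lifetime -= dt

particles = []
for _ in range(30):  # simulate one second of animation
    simulate(particles, emitter_pos=(0.0, 0.0, 0.0), spawn_rate=300)
```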
Rendering stage After the update is complete, each particle is rendered, usually in the form of a textured billboarded quad (i.e. a quadrilateral that is always facing the viewer). However, this is not required; in low-resolution or limited-processing-power environments a particle may be rendered as a single pixel. In off-line rendering, particles can be rendered as metaballs; isosurfaces computed from particle-metaballs make quite convincing liquids. Finally, 3D mesh objects can "stand in" for the particles: a snowstorm might consist of a single 3D snowflake mesh being duplicated and rotated to match the positions of thousands or millions of particles.
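Billboarding can be sketched as building each quad's corners from the camera's right and up unit vectors, which are assumed to be already known (e.g. from the view matrix):

```python
def billboard_quad(center, cam_right, cam_up, size):
    """Corners of a camera-facing (billboarded) quad for one particle.
    cam_right and cam_up are the camera's right and up unit vectors, so the
    quad always lies in a plane facing the viewer."""
    half = size / 2.0
    corners = []
    for sr, su in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
        corners.append(tuple(center[i] + half * (sr * cam_right[i] + su * cam_up[i])
                             for i in range(3)))
    return corners

# With the camera looking down -Z, right = +X and up = +Y:
quad = billboard_quad((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), size=2.0)
```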
"Snowflakes" versus "Hair" Particle systems can be either animated or static; that is, the lifetime of each particle can either be distributed over time or rendered all at once. The consequence of this distinction is similar to the difference between snowflakes and hair: animated particles are akin to snowflakes, which move around as distinct points in space, while static particles are akin to hair, which consists of a distinct number of curves. The term "particle system" itself often brings to mind only the animated aspect, which is commonly used to create moving particulate simulations such as sparks, rain, and fire. In these implementations, each frame of the animation contains each particle at a specific position in its life cycle, and each particle occupies a single point position in space. For effects such as fire or smoke that dissipate, each particle is given a fade-out time or fixed lifetime; effects such as snowstorms or rain instead usually terminate the lifetime of the particle once it passes out of a particular field of view.
However, if the entire life cycle of each particle is rendered simultaneously, the result is static particles — strands of material that show the particles' overall trajectory, rather than point particles. These strands can be used to simulate hair, fur, grass, and similar materials. The strands can be controlled with the same velocity vectors, force fields, spawning rates, and deflection parameters that animated particles obey. In addition, the rendered thickness of the strands can be controlled and in some implementations may be varied along the length of the strand. Different combinations of parameters can impart stiffness, limpness, heaviness, bristliness, or any number of other properties. The strands may also use texture mapping to vary the strands' color, length, or other properties across the emitter surface.
A cube emitting 5000 animated particles, obeying a "gravitational" force in the negative Y direction.
The same cube emitter rendered using static particles, or strands.
Artist-friendly particle system tools Particle systems can be created and modified natively in many 3D modeling and rendering packages including Cinema 4D, Lightwave, Houdini, Maya, XSI, 3D Studio Max and Blender. These editing programs allow artists to have instant feedback on how a particle system will look with properties and constraints that they specify. There is also plug-in software available that provides enhanced particle effects.
Developer-friendly particle system tools Particle system code that can be included in game engines, digital content creation systems, and effects applications can be written from scratch or downloaded. Havok provides multiple particle system APIs; its Havok FX API focuses especially on particle system effects. Ageia, now a subsidiary of Nvidia, provides a particle system and other game physics APIs used in many games, including Unreal Engine 3 games. Game Maker provides a two-dimensional particle system often used by indie, hobbyist, or student game developers, though it cannot be imported into other engines. Many other solutions also exist, and particle systems are frequently written from scratch when non-standard effects or behaviors are desired.
External links
• Particle Systems: A Technique for Modeling a Class of Fuzzy Objects [1] — William T. Reeves (ACM Transactions on Graphics, April 1983)
• The Particle Systems API [2] — David K. McAllister
• The ocean spray in your face. [3] — Jeff Lander (Graphic Content, July 1998)
• Building an Advanced Particle System [4] — John van der Burg (Gamasutra, June 2000)
• Particle Engine Using Triangle Strips [5] — Jeff Molofee (NeHe)
• Designing an Extensible Particle System using C++ and Templates [6] — Kent Lai (GameDev.net)
• Repository of public 3D particle scripts in LSL Second Life format [7] — Ferd Frederix
• GPU particle systems using WebGL [8] — particle effects directly in the browser, using WebGL for calculations
References
[1] http://portal.acm.org/citation.cfm?id=357320
[2] http://particlesystems.org/
[3] https://www.lri.fr/~mbl/ENS/IG2/devoir2/files/docs/particles.pdf
[4] http://www.gamasutra.com/view/feature/3157/building_an_advanced_particle_.php
[5] http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=19
[6] http://archive.gamedev.net/archive/reference/articles/article1982.html
[7] http://secondlife.mitsi.com/cgi/llscript.plx?Category=Particles
[8] http://www.gpu-particlesystems.de
Point cloud A point cloud is a set of data points in some coordinate system. In a three-dimensional coordinate system, these points are usually defined by X, Y, and Z coordinates, and often are intended to represent the external surface of an object. Point clouds may be created by 3D scanners. These devices automatically measure a large number of points on the surface of an object, and often output a point cloud as a data file; the point cloud represents the set of points that the device has measured. As the result of a 3D scanning process, point clouds are used for many purposes, including creating 3D CAD models for manufactured parts, metrology/quality inspection, and a multitude of visualization, animation, rendering, and mass customization applications.
A point cloud image of a torus.
Geo-referenced point cloud by DroneMapper [1]
While point clouds can be directly rendered and inspected,[2] they are generally not directly usable in most 3D applications, and therefore are usually converted to polygon mesh or triangle mesh models, NURBS surface models, or CAD models through a process commonly referred to as surface reconstruction. There are many techniques for converting a point cloud to a 3D surface. Some approaches, like Delaunay triangulation, alpha shapes, and ball pivoting, build a network of triangles over the existing vertices of the point cloud, while other approaches convert the point cloud into a volumetric distance field and reconstruct the implicit surface so defined through a marching cubes algorithm.[3] One application in which point clouds are directly usable is industrial metrology or inspection. The point cloud of a manufactured part can be aligned to a CAD model (or even another point cloud) and compared to check for differences. These differences can be displayed as color maps that give a visual indicator of the deviation between the manufactured part and the CAD model. Geometric dimensions and tolerances can also be extracted directly from the point cloud. Point clouds can also be used to represent volumetric data, as used for example in medical imaging; using point clouds, multi-sampling and data compression are achieved.[4] In geographic information systems, point clouds are one of the sources used to make digital elevation models of terrain,[5] and they are also employed to generate 3D models of urban environments.[6]
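The inspection idea above (comparing a scanned cloud against a reference and flagging points by deviation) can be sketched with a brute-force nearest-neighbor search; real tools use spatial acceleration structures such as k-d trees, and the data here is hypothetical:

```python
import math

def nearest_distance(point, reference_cloud):
    """Distance from one measured point to its nearest neighbor in a
    reference cloud (brute force, O(n) per query)."""
    return min(math.dist(point, q) for q in reference_cloud)

def deviation_map(scanned, reference, tolerance):
    """Classify each scanned point as within or outside tolerance, the way
    a color-map inspection display would."""
    return [("ok" if nearest_distance(p, reference) <= tolerance else "deviant")
            for p in scanned]

reference = [(x * 0.1, 0.0, 0.0) for x in range(11)]   # reference points on a line
scanned = [(0.05, 0.0, 0.0), (0.5, 0.3, 0.0)]          # one good point, one off by 0.3
labels = deviation_map(scanned, reference, tolerance=0.1)
```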
References
[1] http://dronemapper.com
[2] Rusinkiewicz, S. and Levoy, M. 2000. QSplat: a multiresolution point rendering system for large meshes. In Siggraph 2000. ACM, New York, NY, 343–352. DOI= http://doi.acm.org/10.1145/344779.344940
[3] Meshing Point Clouds (http://meshlabstuff.blogspot.com/2009/09/meshing-point-clouds.html) A short tutorial on how to build surfaces from point clouds
[4] Sitek et al. "Tomographic Reconstruction Using an Adaptive Tetrahedral Mesh Defined by a Point Cloud" IEEE Trans. Med. Imag. 25 1172 (2006) (http://dx.doi.org/10.1109/TMI.2006.879319)
[5] From Point Cloud to Grid DEM: A Scalable Approach (http://terrain.cs.duke.edu/pubs/lidar_interpolation.pdf)
[6] K. Hammoudi, F. Dornaika, B. Soheilian, N. Paparoditis. Extracting Wire-frame Models of Street Facades from 3D Point Clouds and the Corresponding Cadastral Map. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences (IAPRS), vol. 38, part 3A, pp. 91–96, Saint-Mandé, France, 1–3 September 2010. (http://www.isprs.org/proceedings/XXXVIII/part3/a/pdf/91_XXXVIII-part3A.pdf)
External links • PCL (Point Cloud Library) – a comprehensive BSD open source library for n-D Point Clouds and 3D geometry processing. http://pointclouds.org
Polygon (computer graphics) Polygons are used in computer graphics to compose images that are three-dimensional in appearance. Usually (but not always) triangular, polygons arise when an object's surface is modeled, vertices are selected, and the object is rendered in a wire frame model. This is quicker to display than a shaded model; thus the polygons are a stage in computer animation. The polygon count refers to the number of polygons being rendered per frame.
Competing methods for rendering polygons that avoid seams
• Point
  • Floating point
  • Fixed-point
  • Polygon: because of rounding, every scanline has its own direction in space and may show its front or back side to the viewer.
• Fraction (mathematics)
  • Bresenham's line algorithm
  • Polygons have to be split into triangles
  • The whole triangle shows the same side to the viewer
  • The point numbers from the transform and lighting stage have to be converted to fractions (mathematics)
• Barycentric coordinates (mathematics)
  • Used in raytracing
Polygon mesh A polygon mesh is a collection of vertices, edges and faces that defines the shape of a polyhedral object in 3D computer graphics and solid modeling. The faces usually consist of triangles, quadrilaterals or other simple convex polygons, since this simplifies rendering, but may also be composed of more general concave polygons, or polygons with holes. The study of polygon meshes is a large sub-field of computer graphics and geometric modeling. Different representations of polygon meshes are used for Example of a triangle mesh representing a dolphin. different applications and goals. The variety of operations performed on meshes may include Boolean logic, smoothing, simplification, and many others. Network representations, "streaming" and "progressive" meshes, are used to transmit polygon meshes over a network. Volumetric meshes are distinct from polygon meshes in that they explicitly represent both the surface and volume of a structure, while polygon meshes only explicitly represent the surface (the volume is implicit). As polygonal meshes are extensively used in computer graphics, algorithms also exist for ray tracing, collision detection, and rigid-body dynamics of polygon meshes.
Elements of mesh modeling
Objects created with polygon meshes must store different types of elements. These include vertices, edges, faces, polygons and surfaces. In many applications, only vertices, edges and either faces or polygons are stored. A renderer may support only 3-sided faces, so polygons must be constructed of many of these, as shown in Figure 1. However, many renderers either support quads and higher-sided polygons, or are able to convert polygons to triangles on the fly, making it unnecessary to store a mesh in a triangulated form. Also, in certain applications like head modeling, it is desirable to be able to create both 3- and 4-sided polygons. A vertex is a position along with other information such as color, normal vector and texture coordinates. An edge is a connection between two vertices. A face is a closed set of edges, in which a triangle face has three edges, and a quad face has four edges. A polygon is a coplanar set of faces. In systems that support multi-sided faces, polygons and faces are equivalent. However, most rendering hardware supports only 3- or 4-sided faces, so polygons are represented as multiple faces. Mathematically a polygonal mesh may be considered an unstructured grid, or undirected graph, with additional properties of geometry, shape and topology. Surfaces, more often called smoothing groups, are useful, but not required to group smooth regions. Consider a cylinder with caps, such as a soda can. For smooth shading of the sides, all surface normals must point horizontally away from the center, while the normals of the caps must point straight up and down. Rendered as a single, Phong-shaded surface, the crease vertices would have incorrect normals. Thus, some way of determining where to cease smoothing is needed to group smooth parts of a mesh, just as polygons group 3-sided faces. 
As an alternative to providing surfaces/smoothing groups, a mesh may contain other data from which the same grouping can be computed, such as a splitting angle (polygons whose normals differ by more than this threshold are either automatically treated as separate smoothing groups, or some technique such as splitting or chamfering is automatically applied to the edge between them). Additionally, very high resolution meshes are less subject to issues that would require smoothing groups, as their polygons are so small as to make the need irrelevant. Further, another alternative exists in the possibility of simply detaching the surfaces themselves from the rest of the mesh, since renderers do not attempt to smooth edges across noncontiguous polygons. A mesh format may or may not define other useful data. Groups may be defined which delimit separate elements of the mesh and are useful for determining separate sub-objects for skeletal animation or separate actors for non-skeletal animation. Generally materials will be defined, allowing different portions of the mesh to use different shaders when rendered. Most mesh formats also support some form of UV coordinates, a separate 2D representation of the mesh "unfolded" to show what portion of a 2-dimensional texture map to apply to different polygons of the mesh.
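The splitting-angle test described above can be sketched for triangular faces; the 30-degree threshold and the example geometry are illustrative assumptions:

```python
import math

def face_normal(a, b, c):
    """Unit normal of a triangle from its three vertices (cross product of
    two edge vectors, then normalized)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

def edge_is_smooth(n1, n2, splitting_angle_deg):
    """An edge is shaded smooth when the angle between the adjacent face
    normals is below the splitting angle; otherwise it is a hard crease."""
    cos_angle = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(cos_angle)) < splitting_angle_deg

flat = face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))  # normal along +Z
wall = face_normal((0, 0, 0), (0, 0, 1), (0, 1, 0))  # perpendicular face
# A 90-degree corner (like the rim of the soda can) exceeds a 30-degree
# splitting angle, so the edge between these faces stays a hard crease:
crease = edge_is_smooth(flat, wall, 30.0)  # False
```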
Representations Polygon meshes may be represented in a variety of ways, using different methods to store the vertex, edge and face data. These include:
• Face-vertex meshes: A simple list of vertices, and a set of polygons that point to the vertices they use.
• Winged-edge meshes, in which each edge points to two vertices, two faces, and the four (clockwise and counterclockwise) edges that touch it. Winged-edge meshes allow constant time traversal of the surface, but with higher storage requirements.
• Half-edge meshes: Similar to winged-edge meshes except that only half the edge traversal information is used. (see OpenMesh [1])
• Quad-edge meshes, which store edges, half-edges, and vertices without any reference to polygons. The polygons are implicit in the representation, and may be found by traversing the structure. Memory requirements are similar to half-edge meshes.
• Corner-tables, which store vertices in a predefined table, such that traversing the table implicitly defines polygons. This is in essence the triangle fan used in hardware graphics rendering. The representation is more compact, and more efficient to retrieve polygons, but operations to change polygons are slow. Furthermore, corner-tables do not represent meshes completely. Multiple corner-tables (triangle fans) are needed to represent most meshes.
• Vertex-vertex meshes: A "VV" mesh represents only vertices, which point to other vertices. Both the edge and face information is implicit in the representation. However, the simplicity of the representation does not allow for many efficient operations to be performed on meshes.
Each of the representations above has particular advantages and drawbacks, further discussed in Smith (2006).[2] The choice of the data structure is governed by the application, the performance required, size of the data, and the operations to be performed. For example, it is easier to deal with triangles than general polygons, especially in computational geometry.
For certain operations it is necessary to have a fast access to topological information such as edges or neighboring faces; this requires more complex structures such as the winged-edge representation. For hardware rendering, compact, simple structures are needed; thus the corner-table (triangle fan) is commonly incorporated into low-level rendering APIs such as DirectX and OpenGL.
Vertex-vertex meshes
Vertex-vertex meshes represent an object as a set of vertices connected to other vertices. This is the simplest representation, but not widely used since the face and edge information is implicit. Thus, it is necessary to traverse the data in order to generate a list of faces for rendering. In addition, operations on edges and faces are not easily accomplished. However, VV meshes benefit from small storage space and efficient morphing of shape. Figure 2 shows the four-sided cylinder example represented using VV meshes. Each vertex indexes its neighboring vertices. Notice that the last two vertices, 8 and 9 at the top and bottom center of the "box-cylinder", have four connected vertices rather than five. A general system must be able to handle an arbitrary number of vertices connected to any given vertex. For a complete description of VV meshes see Smith (2006).
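A minimal VV mesh can be sketched as an adjacency table; the square pyramid below is a hypothetical example, and the implicit edges are recovered only by traversing the data:

```python
# A tiny vertex-vertex (VV) mesh: a square pyramid. Each vertex stores only
# the indices of its neighboring vertices; edges and faces are implicit.
vv_mesh = {
    0: [1, 3, 4],     # base corners are vertices 0-3
    1: [0, 2, 4],
    2: [1, 3, 4],
    3: [0, 2, 4],
    4: [0, 1, 2, 3],  # the apex connects to every base corner
}

def edges(mesh):
    """Derive the implicit edge list by traversing the adjacency data;
    each undirected edge is stored once as a sorted vertex pair."""
    return {tuple(sorted((v, n))) for v, nbrs in mesh.items() for n in nbrs}
```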
Face-vertex meshes
Face-vertex meshes represent an object as a set of faces and a set of vertices. This is the most widely used mesh representation, being the input typically accepted by modern graphics hardware. Face-vertex meshes improve on VV-mesh for modeling in that they allow explicit lookup of the vertices of a face, and the faces surrounding a vertex. Figure 3 shows the "box-cylinder" example as an FV mesh. Vertex v5 is highlighted to show the faces that surround it. Notice that, in this example, every face is required to have exactly 3 vertices. However, this does not mean every vertex has the same number of surrounding faces. For rendering, the face list is usually transmitted to the GPU as a set of indices to vertices, and the vertices are sent as position/color/normal structures (in the figure, only position is given). This has the benefit that changes in shape, but not geometry, can be dynamically updated by simply resending the vertex data without updating the face connectivity. Modeling requires easy traversal of all structures. With face-vertex meshes it is easy to find the vertices of a face. Also, the vertex list contains a list of faces connected to each vertex. Unlike VV meshes, both faces and vertices are explicit, so locating neighboring faces and vertices is constant time. However, the edges are implicit, so a search is still needed to find all the faces surrounding a given face. Other dynamic operations, such as splitting or merging a face, are also difficult with face-vertex meshes.
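A face-vertex mesh can be sketched as two lists; the tetrahedron below is hypothetical example data illustrating the two explicit lookups the text mentions (vertices of a face, faces around a vertex):

```python
# A face-vertex (FV) mesh of a single tetrahedron: a vertex list with
# positions, and a face list of vertex-index triples (the form a GPU
# index buffer takes).
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]

def vertices_of_face(face_index):
    """Explicit lookup: the vertex positions of a face."""
    return [vertices[i] for i in faces[face_index]]

def faces_around_vertex(vertex_index):
    """Explicit lookup in the other direction: the faces surrounding a vertex."""
    return [f for f, face in enumerate(faces) if vertex_index in face]
```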
Winged-edge meshes
Introduced by Baumgart 1975, winged-edge meshes explicitly represent the vertices, faces, and edges of a mesh. This representation is widely used in modeling programs to provide the greatest flexibility in dynamically changing the mesh geometry, because split and merge operations can be done quickly. Their primary drawback is large storage requirements and increased complexity due to maintaining many indices. A good discussion of implementation issues of Winged-edge meshes may be found in the book Graphics Gems II. Winged-edge meshes address the issue of traversing from edge to edge, and providing an ordered set of faces around an edge. For any given edge, the number of outgoing edges may be arbitrary. To simplify this, winged-edge meshes provide only four, the nearest clockwise and counter-clockwise edges at each end. The other edges may be traversed incrementally. The information for each edge therefore resembles a butterfly, hence "winged-edge" meshes. Figure 4 shows the "box-cylinder" as a winged-edge mesh. The total data for an edge consists of 2 vertices (endpoints), 2 faces (on each side), and 4 edges (winged-edge). Rendering of winged-edge meshes for graphics hardware requires generating a Face index list. This is usually done only when the geometry changes. Winged-edge meshes are ideally suited for dynamic geometry, such as subdivision surfaces and interactive modeling, since changes to the mesh can occur locally. Traversal across the mesh, as might be needed for collision detection, can be accomplished efficiently. See Baumgart (1975) for more details.[3]
Render dynamic meshes Winged-edge meshes are not the only representation which allows for dynamic changes to geometry. A new representation which combines winged-edge meshes and face-vertex meshes is the render dynamic mesh, which explicitly stores the vertices of a face (like FV meshes), the faces of a vertex (like FV meshes), and the faces and vertices of an edge (like winged-edge). Render dynamic meshes require slightly less storage space than standard winged-edge meshes, and can be directly rendered by graphics hardware since the face list contains an index of vertices. In addition, traversal from vertex to face is explicit (constant time), as is from face to vertex. RD meshes do not require the four outgoing edges since these can be found by traversing from edge to face, then face to neighboring edge. RD meshes benefit from the features of winged-edge meshes by allowing for geometry to be dynamically updated. See Tobler & Maierhofer (WSCG 2006) for more details.[4]
Summary of mesh representation

| Operation | Vertex-vertex | Face-vertex | Winged-edge | Render dynamic |
|---|---|---|---|---|
| V-V: all vertices around vertex | Explicit | V → f1, f2, f3, ... → v1, v2, v3, ... | V → e1, e2, e3, ... → v1, v2, v3, ... | V → e1, e2, e3, ... → v1, v2, v3, ... |
| E-F: all edges of a face | F(a,b,c) → {a,b}, {b,c}, {a,c} | F → {a,b}, {b,c}, {a,c} | Explicit | Explicit |
| V-F: all vertices of a face | F(a,b,c) → {a,b,c} | Explicit | F → e1, e2, e3 → a, b, c | Explicit |
| F-V: all faces around a vertex | Pair search | Explicit | V → e1, e2, e3 → f1, f2, f3, ... | Explicit |
| E-V: all edges around a vertex | V → {v,v1}, {v,v2}, {v,v3}, ... | V → f1, f2, f3, ... → v1, v2, v3, ... | Explicit | Explicit |
| F-E: both faces of an edge | List compare | List compare | Explicit | Explicit |
| V-E: both vertices of an edge | E(a,b) → {a,b} | E(a,b) → {a,b} | Explicit | Explicit |
| Flook: find face with given vertices | F(a,b,c) → {a,b,c} | Set intersection of v1,v2,v3 | Set intersection of v1,v2,v3 | Set intersection of v1,v2,v3 |
| Storage size | V*avg(V,V) | 3F + V*avg(F,V) | 3F + 8E + V*avg(E,V) | 6F + 4E + V*avg(E,V) |
| Example: 10 vertices, 16 faces, 24 edges | 10*5 = 50 | 3*16 + 10*5 = 98 | 3*16 + 8*24 + 10*5 = 290 | 6*16 + 4*24 + 10*5 = 242 |

Figure 6: summary of mesh representation operations
In the above table, explicit indicates that the operation can be performed in constant time, as the data is directly stored; list compare indicates that a list comparison between two lists must be performed to accomplish the operation; and pair search indicates a search must be done on two indices. The notation avg(V,V) means the average number of vertices connected to a given vertex; avg(E,V) means the average number of edges connected to a given vertex, and avg(F,V) is the average number of faces connected to a given vertex. The notation "V → f1, f2, f3, ... → v1, v2, v3, ..." describes that a traversal across multiple elements is required to perform the operation. For example, to get "all vertices around a given vertex V" using the face-vertex mesh, it is necessary to first find the faces around the given vertex V using the vertex list. Then, from those faces, use the face list to find the vertices around them. Notice that winged-edge meshes explicitly store nearly all information, and other operations always traverse to the edge first to get additional info. Vertex-vertex meshes are the only representation that explicitly stores the neighboring vertices of a given vertex.
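The two-step traversal described above ("V → f1, f2, f3, ... → v1, v2, v3, ...") can be sketched on a face-vertex mesh; the tetrahedron face list is hypothetical example data:

```python
# Face list of a tetrahedron: each face is a triple of vertex indices.
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]

def vertices_around_vertex(v):
    """Find all vertices around vertex v on a face-vertex mesh: first
    collect the faces around v, then gather their other vertices."""
    surrounding_faces = [face for face in faces if v in face]
    return sorted({u for face in surrounding_faces for u in face if u != v})
```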
As the mesh representations become more complex (from left to right in the summary), the amount of information explicitly stored increases. This gives more direct, constant time, access to traversal and topology of various elements, but at the cost of increased overhead and space in maintaining indices properly. Figure 7 shows the connectivity information for each of the four techniques described in this article. Other representations also exist, such as half-edge and corner tables. These are all variants of how vertices, faces and edges index one another. As a general rule, face-vertex meshes are used whenever an object must be rendered on graphics hardware and the geometry (connectivity) does not change, though the shape (vertex positions) may deform or morph, such as real-time rendering of static or morphing objects. Winged-edge or render dynamic meshes are used when the geometry changes, such as in interactive modeling packages or for computing subdivision surfaces. Vertex-vertex meshes are ideal for efficient, complex changes in geometry or topology so long as hardware rendering is not of concern.
Other representations Streaming meshes store faces in an ordered, yet independent, way so that the mesh can be transmitted in pieces. The order of faces may be spatial, spectral, or based on other properties of the mesh. Streaming meshes allow a very large mesh to be rendered even while it is still being loaded. Progressive meshes transmit the vertex and face data with increasing levels of detail. Unlike streaming meshes, progressive meshes give the overall shape of the entire object, but at a low level of detail. Additional data, new edges and faces, progressively increase the detail of the mesh. Normal meshes transmit progressive changes to a mesh as a set of normal displacements from a base mesh. With this technique, a series of textures represent the desired incremental modifications. Normal meshes are compact, since only a single scalar value is needed to express displacement. However, the technique requires a complex series of transformations to create the displacement textures.
File formats There exist many different file formats for storing polygon mesh data. Each format is most effective when used for the purpose intended by its creator. Some of these formats are presented below:

| File suffix | Format name | Organization(s) | Program(s) | Description |
|---|---|---|---|---|
| .raw | Raw mesh | Unknown | Various | Open, ASCII-only format. Each line contains 3 vertices, separated by spaces, to form a triangle, like so: X1 Y1 Z1 X2 Y2 Z2 X3 Y3 Z3 |
| .blend | Blender File Format | Blender Foundation | Blender 3D | Open source, binary-only format |
| .fbx | Autodesk FBX Format | Autodesk | Various | Proprietary. Binary and ASCII specifications exist. |
| .3ds | 3ds Max File | Autodesk | 3ds Max | |
| .dae | Digital Asset Exchange (COLLADA) | Sony Computer Entertainment, Khronos Group | N/A | Stands for "COLLAborative Design Activity". A universal format designed to prevent incompatibility. |
| .dgn | MicroStation File | Bentley Systems | MicroStation | |
| .3dm | Rhino File | Robert McNeel & Associates | Rhinoceros 3D | |
| .dxf | Drawing Exchange Format | Autodesk | AutoCAD | |
| .obj | Wavefront OBJ | Wavefront Technologies | Various | ASCII format describing 3D geometry alone. All faces' vertices are ordered counter-clockwise, thus removing the need to specify normals. |
| .ply | Polygon File Format | Stanford University | Unknown | Binary and ASCII |
| .pmd | Polygon Movie Maker data | Yu Higuchi | MikuMikuDance | Proprietary binary file format for storing humanoid model geometry with rigging, material, and physics information. |
| .stl | Stereolithography Format | 3D Systems | N/A | Binary and ASCII format originally designed to aid in "3D printing". |
| .amf | Additive Manufacturing File Format | ASTM International | N/A | Like the STL format, but with added native color, material, and constellation support. |
| .wrl | Virtual Reality Modeling Language | Web3D Consortium | Web Browsers | ISO Standard 14772-1:1997 |
| .wrz | VRML Compressed | Web3D Consortium | Web Browsers | |
| .x3d, .x3db, .x3dv | Extensible 3D | Web3D Consortium | Web Browsers | ISO Standard 19775/19776/19777 |
| .x3dz, .x3dbz, .x3dvz | X3D Compressed Binary | Web3D Consortium | Web Browsers | |
| .c4d | Cinema 4D File | MAXON | CINEMA 4D | |
| .lwo | LightWave 3D object File | NewTek | LightWave 3D | |
| .msh | Gmsh Mesh | GMsh Developers | GMsh Project | Open source, providing an ASCII mesh description for linear and polynomially interpolated elements in 1 to 3 dimensions. |
| .mesh | OGRE XML | OGRE Development Team | OGRE, purebasic | Open Source. Binary (.mesh) and ASCII (.mesh.xml) formats available. Includes data for vertex animation and morph target animation (blendshape). Skeletal animation data in a separate file (.skeleton). |
| .z3d | Z3d | Oleg Melashenko | Zanoza Modeler | |
| .vtk | VTK mesh | VTK, Kitware | VTK, Paraview | Open, ASCII or binary format that contains many different data fields, including point data, cell data, and field data. |
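As an illustration of the simplest of these formats, a minimal sketch of a .raw reader (one triangle per ASCII line, as nine space-separated numbers):

```python
def parse_raw(text):
    """Parse the .raw mesh format: each line holds one triangle written as
    X1 Y1 Z1 X2 Y2 Z2 X3 Y3 Z3. Returns a list of triangles, each a list
    of three (x, y, z) vertex tuples. Malformed lines are skipped."""
    triangles = []
    for line in text.splitlines():
        values = [float(v) for v in line.split()]
        if len(values) == 9:
            triangles.append([tuple(values[i:i + 3]) for i in (0, 3, 6)])
    return triangles

sample = "0 0 0 1 0 0 0 1 0\n0 0 0 0 1 0 0 0 1\n"
mesh = parse_raw(sample)  # two triangles sharing the origin
```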
References
[1] http://www.openmesh.org/
[2] Colin Smith, On Vertex-Vertex Meshes and Their Use in Geometric and Biological Modeling. http://algorithmicbotany.org/papers/smithco.dis2006.pdf
[3] Bruce Baumgart, Winged-Edge Polyhedron Representation for Computer Vision. National Computer Conference, May 1975. http://www.baumgart.org/winged-edge/winged-edge.html
[4] Tobler & Maierhofer, A Mesh Data Structure for Rendering and Subdivision. WSCG 2006. (http://wscg.zcu.cz/wscg2006/Papers_2006/Short/E17-full.pdf)
External links
• Weisstein, Eric W., "Simplicial complex (http://mathworld.wolfram.com/SimplicialComplex.html)", MathWorld.
• Weisstein, Eric W., "Triangulation (http://mathworld.wolfram.com/Triangulation.html)", MathWorld.
• OpenMesh (http://www.openmesh.org/) open source half-edge mesh representation.
Polygon soup A polygon soup is a group of unorganized triangles, generally with no relationship whatsoever. Polygon soups are a geometry storage format in 3D modeling packages such as Maya, Houdini, and Blender. A polygon soup can save memory, load/write time, and disk space compared to the equivalent polygon mesh, and the larger the polygon soup, the larger the savings. For instance, fluid simulations, particle simulations, rigid body simulations, environments, and character models can reach into the millions of polygons for feature films, causing large disk space and read/write overhead. As soon as any kind of hierarchical sorting or clustering scheme is applied, the structure becomes something else (one example being an octree, a subdivided cube). Any kind of polygonal geometry that hasn't been grouped in any way can be considered polygon soup. Optimized meshes may contain grouped items to make drawing faster.
Polygonal modeling In 3D computer graphics, polygonal modeling is an approach for modeling objects by representing or approximating their surfaces using polygons. Polygonal modeling is well suited to scanline rendering and is therefore the method of choice for real-time computer graphics. Alternate methods of representing 3D objects include NURBS surfaces, subdivision surfaces, and equation-based representations used in ray tracers. See polygon mesh for a description of how polygonal models are represented and stored.
Geometric theory and polygons
The basic object used in mesh modeling is a vertex, a point in three-dimensional space. Two vertices connected by a straight line become an edge. Three vertices, connected to each other by three edges, define a triangle, which is the simplest polygon in Euclidean space. More complex polygons can be created out of multiple triangles, or as a single object with more than three vertices. Four-sided polygons (generally referred to as quads) and triangles are the most common shapes used in polygonal modeling. A group of polygons, connected to each other by shared vertices, is generally referred to as an element. Each of the polygons making up an element is called a face.
In Euclidean geometry, any three non-collinear points determine a plane. For this reason, triangles always inhabit a single plane. This is not necessarily true of more complex polygons, however. The flat nature of triangles makes it simple to determine their surface normal, a three-dimensional vector perpendicular to the triangle's surface. Surface normals are useful for determining light transport in ray tracing, and are a key component of the popular Phong shading model. Some rendering systems use vertex normals instead of face normals to create a better-looking lighting system at the cost of more processing. Note that every triangle has two face normals, which point in opposite directions from each other. In many systems only one of these normals is considered valid; the other side of the polygon is referred to as a backface, and can be made visible or invisible depending on the programmer's desires.
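The face normal described above can be computed directly from the triangle's vertices. A minimal sketch in Python (not from any particular package): the normal is the normalized cross product of two edge vectors, and reversing the vertex order (the winding) selects the opposite, backfacing normal.

```python
# Compute a triangle's unit face normal from its three vertices.
# The cross product of two edge vectors is perpendicular to the
# triangle's plane; the vertex winding decides which of the two
# opposite normals we get.
def face_normal(a, b, c):
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])   # edge a -> b
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])   # edge a -> c
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return (n[0] / length, n[1] / length, n[2] / length)

# Counter-clockwise winding in the xy-plane gives a normal along +z;
# the reversed winding gives the opposite (backface) normal.
assert face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)) == (0.0, 0.0, 1.0)
assert face_normal((0, 0, 0), (0, 1, 0), (1, 0, 0)) == (0.0, 0.0, -1.0)
```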
Many modeling programs do not strictly enforce geometric theory; for example, it is possible for two vertices to have two distinct edges connecting them, occupying exactly the same spatial location. It is also possible for two vertices to exist at the same spatial coordinates, or two faces to exist at the same location. Situations such as these are usually not desired and many packages support an auto-cleanup function. If auto-cleanup is not present, however, they must be deleted manually.
A group of polygons which are connected by shared vertices is referred to as a mesh. In order for a mesh to appear attractive when rendered, it is desirable that it be non-self-intersecting, meaning that no edge passes through a polygon. Another way of looking at this is that the mesh cannot pierce itself. It is also desirable that the mesh not contain any errors such as doubled vertices, edges, or faces. For some purposes it is important that the mesh be a manifold – that is, that it does not contain holes or singularities (locations where two distinct sections of the mesh are connected by a single vertex).
Construction of polygonal meshes
Although it is possible to construct a mesh by manually specifying vertices and faces, it is much more common to build meshes using a variety of tools. A wide variety of 3D graphics software packages are available for use in constructing polygon meshes.
One of the more popular methods of constructing meshes is box modeling, which uses two simple tools:
• The subdivide tool splits faces and edges into smaller pieces by adding new vertices. For example, a square would be subdivided by adding one vertex in the center and one on each edge, creating four smaller squares.
• The extrude tool is applied to a face or a group of faces. It creates a new face of the same size and shape which is connected to each of the existing edges by a face. Thus, performing the extrude operation on a square face would create a cube connected to the surface at the location of the face.
A second common modeling method is sometimes referred to as inflation modeling or extrusion modeling. In this method, the user creates a 2D shape which traces the outline of an object from a photograph or a drawing. The user then uses a second image of the subject from a different angle and extrudes the 2D shape into 3D, again following the shape's outline. This method is especially common for creating faces and heads. In general, the artist will model half of the head and then duplicate the vertices, invert their location relative to some plane, and connect the two pieces together. This ensures that the model will be symmetrical.
Another common method of creating a polygonal mesh is by connecting together various primitives, which are predefined polygonal meshes created by the modeling environment. Common primitives include:
• Cubes
• Pyramids
• Cylinders
• 2D primitives, such as squares, triangles, and disks
• Specialized or esoteric primitives, such as the Utah Teapot or Suzanne, Blender's monkey mascot
• Spheres, which are commonly represented in one of two ways:
  • Icospheres are icosahedrons which possess a sufficient number of triangles to resemble a sphere.
  • UV spheres are composed of quads, and resemble the grid seen on some globes: quads are larger near the "equator" of the sphere and smaller near the "poles," eventually terminating in a single vertex.
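As a sketch of how a modeling environment might generate one of these primitives, the following Python function builds the vertex grid of a UV sphere. The resolution parameters and layout are illustrative assumptions, not any package's actual code.

```python
import math

# Generate the vertices of a UV sphere: rings of quads arranged like
# the latitude/longitude grid of a globe, terminating in a single
# vertex at each pole (face generation is omitted for brevity).
def uv_sphere(rings, segments, radius=1.0):
    vertices = [(0.0, 0.0, radius)]                 # north pole
    for r in range(1, rings):                       # interior rings
        phi = math.pi * r / rings                   # latitude angle
        for s in range(segments):
            theta = 2 * math.pi * s / segments      # longitude angle
            vertices.append((radius * math.sin(phi) * math.cos(theta),
                             radius * math.sin(phi) * math.sin(theta),
                             radius * math.cos(phi)))
    vertices.append((0.0, 0.0, -radius))            # south pole
    return vertices

verts = uv_sphere(rings=8, segments=16)
assert len(verts) == 2 + 7 * 16     # two poles plus 7 rings of 16
# Every generated vertex lies on the unit sphere's surface.
assert all(abs(x*x + y*y + z*z - 1.0) < 1e-9 for x, y, z in verts)
```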
Finally, some specialized methods of constructing high- or low-detail meshes exist. Sketch-based modeling is a user-friendly interface for constructing low-detail models quickly, while 3D scanners can be used to create high-detail meshes based on existing real-world objects in an almost automatic way. These devices are very expensive and are generally only used by researchers and industry professionals, but they can generate highly accurate, sub-millimetric digital representations.
Operations
There are a very large number of operations which may be performed on polygonal meshes. Some of these roughly correspond to real-world manipulations of 3D objects, while others do not.
Polygonal mesh operations:
• Creations - create new geometry from some other mathematical object
  • Loft - generate a mesh by sweeping a shape along a path
  • Extrude - same as loft, except the path is always a line
  • Revolve - generate a mesh by revolving (rotating) a shape around an axis
  • Marching cubes - algorithm to construct a mesh from an implicit function
• Binary Creations - create a new mesh from a binary operation of two other meshes
  • Add - boolean addition of two meshes
  • Subtract - boolean subtraction of two meshes
  • Intersect - boolean intersection
  • Union - boolean union of two meshes
  • Attach - attach one mesh to another (removing the interior surfaces)
  • Chamfer - create a beveled surface which smoothly connects two surfaces
• Deformations - move only the vertices of a mesh
  • Deform - systematically move vertices (according to certain functions or rules)
  • Weighted Deform - move vertices based on localized weights per vertex
  • Morph - move vertices smoothly between a source and a target mesh
  • Bend - move vertices to "bend" the object
  • Twist - move vertices to "twist" the object
• Manipulations - modify the geometry of the mesh, but not necessarily its topology
  • Displace - introduce additional geometry based on a "displacement map" from the surface
  • Simplify - systematically remove and average vertices
  • Subdivide - smooth a coarse mesh by subdividing it (Catmull-Clark, etc.)
  • Convex Hull - generate another mesh which minimally encloses a given mesh (think shrink-wrap)
  • Cut - create a hole in a mesh surface
  • Stitch - close a hole in a mesh surface
• Measurements - compute some value of the mesh
  • Volume - compute the 3D volume of a mesh (discrete volumetric integral)
  • Surface Area - compute the surface area of a mesh (discrete surface integral)
  • Collision Detection - determine whether two complex meshes in motion have collided
  • Fitting - construct a parametric surface (NURBS, bicubic spline) by fitting it to a given mesh
  • Point-Surface Distance - compute distance from a point to the mesh
  • Line-Surface Distance - compute distance from a line to the mesh
  • Line-Surface Intersection - compute the intersection of a line and the mesh
  • Cross Section - compute the curves created by a cross-section of a plane through a mesh
  • Centroid - compute the centroid (geometric center) of the mesh
  • Center-of-Mass - compute the center of mass (balance point) of the mesh
  • Circumcenter - compute the center of a circle or sphere enclosing an element of the mesh
  • Incenter - compute the center of a circle or sphere enclosed by an element of the mesh
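As an illustration of one of the measurement operations above, the following Python sketch computes mesh volume as a discrete volumetric integral. It assumes a closed mesh with consistent outward-facing winding; the sample cube data is invented for the test.

```python
# Compute the volume of a closed triangle mesh by summing the signed
# volumes of the tetrahedra formed by each face and the origin
# (divergence theorem). Faces must wind consistently outward.
def mesh_volume(vertices, faces):
    total = 0.0
    for i, j, k in faces:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (
            vertices[i], vertices[j], vertices[k])
        # Scalar triple product a . (b x c) is 6x the signed volume.
        total += (ax * (by * cz - bz * cy)
                  + ay * (bz * cx - bx * cz)
                  + az * (bx * cy - by * cx))
    return total / 6.0

# Unit cube with outward-facing triangles.
v = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
f = [(0,2,1),(0,3,2),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
     (1,2,6),(1,6,5),(2,3,7),(2,7,6),(3,0,4),(3,4,7)]
assert abs(mesh_volume(v, f) - 1.0) < 1e-9
```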
Extensions
Once a polygonal mesh has been constructed, further steps must be taken before it is useful for games, animation, etc. The model must be texture mapped to add colors and texture to the surface, and it must be given a skeleton for animation. Meshes can also be assigned weights and centers of gravity for use in physical simulation. To display a model on a computer screen outside of the modeling environment, it is necessary to store that model in one of the file formats listed below, and then use or write a program capable of loading from that format. The two main methods of displaying 3D polygon models are OpenGL and Direct3D. Both of these methods can be used with or without a 3D-accelerated graphics card.
Advantages and disadvantages
There are many disadvantages to representing an object using polygons. Polygons are incapable of accurately representing curved surfaces, so a large number of them must be used to approximate curves in a visually appealing manner. The use of complex models carries a cost in lowered speed. In scanline conversion, each polygon must be converted and displayed, regardless of size, and there are frequently a large number of models on the screen at any given time. Often, programmers must use multiple models at varying levels of detail to represent the same object in order to cut down on the number of polygons being rendered. The main advantage of polygons is that they are faster than other representations. While a modern graphics card can show a highly detailed scene at a frame rate of 60 frames per second or higher, ray tracers, the main way of displaying non-polygonal models, are incapable of achieving an interactive frame rate (10 frames per second or higher) with a similar amount of detail.
File formats
A variety of formats are available for storing 3D polygon data. The most popular are:
• .3ds, .max, associated with 3D Studio Max
• .blend, associated with Blender
• .c4d, associated with Cinema 4D
• .dae (COLLADA)
• .dxf, .dwg, .dwf, associated with AutoCAD
• .fbx (Autodesk, formerly Kaydara Filmbox)
• .jt, originally developed by Siemens PLM Software; now an ISO standard
• .lwo, associated with Lightwave
• .lxo, associated with modo (software)
• .mb and .ma, associated with Maya
• .md2, .md3, associated with the Quake series of games
• .mdl, used with Valve Software's Source Engine
• .nif (NetImmerse/Gamebryo)
• .obj (Wavefront's "The Advanced Visualizer")
• .ply, used to store data from 3D scanners
• .rwx (Renderware)
• .stl, used in rapid prototyping
• .u3d (Universal 3D)
• .wrl (VRML 2.0)
References 1. OpenGL SuperBible (3rd ed.), by Richard S Wright and Benjamin Lipchak ISBN 0-672-32601-9 2. OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 1.4, Fourth Edition by OpenGL Architecture Review Board ISBN 0-321-17348-1 3. OpenGL(R) Reference Manual : The Official Reference Document to OpenGL, Version 1.4 (4th Edition) by OpenGL Architecture Review Board ISBN 0-321-17383-X 4. Blender documentation: http://www.blender.org/cms/Documentation.628.0.html 5. Maya documentation: packaged with Alias Maya, http://www.alias.com/eng/index.shtml
Pre-rendering
Pre-rendering is the process in which video footage is not rendered in real time by the hardware that is outputting or playing back the video. Instead, the video is a recording of footage that was previously rendered on different equipment (typically equipment more powerful than the hardware used for playback). Pre-rendered assets (typically movies) may also be outsourced by the developer to an outside production company. Such assets usually have a level of complexity that is too great for the target platform to render in real time.
The term pre-rendered describes anything that is not rendered in real time. This includes content that could have been run in real time with more effort on the part of the developer (e.g. video that covers a large number of a game's environments without pausing to load, or video of a game in an early state of development that is rendered in slow motion and then played back at regular speed). The term is generally not used to describe video captures of real-time rendered graphics, despite the fact that video is technically pre-rendered by its nature. The term is also not used to describe hand-drawn or photographed assets (these assets not being computer-rendered in the first place).
Advantages and disadvantages
The advantage of pre-rendering is the ability to use graphic models that are more complex and computationally intensive than those that can be rendered in real time, due to the possibility of using multiple computers over extended periods of time to render the end results. For instance, a comparison could be drawn between the rail shooters Maximum Force (which used pre-rendered 3D levels but 2D sprites for enemies) and Virtua Cop (which used 3D polygons): Maximum Force looked more realistic due to the limitations of Virtua Cop's 3D engine, but Virtua Cop has actual depth (able to portray enemies close and far away, along with body-specific hits and multiple hits) compared to the limits of the 2D sprite enemies in Maximum Force.
The disadvantage of pre-rendering, in the case of video game graphics, is a generally lower level of interactivity, if any, with the player. Another drawback of pre-rendered assets is that changes cannot be made during gameplay. A game with pre-rendered backgrounds is forced to use fixed camera angles, and a game with pre-rendered video generally cannot reflect any changes the game's characters might have undergone during gameplay (such as wounds or customized clothing) without having an alternate version of the video stored. This is generally not feasible due to the large amount of space required to store pre-rendered assets of high quality. However, in some advanced implementations, such as in Final Fantasy VIII, real-time assets were composited with pre-rendered video, allowing dynamic backgrounds and changing camera angles. Another problem is that a game with pre-rendered lighting cannot easily change the state of the lighting in a convincing manner.
As the technology continued to advance in the mid-2000s, video game graphics were able to achieve the photorealism that was previously limited to pre-rendering, as seen in the growth of Machinima.
Usage Pre-rendered graphics are used primarily as cut scenes in modern video games, where they are also known as full motion video. In the late 1990s and early 2000s, when most 3D game engines had pre-calculated/fixed Lightmaps and texture mapping, developers often turned to pre-rendered graphics which had a much higher level of realism. However this has lost favor since the mid-2000s, as advances in consumer PC and video game graphics have enabled the use of the game's own engine to render these cinematics. For instance, the id Tech 4 engine used in Doom 3 allowed bump mapping and dynamic per-pixel lighting, previously only found in pre-rendered videos. One of the first games to use pre-rendering was the Sharp X68000 enhanced remake of Ys I: Ancient Ys Vanished released in 1991. It used 3D pre-rendered graphics for the boss sprites, though this ended up creating what is considered "a bizarre contrast" with the game's mostly 2D graphics.[1] One of the first games to extensively use pre-rendered graphics along with full motion video was The 7th Guest. Released in 1992 as one of the first PC games exclusively on CD-ROM, the game was hugely popular, although reviews from critics were mixed. The game featured pre-rendered video sequences that were at a resolution of 640x320 at 15 frames per second, a feat previously thought impossible on personal computers. Shortly after, the release of Myst in 1993 made the use of pre-rendered graphics and CD-ROMs even more popular; interestingly most of the rendered work of Myst would later be the basis for the re-make realMyst: Interactive 3D Edition with its free-roaming real-time 3D graphics. The most graphically advanced use of entirely pre-rendered graphics in games is often claimed to be Myst IV: Revelation, released in 2004. 
The use of pre-rendered backgrounds and movies was also made popular by the Resident Evil and Final Fantasy franchises on the original PlayStation, both of which use pre-rendered backgrounds and movies extensively to provide a visual presentation that is far greater than the console can provide with real-time 3D. These games include real-time elements (characters, items, etc.) in addition to the pre-rendered backgrounds to provide interactivity. Often a game using pre-rendered backgrounds can devote additional processing power to the remaining interactive elements, resulting in a level of detail greater than the norm for the host platform. In some cases the visual quality of the interactive elements is still far behind the pre-rendered backgrounds. Games such as Warcraft III: Reign of Chaos have used both types of cutscenes: pre-rendered for the beginning and end of a campaign, and the in-game engine for level briefings and character dialogue during a mission. Some games also use 16-bit pre-rendered skyboxes, for example Half-Life (only the GoldSrc version), Re-Volt, Quake II, and others. CG movies such as Toy Story, Shrek and Final Fantasy: The Spirits Within are entirely pre-rendered.
Other methods Another increasingly common pre-rendering method is the generation of texture sets for 3D games, which are often used with complex real-time algorithms to simulate extraordinarily high levels of detail. While making Doom 3, id Software used pre-rendered models as the basis for generating normal, specular and diffuse lighting maps that simulate the detail of the original model in real-time. Pre-rendered lighting is a technique that is losing popularity. Processor-intensive ray tracing algorithms can be used during a game's production to generate light textures, which are simply applied on top of the usual hand drawn textures.
References [1] (cf. )
Precomputed Radiance Transfer
Precomputed Radiance Transfer (PRT) is a computer graphics technique used to render a scene in real time, with complex light interactions precomputed to save time. Radiosity methods can be used to determine the diffuse lighting of the scene; PRT, however, offers a method to dynamically change the lighting environment. In essence, PRT computes the illumination of a point as a linear combination of incident irradiance. An efficient method must be used to encode this data, such as spherical harmonics. When spherical harmonics are used to approximate the light transport function, only low-frequency effects can be handled with a reasonable number of parameters. Ren Ng extended this work to handle higher-frequency shadows by replacing spherical harmonics with non-linear wavelets.
Teemu Mäki-Patola gives a clear introduction to the topic based on the work of Peter-Pike Sloan et al. At SIGGRAPH 2005, a detailed course on PRT was given.
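The "linear combination of incident irradiance" can be made concrete with a toy sketch. This is not the paper's code, and the coefficient values below are invented; real PRT uses per-channel transfer vectors (or matrices, for glossy transfer). The point is that once both the precomputed transfer function and the dynamic lighting are projected into the same spherical-harmonic basis, runtime shading reduces to a dot product of coefficient vectors.

```python
# Runtime PRT shading for diffuse transfer: a dot product between the
# precomputed per-point transfer coefficients and the current lighting
# environment's coefficients, both in the spherical-harmonic basis.
def prt_shade(transfer_coeffs, light_coeffs):
    return sum(t * l for t, l in zip(transfer_coeffs, light_coeffs))

# 9 coefficients = 3 SH bands, enough for low-frequency lighting.
# These values are made up for illustration.
transfer = [0.8, 0.1, 0.3, 0.0, 0.05, 0.0, 0.02, 0.0, 0.01]
light    = [1.0, 0.0, 0.5, 0.0, 0.0,  0.0, 0.0,  0.0, 0.0]
radiance = prt_shade(transfer, light)
assert abs(radiance - (0.8 * 1.0 + 0.3 * 0.5)) < 1e-12
```

Relighting the scene only requires re-projecting the new environment into SH and repeating the dot product per point, which is why the lighting can change dynamically at runtime.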
References
• Peter-Pike Sloan, Jan Kautz, and John Snyder. "Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments". ACM Transactions on Graphics, Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 527-536. New York, NY: ACM Press, 2002. (http://www.mpi-inf.mpg.de/~jnkautz/projects/prt/prtSIG02.pdf)
• Ren Ng, Ravi Ramamoorthi, and Pat Hanrahan. "All-Frequency Shadows Using Non-Linear Wavelet Lighting Approximation". ACM Transactions on Graphics 22, 3, pp. 376-381, 2003. (http://graphics.stanford.edu/papers/allfreq/allfreq.press.pdf)
Procedural modeling
Procedural modeling is an umbrella term for a number of techniques in computer graphics that create 3D models and textures from sets of rules. L-systems, fractals, and generative modeling are procedural modeling techniques, since they apply algorithms to produce scenes. The set of rules may either be embedded in the algorithm, configurable by parameters, or kept separate from the evaluation engine. The output, called procedural content, can be used in computer games or films, uploaded to the internet, or edited manually by the user. Procedural models often exhibit database amplification, meaning that large scenes can be generated from a much smaller set of rules. If the employed algorithm produces the same output every time, the output need not be stored; often, it suffices to start the algorithm with the same random seed to achieve this.
Although all modeling techniques on a computer require algorithms to manage and store data at some point, procedural modeling focuses on creating a model from a rule set, rather than editing the model via user input. Procedural modeling is often applied when it would be too cumbersome to create a 3D model using generic 3D modelers, or when more specialized tools are required. This is often the case for plants, architecture or landscapes.
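As a small illustration of rule-based generation and database amplification, here is a classic L-system in Python. The rewriting rules are Lindenmayer's original algae example, not tied to any particular modeling tool: a tiny axiom and two rules "amplify" into an arbitrarily long description.

```python
# An L-system rewrites every symbol of the string in parallel each
# generation, using a dictionary of production rules; symbols without
# a rule are copied unchanged.
def lsystem(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae system: A -> AB, B -> A.
rules = {"A": "AB", "B": "A"}
assert lsystem("A", rules, 1) == "AB"
assert lsystem("A", rules, 2) == "ABA"
assert lsystem("A", rules, 3) == "ABAAB"
# Database amplification: the string lengths grow as the Fibonacci
# sequence, so a two-rule "database" yields arbitrarily large output.
assert [len(lsystem("A", rules, n)) for n in range(6)] == [1, 2, 3, 5, 8, 13]
```

In graphical use, the symbols of the final string are interpreted as drawing or modeling commands (e.g. turtle-graphics moves and branch pushes/pops for plants).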
Procedural modeling suites
• Acropora [1]
• BRL-CAD
• Bryce
• CityEngine
• Derivative Touch Designer [2]
• Generative Modelling Language
• Grome
• Houdini
• HyperFun
• Softimage
• Terragen
• 3ds Max
External links • "Texturing and Modeling: A Procedural Approach" [3], Ebert, D., Musgrave, K., Peachey, P., Perlin, K., and Worley, S • Procedural Inc. [4] • CityEngine [5] • "Procedural Modeling of Cities" [6], Yoav I H Parish, Pascal Müller • "Procedural Modeling of Buildings" [7], Pascal Müller, Peter Wonka, Simon Haegler, Andreas Ulmer and Luc Van Gool • "King Kong – The Building of 1933 New York City" [8], Chris White, Weta Digital. Siggraph 2006. • Tree Editors Compared: • List at Vterrain.org [9] • List at TreeGenerator [10]
References
[1] http://www.voxelogic.com
[2] http://www.derivative.ca/
[3] http://www.cs.umbc.edu/~ebert/book/book.html
[4] http://www.procedural.com
[5] http://www.procedural.com/
[6] http://www.vision.ee.ethz.ch/~pmueller/documents/procedural_modeling_of_cities__siggraph2001.pdf
[7] http://www.vision.ee.ethz.ch/~pmueller/documents/mueller.procedural_modeling_of_buildings.SG2006.web-version.pdf
[8] http://delivery.acm.org/10.1145/1180000/1179969/p96-white.pdf?key1=1179969&key2=7979228711&coll=&dl=&CFID=15151515&CFTOKEN=6184618
[9] http://www.vterrain.org/Plants/plantsw.html
[10] http://www.treegenerator.com/compare.htm
Procedural texture
A procedural texture is a computer-generated image created using an algorithm intended to produce a realistic representation of natural elements such as wood, marble, granite, metal, and stone. Usually, the natural look of the rendered result is achieved by the use of fractal noise and turbulence functions. These functions serve as a numerical representation of the "randomness" found in nature.
Solid texturing
Solid texturing is a process where the texture generating function is evaluated at each visible surface point of the model. Traditionally these functions use Perlin noise as their basis function, but some simple functions may use more trivial methods, such as the sum of sinusoidal functions.
[Figure: A procedural floor grate texture generated with the texture editor Genetica. [1]]
Solid textures are an alternative to the traditional 2D texture images which are applied to the surfaces of a model. It is a difficult and tedious task to get multiple 2D textures to form a consistent visual appearance on a model without it looking obviously tiled. Solid textures were created specifically to solve this problem. Instead of editing images to fit a model, a function is used to evaluate the colour of the point being textured. Points are evaluated based on their 3D position, not their 2D surface position. Consequently, solid textures are unaffected by distortions of the surface parameter space, such as you might see near the poles of a sphere. Continuity between the surface parameterization of adjacent patches is not a concern either. Solid textures will remain consistent and have features of constant size regardless of distortions in the surface coordinate systems. [2]
Cellular texturing Cellular texturing differs from the majority of other procedural texture generating techniques as it does not depend on noise functions as its basis, although it is often used to complement the technique. Cellular textures are based on feature points which are scattered over a three dimensional space. These points are then used to split up the space into small, randomly tiled regions called cells. These cells often look like “lizard scales,” “pebbles,” or “flagstones”. Even though these regions are discrete, the cellular basis function itself is continuous and can be evaluated anywhere
in space. [3]
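A toy version of such a cellular basis function can be sketched in a few lines of Python. This is a deliberate simplification: production implementations such as Worley noise scatter feature points into spatial buckets and only search neighboring cells, rather than taking a brute-force minimum over all points.

```python
import random, math

# A cellular basis function: feature points are scattered through 3D
# space, and the basis value at any query point is the distance to the
# nearest feature point. Coloring by nearest point (rather than by
# distance) produces the discrete "pebble"/"flagstone" cells.
def cellular_basis(p, feature_points):
    return min(math.dist(p, f) for f in feature_points)

random.seed(42)  # fixed seed so the scatter is reproducible
features = [(random.random(), random.random(), random.random())
            for _ in range(32)]

# The basis is defined (and continuous) anywhere in space...
value = cellular_basis((0.5, 0.5, 0.5), features)
assert value >= 0.0
# ...and is exactly zero at a feature point itself.
assert cellular_basis(features[0], features) == 0.0
```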
Genetic textures
Genetic texture generation is a highly experimental approach for generating textures. It is a highly automated process in which a human moderates the eventual outcome. The flow of control usually has a computer generate a set of texture candidates. From these, a user picks a selection. The computer then generates another set of textures by mutating and crossing over elements of the user-selected textures.[4] For more information on exactly how this mutation and crossover generation method is achieved, see Genetic algorithm. The process continues until a texture suitable to the user is generated. This is not a commonly used method of generating textures, as it is very difficult to control and direct the eventual outcome. Because of this, it is typically used for experimentation or abstract textures only.
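The generate/select/breed loop can be sketched as follows in Python. Everything here is an invented stand-in: real systems evolve texture programs or shader parameter sets, and the "user selection" is interactive rather than hard-coded.

```python
import random

# Mutation: jitter each parameter of a candidate texture's parameter
# vector by a small random amount.
def mutate(params, rate=0.2):
    return [p + random.uniform(-rate, rate) for p in params]

# Crossover: splice the front of one parent's parameters onto the
# back of the other's at a random cut point.
def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

random.seed(0)
# Initial generation of 8 candidate textures, 4 parameters each.
population = [[random.random() for _ in range(4)] for _ in range(8)]
# Stand-in for the human step: "pick" the first two candidates as
# parents, then breed the next generation from them.
parents = population[:2]
next_gen = [mutate(crossover(*parents)) for _ in range(8)]
assert len(next_gen) == 8 and all(len(ind) == 4 for ind in next_gen)
```

The loop repeats, with the user's picks steering each generation, until a texture the user likes emerges.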
Self-organizing textures
Starting from simple white noise, self-organization processes lead to structured patterns, still with an element of randomness. Reaction-diffusion systems are a good example of a way to generate such textures.
Example of a procedural marble texture (Taken from The Renderman Companion Book, by Steve Upstill)

/* Copyrighted Pixar 1988 */
/* From the RenderMan Companion p. 355 */
/* Listing 16.19 Blue marble surface shader */

/*
 * blue_marble(): a marble stone texture in shades of blue
 */
surface
blue_marble(
    float   Ks = .4,
            Kd = .6,
            Ka = .1,
            roughness = .1,
            txtscale = 1;
    color   specularcolor = 1)
{
    point   PP;            /* scaled point in shader space */
    float   csp;           /* color spline parameter */
    point   Nf;            /* forward-facing normal */
    point   V;             /* for specular() */
    float   pixelsize, twice, scale, weight, turbulence;

    /* Obtain a forward-facing normal for lighting calculations. */
    Nf = faceforward( normalize(N), I);
    V = normalize(-I);

    /*
     * Compute "turbulence" a la [PERLIN85]. Turbulence is a sum of
     * "noise" components with a "fractal" 1/f power spectrum. It gives the
     * visual impression of turbulent fluid flow (for example, as in the
     * formation of blue_marble from molten color splines!). Use the
     * surface element area in texture space to control the number of
     * noise components so that the frequency content is appropriate
     * to the scale. This prevents aliasing of the texture.
     */
    PP = transform("shader", P) * txtscale;
    pixelsize = sqrt(area(PP));
    twice = 2 * pixelsize;
    turbulence = 0;
    for (scale = 1; scale > twice; scale /= 2)
        turbulence += scale * noise(PP/scale);

    /* Gradual fade out of highest-frequency component near limit */
    if (scale > pixelsize) {
        weight = (scale / pixelsize) - 1;
        weight = clamp(weight, 0, 1);
        turbulence += weight * scale * noise(PP/scale);
    }

    /*
     * Magnify the upper part of the turbulence range 0.75:1
     * to fill the range 0:1 and use it as the parameter of
     * a color spline through various shades of blue.
     */
    csp = clamp(4 * turbulence - 3, 0, 1);
    Ci = color spline(csp,
        color (0.25, 0.25, 0.35),  /* pale blue */
        color (0.25, 0.25, 0.35),  /* pale blue */
        color (0.20, 0.20, 0.30),  /* medium blue */
        color (0.20, 0.20, 0.30),  /* medium blue */
        color (0.20, 0.20, 0.30),  /* medium blue */
        color (0.25, 0.25, 0.35),  /* pale blue */
        color (0.25, 0.25, 0.35),  /* pale blue */
        color (0.15, 0.15, 0.26),  /* medium dark blue */
        color (0.15, 0.15, 0.26),  /* medium dark blue */
        color (0.10, 0.10, 0.20),  /* dark blue */
        color (0.10, 0.10, 0.20),  /* dark blue */
        color (0.25, 0.25, 0.35),  /* pale blue */
        color (0.10, 0.10, 0.20)   /* dark blue */
    );

    /* Multiply this color by the diffusely reflected light. */
    Ci *= Ka*ambient() + Kd*diffuse(Nf);

    /* Adjust for opacity. */
    Oi = Os;
    Ci = Ci * Oi;

    /* Add in specular highlights. */
    Ci += specularcolor * Ks * specular(Nf, V, roughness);
}

This article was taken from The Photoshop Roadmap [5] with written authorization.
References
[1] http://www.spiralgraphics.biz/gallery.htm
[2] Ebert et al.: Texturing and Modeling: A Procedural Approach, page 10. Morgan Kaufmann, 2003.
[3] Ebert et al.: Texturing and Modeling: A Procedural Approach, page 135. Morgan Kaufmann, 2003.
[4] Ebert et al.: Texturing and Modeling: A Procedural Approach, page 547. Morgan Kaufmann, 2003.
[5] http://www.photoshoproadmap.com
Some programs for creating textures using procedural texturing
• Allegorithmic Substance Designer
• Filter Forge
• Genetica (program) (http://www.spiralgraphics.biz/genetica.htm)
• DarkTree (http://www.darksim.com/html/dt25_description.html)
• Context Free Art (http://www.contextfreeart.org/index.html)
• TexRD (http://www.texrd.com) (based on reaction-diffusion: self-organizing textures)
• Texture Garden (http://texturegarden.com)
• Enhance Textures (http://www.shaders.co.uk)
Progressive meshes
Progressive meshes are a technique for dynamic level of detail (LOD), introduced by Hugues Hoppe in 1996. The method stores a model in a structure, the progressive mesh, which allows a smooth choice of detail level depending on the current view. Practically, this means that it is possible to display the whole model at the lowest level of detail at once and then gradually reveal more detail. Among the disadvantages is considerable memory consumption; the advantage is that it can work in real time. Progressive meshes can also be used in other areas of computing, such as the gradual transfer of data over the Internet, or compression. [1]
Basic principle
A progressive mesh is a data structure created by simplifying the original, full-quality model with a suitable decimation algorithm, which removes edges from the model step by step (the edge-collapse operation). The simplification is repeated as many times as needed to reach a minimal base model. The full-quality model is then represented by this minimal model together with the sequence of operations inverse to the simplifications (vertex-split operations). This forms a hierarchical structure from which a model at any chosen level of detail can be reconstructed.
Edge collapse
This simplification operation (ecol) takes two connected vertices and replaces them with a single vertex. The two triangles {vs, vt, vl} and {vt, vs, vr} which shared the collapsed edge are also removed by this operation.
Vertex split
A vertex split (vsplit) is the inverse of the edge collapse: it divides a vertex into two new vertices, so that a new edge {vt, vs} and two new triangles {vs, vt, vl} and {vt, vs, vr} arise.
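The idea of storing a base mesh plus replayable vertex-split records can be sketched in Python. This is a hypothetical, simplified layout, not Hoppe's actual data structure; in particular it omits the re-attachment of existing faces from vs to vt that a real vsplit also performs.

```python
# One vertex-split record: which vertex to split, where the new
# vertex goes, and which triangles the split introduces.
class VSplit:
    def __init__(self, vs, vt_position, new_faces):
        self.vs = vs                    # index of the vertex being split
        self.vt_position = vt_position  # position of the new vertex vt
        self.new_faces = new_faces      # e.g. the triangles {vs,vt,vl}, {vt,vs,vr}

# Replay `detail` vertex splits on top of the minimal base mesh to
# reconstruct the model at the chosen level of detail.
def refine(base_vertices, base_faces, splits, detail):
    vertices = list(base_vertices)
    faces = list(base_faces)
    for split in splits[:detail]:
        vertices.append(split.vt_position)
        faces.extend(split.new_faces)
    return vertices, faces

base_v = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
base_f = [(0, 1, 2)]
splits = [VSplit(vs=1, vt_position=(1, 1, 0), new_faces=[(1, 3, 2)])]
v, f = refine(base_v, base_f, splits, detail=1)
assert len(v) == 4 and len(f) == 2
```

Choosing a smaller `detail` simply replays fewer splits, which is what makes the level-of-detail selection smooth.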
References [1] D. Luebke, M. Reddy, J. D. Cohen, A. Varshney, B. Watson, R. Huebner: Level of Detail for 3D Graphics, Morgan Kaufmann, 2002, ISBN 0-321-19496-9
3D projection
3D projection is any method of mapping three-dimensional points to a two-dimensional plane. As most current methods for displaying graphical data are based on planar two-dimensional media, the use of this type of projection is widespread, especially in computer graphics, engineering and drafting.
Orthographic projection
When the human eye looks at a scene, objects in the distance appear smaller than objects close by. Orthographic projection ignores this effect to allow the creation of to-scale drawings for construction and engineering. Orthographic projections are a small set of transforms often used to show profile, detail or precise measurements of a three-dimensional object. Common names for orthographic projections include plane, cross-section, bird's-eye, and elevation.

If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z axis), the mathematical transformation is as follows. To project the 3D point a_x, a_y, a_z onto the 2D point b_x, b_y using an orthographic projection parallel to the y axis (profile view), the following equations can be used:

b_x = s_x a_x + c_x
b_y = s_z a_z + c_z

where the vector s is an arbitrary scale factor, and c is an arbitrary offset. These constants are optional, and can be used to properly align the viewport.

While orthographically projected images represent the three-dimensional nature of the object projected, they do not represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In particular, parallel lengths at all points in an orthographically projected image are of the same scale regardless of whether they are far away or near to the virtual viewer. As a result, lengths near the viewer are not foreshortened as they would be in a perspective projection.
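As a minimal sketch of the profile-view case (assuming the scale and offset conventions above), the projection simply drops the depth coordinate:

```python
# Orthographic projection onto the x-z plane (view along the y axis):
# b_x = s_x*a_x + c_x, b_y = s_z*a_z + c_z.

def ortho_project(a, s=(1.0, 1.0), c=(0.0, 0.0)):
    ax, ay, az = a          # ay (depth) is simply discarded
    sx, sz = s
    cx, cz = c
    return (sx * ax + cx, sz * az + cz)

# Depth along y does not affect the projected size: no foreshortening.
p_near = ortho_project((2.0, 1.0, 3.0))
p_far = ortho_project((2.0, 100.0, 3.0))
```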
Weak perspective projection A "weak" perspective projection uses the same principles as an orthographic projection, but requires the scaling factor to be specified, thus ensuring that closer objects appear bigger in the projection, and vice versa. It can be seen as a hybrid between an orthographic and a perspective projection, and described either as a perspective projection with individual point depths replaced by an average constant depth, or simply as an orthographic projection plus a scaling. The weak-perspective model thus approximates perspective projection while using a simpler model, similar to the pure (unscaled) orthographic projection. It is a reasonable approximation when the depth of the object along the line of sight is small compared to the distance from the camera, and the field of view is small. With these conditions, it can be assumed that all points on a 3D object are at the same distance from the camera without significant errors in the projection (compared to the full perspective model).
Perspective projection
When the human eye views a scene, objects in the distance appear smaller than objects close by; this is known as perspective. While orthographic projection ignores this effect to allow accurate measurements, perspective projection shows distant objects as smaller to provide additional realism.

The perspective projection requires a more involved definition as compared to orthographic projections. A conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the object(s) are being viewed through a camera viewfinder. The camera's position, orientation, and field of view control the behavior of the projection transformation. The following variables are defined to describe this transformation:

• a_x, a_y, a_z – the 3D position of a point A that is to be projected.
• c_x, c_y, c_z – the 3D position of a point C representing the camera.
• θ_x, θ_y, θ_z – the orientation of the camera (represented, for instance, by Tait–Bryan angles).
• e_x, e_y, e_z – the viewer's position relative to the display surface.

Which results in:

• b_x, b_y – the 2D projection of a.

When c_x = c_y = c_z = 0 and θ_x = θ_y = θ_z = 0, the 3D vector ⟨1, 2, 0⟩ is projected to the 2D vector ⟨1, 2⟩.

Otherwise, to compute b we first define a vector d_x, d_y, d_z as the position of point A with respect to a coordinate system defined by the camera, with origin in C and rotated by θ with respect to the initial coordinate system. This is achieved by subtracting c from a and then applying a rotation by −θ to the result. This transformation is often called a camera transform, and can be expressed as follows, expressing the rotation in terms of rotations about the x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):

d = R_x(−θ_x) · R_y(−θ_y) · R_z(−θ_z) · (a − c)

This representation corresponds to rotating by three Euler angles (more properly, Tait–Bryan angles), using the xyz convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x (reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading left-to-right)". Note that if the camera is not rotated (θ_x = θ_y = θ_z = 0), then the rotation matrices drop out (as identities), and this reduces to simply a shift: d = a − c.

Alternatively, without using matrices (writing x for a_x − c_x and so on, and abbreviating cos θ_i to c_i and sin θ_i to s_i):

d_x = c_y (s_z y + c_z x) − s_y z
d_y = s_x (c_y z + s_y (s_z y + c_z x)) + c_x (c_z y − s_z x)
d_z = c_x (c_y z + s_y (s_z y + c_z x)) − s_x (c_z y − s_z x)
This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; literature also may use x/z):

b_x = (e_z / d_z) d_x + e_x
b_y = (e_z / d_z) d_y + e_y

Or, in matrix form using homogeneous coordinates, the system

[f_x]   [1  0  e_x/e_z  0] [d_x]
[f_y] = [0  1  e_y/e_z  0] [d_y]
[f_z]   [0  0  1        0] [d_z]
[f_w]   [0  0  1/e_z    0] [ 1 ]

in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving

b_x = f_x / f_w
b_y = f_y / f_w

The distance of the viewer from the display surface, e_z, directly relates to the field of view: α = 2·tan⁻¹(1/e_z) is the viewed angle. (Note: This assumes that you map the points (−1, −1) and (1, 1) to the corners of your viewing surface.)

The above equations can also be rewritten as:

b_x = (d_x s_x) / (d_z r_x) · r_z
b_y = (d_y s_y) / (d_z r_y) · r_z

in which s_x, s_y is the display size, r_x, r_y is the recording surface size (CCD or film), r_z is the distance from the recording surface to the entrance pupil (camera center), and d_z is the distance, from the 3D point being projected, to the entrance pupil.

Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media.
Diagram
[Diagram omitted: a point A projected through the image plane onto the screen, seen from above.]

To determine which screen x-coordinate corresponds to a point at (A_x, A_z), multiply the point coordinates by:

B_x = A_x · B_z / A_z

where B_x is the screen x coordinate, A_x is the model x coordinate, B_z is the focal length (the axial distance from the camera center to the image plane), and A_z is the subject distance. Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above equation.
External links
• A case study in camera projection (http://nccasymposium.bmth.ac.uk/2007/muhittin_bilginer/index.html)
• Creating 3D Environments from Digital Photographs (http://nccasymposium.bmth.ac.uk/2009/McLaughlin_Chris/McLaughlin_C_WebBasedNotes.pdf)
Further reading
• Kenneth C. Finney (2004). 3D Game Programming All in One (http://books.google.com/?id=cknGqaHwPFkC&pg=PA93&dq=%223D+projection%22). Thomson Course. p. 93. ISBN 978-1-59200-136-1.
• Koehler, Ralph. 2D/3D Graphics and Splines with Source Code. ISBN 0759611874.
Projective texture mapping Projective texture mapping is a method of texture mapping that allows a textured image to be projected onto a scene as if by a slide projector. Projective texture mapping is useful in a variety of lighting techniques, and it is the starting point for shadow mapping. Projective texture mapping is essentially a special matrix transformation which is performed per-vertex and then linearly interpolated, like standard texture mapping.
Fixed function pipeline approach Historically[1], using projective texture mapping involved considering a special form of eye linear texture coordinate generation[2] transform (tcGen for short). This transform was then multiplied by another matrix representing the projector's properties which was stored in texture coordinate transform matrix[3]. The resulting concatenated matrix was basically a function of both projector properties and vertex eye positions. The key points of this approach are that eye linear tcGen is a function of vertex eye coordinates, which is a result of both eye properties and object space vertex coordinates (more specifically, the object space vertex position is transformed by the model-view-projection matrix). Because of that, the corresponding texture matrix can be used to "shift" the eye properties so the concatenated result is the same as using an eye linear tcGen from a point of view which can be different from the observer.
Projective texture mapping
Programmable pipeline approach A less involved way to compute this approach became possible with vertex shaders; the method is essentially the same as before. For readers not familiar with this newer graphics technology: vertex and pixel shaders allow the default vertex and pixel processing to be overridden by a user-defined program. The previous algorithm can then be reformulated by simply considering two model-view-projection matrices: one from the eye point of view and the other from the projector point of view. In this case, the projector model-view-projection matrix is essentially the aforementioned concatenation of the eye-linear tcGen with the intended projector shift function. By using those two matrices, a few instructions suffice to output the transformed eye-space vertex position and a projective texture coordinate. This coordinate is simply obtained by considering the projector's model-view-projection matrix: in other words, it is the eye-space vertex position that would result if the projector were the observer.
Caveats
Both of the proposed approaches share two small problems, which can be trivially solved; they come from the different conventions used by eye space and texture space. Defining the properties of those spaces is beyond the scope of this article, but it is well known that textures are usually addressed in the range [0..1] while eye-space coordinates are addressed in the range [-1..1]. Depending on the texture wrap mode used, various artifacts may occur, so a shift-and-scale operation is necessary to get the expected result.

The other problem is a mathematical issue. The matrix math used also produces a back projection. This artifact has historically been avoided by using a special black-and-white texture to cut away the unwanted projected contributions. Using pixel shaders, a different approach is possible: a coordinate check is sufficient to discriminate between forward (correct) contributions and backward (wrong, to be avoided) ones.
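The matrix concatenation, including the shift-and-scale from [-1..1] into [0..1], can be sketched in NumPy (function names and the exact matrix stack are illustrative assumptions; real APIs and conventions differ):

```python
# Build a per-vertex projective-texturing matrix from a projector's
# view and projection matrices plus a scale-and-bias remap.
import numpy as np

def scale_bias():
    # Maps x, y, z from [-1, 1] to [0, 1] (the shift-and-scale step).
    return np.array([[0.5, 0.0, 0.0, 0.5],
                     [0.0, 0.5, 0.0, 0.5],
                     [0.0, 0.0, 0.5, 0.5],
                     [0.0, 0.0, 0.0, 1.0]])

def texgen_matrix(projector_proj, projector_view, model):
    # Concatenated matrix applied to each object-space vertex; the result
    # is interpolated like any other texture coordinate, with a divide
    # by the q component at lookup time.
    return scale_bias() @ projector_proj @ projector_view @ model

# Degenerate sanity check: identity projector and model leave only the
# scale-and-bias, which maps the origin to the texture centre.
I = np.eye(4)
tc = texgen_matrix(I, I, I) @ np.array([0.0, 0.0, 0.0, 1.0])
```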
References
1. ^ The original paper[4] from the NVIDIA developer web site[5] includes all the needed documentation on this issue. The same site also contains additional hints[6].
2. ^ Texture coordinate generation is covered in section 2.11.4, "Generating Texture Coordinates", of the OpenGL 2.0 specification[7]. Eye-linear texture coordinate generation is a special case.
3. ^ The texture matrix is introduced in section 2.11.2, "Matrices", of the OpenGL 2.0 specification[7].
External links
• A tutorial showing how to implement projective texturing using the programmable pipeline approach in OpenGL (http://www.3dkingdoms.com/weekly/weekly.php?a=20)
References
[1] http://en.wikipedia.org/wiki/Projective_texture_mapping#endnote_nvsdk_ptm
[2] http://en.wikipedia.org/wiki/Projective_texture_mapping#endnote_glEyeLinear
[3] http://en.wikipedia.org/wiki/Projective_texture_mapping#endnote_glTCXform
[4] http://developer.nvidia.com/object/Projective_Texture_Mapping.html
[5] http://developer.nvidia.com
[6] http://developer.nvidia.com/object/projective_textures.html
[7] http://www.opengl.org/documentation/specs/
Pyramid of vision
Pyramid of vision is a 3D computer graphics term: the infinite pyramid extending into the modeled world, with its apex at the observer's eye and its faces passing through the edges of the viewport ("window").
Quantitative Invisibility In CAD/CAM, quantitative invisibility (QI) is the number of solid bodies that obscure a point in space as projected onto a plane. Often, CAD engineers project a model into a plane (a 2D drawing) in order to denote edges that are visible with a solid line, and those that are hidden with dashed or dimmed lines.
Algorithm Tracking the number of obscuring bodies gave rise to an algorithm that propagates the quantitative invisibility throughout the model. This technique uses edge coherence to speed calculations in the algorithm. However, QI only works well when the bodies are large, non-interpenetrating, opaque solids. A technique like this falls apart when applied to soft organic tissue as found in the human body, because there is not always a clear delineation of structures. Also, when images become too cluttered and intertwined, the contribution of this algorithm is marginal. Arthur Appel of the graphics group at IBM Watson Research coined the term quantitative invisibility and used it in several of his papers.
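As a toy illustration of the idea (a hypothetical setup using spheres rather than general solids, which is not Appel's edge-coherence algorithm), QI at a point is just the count of bodies the line of sight passes through:

```python
# Count how many opaque spheres the eye-to-point segment passes through.
import math

def qi(point, eye, spheres):
    """Quantitative invisibility of `point` as seen from `eye`.
    spheres: list of ((cx, cy, cz), radius)."""
    count = 0
    px, py, pz = point
    ex, ey, ez = eye
    dx, dy, dz = px - ex, py - ey, pz - ez
    seg2 = dx * dx + dy * dy + dz * dz
    for (cx, cy, cz), r in spheres:
        # Parameter of the segment's closest approach to the sphere centre.
        t = ((cx - ex) * dx + (cy - ey) * dy + (cz - ez) * dz) / seg2
        t = max(0.0, min(1.0, t))
        qx = ex + t * dx - cx
        qy = ey + t * dy - cy
        qz = ez + t * dz - cz
        if qx * qx + qy * qy + qz * qz < r * r:
            count += 1
    return count

eye = (0.0, 0.0, 0.0)
blockers = [((0.0, 0.0, 2.0), 0.5), ((0.0, 0.0, 4.0), 0.5)]
behind = qi((0.0, 0.0, 6.0), eye, blockers)  # occluded twice
clear = qi((3.0, 0.0, 6.0), eye, blockers)   # unoccluded
```

Edges with QI 0 would be drawn solid; edges with QI ≥ 1 dashed or dimmed.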
External links • Vector Hidden Line Removal and Fractional Quantitative Invisibility [1]
References • Appel, A., "The Notion of Quantitative Invisibility and the Machine Rendering of Solids," Proceedings ACM National Conference, Thompson Books, Washington, DC, 1967, pp. 387–393, pp. 214–220.
References [1] http:/ / wheger. tripod. com/ vhl/ vhl. htm
Quaternions and spatial rotation
Unit quaternions, also known as versors, provide a convenient mathematical notation for representing orientations and rotations of objects in three dimensions. Compared to Euler angles they are simpler to compose and avoid the problem of gimbal lock. Compared to rotation matrices they are more numerically stable and may be more efficient. Quaternions have found their way into applications in computer graphics, computer vision, robotics, navigation, molecular dynamics, flight dynamics,[1] and orbital mechanics of satellites.[2] When used to represent rotation, unit quaternions are also called rotation quaternions. When used to represent an orientation (rotation relative to a reference position), they are called orientation quaternions or attitude quaternions.
Using quaternion rotations According to Euler's rotation theorem, any rotation or sequence of rotations of a rigid body or coordinate system about a fixed point is equivalent to a single rotation by a given angle θ about a fixed axis (called the Euler axis) that runs through the fixed point. The Euler axis is typically represented by a unit vector u→. Therefore, any rotation in three dimensions can be represented as a combination of a vector u→ and a scalar θ. Quaternions give a simple way to encode this axis–angle representation in four numbers, and to apply the corresponding rotation to a position vector representing a point relative to the origin in R3.

A Euclidean vector such as (2, 3, 4) or (ax, ay, az) can be rewritten as 2 i + 3 j + 4 k or ax i + ay j + az k, where i, j, k are unit vectors representing the three Cartesian axes. A rotation through an angle of θ around the axis defined by a unit vector

u→ = (ux, uy, uz) = ux i + uy j + uz k

is represented by a quaternion using an extension of Euler's formula:

q = e^((θ/2)(ux i + uy j + uz k)) = cos(θ/2) + (ux i + uy j + uz k) sin(θ/2)

The rotation is clockwise if our line of sight points in the same direction as u→. It can be shown that this rotation can be applied to an ordinary vector p = (px, py, pz) = px i + py j + pz k in 3-dimensional space, considered as a quaternion with a real coordinate equal to zero, by evaluating the conjugation of p by q:

p′ = q p q−1

using the Hamilton product, where p′ = (px′, py′, pz′) is the new position vector of the point after the rotation. In this instance, q is a unit quaternion and

q−1 = cos(θ/2) − (ux i + uy j + uz k) sin(θ/2)

It follows that conjugation by the product of two quaternions is the composition of conjugations by these quaternions. If p and q are unit quaternions, then rotation (conjugation) by pq is

pq v (pq)−1 = pq v q−1p−1 = p(q v q−1)p−1

which is the same as rotating (conjugating) by q and then by p. The scalar component of the result is necessarily zero. The quaternion inverse of a rotation is the opposite rotation, since q−1(q v q−1)q = v. The square of a quaternion rotation is a rotation by twice the angle around the same axis. More generally, q^n is a rotation by n times the angle around the same axis as q. This can be extended to arbitrary real n, allowing for smooth interpolation between spatial orientations; see Slerp.

Two rotation quaternions can be combined into one equivalent quaternion by the relation:

q′ = q2 q1
in which q′ corresponds to the rotation q1 followed by the rotation q2. (Note that quaternion multiplication is not commutative.) Thus, an arbitrary number of rotations can be composed together and then applied as a single rotation.
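The composition rule and the conjugation p′ = q p q−1 can be sketched in Python (quaternions as (w, x, y, z) tuples; the helper names are illustrative):

```python
# Quaternion composition and vector rotation by conjugation.
import math

def qmul(a, b):
    """Hamilton product a*b (not commutative)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)  # inverse of a unit quaternion

def from_axis_angle(axis, angle):
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate(q, v):
    """Apply p' = q p q^-1 to vector v (as a pure quaternion)."""
    w, x, y, z = qmul(qmul(q, (0.0, *v)), qconj(q))
    return (x, y, z)

# Two 90-degree turns about z compose into one 180-degree turn.
q90 = from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
q180 = qmul(q90, q90)
v = rotate(q180, (1.0, 0.0, 0.0))  # approximately (-1, 0, 0)
```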
Example
The conjugation operation
Conjugating p by q refers to the operation p ↦ q p q−1. Consider the rotation f around the axis v→ = i + j + k, with a rotation angle of 120°, or 2π/3 radians.

[Figure: A rotation of 120° around the first diagonal permutes i, j, and k cyclically.]

The length of v→ is √3, the half angle is π/3 (60°) with cosine 1/2 (cos 60° = 0.5) and sine √3/2 (sin 60° ≈ 0.866). We are therefore dealing with a conjugation by the unit quaternion

q = cos 60° + sin 60° · v→/√3 = (1 + i + j + k)/2

(Note that the one-sided, namely left, multiplication p ↦ q p by this q yields a 60° rotation of quaternions on the unit 3-sphere.)
If f is the rotation function,

f(a i + b j + c k) = q (a i + b j + c k) q−1
It can be proved that the inverse of a unit quaternion is obtained simply by changing the sign of its imaginary components. As a consequence,

q−1 = (1 − i − j − k)/2

and

f(a i + b j + c k) = (1 + i + j + k)/2 · (a i + b j + c k) · (1 − i − j − k)/2

This can be simplified, using the ordinary rules for quaternion arithmetic, to

f(a i + b j + c k) = c i + a j + b k
As expected, the rotation corresponds to keeping a cube held fixed at one point, and rotating it 120° about the long diagonal through the fixed point (observe how the three axes are permuted cyclically).

Quaternion arithmetic in practice
Let us show how we reached the previous result. Developing the expression of f in two stages, and applying the rules i² = j² = k² = i j k = −1:
It gives us:
which is the expected result. As we can see, such computations are relatively long and tedious if done manually; however, in a computer program, this amounts to calling the quaternion multiplication routine twice.
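The same computation can be checked numerically; the sketch below (using exact fractions, and a component-wise Hamilton product) verifies that conjugation by q = (1 + i + j + k)/2 permutes i, j, k cyclically:

```python
# Exact check of the worked 120-degree example.
from fractions import Fraction as F

def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

half = F(1, 2)
q = (half, half, half, half)            # (1 + i + j + k)/2
q_inv = (half, -half, -half, -half)     # its conjugate; |q| = 1

def conjugate_by_q(p):
    return qmul(qmul(q, p), q_inv)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)
# conjugate_by_q sends i -> j, j -> k, k -> i.
```

Computationally this is just two calls to the multiplication routine, as the text notes.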
Quaternion-derived rotation matrix A quaternion rotation can be algebraically manipulated into a quaternion-derived rotation matrix. By simplifying the quaternion multiplications q p q*, they can be rewritten as a rotation matrix given an axis–angle representation:
R =
[ c + ux²(1−c)          ux uy(1−c) − uz s     ux uz(1−c) + uy s ]
[ uy ux(1−c) + uz s     c + uy²(1−c)          uy uz(1−c) − ux s ]
[ uz ux(1−c) − uy s     uz uy(1−c) + ux s     c + uz²(1−c)      ]

where s and c are shorthand for sin θ and cos θ, respectively. Although care should be taken (due to degeneracy as the quaternion approaches the identity quaternion (1) or the sine of the angle approaches zero) the axis and angle can be extracted via:

θ = 2 atan2(√(qi² + qj² + qk²), qr)
(ux, uy, uz) = (qi, qj, qk) / √(qi² + qj² + qk²)
Note that the θ equality holds only when the square root of the sum of the squared imaginary terms takes the same sign as qr. As with other schemes to apply rotations, the centre of rotation must be translated to the origin before the rotation is applied and translated back to its original position afterwards.
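The extraction can be sketched as follows (using atan2 of the vector-part norm against the scalar part, which behaves better near the identity than an arccosine of the scalar part alone; the guard value is an assumption):

```python
# Recover axis and angle from a unit quaternion (w, x, y, z).
import math

def to_axis_angle(q):
    w, x, y, z = q
    n = math.sqrt(x*x + y*y + z*z)
    angle = 2.0 * math.atan2(n, w)
    if n < 1e-12:                      # identity: the axis is arbitrary
        return (1.0, 0.0, 0.0), 0.0
    return (x / n, y / n, z / n), angle

# A 0.6 rad rotation about x: q = (cos 0.3, sin 0.3, 0, 0).
axis, angle = to_axis_angle((math.cos(0.3), math.sin(0.3), 0.0, 0.0))
```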
Explanation
Quaternions briefly
The complex numbers can be defined by introducing an abstract symbol i which satisfies the usual rules of algebra and additionally the rule i² = −1. This is sufficient to reproduce all of the rules of complex number arithmetic: for example,

(a + b i)(c + d i) = ac + ad i + bc i + bd i² = (ac − bd) + (ad + bc) i

In the same way the quaternions can be defined by introducing abstract symbols i, j, k which satisfy the rules i² = j² = k² = i j k = −1 and the usual algebraic rules except the commutative law of multiplication (a familiar example of such a noncommutative multiplication is matrix multiplication). From this all of the rules of quaternion arithmetic follow: for example, one can show that j i = −i j = −k.

The imaginary part b i + c j + d k of a quaternion behaves like a vector v→ = (b, c, d) in three-dimensional vector space, and the real part a behaves like a scalar in R. When quaternions are used in geometry, it is more convenient to define them as a scalar plus a vector:

q = a + b i + c j + d k = a + v→

Those who have studied vectors at school might find it strange to add a number to a vector, as they are objects of very different natures, or to multiply two vectors together, as this operation is usually undefined. However, if one remembers that it is a mere notation for the real and imaginary parts of a quaternion, it becomes more legitimate. In other words, the correct reasoning is the addition of two quaternions, one with zero vector/imaginary part, and another one with zero scalar/real part:

a + v→ = (a, 0→) + (0, v→)

We can express quaternion multiplication in the modern language of vector cross and dot products (which were actually inspired by the quaternions in the first place [citation needed]). In place of the rules i² = j² = k² = i j k = −1 we
have the quaternion multiplication rule:

q1 q2 = (s1 s2 − v1→ · v2→) + (s1 v2→ + s2 v1→ + v1→ × v2→)

where:
• q1 q2 is the resulting quaternion,
• v1→ × v2→ is the vector cross product (a vector),
• v1→ · v2→ is the vector scalar product (a scalar).

Quaternion multiplication is noncommutative (because of the cross product, which anti-commutes), while scalar–scalar and scalar–vector multiplications commute. From these rules it follows immediately that (see details):

(s + v→)(s + v→) = s² − v→ · v→ + 2 s v→

The (left and right) multiplicative inverse or reciprocal of a nonzero quaternion is given by the conjugate-to-norm ratio (see details):

(s + v→)−1 = (s − v→) / (s² + v→ · v→)

as can be verified by direct calculation.
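The scalar-plus-vector multiplication rule can be transcribed directly (an illustrative sketch; the anticommutation i j = k, j i = −k falls straight out of the cross product):

```python
# Quaternions as (scalar, vector) pairs, multiplied via dot and cross.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def qmul_sv(q1, q2):
    """q1 q2 = (s1 s2 - v1.v2, s1 v2 + s2 v1 + v1 x v2)."""
    s1, v1 = q1
    s2, v2 = q2
    s = s1 * s2 - dot(v1, v2)
    c = cross(v1, v2)
    v = tuple(s1 * v2[n] + s2 * v1[n] + c[n] for n in range(3))
    return (s, v)

i = (0, (1, 0, 0))
j = (0, (0, 1, 0))
ij = qmul_sv(i, j)   # k
ji = qmul_sv(j, i)   # -k
```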
Proof of the quaternion rotation identity
Let u→ be a unit vector (the rotation axis) and let

q = cos(α/2) + u→ sin(α/2)

Our goal is to show that

v→′ = q v→ q−1

yields the vector v→ rotated by an angle α around the axis u→. Expanding out, and writing v⊥→ and v∥→ for the components of v→ perpendicular and parallel to u→ respectively, we have

v→′ = v∥→ + v⊥→ cos α + (u→ × v⊥→) sin α

This is the formula of a rotation by α around the u→ axis.
Quaternion rotation operations A very formal explanation of the properties used in this section is given by Altman.[3]
The hypersphere of rotations
Visualizing the space of rotations
Unit quaternions represent the group of Euclidean rotations in three dimensions in a very straightforward way. The correspondence between rotations and quaternions can be understood by first visualizing the space of rotations itself.

In order to visualize the space of rotations, it helps to consider a simpler case. Any rotation in three dimensions can be described by a rotation by some angle about some axis; for our purposes, we will use an axis vector to establish handedness for our angle. Consider the special case in which the axis of rotation lies in the xy plane. We can then specify the axis of one of these rotations by a point on a circle through which the vector crosses, and we can select the radius of the circle to denote the angle of rotation. Similarly, a rotation whose axis of rotation lies in the xy plane can be described as a point on a sphere of fixed radius in three dimensions.

[Figure: Two rotations by different angles and different axes in the space of rotations. The length of the vector is related to the magnitude of the rotation.]

Beginning at the north pole of a sphere in three-dimensional space, we specify the point at the north pole to be the identity rotation (a zero angle rotation). Just as in the case of the identity rotation, no axis of rotation is defined, and the angle of rotation (zero) is irrelevant. A rotation having a very small rotation angle can be specified by a slice through the sphere parallel to the xy plane and very near the north pole. The circle defined by this slice will be very small, corresponding to the small angle of the rotation. As the rotation angles become larger, the slice moves in the negative z direction, and the circles become larger until the equator of the sphere is reached, which will correspond to a rotation angle of 180 degrees.
Continuing southward, the radii of the circles now become smaller (corresponding to the absolute value of the angle of the rotation considered as a negative number). Finally, as the south pole is reached, the circles shrink once more to the identity rotation, which is also specified as the point at the south pole.

Notice that a number of characteristics of such rotations and their representations can be seen by this visualization. The space of rotations is continuous, each rotation has a neighborhood of rotations which are nearly the same, and this neighborhood becomes flat as the neighborhood shrinks. Also, each rotation is actually represented by two antipodal points on the sphere, which are at opposite ends of a line through the center of the sphere. This reflects the fact that each rotation can be represented as a rotation about some axis, or, equivalently, as a negative rotation about an axis pointing in the opposite direction (a so-called double cover). The "latitude" of a circle representing a particular rotation angle will be half of the angle represented by that rotation, since as the point is moved from the north to south pole, the latitude ranges from zero to 180 degrees, while the angle of rotation ranges from 0 to 360 degrees. (The "longitude" of a point then represents a particular axis of rotation.)

Note however that this set of rotations is not closed under composition. Two successive rotations with axes in the xy plane will not necessarily give a rotation whose axis lies in the xy plane, and thus cannot be represented as a point on the sphere. This will not be the case with a general rotation in 3-space, in which rotations do form a closed set under composition.
This visualization can be extended to a general rotation in 3-dimensional space. The identity rotation is a point, and a small angle of rotation about some axis can be represented as a point on a sphere with a small radius. As the angle of rotation grows, the sphere grows, until the angle of rotation reaches 180 degrees, at which point the sphere begins to shrink, becoming a point as the angle approaches 360 degrees (or zero degrees from the negative direction). This set of expanding and contracting spheres represents a hypersphere in four-dimensional space (a 3-sphere).

[Figure: The sphere of rotations for the rotations that have a "horizontal" axis (in the xy plane).]

Just as in the simpler example above, each rotation represented as a point on the hypersphere is matched by its antipodal point on that hypersphere. The "latitude" on the hypersphere will be half of the corresponding angle of rotation, and the neighborhood of any point will become "flatter" (i.e. be represented by a 3-D Euclidean space of points) as the neighborhood shrinks.

This behavior is matched by the set of unit quaternions: a general quaternion represents a point in a four-dimensional space, but constraining it to have unit magnitude yields a three-dimensional space equivalent to the surface of a hypersphere. The magnitude of the unit quaternion will be unity, corresponding to a hypersphere of unit radius. The vector part of a unit quaternion represents the radius of the 2-sphere corresponding to the axis of rotation, and its magnitude is the sine of half the angle of rotation. Each rotation is represented by two unit quaternions of opposite sign, and, as in the space of rotations in three dimensions, the quaternion product of two unit quaternions will yield a unit quaternion. Also, the space of unit quaternions is "flat" in any infinitesimal neighborhood of a given unit quaternion.
Parameterizing the space of rotations
We can parameterize the surface of a sphere with two coordinates, such as latitude and longitude. But latitude and longitude are ill-behaved (degenerate) at the north and south poles, though the poles are not intrinsically different from any other points on the sphere. At the poles (latitudes +90° and −90°), the longitude becomes meaningless. It can be shown that no two-parameter coordinate system can avoid such degeneracy.

We can avoid such problems by embedding the sphere in three-dimensional space and parameterizing it with three Cartesian coordinates (w, x, y), placing the north pole at (w, x, y) = (1, 0, 0), the south pole at (w, x, y) = (−1, 0, 0), and the equator at w = 0, x² + y² = 1. Points on the sphere satisfy the constraint w² + x² + y² = 1, so we still have just two degrees of freedom though there are three coordinates. A point (w, x, y) on the sphere represents a rotation in the ordinary space around the horizontal axis directed by the vector (x, y, 0) by an angle α = 2 cos−1 w = 2 sin−1 √(x² + y²).

In the same way the hyperspherical space of 3D rotations can be parameterized by three angles (Euler angles), but any such parameterization is degenerate at some points on the hypersphere, leading to the problem of gimbal lock. We can avoid this by using four Euclidean coordinates w, x, y, z, with w² + x² + y² + z² = 1. The point (w, x, y, z) represents a rotation around the axis directed by the vector (x, y, z) by an angle α = 2 cos−1 w = 2 sin−1 √(x² + y² + z²).
Explaining quaternions' properties with rotations Non-commutativity The multiplication of quaternions is non-commutative. This fact explains how the p ↦ q p q−1 formula can work at all, having q q−1 = 1 by definition. Since the multiplication of unit quaternions corresponds to the composition of three dimensional rotations, this property can be made intuitive by showing that three dimensional rotations are not commutative in general. Set two books next to each other. Rotate one of them 90 degrees clockwise around the z axis, then flip it 180 degrees around the x axis. Take the other book, flip it 180° around x axis first, and 90° clockwise around z later. The two books do not end up parallel. This shows that, in general, the composition of two different rotations around two distinct spatial axes will not commute.
Orientation The vector cross product, used to define the axis–angle representation, does confer an orientation ("handedness") to space: in a three-dimensional vector space, the three vectors in the equation a × b = c will always form a right-handed set (or a left-handed set, depending on how the cross product is defined), thus fixing an orientation in the vector space. Alternatively, the dependence on orientation is expressed by referring to the u→ that specifies a rotation as an axial vector. In quaternionic formalism the choice of an orientation of the space corresponds to the order of multiplication: ij = k but ji = −k. If one reverses the orientation, then the formula above becomes p ↦ q−1 p q, i.e. a unit q is replaced with the conjugate quaternion, the same behaviour as that of axial vectors.
Comparison with other representations of rotations Advantages of quaternions The representation of a rotation as a quaternion (4 numbers) is more compact than the representation as an orthogonal matrix (9 numbers). Furthermore, for a given axis and angle, one can easily construct the corresponding quaternion, and conversely, for a given quaternion one can easily read off the axis and the angle. Both of these are much harder with matrices or Euler angles. In video games and other applications, one is often interested in “smooth rotations”, meaning that the scene should slowly rotate and not in a single step. This can be accomplished by choosing a curve such as the spherical linear interpolation in the quaternions, with one endpoint being the identity transformation 1 (or some other initial rotation) and the other being the intended final rotation. This is more problematic with other representations of rotations. When composing several rotations on a computer, rounding errors necessarily accumulate. A quaternion that’s slightly off still represents a rotation after being normalised: a matrix that’s slightly off may not be orthogonal anymore and is harder to convert back to a proper orthogonal matrix. Quaternions also avoid a phenomenon called gimbal lock which can result when, for example in pitch/yaw/roll rotational systems, the pitch is rotated 90° up or down, so that yaw and roll then correspond to the same motion, and a degree of freedom of rotation is lost. In a gimbal-based aerospace inertial navigation system, for instance, this could have disastrous results if the aircraft is in a steep dive or ascent.
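Both advantages can be illustrated with a short sketch (illustrative names; this slerp clamps the dot product and flips one endpoint to take the shorter arc):

```python
# Spherical linear interpolation and cheap renormalisation.
import math

def normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def slerp(q0, q1, t):
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(q0, q1))))
    if d < 0.0:                        # take the short way around
        q1, d = tuple(-c for c in q1), -d
    theta = math.acos(d)
    if theta < 1e-9:                   # endpoints nearly coincide
        return q0
    a = math.sin((1 - t) * theta) / math.sin(theta)
    b = math.sin(t * theta) / math.sin(theta)
    return tuple(a * p + b * q for p, q in zip(q0, q1))

identity = (1.0, 0.0, 0.0, 0.0)
quarter = (math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4))  # 90 deg / z
halfway = slerp(identity, quarter, 0.5)  # a 45-degree rotation about z

# A quaternion drifted by rounding error is fixed by one normalisation.
renorm = normalize(tuple(1.001 * c for c in quarter))
```

The comparable fix for a drifted rotation matrix requires re-orthogonalising nine entries.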
Quaternions and spatial rotation
Conversion to and from the matrix representation

From a quaternion to an orthogonal matrix

The orthogonal matrix corresponding to a rotation by the unit quaternion z = a + b i + c j + d k (with |z| = 1), when post-multiplying a column vector, is given by

    ⎡ a² + b² − c² − d²    2(bc − ad)            2(bd + ac)         ⎤
    ⎢ 2(bc + ad)           a² − b² + c² − d²     2(cd − ab)         ⎥
    ⎣ 2(bd − ac)           2(cd + ab)            a² − b² − c² + d²  ⎦
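This conversion can be sketched directly; the function name and the list-of-lists matrix layout below are illustrative choices, not from any particular library.

```python
def quat_to_matrix(z):
    """Rotation matrix for the unit quaternion z = a + b i + c j + d k,
    laid out for post-multiplication with a column vector."""
    a, b, c, d = z
    return [
        [a*a + b*b - c*c - d*d, 2*(b*c - a*d),         2*(b*d + a*c)],
        [2*(b*c + a*d),         a*a - b*b + c*c - d*d, 2*(c*d - a*b)],
        [2*(b*d - a*c),         2*(c*d + a*b),         a*a - b*b - c*c + d*d],
    ]
```

For example, the quaternion (cos 45°, 0, 0, sin 45°), a 90° rotation about the z axis, produces a matrix that sends (1, 0, 0) to (0, 1, 0).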
From an orthogonal matrix to a quaternion

One must be careful when converting a rotation matrix to a quaternion, as several straightforward methods tend to be unstable when the trace (sum of the diagonal elements) of the rotation matrix is zero or very small. For a stable method of converting an orthogonal matrix to a quaternion, see Rotation matrix #Quaternion.

Fitting quaternions

The above section described how to recover a quaternion q from a 3×3 rotation matrix Q. Suppose, however, that we have some matrix Q that is not a pure rotation (due to round-off errors, for example) and we wish to find the quaternion q that most accurately represents Q. In that case we construct a symmetric 4×4 matrix
and find the eigenvector (x, y, z, w) corresponding to the largest eigenvalue (that value will be 1 if and only if Q is a pure rotation). The quaternion so obtained will correspond to the rotation closest to the original matrix Q.
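One widely used stable matrix-to-quaternion conversion branches on the largest diagonal element so that the square-root argument stays well away from zero. The sketch below is one such approach under the same z = a + b i + c j + d k convention as above; it is an illustration, not necessarily the exact method cited in the text.

```python
import math

def matrix_to_quat(m):
    """Recover a unit quaternion (a, b, c, d) from a 3x3 rotation matrix,
    branching on the trace / largest diagonal term for numerical stability."""
    t = m[0][0] + m[1][1] + m[2][2]
    if t > 0:
        s = math.sqrt(t + 1.0) * 2                               # s = 4a
        a, b = 0.25 * s, (m[2][1] - m[1][2]) / s
        c, d = (m[0][2] - m[2][0]) / s, (m[1][0] - m[0][1]) / s
    elif m[0][0] >= m[1][1] and m[0][0] >= m[2][2]:
        s = math.sqrt(1.0 + m[0][0] - m[1][1] - m[2][2]) * 2     # s = 4b
        a, b = (m[2][1] - m[1][2]) / s, 0.25 * s
        c, d = (m[0][1] + m[1][0]) / s, (m[0][2] + m[2][0]) / s
    elif m[1][1] >= m[2][2]:
        s = math.sqrt(1.0 + m[1][1] - m[0][0] - m[2][2]) * 2     # s = 4c
        a, b = (m[0][2] - m[2][0]) / s, (m[0][1] + m[1][0]) / s
        c, d = 0.25 * s, (m[1][2] + m[2][1]) / s
    else:
        s = math.sqrt(1.0 + m[2][2] - m[0][0] - m[1][1]) * 2     # s = 4d
        a, b = (m[1][0] - m[0][1]) / s, (m[0][2] + m[2][0]) / s
        c, d = (m[1][2] + m[2][1]) / s, 0.25 * s
    return (a, b, c, d)
```

The result is unique only up to sign, since q and −q represent the same rotation.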
Performance comparisons

This section discusses the performance implications of using quaternions versus other methods (axis/angle or rotation matrices) to perform rotations in 3D.

Results
Storage requirements

Method           Storage
Rotation matrix  9
Quaternion       4
Angle/axis       3*
* Note: angle/axis can be stored as 3 elements by multiplying the unit rotation axis by half of the rotation angle, forming the logarithm of the quaternion, at the cost of additional calculations.
Performance comparison of rotation chaining operations

Method             # multiplies   # add/subtracts   total operations
Rotation matrices  27             18                45
Quaternions        16             12                28
Performance comparison of vector rotating operations

Method           # multiplies   # add/subtracts   # sin/cos   total operations
Rotation matrix  9              6                 0           15
Quaternions      15             15                0           30
Angle/axis       23             16                2           41
Used methods

There are three basic approaches to rotating a vector v→:

1. Compute the matrix product of a 3 × 3 rotation matrix R and the original 3 × 1 column matrix representing v→. This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, making it the most efficient method for rotating a vector.

2. A rotation can be represented by a unit-length quaternion q = (w, r→) with scalar (real) part w and vector (imaginary) part r→. The rotation can be applied to a 3D vector v→ via the formula v→′ = v→ + 2 r→ × (r→ × v→ + w v→). This requires only 15 multiplications and 15 additions to evaluate (or 18 multiplications and 12 additions if the factor of 2 is done via multiplication). It yields the same result as the less efficient but more compact formula v→′ = q v→ q−1.

3. Use the angle/axis formula to convert an angle/axis to a rotation matrix R, then multiply with the vector. Converting the angle/axis to R using common subexpression elimination costs 14 multiplications, 2 function calls (sin, cos), and 10 add/subtracts; from item 1, rotating using R adds an additional 9 multiplications and 6 additions, for a total of 23 multiplications, 16 add/subtracts, and 2 function calls (sin, cos).
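Method 2 can be written out directly with its operation counts annotated. The sketch below assumes a (w, x, y, z) quaternion layout and doubles by addition, matching the 15-multiply / 15-add count above; names are illustrative.

```python
def cross(a, b):
    """Cross product of two 3-vectors: 6 multiplies, 3 subtractions."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, r) via
    v' = v + 2 r x (r x v + w v)."""
    w, r = q[0], q[1:]
    wv = (w*v[0], w*v[1], w*v[2])                        # 3 mul
    rxv = cross(r, v)                                    # 6 mul, 3 add/sub
    s = (rxv[0]+wv[0], rxv[1]+wv[1], rxv[2]+wv[2])       # 3 add
    t = cross(r, s)                                      # 6 mul, 3 add/sub
    t2 = (t[0]+t[0], t[1]+t[1], t[2]+t[2])               # doubling by addition: 3 add
    return (v[0]+t2[0], v[1]+t2[1], v[2]+t2[2])          # 3 add
```

Totals: 3 + 6 + 6 = 15 multiplies and 3 + 3 + 3 + 3 + 3 = 15 add/subtracts, as in the table above.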
Pairs of unit quaternions as rotations in 4D space

A pair of unit quaternions zl and zr can represent any rotation in 4D space. Given a four-dimensional vector v→, treated as a quaternion, we can rotate it as v→′ = zl v→ zr:
It is straightforward to check for each matrix that M MT = I, that is, that each matrix (and hence both matrices together) represents a rotation. Note that since quaternion multiplication is associative, (zl v) zr = zl (v zr), the two matrices must commute. Therefore, there are two commuting subgroups of the set of four-dimensional rotations. Arbitrary four-dimensional rotations have 6 degrees of freedom; each matrix represents 3 of those 6 degrees of freedom. Since an infinitesimal four-dimensional rotation can be represented by a pair of quaternions (as follows), all (non-infinitesimal) four-dimensional rotations can also be represented.
External links and resources
• Shoemake, Ken. Quaternions (http://www.cs.caltech.edu/courses/cs171/quatut.pdf)
• Simple Quaternion type and operations in over thirty computer languages (http://rosettacode.org/wiki/Simple_Quaternion_type_and_operations) on Rosetta Code
• Hart, Francis, Kauffman. Quaternion demo (http://graphics.stanford.edu/courses/cs348c-95-fall/software/quatdemo/)
• Dam, Koch, Lillholm. Quaternions, Interpolation and Animation (http://www.diku.dk/publikationer/tekniske.rapporter/1998/98-5.ps.gz)
• Vicci, Leandra. Quaternions and Rotations in 3-Space: The Algebra and its Geometric Interpretation (http://www.cs.unc.edu/techreports/01-014.pdf)
• Howell, Thomas and Lafon, Jean-Claude. The Complexity of the Quaternion Product, TR75-245, Cornell University, 1975 (http://world.std.com/~sweetser/quaternions/ps/cornellcstr75-245.pdf)
• Berthold K. P. Horn. Some Notes on Unit Quaternions and Rotation (http://people.csail.mit.edu/bkph/articles/Quaternions.pdf)
Andreas Raab
Dr. Andreas Raab
Born: November 24, 1968, Rostock, East Germany
Died: January 14, 2013, Berlin, Germany
Citizenship: German
Fields: Computer science
Institutions: Walt Disney Imagineering; Viewpoints Research Institute; HP Labs; 3D ICC; SAP Innovation Center
Alma mater: University of Magdeburg
Known for: Squeak, Croquet Project, OpenQwaq, Tweak programming environment, Etoys
Spouse: Kathleen Raab
Children: Theodor Andreas Raab (born August 7, 2013)
Andreas Raab (November 24, 1968 — January 14, 2013) was a German computer scientist who developed new concepts and applications in 3D graphics. Raab was a key contributor to the Squeak platform and the Croquet virtual world project. He was an early and longstanding member of the Squeak Central team headed by Alan Kay, and later an elected member of the Squeak Oversight Board. He authored the initial Windows port of the Squeak virtual machine, and created the Tweak programming environment used in virtual world applications.
Background and education

Raab attended the University of Magdeburg (Germany), graduating in 1994 with a Diplom-Informatiker degree (equivalent to an MSc in Computer Science) and receiving a PhD in Computer Science in 1998.[1][2]
Accomplishments

Andreas Raab was a key contributor and participant in the Squeak community, and the largest contributor to its code base. Colleagues consider him to have been a brilliant and artistic coder, known for his solid design and lack of bugs. He ported the Squeak virtual machine to Windows while he was a Ph.D. student at Magdeburg University in 1997. The Squeak Central team at Walt Disney Imagineering, led by Alan Kay, was very much impressed with his talent; when Raab graduated, Kay hired him and brought him to California, where it didn't take long for him to become a productive member of the core team.

In 2001, it became clear that the Etoys architecture in Squeak had reached the limits of its Morphic interface infrastructure. Andreas Raab proposed defining a "script process" and providing a default scheduling mechanism that avoids several more general problems. The result was a new user interface, Tweak, proposed to replace the Squeak Morphic user interface in the future. Tweak provides mechanisms of islands, asynchronous messaging, players and costumes, language extensions, projects, and tile scripting.[3] Its underlying object system is class-based, but to users (during programming) it acts as if it were prototype-based. Tweak objects are created and run in Tweak project windows. Tweak was used extensively in version 1.0 of the Sophie project under the direction of Robert Stein.

At Alan Kay's Viewpoints Research Institute, Kay and Raab worked with David P. Reed and David A. Smith, implementing the concepts of David Reed's Ph.D. thesis by creating the first working model of Croquet. In 2007, Smith and Raab started Qwaq, an immersive collaboration company, which further developed the Croquet prototype for business applications, such as simulations for the United States Department of Defense. Qwaq was later renamed Teleplace and then became 3D Immersive Collaboration Consulting.
In 2009, Raab proposed and implemented a special event-driven version of the Squeak VM which does not contain an event loop, but instead acts as a handler for an externally provided queue of events, returning to the caller once all the events have been processed. This modification makes the VM very convenient to embed in another runtime (e.g. that of another language); Squeak on Android, which embeds it in the Java/Dalvik VM, is one example of such embedding.
Articles

• "Coherent Zooming of Illustrations with 3D-Graphics and Text" – Proceedings of the conference on Graphics Interface '97 [4]
• "User-centred design in the development of a navigational aid for blind travellers" – '97 Proceedings of the IFIP TC13 International Conference on Human-Computer Interaction, pages 220–227 [5]
• "TinLizzieWysiWiki and WikiPhone: Alternative approaches to asynchronous and synchronous collaboration on the Web" [6]
• "Croquet: a menagerie of new user interfaces" [7]
• "Filters and tasks in Croquet" [8]
• "Wouldn't you like to have your own studio in Croquet?" [9]
• "The media messenger" [10] – about a new messaging system which sends media (video, presentations, animations, audio, interactive games, 3D spaces) to other users on the Internet
• "Scalability of Collaborative Environments" [11]
External links • Andreas' Blog Squeaking Along [12]
References
[1] Press release about his dissertation (German): http://idw-online.de/de/news6763
[2] (archived at Squeak Wiki)
[3] Tweak: Whitepapers (http://web.archive.org/web/20070323064400/http://tweakproject.org/TECHNOLOGY/Whitepapers/)
[4] http://www.vismd.de/lib/exe/fetch.php?media=files:hci:preim_1997_gi.pdf
[5] http://dl.acm.org/citation.cfm?id=647403.723524&coll=DL&dl=GUIDE&CFID=256678779&CFTOKEN=38905774
[6] http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4144932&url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D4144932
[7] http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9189
[8] http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9721
[9] http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4019389&contentType=Conference+Publications&searchWithin%3Dp_Authors%3A.QT.Raab%2C+A..QT.
[10] http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1419794&url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D1419794
[11] http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4019391&contentType=Conference+Publications&searchWithin%3Dp_Authors%3A.QT.Raab%2C+A..QT.
[12] http://squeakingalong.wordpress.com/
RealityEngine

RealityEngine refers to a 3D graphics hardware architecture, and to a family of graphics systems implementing that architecture, developed and manufactured by Silicon Graphics during the early to mid 1990s. The RealityEngine was positioned as Silicon Graphics' high-end visualization hardware for their MIPS/IRIX platform and was used exclusively in their Crimson and Onyx families of visualization systems, which are sometimes referred to as "graphics supercomputers" or "visualization supercomputers". The RealityEngine was marketed to and used by large organizations, such as companies and universities, involved in computer simulation, digital content creation, engineering and research. It was succeeded by the InfiniteReality in early 1996, but coexisted with it for a time as an entry-level option for older systems.
Geometry Engine board.
RealityEngine

The RealityEngine was a board set comprising a Geometry Engine board, one to four Raster Memory boards, and a DG2 Display Generator board. These boards plugged into a midplane in the host system. The Geometry Engine was based on the 50 MHz Intel i860XP.
VTX

The VTX was a cost-reduced RealityEngine; as a consequence, its features and performance were below those of the RealityEngine. It should not be mistaken for the VGX or VGXT board sets.
RealityEngine2

The RealityEngine2 is an upgraded RealityEngine with twelve instead of eight Geometry Engines, introduced towards the end of the RealityEngine's life. It was succeeded by the InfiniteReality in early 1996.

Raster Memory board.
It uses the GE10 Geometry Engine board, RM4 Raster Memory board and DG2 Display Generator board.
Reflection (computer graphics)
Reflection in computer graphics is used to emulate reflective objects like mirrors and shiny surfaces. Reflection is accomplished in a ray-trace renderer by following a ray from the eye to the mirror, calculating where it bounces, and continuing the process until no surface, or a non-reflective surface, is found. Reflection on a shiny surface like wood or tile can add to the photorealistic effects of a 3D rendering.

[Image: Ray-traced model demonstrating specular reflection.]

• Polished – A polished reflection is an undisturbed reflection, like a mirror or chrome.
• Blurry – A blurry reflection means that tiny random bumps on the surface of the material cause the reflection to be blurry.
• Metallic – A reflection is metallic if the highlights and reflections retain the color of the reflective object.
• Glossy – This term can be misused. Sometimes it is a setting which is the opposite of blurry (when "Glossiness" has a low value, the reflection is blurry). However, some people use the term "glossy reflection" as a synonym for "blurred reflection"; glossy used in this context means that the reflection is actually blurred.
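Each bounce in the process described above uses the standard mirror-reflection formula r = d − 2 (d · n) n, where d is the incoming direction and n the unit surface normal. A minimal sketch (vectors as tuples, function name illustrative):

```python
def reflect(d, n):
    """Mirror incoming direction d about unit surface normal n:
    r = d - 2 (d . n) n."""
    k = 2.0 * (d[0]*n[0] + d[1]*n[1] + d[2]*n[2])
    return (d[0] - k*n[0], d[1] - k*n[1], d[2] - k*n[2])
```

A ray tracer applies this recursively at each reflective hit, typically stopping at a non-reflective surface or a fixed maximum depth to bound the work.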
Examples

Polished or Mirror reflection

Mirrors are usually almost 100% reflective.
Mirror on wall rendered with 100% reflection.
Metallic Reflection

Normal (nonmetallic) objects reflect light and colors in the original color of the object being reflected. Metallic objects reflect lights and colors altered by the color of the metallic object itself.
The large sphere on the left is blue with its reflection marked as metallic. The large sphere on the right is the same color but does not have the metallic property selected.
Blurry Reflection

Many materials are imperfect reflectors, where the reflections are blurred to various degrees due to surface roughness that scatters the rays of the reflections.
The large sphere on the left has sharpness set to 100%. The sphere on the right has sharpness set to 50% which creates a blurry reflection.
Glossy Reflection

Fully glossy reflection shows highlights from light sources, but does not show a clear reflection from objects.
The sphere on the left has normal, metallic reflection. The sphere on the right has the same parameters, except that the reflection is marked as "glossy".
Relief mapping (computer graphics)

In computer graphics, relief mapping is a texture mapping technique used to render the surface details of three-dimensional objects accurately and efficiently. It can produce accurate depictions of self-occlusion, self-shadowing, and parallax. It is a form of short-distance ray tracing done in a pixel shader.[citation needed] Relief mapping is highly comparable in both function and approach to another displacement texture mapping technique, parallax occlusion mapping, as both rely on ray traces; the two should not be confused with each other, however, since parallax occlusion mapping uses reverse heightmap tracing.
External links
• Manuel's Relief texture mapping (http://www.inf.ufrgs.br/~oliveira/RTM.html)
Retained mode
In computing, retained mode rendering is a style for application programming interfaces of graphics libraries, in which the libraries retain a complete model of the objects to be rendered.[1]
Overview

By using a "retained mode" approach, client calls do not directly cause actual rendering; instead they update an internal model (typically a list of objects) which is maintained within the library's data space. This allows the library to optimize when actual rendering takes place, along with the processing of related objects. Some techniques to optimize rendering include:[citation needed]
• managing double buffering
• performing occlusion culling
• transferring only data that has changed from one frame to the next from the application to the library

Immediate mode is an alternative approach; the two styles can coexist in the same library and are not necessarily exclusionary in practice. For example, OpenGL has immediate mode functions that can use previously defined server-side objects (textures, vertex and index buffers, shaders, etc.) without resending unchanged data.[citation needed]
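The retained-mode idea above can be illustrated with a toy model, not any real library's API: the client mutates a scene description, and the library tracks which objects changed so that only those need reprocessing when a frame is actually rendered. Class and method names here are invented for the example.

```python
class RetainedScene:
    """Toy retained-mode library: clients update a scene model; the
    library decides what to (re)process at render time."""

    def __init__(self):
        self.objects = {}        # object id -> description
        self.dirty = set()       # ids changed since the last frame

    def upsert(self, obj_id, description):
        """Client call: no drawing happens here, only a model update."""
        self.objects[obj_id] = description
        self.dirty.add(obj_id)

    def render_frame(self):
        """Library-side: reprocess only the objects that changed."""
        drawn = sorted(self.dirty)
        self.dirty.clear()
        return drawn
```

In an immediate-mode API the client would instead issue draw calls itself every frame; here the library owns the model and can skip work for unchanged objects.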
References
[1] Retained Mode Versus Immediate Mode (http://msdn.microsoft.com/en-us/library/windows/desktop/ff684178(v=vs.85).aspx)
Scene description language

A scene description language is any description language used to describe a scene to be rendered by a 3D renderer such as a ray tracer. The scene is written in a text editor (which may include syntax highlighting) rather than modeled graphically. A scene description language may include variables, constants, conditional statements, while loops and for loops. 3DMLW and X3D are XML-based scene description languages. The Tao Presentations application uses XL as a dynamic document description language.
Example

POV-Ray

#declare the_angle = 0;

#while (the_angle < 360)
  box {
    <-0.5, -0.5, -0.5>, <0.5, 0.5, 0.5>
    texture {
      pigment { color Red }
      finish { specular 0.6 }
      normal { agate 0.25 scale 1/2 }
    }
    rotate the_angle
  }
  #declare the_angle = the_angle + 45;
#end
3DMLW
X3D
Tao Presentations
Tao Presentations real-time 3D rendering of a scene described using its document description language
clear_color 0, 0, 0, 1
light 0
light_position 1000, 1000, 1000
rotatey 0.05 * mouse_x
text_box 0, 0, 800, 600,
    extrude_depth 25
    extrude_radius 5
    align_center
    vertical_align_center
    font "Arial", 300
    color "white"
    text "3D"
    line_break
    font_size 80
    text zero hours & ":" & zero minutes & ":" & zero seconds
draw_sphere N ->
    locally
        color_hsv 20 * N, 0.3, 1
        translate 300*cos(N*0.1+time), 300*sin(N*0.17+time), 500*sin(N*0.23+time)
        sphere 50
zero N -> if N < 10 then "0" & text N else text N
Schlick's approximation

In 3D computer graphics, Schlick's approximation is a formula for approximating the contribution of the Fresnel term in the specular reflection of light from a non-conducting interface (surface) between two media. According to Schlick's model, the specular reflection coefficient R can be approximated by:

    R(θ) = R0 + (1 − R0)(1 − cos θ)^5,    where    R0 = ((n1 − n2) / (n1 + n2))^2

Here θ is the angle between the viewing direction and the half-angle direction, which is halfway between the incident light direction and the viewing direction, hence cos θ = (H · V). n1 and n2 are the indices of refraction of the two media at the interface, and R0 is the reflection coefficient for light incoming parallel to the normal (i.e., the value of the Fresnel term when θ = 0, or minimal reflection). In computer graphics, one of the interfaces is usually air, meaning that n1 very well can be approximated as 1.
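The approximation is a one-liner to implement. In this sketch the refractive indices default to air (n1 = 1) and a glass-like material (n2 = 1.5); these defaults and the function name are illustrative choices.

```python
def schlick(cos_theta, n1=1.0, n2=1.5):
    """Schlick's approximation of the Fresnel reflection coefficient:
    R = R0 + (1 - R0) * (1 - cos(theta))^5, R0 = ((n1-n2)/(n1+n2))^2."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5
```

Head-on (cos θ = 1) this returns the base reflectance R0 (0.04 for air/glass), and at grazing incidence (cos θ = 0) it rises to 1, total reflection, which is the qualitative behaviour of the full Fresnel equations.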
References
• Schlick, C. (1994). "An Inexpensive BRDF Model for Physically-based Rendering". Computer Graphics Forum 13 (3): 233. doi:10.1111/1467-8659.1330233 (http://dx.doi.org/10.1111%2F1467-8659.1330233)
Sculpted prim
A sculpted prim(itive) (or sculpty, sculptie, or just sculpt) is a Second Life 3D parametric object whose 3D shape is determined by a texture. These textures are UV maps that form the rendered 3D sculpted prim. Sculpted prims can be used to create more complex, organic shapes that are not possible with Second Life's primitive system.
Technical details
Sculpted fruit created for Second Life
A sculpty is a standard RGB texture where the R (red), G (green) and B (blue) channels are mapped onto X, Y, and Z space. Sculpt textures are similar to normal maps, but instead of encoding surface normals they encode surface positions. They are also similar to displacement maps, but instead of a single scalar distance there are three values, one each for the X, Y, and Z coordinates. Sculpt textures are also very similar to parametric (e.g. NURBS) surfaces. See the Sculpted Prims: Under the Hood [1] article for details. The minimum recognized size is 8 × 8 and the maximum is 128 × 128; larger textures can be uploaded, but will be treated as images and compressed. The UV map is embedded into the sculpty on creation. When uploaded to the Second Life asset server and rezzed (rendered), it forms the shape imposed on it. However, the actual texturing must be done in a separate file, whose map is based on the conceived shape. For example, a liquor bottle would have its label on its forward face; however, there is only one face on a sculpty, as it is either a sphere, torus, cone, or cylinder (a cube also exists, but the texture covers all the perceived faces, so the rule applies). For the liquor label, it would be located in a specific section of the lower left quadrant.
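The channel-to-coordinate mapping can be illustrated as follows. The [-0.5, 0.5] output range and the scale factor here are assumptions made for the example, not Second Life's exact encoding or units.

```python
def decode_sculpt_texel(r, g, b, scale=1.0):
    """Map one RGB texel (0-255 per channel) of a sculpt texture to an
    XYZ position centred on the prim's origin. Range and scale are
    illustrative, not Second Life's exact encoding."""
    return tuple(scale * (channel / 255.0 - 0.5) for channel in (r, g, b))
```

Decoding every texel of the (at most 128 × 128) sculpt texture this way yields the grid of surface positions that the viewer stitches into the rendered mesh.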
External links

Free sculpted prim creation software
• Blender with the Domino Designs scripts [2]
• Wings 3D [3] with the sculptie plugin [4]
• InWorld Sculptor Tool Kit [5] – no longer available; now sold as Sculpt Studio for L$4999
• Sculpted Prims: 3D Software Guide [6]
Commercial sculpted prim creation software
• Aartform Curvy 3D [7]
• Autodesk Maya [8]
• Autodesk Media and Entertainment's 3ds Max [9]
• Hexagon by Daz3d [10]
• Inivis AC3D [11]
• Moment of Inspiration (MoI) (save as a 3dm (opennurbs) file and use 3dm2sculpt [12] to convert to a sculpty)
• NewTek LightWave 3D [13]
• Pixel Lab SculptyPaint [14]
• Pixologic zBrush [15]
• ROKURO (lathe) – Sculpted Prim Maker [16] – a simple NURB curve maker and revolver (a "lathed" object is one that is perfectly symmetrical around the center axis)
• Sculptie-O-matic [17] – transforms in-world linksets into sculpties directly
• Strata 3D [18]
• TheBlack Box – Sculpt Studio [19]
• TATARA – Sculpted Prim Previewer and Editor [20] – Kanae Project's top of the line; sculpted prims can be edited in five modes: ROKURO/TOKOROTEN/MAGE/WAPPA/TSUCHI (contains the ROKURO tool (above) and the TOKOROTEN tool (below)) – ~L$5000
• TOKOROTEN (extruder) [21] – makes a sculpted prim texture TGA file of pushed-out/extruded objects (cookie-cutter style shapes). As of mid-2008 the maker of ROKURO ceased making either available for free; it is now L$2500 (~$8 USD).
External links
• Second Life Wiki: Sculpted Prims [22]
• Second Life Wiki: Sculpted Prims: FAQ [23]
• Second Life Wiki: Talk:Sculpted Prims [24]
• How to Make Sculpted Prims with Blender [25]
• Video tutorials about sculpted prims with Blender [26]
• Blender and Second Life [27]
• How to Make Sculpted Prims from Existing 3D Models with AC3D [28]
References
[1] http://wiki.secondlife.com/wiki/Sculpted_Prim_Explanation
[2] http://dominodesigns.info/second_life/blender_scripts.html
[3] http://www.wings3d.com/
[4] http://pkpounceworks.sljoint.com/index.php?option=com_remository&Itemid=28&func=fileinfo&id=119
[5] http://www.slexchange.com/modules.php?name=Marketplace&file=item&ItemID=266428
[6] http://wiki.secondlife.com/wiki/Sculpted_Prims:_3d_Software_Guide
[7] http://www.curvy3d.com/
[8] http://www.alias.com/
[9] http://www.autodesk.com/3dsmax
[10] http://artzone.daz3d.com/wiki/doku.php/pub/software/hexagon/start/
[11] http://www.inivis.com/
[12] http://wiki.secondlife.com/wiki/3dm2sculpt
[13] http://www.newtek.com/lightwave/
[14] http://www.xs4all.nl/~elout/sculptpaint/
[15] http://www.pixologic.com/
[16] http://www.kanae.net/secondlife/
[17] http://slurl.com/secondlife/Sri%20Syadasti/21/85/37
[18] http://www.strata.com/
[19] http://www.slexchange.com/modules.php?name=Marketplace&file=item&ItemID=278458
[20] http://kanae.net/secondlife/tatara.html
[21] http://kanae.net/secondlife/tokoroten.html
[22] http://wiki.secondlife.com/wiki/Sculpted_Prims
[23] http://wiki.secondlife.com/wiki/Sculpted_Prims:_FAQ
[24] http://wiki.secondlife.com/wiki/Talk:Sculpted_Prims
[25] http://amandalevitsky.googlepages.com/sculptedprims
[26] http://blog.machinimatrix.org/video-tutorials
[27] http://www.blendernation.com/2007/05/21/blender-and-second-life/
[28] http://independentdeveloper.com/archive/2007/09/27/sculpted_prims_from_existing_3
Silhouette edge
In computer graphics, a silhouette edge on a 3D body projected onto a 2D (display) plane is the collection of points whose outward surface normal is perpendicular to the view vector. Due to discontinuities in the surface normal, a silhouette edge is also an edge which separates a front-facing face from a back-facing face. Without loss of generality, this edge is usually chosen to be the closest one on a face, so that in a parallel view this edge corresponds to the same one in a perspective view. Hence, if there is an edge between a front-facing face and a side-facing face, and another edge between a side-facing face and a back-facing face, the closer one is chosen. An easy example is looking at a cube in the direction where the face normal is collinear with the view vector.

The first type of silhouette edge is sometimes troublesome to handle because it does not necessarily correspond to a physical edge in the CAD model. This can be an issue because a programmer might corrupt the original model by introducing the new silhouette edge into the problem. Also, given that the edge strongly depends upon the orientation of the model and view vector, this can introduce numerical instabilities into the algorithm (such as when a trick like dilution of precision is considered).
Computation

To determine the silhouette edges of an object, we first have to know the plane equation of all faces. Then, by examining the sign of the point-to-plane distance from the light source to each face, we can determine whether the face is front- or back-facing. The silhouette edges consist of all edges separating a front-facing face from a back-facing face.
Similar Technique

A convenient and practical implementation of front/back facing detection is to use the unit normal of the plane (which is commonly precomputed for lighting effects anyway), then simply apply the dot product of the light position with the plane's unit normal and add the D component of the plane equation (a scalar value):

    indicator = (N · L) + plane_D

where plane_D is easily calculated from a point P₀ on the plane and the unit normal N: with the convention N · X + D = 0, plane_D = −(N · P₀).

Note: The homogeneous coordinates, L_w and d, are not always needed for this computation. After doing this calculation, you may notice that indicator is actually the signed distance from the plane to the light position. This distance is negative if the light is behind the face, and positive if it is in front of the face.
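Under the N · X + D = 0 plane convention, the indicator computation sketches out as follows (function names are illustrative):

```python
def plane_d(normal, point_on_plane):
    """D component of the plane equation N . X + D = 0,
    from the unit normal and any point on the plane."""
    return -(normal[0]*point_on_plane[0]
             + normal[1]*point_on_plane[1]
             + normal[2]*point_on_plane[2])

def facing_indicator(normal, d, light_pos):
    """Signed distance from the plane to the light position:
    positive -> front-facing, negative -> back-facing."""
    return (normal[0]*light_pos[0] + normal[1]*light_pos[1]
            + normal[2]*light_pos[2] + d)
```

Classifying every face this way, the silhouette edges are exactly the edges shared by one face with a positive indicator and one with a negative indicator.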
This is also the technique used in the 2002 SIGGRAPH paper, "Practical and Robust Stenciled Shadow Volumes for Hardware-Accelerated Rendering"
External links • http://wheger.tripod.com/vhl/vhl.htm
Skeletal animation

Skeletal animation is a technique in computer animation in which a character is represented in two parts: a surface representation used to draw the character (called the skin or mesh) and a hierarchical set of interconnected bones (called the skeleton or rig) used to animate (pose and keyframe) the mesh. While this technique is often used to animate humans, or more generally for organic modeling, it only serves to make the animation process more intuitive, and the same technique can be used to control the deformation of any object: a door, a spoon, a building, or a galaxy. The technique is used in virtually all animation systems, where simplified user interfaces allow animators to control often complex algorithms and huge amounts of geometry, most notably through inverse kinematics and other "goal-oriented" techniques. In principle, however, the intention of the technique is never to imitate real anatomy or physical processes, but only to control the deformation of the mesh data.
'Bones' (in green) used to pose a hand. In practice, the 'bones' themselves are often hidden and replaced by more user-friendly objects. In this example from the open source project Sintel, these 'handles' (in blue) have been scaled down to bend the fingers. The bones are still controlling the deformation, but the animator only sees the 'handles'.
Technique "Rigging is making our characters able to move. The process of rigging is we take that digital sculpture, and we start building the skeleton, the muscles, and we attach the skin to the character, and we also create a set of animation controls, which our animators use to push and pull the body around." — Frank Hanner, character CG supervisor of the Walt Disney Animation Studios, provided a basic understanding on the technique of character rigging.
This technique is used by constructing a series of 'bones,' sometimes referred to as rigging. Each bone has a three dimensional transformation (which includes its position, scale and orientation), and an optional parent bone. The bones therefore form a hierarchy. The full transform of a child node is the product of its parent transform and its own transform. So moving a thigh-bone will move the lower leg too. As the character is animated, the bones change their transformation over time, under the influence of some animation controller. A rig is generally composed of both forward kinematics and inverse kinematics parts that may interact with each other. Skeletal animation is referring to the forward kinematics part of the rig, where a complete set of bones configurations identifies a unique pose. Each bone in the skeleton is associated with some portion of the character's visual representation. Skinning is the process of creating this association. In the most common case of a polygonal mesh character, the bone is associated with a group of vertices; for example, in a model of a human being, the 'thigh' bone would be associated with the vertices making up the polygons in the model's thigh. Portions of the character's skin can normally be associated with multiple bones, each one having a scaling factors called vertex weights, or blend weights. The movement of skin near the joints of two bones, can therefore be influenced by both bones. In most state-of-the-art graphical engines, the skinning process is done on the GPU thanks to a shader program. For a polygonal mesh, each vertex can have a blend weight for each bone. To calculate the final position of the vertex, each bone transformation is applied to the vertex position, scaled by its corresponding weight. This algorithm is called matrix palette skinning, because the set of bone transformations (stored as transform matrices) form a
Skeletal animation palette for the skin vertex to choose from.
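The matrix palette algorithm just described can be sketched in a few lines of Python with NumPy. This is an illustrative sketch, not any particular engine's API; the function name, the 4x4 homogeneous matrices, and the parallel index/weight lists are assumptions of the example.

```python
import numpy as np

def skin_vertex(position, bone_matrices, bone_indices, weights):
    """Linear blend ("matrix palette") skinning for one vertex.

    position      -- rest-pose vertex position, shape (3,)
    bone_matrices -- palette of 4x4 bone transforms, shape (n, 4, 4)
    bone_indices  -- indices into the palette for the influencing bones
    weights       -- blend weights for those bones (should sum to 1)
    """
    p = np.append(position, 1.0)           # homogeneous coordinates
    out = np.zeros(4)
    for i, w in zip(bone_indices, weights):
        out += w * (bone_matrices[i] @ p)  # each bone transform, scaled by its weight
    return out[:3]
```

In a real engine the same weighted sum typically runs per-vertex in a vertex shader, with the matrix palette uploaded as uniform or buffer data.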
Benefits and drawbacks
Strengths
• A bone represents a set of vertices (or some other object; for example, the objects making up a leg).
• The animator controls fewer characteristics of the model and can focus on the large-scale motion.
• Bones are independently movable; an animation can be defined by simple movements of the bones instead of vertex by vertex (in the case of a polygonal mesh).
Weaknesses
• A bone represents only a set of vertices (or some other object), so it does not by itself provide realistic muscle movement and skin motion.
• Possible solutions to this problem include special muscle controllers attached to the bones, and consultation with physiology experts (increasing the accuracy of musculoskeletal realism with more thorough virtual anatomy simulations).
Applications
Skeletal animation is the standard way to animate characters or mechanical objects for a prolonged period of time (usually over 100 frames). It is commonly used by video game artists and in the movie industry, and can also be applied to mechanical objects and any other object made up of rigid elements and joints. Performance capture (or motion capture) can speed up the development of skeletal animation, as well as increase the level of realism. For motion that is too dangerous for performance capture, there are computer simulations that automatically calculate the physics of motion and resistance with skeletal frames. Virtual anatomy properties such as weight of limbs, muscle reaction, bone strength and joint constraints may be added for realistic bouncing, buckling, fracture and tumbling effects known as virtual stunts. Virtual stunts are controversial[citation needed] due to their potential to replace stunt performers. However, there are other applications of virtual anatomy simulations, such as military and emergency response: virtual soldiers, rescue workers, patients, passengers and pedestrians can be used for training, virtual engineering and virtual testing of equipment. Virtual anatomy technology may be combined with artificial intelligence for further enhancement of animation and simulation technology.
Sketch-based modeling
Sketch-based modeling is a method of creating 3D models for use in 3D computer graphics applications. Sketch-based modeling is differentiated from other types of 3D modeling by its interface: instead of creating a 3D model by directly editing polygons, the user draws a 2D shape which is converted to 3D automatically by the application.
Purpose
Many computer users find that traditional 3D modeling programs such as Blender or Maya have a high learning curve. Novice users often have difficulty creating models in traditional modeling programs without first completing a lengthy series of tutorials. Sketch-based modeling tools aim to solve this problem by providing a user interface similar to drawing, which most users are already familiar with.
Uses Sketch-based modeling is primarily designed for use by persons with artistic ability, but no experience with 3D modeling programs. Curvy3D and Teddy, below, have largely been designed for this purpose. However, sketch-based modeling is also used for other applications. One popular application is rapid modeling of low-detail objects for use in prototyping and design work.
Operation
There are two main types of sketch-based modeling. In the first, the user draws a shape in the workspace using a mouse or a tablet. The system then interprets this shape as a 3D object. Users can then alter the object by cutting off or adding sections. The process of adding sections to a model is generally referred to as overdrawing. The user is never required to interact directly with the vertices or NURBS control points. In the second type of sketch-based modeling, the user draws one or more images on paper, then scans in the images. The system then automatically converts the sketches to a 3D model.
Examples • Aartform Curvy 3D - http://www.curvy3d.com • Alias Studio Tools • Teddy - http://www-ui.is.s.u-tokyo.ac.jp/~takeo/teddy/teddy.htm • Paint3D - http://www.paint3d.net • ShapeShop - http://www.shapeshop3d.com
Research
A great deal of research is currently being done on sketch-based modeling. A number of papers on this topic are presented each year at the ACM SIGGRAPH conference. The European graphics conference Eurographics has held four special conferences on sketch-based modeling:
• 2007 - http://www.eg.org/sbm/2007
• 2006 - http://www.eg.org/sbm/2006
• 2005 - http://www.eg.org/sbm/2005
• 2004 - http://www.eg.org/sbm/2004
Smoothing group
In 3D computer graphics, a smoothing group is a group of polygons in a polygon mesh which should appear to form a smooth surface. Smoothing groups are useful for describing shapes where some polygons are connected smoothly to their neighbors and some are not. For example, in a mesh representing a cylinder, all of the polygons are smoothly connected except along the edges of the end caps. One could make a smoothing group containing all of the polygons in one end cap, another containing the polygons in the other end cap, and a last group containing the polygons in the tube shape between the end caps.
By identifying the polygons in a mesh that should appear to be smoothly connected, smoothing groups allow 3D modeling software to estimate the surface normal at any point on the mesh, by averaging the surface normals or vertex normals stored in the mesh data. The software can use this data to determine how light interacts with the model. If each polygon lies in a plane, the software can calculate a polygon's surface normal directly from the polygon's plane, meaning this data does not have to be stored in the mesh. Thus, early 3D modeling software like 3D Studio Max DOS used smoothing groups as a way to avoid having to store accurate vertex normals for each vertex of the mesh, as a strategy for computer representation of surfaces.
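The averaging step can be sketched as follows. The per-face group ids and the (vertex, group) keyed result are assumptions of this example, not the layout any particular package uses; note that summing unnormalized cross products weights each face by its area, which is one common convention.

```python
import numpy as np

def vertex_normals(vertices, faces, groups):
    """Average face normals over the faces that share a vertex AND the same
    smoothing group, so that group boundaries stay sharp.

    vertices -- (n, 3) array of positions
    faces    -- list of (i, j, k) vertex-index triples (counter-clockwise)
    groups   -- one smoothing-group id per face
    Returns a dict mapping (vertex index, group id) -> unit normal.
    """
    face_n = [np.cross(vertices[j] - vertices[i], vertices[k] - vertices[i])
              for i, j, k in faces]
    sums = {}
    for (i, j, k), g, n in zip(faces, groups, face_n):
        for v in (i, j, k):                 # accumulate per (vertex, group) pair
            sums[(v, g)] = sums.get((v, g), np.zeros(3)) + n
    return {key: n / np.linalg.norm(n) for key, n in sums.items()}
```

A vertex on the rim of a cylinder's end cap gets two distinct normals, one per group, which is exactly what produces the sharp crease when the model is lit.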
Soft body dynamics
Soft body dynamics is a field of computer graphics that focuses on visually realistic physical simulations of the motion and properties of deformable objects (or soft bodies). The applications are mostly in video games and film. Unlike in simulation of rigid bodies, the shape of soft bodies can change, meaning that the relative distance of two points on the object is not fixed. While the relative distances of points are not fixed, the body is expected to retain its shape to some degree (unlike a fluid). The scope of soft body dynamics is quite broad, including simulation of soft organic materials such as muscle, fat, hair and vegetation, as well as other deformable materials such as clothing and fabric. Generally, these methods only provide visually plausible emulations rather than accurate scientific/engineering simulations, though there is some crossover with scientific methods, particularly in the case of finite element simulations. Several physics engines currently provide software for soft-body simulation.
Deformable solids The simulation of volumetric solid soft bodies can be realised by using a variety of approaches.
Spring/mass models
In this approach, the body is modeled as a set of point masses (nodes) connected by ideal weightless elastic springs obeying some variant of Hooke's law. The nodes may either derive from the edges of a two-dimensional polygonal mesh representation of the surface of the object, or from a three-dimensional network of nodes and edges modeling the internal structure of the object (or even a one-dimensional system of links, if for example a rope or hair strand is being simulated). Additional springs between nodes can be added, or the force law of the springs modified, to achieve desired effects.
[Figure: two nodes as mass points connected by a parallel circuit of a spring and a damper.]
Applying Newton's second law to the point masses, including the forces applied by the springs and any external forces (due to contact, gravity, air resistance, wind, and so on), gives a system of differential equations for the motion of the nodes, which is solved by standard numerical schemes for solving ODEs. Rendering of a three-dimensional mass-spring lattice is often done using free-form deformation, in which the rendered mesh is embedded in the lattice and distorted to conform to the shape of the lattice as it evolves. Assuming all point masses equal to zero, one can obtain the stretched grid method solution, aimed at several engineering problems, relative to the elastic grid behavior.
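Applying Newton's second law to such a network can be sketched as below: a hypothetical minimal integrator using semi-implicit Euler, one of the standard schemes. The function name, default constants, and the crude multiplicative velocity damping are assumptions of the example.

```python
import numpy as np

def step(positions, velocities, springs, masses, dt,
         k=100.0, damping=0.1, gravity=np.array([0.0, -9.81, 0.0])):
    """One semi-implicit Euler step of a point-mass/spring system.

    positions, velocities -- (n, 3) arrays
    springs               -- list of (i, j, rest_length) tuples
    masses                -- (n,) array of point masses
    """
    forces = np.tile(gravity, (len(positions), 1)) * masses[:, None]
    for i, j, rest in springs:
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / length    # Hooke's law along the spring
        forces[i] += f                          # equal and opposite forces,
        forces[j] -= f                          # so momentum is conserved
    velocities = (velocities + dt * forces / masses[:, None]) * (1.0 - damping)
    return positions + dt * velocities, velocities
```

Calling `step` in a loop advances the whole network; in practice the spring loop is vectorized and the damping replaced by explicit damper forces.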
Finite element simulation
This is a more physically accurate approach, which uses the widely used finite element method to solve the partial differential equations which govern the dynamics of an elastic material. The body is modeled as a three-dimensional elastic continuum by breaking it into a large number of solid elements which fit together, and solving for the stresses and strains in each element using a model of the material. The elements are typically tetrahedral, the nodes being the vertices of the tetrahedra (relatively simple methods exist to tetrahedralize a three-dimensional region bounded by a polygon mesh, similarly to how a two-dimensional polygon may be triangulated into triangles). The strain (which measures the local deformation of the points of the material from their rest state) is quantified by the strain tensor ε. The stress (which measures the local forces per unit area in all directions acting on the material) is quantified by the Cauchy stress tensor σ. Given the current local strain, the local stress can be computed via the generalized form of Hooke's law, σ = Cε, where C is the "elasticity tensor" which encodes the material properties (parametrized in linear elasticity for an isotropic material by the Poisson ratio and Young's modulus). The equation of motion of the element nodes is obtained by integrating the stress field over each element and relating this, via Newton's second law, to the node accelerations.
Pixelux (developers of the Digital Molecular Matter system) use a finite-element-based approach for their soft bodies, using a tetrahedral mesh and converting the stress tensor directly into node forces. Rendering is done via a form of free-form deformation.
Energy minimization methods This approach is motivated by variational principles and the physics of surfaces, which dictate that a constrained surface will assume the shape which minimizes the total energy of deformation (analogous to a soap bubble). Expressing the energy of a surface in terms of its local deformation (the energy is due to a combination of stretching and bending), the local force on the surface is given by differentiating the energy with respect to position, yielding an equation of motion which can be solved in the standard ways.
Shape matching In this scheme, penalty forces or constraints are applied to the model to drive it towards its original shape (i.e. the material behaves as if it has shape memory). To conserve momentum the rotation of the body must be estimated properly, for example via polar decomposition. To approximate finite element simulation, shape matching can be applied to three dimensional lattices and multiple shape matching constraints blended.
Rigid-body based deformation Deformation can also be handled by a traditional rigid-body physics engine, modeling the soft-body motion using a network of multiple rigid bodies connected by constraints, and using (for example) matrix-palette skinning to generate a surface mesh for rendering. This is the approach used for deformable objects in Havok Destruction.
Cloth simulation In the context of computer graphics, cloth simulation refers to the simulation of soft bodies in the form of two dimensional continuum elastic membranes, that is, for this purpose, the actual structure of real cloth on the yarn level can be ignored (though modeling cloth on the yarn level has been tried). Via rendering effects, this can produce a visually plausible emulation of textiles and clothing, used in a variety of contexts in video games, animation, and film. It can also be used to simulate two dimensional sheets of materials other than textiles, such as deformable metal panels or vegetation. In video games it is often used to enhance the realism of clothed characters, which otherwise would be entirely animated. Cloth simulators are generally based on mass-spring models, but a distinction must be made between force-based and position-based solvers.
Force-based cloth
The mass-spring model (obtained from a polygonal mesh representation of the cloth) determines the internal spring forces acting on the nodes at each timestep (in combination with gravity and applied forces). Newton's second law gives equations of motion which can be solved via standard ODE solvers. However, creating high-resolution cloth with realistic stiffness is not possible with simple explicit solvers (such as forward Euler integration) unless the timestep is made too small for interactive applications, since explicit integrators are numerically unstable for sufficiently stiff systems. Therefore implicit solvers must be used, requiring solution of a large sparse matrix system (via e.g. the conjugate gradient method), which itself may be difficult to achieve at interactive frame rates. An alternative is to use an explicit method with low stiffness, with ad hoc methods to avoid instability and excessive stretching (e.g. strain-limiting corrections).
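The stability limitation of explicit integration can be demonstrated on a single stiff spring. The sketch below (function name assumed) applies forward Euler to x'' = -(k/m)x; the oscillation amplitude grows every step, and explosively so once the timestep exceeds roughly 2·sqrt(m/k).

```python
def forward_euler_peak(k, m, dt, steps, x0=1.0):
    """Integrate a single undamped spring x'' = -(k/m) x with forward
    (explicit) Euler and return the largest |x| seen. For stiff springs
    the scheme is unstable unless dt is made very small."""
    x, v = x0, 0.0
    peak = abs(x)
    for _ in range(steps):
        a = -(k / m) * x          # Hooke's law acceleration
        x, v = x + dt * v, v + dt * a
        peak = max(peak, abs(x))
    return peak
```

With k = 10000 and m = 1, a timestep of 0.03 makes the solution blow up within a few steps, while 0.0001 stays near the true amplitude of 1 — which is why stiff interactive cloth needs implicit solvers or softened springs.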
Position-based dynamics To avoid needing to do an expensive implicit solution of a system of ODEs, many real-time cloth simulators (notably PhysX, Havok Cloth, and Maya nCloth) use position based dynamics (PBD), an approach based on constraint relaxation. The mass-spring model is converted into a system of constraints, which demands that the distance between the connected nodes be equal to the initial distance. This system is solved sequentially and iteratively, by directly moving nodes to satisfy each constraint, until sufficiently stiff cloth is obtained. This is similar to a Gauss-Seidel solution of the implicit matrix system for the mass-spring model. Care must be taken though to solve the constraints in the same sequence each timestep, to avoid spurious oscillations, and to make sure that the constraints do not violate linear and angular momentum conservation. Additional position constraints can be applied, for example to keep the nodes within desired regions of space (sufficiently close to an animated model for example), or to maintain the body's overall shape via shape matching.
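The constraint-relaxation loop at the heart of PBD can be sketched as follows. Equal masses are assumed here; real implementations weight each correction by inverse masses and add stiffness factors.

```python
import numpy as np

def satisfy_distance_constraints(positions, constraints, iterations=10):
    """Gauss-Seidel style constraint relaxation (position based dynamics).

    constraints -- list of (i, j, rest_length); each pass moves both
    endpoints half-way so the edge returns to its rest length.
    """
    p = positions.copy()
    for _ in range(iterations):
        for i, j, rest in constraints:       # fixed order each pass
            d = p[j] - p[i]
            length = np.linalg.norm(d)
            corr = 0.5 * (length - rest) * d / length
            p[i] += corr                     # equal masses: split the correction
            p[j] -= corr
    return p
```

With a single constraint the loop converges in one pass; on a full cloth mesh the constraints fight each other, and several iterations in a fixed order are needed to approach a stiff result.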
Collision detection for deformable objects
Realistic interaction of simulated soft objects with their environment may be important for obtaining visually realistic results. Cloth self-intersection is important in some applications for acceptably realistic simulated garments. This is challenging to achieve at interactive frame rates, particularly in the case of detecting and resolving self-collisions and mutual collisions between two or more deformable objects.
Collision detection may be discrete/a posteriori (meaning objects are advanced in time through a pre-determined interval, and then any penetrations detected and resolved) or continuous/a priori (objects are advanced only until a collision occurs, and the collision is handled before proceeding). The former is easier to implement and faster, but leads to failure to detect collisions (or detection of spurious collisions) if objects move fast enough. Real-time systems generally have to use discrete collision detection, with other ad hoc ways to avoid failing to detect collisions.
Detection of collisions between cloth and environmental objects with a well-defined "inside" is straightforward, since the system can detect unambiguously whether the cloth mesh vertices and faces are intersecting the body and resolve them accordingly. If a well-defined "inside" does not exist (e.g. in the case of collision with a mesh which does not form a closed boundary), an "inside" may be constructed via extrusion. Mutual or self-collision of soft bodies defined by tetrahedra is also straightforward, since it reduces to detection of collisions between solid tetrahedra. However, detection of collisions between two polygonal cloths (or collision of a cloth with itself) via discrete collision detection is much more difficult, since there is no unambiguous way to locally detect, after a timestep, whether a penetrating cloth node is on the "wrong" side or not.
Solutions involve either using the history of the cloth motion to determine if an intersection event has occurred, or doing a global analysis of the cloth state to detect and resolve self-intersections. Pixar has presented a method which uses a global topological analysis of mesh intersections in configuration space to detect and resolve self-interpenetration of cloth. Currently, this is generally too computationally expensive for real-time cloth systems. To do collision detection efficiently, primitives which are certainly not colliding must be identified as soon as possible and discarded from consideration to avoid wasting time. To do this, some form of spatial subdivision scheme is essential, to avoid a brute force test of primitive collisions. Approaches used include: • Bounding volume hierarchies (AABB trees, OBB trees, sphere trees) • Grids, either uniform (using hashing for memory efficiency) or hierarchical (e.g. Octree, kd-tree) • Coherence-exploiting schemes, such as sweep and prune with insertion sort, or tree-tree collisions with front tracking. • Hybrid methods involving a combination of various of these schemes, e.g. a coarse AABB tree plus sweep-and-prune with coherence between colliding leaves.
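As a concrete example of one coherence-exploiting scheme, a one-axis sweep and prune can be sketched as follows. The names are assumed; production versions keep the sorted endpoint list between frames and update it with insertion sort to exploit temporal coherence.

```python
def sweep_and_prune(aabbs):
    """1-D sweep and prune along x: report pairs whose x-intervals overlap.

    aabbs -- list of (min_corner, max_corner) tuples of 3-vectors.
    The returned candidate pairs still need a full AABB (or narrow-phase)
    test on the remaining axes.
    """
    order = sorted(range(len(aabbs)), key=lambda i: aabbs[i][0][0])
    pairs = []
    for a, i in enumerate(order):
        for j in order[a + 1:]:
            if aabbs[j][0][0] > aabbs[i][1][0]:
                break                     # sorted: no later box can overlap i on x
            pairs.append((min(i, j), max(i, j)))
    return pairs
```

Boxes that never appear in the output are "certainly not colliding" and are discarded without any primitive-level test, which is the whole point of the broad phase.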
Other applications Other effects which may be simulated via the methods of soft-body dynamics are: • Destructible materials: fracture of brittle solids, cutting of soft bodies, and tearing of cloth. The finite element method is especially suited to modelling fracture as it includes a realistic model of the distribution of internal stresses in the material, which physically is what determines when fracture occurs, according to fracture mechanics. • Plasticity (permanent deformation) and melting • Simulated hair, fur, and feathers • Simulated organs for biomedical applications Simulating fluids in the context of computer graphics would not normally be considered soft-body dynamics, which is usually restricted to mean simulation of materials which have a tendency to retain their shape and form. In contrast, a fluid assumes the shape of whatever vessel contains it, as the particles are bound together by relatively weak forces.
Engines supporting soft body physics
• Bullet 2.69
• Carbon, by Numerion Software
• CryEngine 3 (http://mycryengine.com)
• Digital Molecular Matter
• Havok Cloth
• Maya nCloth
• OpenTissue (http://www.opentissue.org)
• OpenCloth (http://code.google.com/p/opencloth) - a collection of source code implementing cloth simulation algorithms as well as soft body dynamics in OpenGL
• Physics Abstraction Layer (PAL) - uniform API, supports multiple physics engines
• PhysX
• Phyz (Dax Phyz)
• Rigs of Rods (http://www.rigsofrods.com) - predecessor of BeamNG
• SOFA (Simulation Open Framework Architecture)
• Step
• Syflex (cloth simulator)
• Unreal Engine 3
• BeamNG (http://beamng.com)
External links
• "The Animation of Natural Phenomena", CMU course on physically based animation, including deformable bodies (http://graphics.cs.cmu.edu/courses/15-869/)
• Soft body dynamics video example (http://youtube.com/watch?v=gbXCGpuJI7w)
• Introductory article (http://vizproto.prism.asu.edu/classes/sp03/wyman_g/Soft Body Dynamics.htm)
• Article by Thomas Jakobsen which explains the basics of the PBD method (http://www.teknikus.dk/tj/gdc2001.htm)
Solid modeling
Solid modeling (or modelling) is a consistent set of principles for mathematical and computer modeling of three-dimensional solids. Solid modeling is distinguished from related areas of geometric modeling and computer graphics by its emphasis on physical fidelity. Together, the principles of geometric and solid modeling form the foundation of computer-aided design and in general support the creation, exchange, visualization, animation, interrogation, and annotation of digital models of physical objects.
Overview
The use of solid modeling techniques allows for the automation of several difficult engineering calculations that are carried out as a part of the design process. Simulation, planning, and verification of processes such as machining and assembly were among the main catalysts for the development of solid modeling. More recently, the range of supported manufacturing applications has been greatly expanded to include sheet metal manufacturing, injection molding, welding, pipe routing, etc.
[Figure: the geometry in solid modeling is fully described in 3-D space; objects can be viewed from any angle.]
Beyond traditional manufacturing, solid modeling techniques serve as the foundation for rapid prototyping, digital data archival and reverse engineering by reconstructing solids from sampled points on physical objects, mechanical analysis using finite elements, motion planning and NC path verification, kinematic and dynamic analysis of mechanisms, and so on. A central problem in all these applications is the ability to effectively represent and manipulate three-dimensional geometry in a fashion that is consistent with the physical behavior of real artifacts. Solid modeling research and development has effectively addressed many of these issues, and continues to be a central focus of computer-aided engineering.
Mathematical foundations The notion of solid modeling as practiced today relies on the specific need for informational completeness in mechanical geometric modeling systems, in the sense that any computer model should support all geometric queries that may be asked of its corresponding physical object. The requirement implicitly recognizes the possibility of several computer representations of the same physical object as long as any two such representations are consistent. It is impossible to computationally verify informational completeness of a representation unless the notion of a physical object is defined in terms of computable mathematical properties and independent of any particular representation. Such reasoning led to the development of the modeling paradigm that has shaped the field of solid modeling as we know it today. All manufactured components have finite size and well behaved boundaries, so initially the focus was on mathematically modeling rigid parts made of homogeneous isotropic material that could be added or removed. These postulated properties can be translated into properties of subsets of three-dimensional Euclidean space. The two common approaches to define solidity rely on point-set topology and algebraic topology respectively. Both models specify how solids can be built from simple pieces or cells.
According to the continuum point-set model of solidity, all the points of any X ⊂ ℝ³ can be classified according to their neighborhoods with respect to X as interior, exterior, or boundary points. Assuming ℝ³ is endowed with the typical Euclidean metric, a neighborhood of a point p ∈ X takes the form of an open ball. For X to be considered solid, every neighborhood of any p ∈ X must be consistently three-dimensional; points with lower-dimensional neighborhoods indicate a lack of solidity. Dimensional homogeneity of neighborhoods is guaranteed for the class of closed regular sets, defined as sets equal to the closure of their interior. Any X ⊂ ℝ³ can be turned into a closed regular set, or regularized, by taking the closure of its interior, and thus the modeling space of solids is mathematically defined to be the space of closed regular subsets of ℝ³ (by the Heine-Borel theorem it is implied that all solids are compact sets).
[Figure: regularization of a 2-D set by taking the closure of its interior.]
In addition, solids are required to be closed under the Boolean operations of set union, intersection, and difference (to guarantee solidity after material addition and removal). Applying the standard Boolean operations to closed regular sets may not produce a closed regular set, but this problem can be solved by regularizing the result of applying the standard Boolean operations. The regularized set operations are denoted ∪∗, ∩∗, and −∗.
The combinatorial characterization of a set X ⊂ ℝ³ as a solid involves representing X as an orientable cell complex so that the cells provide finite spatial addresses for points in an otherwise innumerable continuum. The class of semi-analytic bounded subsets of Euclidean space is closed under Boolean operations (standard and regularized) and exhibits the additional property that every semi-analytic set can be stratified into a collection of disjoint cells of dimensions 0, 1, 2, and 3.
A triangulation of a semi-analytic set into a collection of points, line segments, triangular faces, and tetrahedral elements is an example of a stratification that is commonly used. The combinatorial model of solidity is then summarized by saying that in addition to being semi-analytic bounded subsets, solids are three-dimensional topological polyhedra, specifically three-dimensional orientable manifolds with boundary. In particular this implies the Euler characteristic of the combinatorial boundary of the polyhedron is 2. The combinatorial manifold model of solidity also guarantees the boundary of a solid separates space into exactly two components as a consequence of the Jordan-Brouwer theorem, thus eliminating sets with non-manifold neighborhoods that are deemed impossible to manufacture. The point-set and combinatorial models of solids are entirely consistent with each other, can be used interchangeably, relying on continuum or combinatorial properties as needed, and can be extended to n dimensions. The key property that facilitates this consistency is that the class of closed regular subsets of ℝn coincides precisely with homogeneously n-dimensional topological polyhedra. Therefore every n-dimensional solid may be unambiguously represented by its boundary and the boundary has the combinatorial structure of an n−1-dimensional polyhedron having homogeneously n−1-dimensional neighborhoods.
Solid representation schemes Based on assumed mathematical properties, any scheme of representing solids is a method for capturing information about the class of semi-analytic subsets of Euclidean space. This means all representations are different ways of organizing the same geometric and topological data in the form of a data structure. All representation schemes are organized in terms of a finite number of operations on a set of primitives. Therefore the modeling space of any particular representation is finite, and any single representation scheme may not completely suffice to represent all types of solids. For example, solids defined via combinations of regularized boolean operations cannot necessarily be represented as the sweep of a primitive moving according to a space trajectory, except in very simple cases. This
forces modern geometric modeling systems to maintain several representation schemes of solids and also facilitate efficient conversion between representation schemes. Below is a list of common techniques used to create or represent solid models. Modern modeling software may use a combination of these schemes to represent a solid.
Parameterized primitive instancing This scheme is based on the notion of families of objects, each member of a family distinguishable from the other by a few parameters. Each object family is called a generic primitive, and individual objects within a family are called primitive instances. For example a family of bolts is a generic primitive, and a single bolt specified by a particular set of parameters is a primitive instance. The distinguishing characteristic of pure parameterized instancing schemes is the lack of means for combining instances to create new structures which represent new and more complex objects. The other main drawback of this scheme is the difficulty of writing algorithms for computing properties of represented solids. A considerable amount of family-specific information must be built into the algorithms and therefore each generic primitive must be treated as a special case, allowing no uniform overall treatment.
Spatial occupancy enumeration
This scheme is essentially a list of spatial cells occupied by the solid. The cells, also called voxels, are cubes of a fixed size arranged in a fixed spatial grid (other polyhedral arrangements are also possible, but cubes are the simplest). Each cell may be represented by the coordinates of a single point, such as the cell's centroid. Usually a specific scanning order is imposed and the corresponding ordered set of coordinates is called a spatial array. Spatial arrays are unambiguous and unique solid representations but are too verbose for use as 'master' or definitional representations. They can, however, represent coarse approximations of parts and can be used to improve the performance of geometric algorithms, especially when used in conjunction with other representations such as constructive solid geometry.
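A spatial array can be produced by testing cell centroids against any point-membership predicate. A minimal sketch follows; the function name and the scalar box bounds are assumptions of the example.

```python
def spatial_array(inside, lo, hi, n):
    """Enumerate the cells of an n*n*n grid over the cube [lo, hi]^3 whose
    centroids satisfy the membership test `inside` -- a coarse spatial array
    listed in a fixed (i, j, k) scanning order."""
    h = (hi - lo) / n                        # cell edge length
    cells = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c = (lo + (i + 0.5) * h, lo + (j + 0.5) * h, lo + (k + 0.5) * h)
                if inside(c):
                    cells.append(c)
    return cells
```

Summing the cell volumes gives a coarse estimate of the solid's volume, illustrating why the representation is approximate yet convenient for geometric queries.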
Cell decomposition This scheme follows from the combinatoric (algebraic topological) descriptions of solids detailed above. A solid can be represented by its decomposition into several cells. Spatial occupancy enumeration schemes are a particular case of cell decompositions where all the cells are cubical and lie in a regular grid. Cell decompositions provide convenient ways for computing certain topological properties of solids such as its connectedness (number of pieces) and genus (number of holes). Cell decompositions in the form of triangulations are the representations used in 3d finite elements for the numerical solution of partial differential equations. Other cell decompositions such as a Whitney regular stratification or Morse decompositions may be used for applications in robot motion planning.
Boundary representation
In this scheme a solid is represented by the cellular decomposition of its boundary. Since the boundaries of solids have the distinguishing property that they separate space into regions defined by the interior of the solid and the complementary exterior, according to the Jordan-Brouwer theorem discussed above, every point in space can unambiguously be tested against the solid by testing the point against the boundary of the solid. Recall that the ability to test every point against the solid provides a guarantee of solidity. Using ray casting it is possible to count the number of intersections of a cast ray with the boundary of the solid: an even number of intersections corresponds to an exterior point, and an odd number to an interior point. The assumption of boundaries as manifold cell complexes forces any boundary representation to obey disjointedness of distinct primitives, i.e. there are no self-intersections that cause non-manifold points. In particular, the manifoldness condition implies all pairs of vertices are disjoint, pairs of edges are either disjoint or intersect at one vertex, and pairs of faces are disjoint or intersect at a common edge. Several data structures that are combinatorial maps have been developed to store
boundary representations of solids. In addition to planar faces, modern systems provide the ability to store quadrics and NURBS surfaces as a part of the boundary representation. Boundary representations have evolved into a ubiquitous representation scheme of solids in most commercial geometric modelers because of their flexibility in representing solids exhibiting a high level of geometric complexity.
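The even/odd counting rule above is easiest to see in two dimensions, where the boundary representation of a "solid" is just a closed polygon; the 3-D version replaces segment crossings with ray-triangle intersections. A sketch (function name assumed):

```python
def point_in_polygon(point, polygon):
    """Ray-casting parity test in 2-D: cast a ray in the +x direction from
    `point` and count crossings with the boundary edges; an odd count means
    the point is interior."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                            # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside                         # parity flip per crossing
    return inside
```

The half-open comparison `(y1 > y) != (y2 > y)` counts each vertex crossing exactly once, avoiding the classic double-count at shared endpoints.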
Constructive solid geometry Constructive solid geometry (CSG) connotes a family of schemes for representing rigid solids as Boolean constructions or combinations of primitives via the regularized set operations discussed above. CSG and boundary representations are currently the most important representation schemes for solids. CSG representations take the form of ordered binary trees where non-terminal nodes represent either rigid transformations (orientation preserving isometries) or regularized set operations. Terminal nodes are primitive leaves that represent closed regular sets. The semantics of CSG representations is clear. Each subtree represents a set resulting from applying the indicated transformations/regularized set operations on the set represented by the primitive leaves of the subtree. CSG representations are particularly useful for capturing design intent in the form of features corresponding to material addition or removal (bosses, holes, pockets etc.). The attractive properties of CSG include conciseness, guaranteed validity of solids, computationally convenient Boolean algebraic properties, and natural control of a solid's shape in terms of high level parameters defining the solid's primitives and their positions and orientations. The relatively simple data structure and elegant recursive algorithms have further contributed to the popularity of CSG.
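The recursive semantics described above can be sketched via point-membership classification, the simplest query on a CSG tree. The primitives and operator names below are illustrative, not any particular modeler's API:

```python
def sphere(cx, cy, cz, r):
    """Primitive leaf: membership predicate of a closed ball."""
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r*r

# Interior nodes: regularized set operations become Boolean combinations
# of the children's membership results (regularization details omitted).
def union(a, b):        return lambda p: a(p) or b(p)
def intersection(a, b): return lambda p: a(p) and b(p)
def difference(a, b):   return lambda p: a(p) and not b(p)

# A ball with a smaller ball subtracted from its right side (a "pocket").
solid = difference(sphere(0, 0, 0, 1.0), sphere(0.8, 0, 0, 0.5))
print(solid((-0.5, 0, 0)))  # -> True  (inside the big ball only)
print(solid((0.8, 0, 0)))   # -> False (carved away by the subtraction)
```

Editing a high-level parameter (a primitive's radius or position) and re-evaluating the tree is what gives CSG its natural shape control.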
Sweeping The basic notion embodied in sweeping schemes is simple. A set moving through space may trace or sweep out a volume (a solid) that can be represented by the moving set and its trajectory. Such a representation is important in applications such as detecting the material removed from a cutter as it moves along a specified trajectory, computing dynamic interference of two solids undergoing relative motion, motion planning, and even computer graphics applications such as tracing the motions of a brush moved on a canvas. Most commercial CAD systems provide (limited) functionality for constructing swept solids, mostly in the form of a two-dimensional cross section moving along a space trajectory transversal to the section. However, current research has shown several approximations of three-dimensional shapes moving across one-parameter, and even multi-parameter, motions.
Implicit representation A very general method of defining a set of points X is to specify a predicate that can be evaluated at any point in space. In other words, X is defined implicitly to consist of all the points that satisfy the condition specified by the predicate. The simplest form of a predicate is a condition on the sign of a real-valued function, resulting in the familiar representation of sets by equalities and inequalities. For example, if f(x, y, z) = ax + by + cz + d is a linear function, the conditions f = 0, f < 0, and f > 0 represent respectively a plane and two open linear halfspaces.
More complex functional primitives may be defined by Boolean combinations of simpler predicates. Furthermore, the theory of R-functions allows conversions of such representations into a single function inequality for any closed semi-analytic set. Such a representation can be converted to a boundary representation using polygonization algorithms, for example the marching cubes algorithm.
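A minimal sketch of this idea, using max as a crude stand-in for a proper R-function conjunction, with the convention that f(p) ≤ 0 means the point is in the set:

```python
import math

def plane(p):
    """Halfspace z <= 0.5, as the sign of a real-valued function."""
    return p[2] - 0.5

def ball(p):
    """Unit ball: negative inside, zero on the surface, positive outside."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def intersect(f, g):
    """Combine two implicit sets into a single function inequality."""
    return lambda p: max(f(p), g(p))

dome = intersect(ball, plane)     # the lower portion of the unit ball
print(dome((0, 0, 0)) <= 0)       # -> True  (inside both primitives)
print(dome((0, 0, 0.9)) <= 0)     # -> False (above the cutting plane)
```

A polygonizer such as marching cubes would sample exactly this kind of single combined function on a grid to extract a boundary mesh.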
Parametric and feature-based modeling Features are defined to be parametric shapes associated with attributes such as intrinsic geometric parameters (length, width, depth etc.), position and orientation, geometric tolerances, material properties, and references to other features. Features also provide access to related production processes and resource models. Thus, features have a semantically higher level than primitive closed regular sets. Features are generally expected to form a basis for linking CAD with downstream manufacturing applications, and also for organizing databases for design data reuse.
History of solid modelers The historical development of solid modelers has to be seen in context of the whole history of CAD, the key milestones being the development of the research system BUILD followed by its commercial spin-off Romulus which went on to influence the development of Parasolid, ACIS and Solid Modeling Solutions. Other contributions came from Mäntylä, with his GWB and from the GPM project which contributed, among other things, hybrid modeling techniques at the beginning of the 1980s. This is also when the Programming Language of Solid Modeling PLaSM was conceived at the University of Rome.
Computer-aided design The modeling of solids is only the minimum requirement of a CAD system's capabilities. Solid modelers have become commonplace in engineering departments in the last ten years due to faster computers and competitive software pricing. Solid modeling software creates a virtual 3D representation of components for machine design and analysis. A typical graphical user interface includes programmable macros, keyboard shortcuts and dynamic model manipulation. The ability to dynamically re-orient the model, in real-time shaded 3-D, is emphasized and helps the designer maintain a mental 3-D image. A solid part model generally consists of a group of features, added one at a time until the model is complete. Engineering solid models are built mostly with sketcher-based features: 2-D sketches that are swept along a path to become 3-D. These may be cuts or extrusions, for example. Design work on components is usually done within the context of the whole product using assembly modeling methods. An assembly model incorporates references to the individual part models that comprise the product. Another type of modeling technique is 'surfacing' (freeform surface modeling). Here, surfaces are defined, trimmed, merged and filled to make a solid. The surfaces are usually defined with datum curves in space and a variety of complex commands. Surfacing is more difficult, but better applicable to some manufacturing techniques, like injection molding. Solid models for injection molded parts usually have both surfacing and sketcher-based features. Engineering drawings can be created semi-automatically and reference the solid models.
Parametric modeling Parametric modeling uses parameters to define a model (dimensions, for example). Examples of parameters are: dimensions used to create model features, material density, formulas to describe swept features, imported data (that describe a reference surface, for example). The parameter may be modified later, and the model will update to reflect the modification. Typically, there is a relationship between parts, assemblies, and drawings. A part consists of multiple features, and an assembly consists of multiple parts. Drawings can be made from either parts or assemblies. Example: A shaft is created by extruding a circle 100 mm. A hub is assembled to the end of the shaft. Later, the shaft is modified to be 200 mm long (click on the shaft, select the length dimension, modify to 200). When the model is updated the shaft will be 200 mm long, the hub will relocate to the end of the shaft to which it was assembled, and the engineering drawings and mass properties will reflect all changes automatically. Related to parameters, but slightly different are constraints. Constraints are relationships between entities that make up a particular shape. For a window, the sides might be defined as being parallel, and of the same length. Parametric modeling is obvious and intuitive. But for the first three decades of CAD this was not the case. Modification meant re-draw, or add a new cut or protrusion on top of old ones. Dimensions on engineering drawings were created, instead of shown. Parametric modeling is very powerful, but requires more skill in model creation. A complicated model for an injection molded part may have a thousand features, and modifying an early feature may cause later features to fail. Skillfully created parametric models are easier to maintain and modify. Parametric modeling also lends itself to data re-use. A whole family of capscrews can be contained in one model, for example.
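The shaft/hub example can be sketched in a few lines. The point is that the hub's position is stored as a relation ("at the end of the shaft"), not as a number, so changing the length parameter relocates the hub on regeneration. The classes and names here are purely illustrative, standing in for a real parametric kernel:

```python
class Shaft:
    def __init__(self, length):
        self.length = length        # the driving dimension parameter

class Hub:
    def __init__(self, shaft):
        self.shaft = shaft          # a reference: an assembly constraint

    @property
    def position(self):             # re-evaluated on every regeneration
        return self.shaft.length    # "at the end of the shaft"

shaft = Shaft(100)                  # extrude the circle 100 mm
hub = Hub(shaft)
print(hub.position)                 # -> 100
shaft.length = 200                  # modify the dimension...
print(hub.position)                 # -> 200 (...and the assembly updates)
```

A real modeler re-evaluates the whole feature history in this way, which is also why modifying an early feature can cause later, dependent features to fail.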
Medical solid modeling Modern computed axial tomography and magnetic resonance imaging scanners can be used to create solid models of internal body features, so-called volume rendering. Optical 3D scanners can be used to create point clouds or polygon mesh models of external body features. Uses of medical solid modeling:
• Visualization
• Visualization of specific body tissues (just blood vessels and tumor, for example)
• Designing prosthetics, orthotics, and other medical and dental devices (this is sometimes called mass customization)
• Creating polygon mesh models for rapid prototyping (to aid surgeons preparing for difficult surgeries, for example)
• Combining polygon mesh models with CAD solid modeling (design of hip replacement parts, for example)
• Computational analysis of complex biological processes, e.g. air flow, blood flow
• Computational simulation of new medical devices and implants in vivo
If the use goes beyond visualization of the scan data, processes like image segmentation and image-based meshing will be necessary to generate an accurate and realistic geometrical description of the scan data.
Engineering Because CAD programs running on computers “understand” the true geometry comprising complex shapes, many attributes of a 3‑D solid, such as its center of gravity, volume, and mass, can be quickly calculated. For instance, the cube shown at the top of this article measures 8.4 mm from flat to flat. Despite its many radii and the shallow pyramid on each of its six faces, its properties are readily calculated for the designer, as shown in the screenshot at right.
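One standard way such mass properties are computed from a boundary representation is by summing signed tetrahedra over an outward-oriented triangle mesh. The sketch below assumes a closed, consistently oriented mesh and unit density; it is an illustration of the principle, not the method of any particular CAD package:

```python
def mass_properties(vertices, faces):
    """Exact volume and center of gravity of a closed triangle mesh."""
    def cross(a, b):
        return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    volume = 0.0
    cg = [0.0, 0.0, 0.0]
    for i, j, k in faces:
        v0, v1, v2 = vertices[i], vertices[j], vertices[k]
        v = dot(v0, cross(v1, v2)) / 6.0   # signed volume of tet (O,v0,v1,v2)
        volume += v
        for axis in range(3):              # tet centroid = (O+v0+v1+v2)/4
            cg[axis] += v * (v0[axis] + v1[axis] + v2[axis]) / 4.0
    return volume, tuple(c / volume for c in cg)

# Unit tetrahedron with one corner at the origin: volume 1/6, centroid 0.25.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]  # outward orientation
vol, cg = mass_properties(verts, faces)
print(round(vol, 6), cg)   # -> 0.166667 (0.25, 0.25, 0.25)
```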
[Figure: Mass properties window of a model in Cobalt]
References External links • sgCore C++/C# library (http://www.geometros.com) • The Solid Modeling Association (http://solidmodeling.org/)
Sparse voxel octree A sparse voxel octree (SVO) is a 3D computer graphics rendering technique using a raycasting or sometimes a ray tracing approach into an octree data representation. The technique varies somewhat, but generally relies on generating and processing the hull of points (sparse voxels) which are visible, or may be visible, given the resolution and size of the screen. The main points of the technique are: first, only pixels to be actually displayed need to be computed, with the actual screen resolution limiting the level of voxel detail required; and second, interior voxels – voxels fully enclosed by other voxels which are fully opaque – are unnecessary and thus need not be included in the 3D data set. The first point limits the computational cost during rendering, and the second point limits the amount of 3D voxel data (and thus storage space) required for realistic, high-resolution digital models and/or environments. Because it needs only a small subset of the full voxel data, a system does not need to process a massive amount of voxel data at any one time; it can extract data from extremely large data sources of voxels as and when needed. The basic advantage of octrees is that, as a hierarchical data structure, they need not be explored to their full depth, and this allows the system to run on current generation computer hardware. In addition, the octree data structure permits smoothing of the underlying data, which helps with antialiasing. It is, however, a generally less well developed technique than standard polygon-based rasterisation schemes.
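A minimal sketch of the data structure itself, showing how sparseness falls out of allocating child nodes only along paths that actually contain voxels. Details of real SVO renderers (ray traversal, contour data, brick streaming) are omitted:

```python
class Node:
    __slots__ = ("children", "filled")
    def __init__(self):
        self.children = [None] * 8   # sparse: absent children cost nothing
        self.filled = False

def child_index(x, y, z, bit):
    """Which octant of the current node contains (x, y, z)?"""
    return ((x >> bit) & 1) | (((y >> bit) & 1) << 1) | (((z >> bit) & 1) << 2)

def insert(root, x, y, z, depth):
    node = root
    for bit in range(depth - 1, -1, -1):
        i = child_index(x, y, z, bit)
        if node.children[i] is None:
            node.children[i] = Node()   # allocate only along occupied paths
        node = node.children[i]
    node.filled = True

def query(root, x, y, z, depth):
    node = root
    for bit in range(depth - 1, -1, -1):
        node = node.children[child_index(x, y, z, bit)]
        if node is None:
            return False                # absent subtree means empty space
    return node.filled

root = Node()
insert(root, 5, 3, 7, depth=3)          # one voxel in an 8x8x8 volume
print(query(root, 5, 3, 7, depth=3))    # -> True
print(query(root, 0, 0, 0, depth=3))    # -> False
```

Coordinates are integers on a (2**depth)³ grid; a renderer would descend the same eight-way structure along each cast ray, stopping as soon as a subtree is absent.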
Specularity Specularity is the visual appearance of specular reflections. In computer graphics, it refers to the quantity used in three-dimensional (3D) rendering that represents the amount of specular reflectivity a surface has. It is a key component in determining the brightness of specular highlights, along with shininess, which determines the size of the highlights. It is frequently used in real-time computer graphics, where the mirror-like specular reflection of light from other surfaces is often ignored (due to the more intensive computations required to calculate it) and the specular reflection of light directly from point light sources is modelled as specular highlights.

[Figure: Specular highlights on a pair of spheres]
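In the classic Phong reflection model, for example, specularity and shininess enter the highlight term as a coefficient and an exponent respectively; a minimal sketch (vectors are assumed normalized):

```python
def reflect(l, n):
    """Reflect light direction l about unit normal n: r = 2(l.n)n - l."""
    d = 2 * sum(li * ni for li, ni in zip(l, n))
    return tuple(d * ni - li for li, ni in zip(l, n))

def phong_specular(light_dir, normal, view_dir, specularity, shininess):
    """Specularity scales brightness; shininess tightens the highlight."""
    r = reflect(light_dir, normal)
    r_dot_v = max(0.0, sum(ri * vi for ri, vi in zip(r, view_dir)))
    return specularity * r_dot_v ** shininess

n = (0.0, 0.0, 1.0)   # surface normal
l = (0.0, 0.0, 1.0)   # light straight above
v = (0.0, 0.0, 1.0)   # viewer in the mirror direction: peak highlight
print(phong_specular(l, n, v, specularity=0.8, shininess=32))  # -> 0.8
```

As the view direction moves away from the mirror direction, the dot product drops below 1 and the exponentiation by shininess makes the highlight fall off sharply.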
Static mesh Static meshes are polygon meshes which constitute a major part of map architecture in many game engines, including Unreal Engine, Source, and Unity. The word "static" refers only to the fact that static meshes can't be vertex animated; they can still be moved, scaled, or reskinned in realtime. Static Meshes can create more complex shapes than CSG (the other major part of map architecture) and are faster to render per triangle.
Characteristics A Static Mesh contains information about its shape (vertices, edges and sides), a reference to the textures to be used, and optionally a collision model (see the simple collision section below).
Collision There are three ways for a Static Mesh to collide:
• No collision: a static mesh can be set not to block anything. This is often used for small decoration like grass.
• Per-polygon collision (default): individual polygons collide with actors. Each material (i.e. each part of the Static Mesh using a separate texture) can be set to collide or not independently from the rest. The advantage of this method is that one part of the Static Mesh can collide while another doesn't (a common example: a tree's trunk collides, but its leaves don't). The disadvantage is that for complex meshes this can take a lot of processing power.
• Simple collision: the static mesh doesn't collide itself, but has built-in blocking volumes that collide instead. Usually, the blocking volumes will have a simpler shape than the Static Mesh, resulting in faster collision calculation.
Texturing Although Static Meshes have built-in information on what textures to use, this can be overridden by adding a new skin in the Static Mesh's properties. Alternatively, the Static Mesh itself can be modified to use different textures by default.
Usage In maps, Static Meshes are very common, as they are used for anything more complex than basic architecture (for which CSG is used) or terrain. Additionally, Static Meshes sometimes represent other objects, including weapon projectiles and destroyed vehicles. Often, after a rendered cutscene in which, for instance, a tank is destroyed, the tank's hull is added as a static mesh to the in-game world.
External links • UnrealWiki: Static Mesh [1]
References [1] http://wiki.beyondunreal.com/wiki/Static_Mesh
Stereoscopic acuity Stereoscopic acuity, also stereoacuity, is the smallest detectable depth difference that can be seen in binocular vision.
Specification and measurement Stereoacuity[1] is most simply explained by considering one of its earliest tests: the observer is shown a black peg at a distance of 6 m (= 20 feet). A second peg, below it, can be moved back and forth until it is just detectably nearer than the fixed one. Stereoacuity is this difference in the two positions, converted into an angle of binocular disparity, i.e., the difference in their binocular parallax. Conversion to the angle of disparity dγ is performed by inserting the position difference dz into the formula
dγ = c·a·dz/z², where a is the interocular separation of the observer and z the distance of the fixed peg from the eye. To convert dγ into the usual unit of minutes of arc, a multiplicative constant c is inserted whose value is 3437.75 (1 radian in arcminutes). In the calculation, a, dz and z must be in the same units, say, feet, inches, cm or meters. For the average interocular distance of 6.5 cm, a target distance of 6 m and a typical stereoacuity of 0.5 minute of arc, the just-detectable depth interval is 8 cm. As targets come closer, this interval shrinks with the inverse square of the distance, so that the equivalent detectable depth interval at ¼ meter is 0.01 cm, or the depth of the impression of the head on a coin. These very small values of normal stereoacuity, expressed as differences of either object distances or angles of disparity, make it a hyperacuity.
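The formula and the worked numbers above can be checked directly by solving dγ = c·a·dz/z² for dz:

```python
C = 3437.75  # arcminutes per radian

def disparity_arcmin(a, dz, z):
    """Binocular disparity (arcmin) for depth interval dz at distance z."""
    return C * a * dz / z**2

def depth_interval(a, dgamma, z):
    """Just-detectable depth interval dz for a given stereoacuity dgamma."""
    return dgamma * z**2 / (C * a)

# a = 6.5 cm, z = 6 m = 600 cm, stereoacuity 0.5 arcmin: about 8 cm.
print(round(depth_interval(6.5, 0.5, 600), 1))   # -> 8.1 (cm)
# At 1/4 meter the interval shrinks by the inverse square of distance.
print(round(depth_interval(6.5, 0.5, 25), 3))    # -> 0.014 (cm)
```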
Tests of stereoacuity
Since the two-peg device, named the Howard-Dolman test after its inventors,[2] is cumbersome, stereoacuity is usually measured using a stereogram in which separate panels are shown to each eye by superimposing them in a stereoscope using prisms, or goggles with color or polarizing filters, or alternating occlusion (for a review see [3]). A good procedure is a chart, analogous to the familiar Snellen visual acuity chart, in which one letter in each row differs in depth (front or behind), sequentially increasing in difficulty. [Figure: Example of a Snellen-like depth test] For children the fly test is ideal: the image of a fly is transilluminated by polarized light; wearing polarizing glasses, the wing appears at a different depth and allows stereopsis to be demonstrated by trying to pull on it.
Expected performance There is no equivalent in stereoacuity of the normal 20/20 visual acuity standard. In every case the numerical score, even if expressed as a disparity angle, depends to some extent on the test being used. Superior observers under ideal conditions can achieve 0.1 arcmin or even better. The distinction between screening for the presence of stereopsis and measuring stereoacuity is valuable. To ascertain that depth can be seen in a binocular view, a test must be easily administered and not subject to deception. The random-dot stereogram is used widely for this purpose and has the advantage that for the uninitiated the object shape is unknown. It is made of random small pattern elements; depth can be created only in multiples of elements and therefore may not reach the small threshold disparity which is the purpose of stereoacuity measurements. A population study revealed a surprisingly high incidence of good stereoacuity:[4] out of 188 biology students, 97.3% could perform at 2.3 minutes of arc or better.
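A random-dot stereogram of the kind described can be generated by shifting a region horizontally between the two eyes' images, creating disparity with no monocular shape cue; a toy sketch:

```python
import random

def random_dot_pair(size=64, shift=2):
    """Left/right random-dot images with a shifted central square."""
    random.seed(0)
    left = [[random.randint(0, 1) for _ in range(size)] for _ in range(size)]
    right = [row[:] for row in left]
    lo, hi = size // 4, 3 * size // 4
    for y in range(lo, hi):
        for x in range(lo, hi):
            right[y][x - shift] = left[y][x]   # shift the square leftward
        for x in range(hi - shift, hi):        # refill the uncovered strip
            right[y][x] = random.randint(0, 1)
    return left, right

left, right = random_dot_pair()
# Outside the shifted rows the two eyes' images agree exactly.
print(left[0] == right[0])  # -> True
```

Viewed stereoscopically, the shifted square appears to float at a different depth; because the disparity comes in whole element steps, such tests screen for stereopsis rather than measure fine stereoacuity.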
Factors influencing stereoacuity Optimum stereoacuity requires that the following mitigating factors be avoided:
• Low contrast[5]
• Short duration exposures (less than 500 milliseconds)[5]
• Fuzzy or closely spaced pattern elements[6]
• Uncorrected or unequally corrected refractive errors (monovision)
Perceptual training in stereopsis More than other such visual capabilities, the limits of stereopsis depend on the observer's familiarity with the situation. Stereo thresholds almost always improve, often several-fold, with training[7] and involve perceptual factors, differing in their particulars for each test.[8] This is most vividly evident in how rapidly the time it takes to "solve" a random-dot stereogram decreases between the first exposure and subsequent views.[9]
References
[1] Howard IP, Rogers BJ (2002) Seeing in Depth. Vol. II, Chapter 19. Porteous, Toronto
[2] Howard HJ (1919) A test for the judgment of distance. Amer. J. Ophthalmol., 2, 656-675
[3] http://rspb.royalsocietypublishing.org/content/early/2011/04/09/rspb.2010.2777.long
[4] Coutant BE (1993) Population distribution of stereoscopic ability. Ophthalmic Physiol Opt, 13, 3-7
[5] Westheimer G, Pettet MW (1990) Contrast and duration of exposure differentially affect vernier and stereoscopic acuity. Proc R Soc Lond B Biol Sci, 241, 42-6
[6] The Ferrier Lecture (1994) Seeing depth with two eyes: stereopsis. Proc R Soc Lond B Biol Sci, 257, 205-14
[7] Fendick M, Westheimer G (1983) Effects of practice and the separation of test targets on foveal and peripheral stereoacuity. Vision Research, 23, 145-50
[8] McKee SP, Taylor DG (2010) The precision of binocular and monocular depth judgments in natural settings. J. Vision, 10, 5
[9] Harwerth RS, Rawlings SC (1977) Viewing time and stereoscopic threshold with random-dot stereograms. Am J Optom Physiol Opt, 54, 452-457
External links • Review of 3D displays and stereo vision (http://rspb.royalsocietypublishing.org/content/278/1716/2241.long)
Subdivision surface A subdivision surface, in the field of 3D computer graphics, is a method of representing a smooth surface via the specification of a coarser piecewise linear polygon mesh. The smooth surface can be calculated from the coarse mesh as the limit of a recursive process of subdividing each polygonal face into smaller faces that better approximate the smooth surface.
Overview Subdivision surfaces are defined recursively. The process starts with a given polygonal mesh. A refinement scheme is then applied to this mesh, subdividing it and creating new vertices and new faces. The positions of the new vertices in the mesh are computed based on the positions of nearby old vertices. In some refinement schemes, the positions of old vertices might also be altered (possibly based on the positions of new vertices). This process produces a denser mesh than the original one, containing more polygonal faces. The resulting mesh can be passed through the same refinement scheme again, and so on. The limit subdivision surface is the surface produced from this process being iteratively applied infinitely many times. In practical use, however, the algorithm is applied only a limited number of times. The limit surface can also be calculated directly for most subdivision surfaces using the technique of Jos Stam, which eliminates the need for recursive refinement. Subdivision surfaces and T-splines are competing technologies. Mathematically, subdivision surfaces are spline surfaces with singularities.
[Figure: First three steps of Catmull–Clark subdivision of a cube, with the subdivision surface below]
Refinement schemes Subdivision surface refinement schemes can be broadly classified into two categories: interpolating and approximating. Interpolating schemes are required to match the original position of vertices in the original mesh. Approximating schemes are not; they can and will adjust these positions as needed. In general, approximating schemes have greater smoothness, but editing applications that allow users to set exact surface constraints require an optimization step. This is analogous to spline surfaces and curves, where Bézier splines are required to interpolate certain control points (namely the two end-points), while B-splines are not. There is another division among subdivision surface schemes as well: the type of polygon they operate on. Some function on quadrilaterals (quads), while others operate on triangles.
Approximating schemes Approximating means that the limit surfaces approximate the initial meshes, and that after subdivision the newly generated control points are not on the limit surfaces. Examples of approximating subdivision schemes are:
• Catmull–Clark (1978) generalized bi-cubic uniform B-splines to produce their subdivision scheme. For arbitrary initial meshes, this scheme generates limit surfaces that are C2 continuous everywhere except at extraordinary vertices, where they are C1 continuous (Peters and Reif 1998).
• Doo–Sabin - The second subdivision scheme was developed by Doo and Sabin (1978), who successfully extended Chaikin's corner-cutting method for curves to surfaces. They used the analytical expression of the bi-quadratic uniform B-spline surface to generate their subdivision procedure, producing C1 limit surfaces with arbitrary topology for arbitrary initial meshes.
• Loop, Triangles - Loop (1987) proposed his subdivision scheme based on a quartic box spline of six direction vectors, providing a rule that generates C2 continuous limit surfaces everywhere except at extraordinary vertices, where they are C1 continuous.
• Mid-Edge subdivision scheme - The mid-edge subdivision scheme was proposed independently by Peters–Reif (1997) and Habib–Warren (1999). The former used the midpoint of each edge to build the new mesh; the latter used a four-directional box spline to build the scheme. This scheme generates C1 continuous limit surfaces on initial meshes with arbitrary topology.
• √3 subdivision scheme - This scheme was developed by Kobbelt (2000): it handles arbitrary triangular meshes, it is C2 continuous everywhere except at extraordinary vertices, where it is C1 continuous, and it offers natural adaptive refinement when required. It has two notable properties: it is a dual scheme for triangle meshes, and it has a slower refinement rate than primal schemes.
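The recursive flavour of these schemes is easiest to see in the curve analogue: Chaikin's corner-cutting (the method Doo and Sabin extended to surfaces) replaces each edge by two points at its 1/4 and 3/4 marks, and iterating converges to a smooth quadratic B-spline curve:

```python
def chaikin(points):
    """One corner-cutting step on a closed polygon of (x, y) points."""
    refined = []
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return refined

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
curve = square
for _ in range(4):
    curve = chaikin(curve)       # 4 -> 8 -> 16 -> 32 -> 64 points
print(len(curve))                # -> 64
```

Note that the corners of the original square are not on the refined curve: this is an approximating scheme, exactly as described above for surfaces.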
Interpolating schemes After subdivision, the control points of the original mesh and the newly generated control points are interpolated on the limit surface. The earliest work was the butterfly scheme by Dyn, Levin and Gregory (1990), who extended the four-point interpolatory subdivision scheme for curves to a subdivision scheme for surfaces. Zorin, Schröder and Sweldens (1996) noticed that the butterfly scheme cannot generate smooth surfaces for irregular triangle meshes and thus modified this scheme. Kobbelt (1996) further generalized the four-point interpolatory subdivision scheme for curves to the tensor product subdivision scheme for surfaces.
• Butterfly, Triangles - named after the scheme's shape
• Midedge, Quads
• Kobbelt, Quads - a variational subdivision method that tries to overcome uniform subdivision drawbacks
Editing a subdivision surface Subdivision surfaces can be naturally edited at different levels of subdivision. Starting with basic shapes, you can use binary operators to create the correct topology, then edit the coarse mesh to create the basic shape, then edit the offsets for the next subdivision step, then repeat this at finer and finer levels. You can always see how your edits affect the limit surface via GPU evaluation of the surface. A surface designer may also start with a scanned-in object or one created from a NURBS surface. The same basic optimization algorithms are used to create a coarse base mesh with the correct topology and then add details at each level so that the object may be edited at different levels. These types of surfaces may be difficult to work with because the base mesh does not have control points in the locations that a human designer would place them. With a scanned object this surface is easier to work with than a raw triangle mesh, but a NURBS object probably had well-laid-out control points which behave less intuitively after the conversion than before.
Key developments
• 1978: Subdivision surfaces were discovered simultaneously by Edwin Catmull and Jim Clark (see Catmull–Clark subdivision surface). In the same year, Daniel Doo and Malcolm Sabin published a paper building on this work (see Doo–Sabin subdivision surface).
• 1995: Ulrich Reif characterized subdivision surfaces near extraordinary vertices by treating them as splines with singularities.
• 1998: Jos Stam contributed a method for exact evaluation for Catmull–Clark and Loop subdivision surfaces under arbitrary parameter values.[3]
References
• Peters, J.; Reif, U. (October 1997). "The simplest subdivision scheme for smoothing polyhedra". ACM Transactions on Graphics 16 (4): 420–431. doi:10.1145/263834.263851.
• Habib, A.; Warren, J. (May 1999). "Edge and vertex insertion for a class C1 of subdivision surfaces". Computer Aided Geometric Design 16 (4): 223–247. doi:10.1016/S0167-8396(98)00045-4.
• Kobbelt, L. (2000). "√3-subdivision". Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00. pp. 103–112. doi:10.1145/344779.344835. ISBN 1-58113-208-5.
External links
• Resources about Subdivisions (http://www.subdivision.org)
• Geri's Game (http://www.pixar.com/shorts/gg/theater/index.html): Oscar-winning animation by Pixar, completed in 1997, that introduced subdivision surfaces (along with cloth simulation)
• Subdivision for Modeling and Animation (http://www.multires.caltech.edu/pubs/sig99notes.pdf) tutorial, SIGGRAPH 1999 course notes
• Subdivision for Modeling and Animation (http://www.mrl.nyu.edu/dzorin/sig00course/) tutorial, SIGGRAPH 2000 course notes
• Subdivision of Surface and Volumetric Meshes (http://www.hakenberg.de/subdivision/ultimate_consumer.htm), software to perform subdivision using the most popular schemes
• Surface Subdivision Methods in CGAL, the Computational Geometry Algorithms Library (http://www.cgal.org/Pkg/SurfaceSubdivisionMethods3)
• Surface and Volumetric Subdivision Meshes, hierarchical/multiresolution data structures in CGoGN (http://cgogn.unistra.fr)
• Modified Butterfly method implementation in C++ (https://bitbucket.org/rukletsov/b)
Supinfocom
Established: 1988 | Locations: Valenciennes, Arles, Pune | Website: Official website [1]
Supinfocom (école SUPérieure d'INFOrmatique de COMmunication, roughly University of Communication Science) is a computer graphics university with campuses in Valenciennes and Arles (France) and Pune (India). Founded in 1988 in Valenciennes, the school offers a five-year course leading to a diploma of digital direction (certified Level I). A second campus in Arles opened in 2000, while a third opened in 2008 in Pune, India. In November 2007, the school was ranked #1 worldwide by the American magazine 3D World, based on criteria such as the distribution of student films and prizes won at festivals around the world.
Curriculum The curriculum includes:
• Two years of preparatory courses (design and applied art, perspective, film analysis, video, color, 2D animation, art history, sculpture, communication, English);
• Three years of specialization in computer graphics (3D software, screenplay, storyboards, animation, compositing, 3D production, sound, editing).
The final year of study is devoted to the team-based production of a short film in CG. Until the class of 2007 entered, there were only two years of specialization courses; there are now three.
External links • Official site of Supinfocom Valenciennes [2] • Official site of DSK Supinfocom Pune [3]
References
[1] http://www.supinfocom.fr
[2] http://supinfocom.rubika-edu.com
[3] http://www.dsksic.com/animation/
Surface caching Surface caching is a computer graphics technique pioneered by John Carmack, first used in the computer game Quake, to apply lightmaps to level geometry. Carmack's technique was to combine lighting information with surface textures in texture space when primitives became visible (at the appropriate mipmap level), exploiting temporal coherence for those calculations. As hardware capable of blended multi-texture rendering (and later pixel shaders) became more commonplace, the technique became less common, being replaced with screen-space combination of lightmaps in rendering hardware. Surface caching contributed greatly to the visual quality of Quake's software-rasterized 3D engine on Pentium microprocessors, which lacked dedicated graphics instructions. Surface caching can be considered a precursor to the more recent MegaTexture technique, in which lighting, surface decals, and other procedural texture effects are combined for rich visuals devoid of unnatural repeating artifacts.
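The caching idea can be sketched abstractly: the lightmap-times-texture combination is computed once per visible surface (per mip level), stored, and reused across frames. Everything here (the names, the flat texel lists, the cache key) is illustrative, not Quake's actual implementation:

```python
surface_cache = {}

def lit_surface(surface_id, mip, texture, lightmap):
    """Return the lit texels for a surface, building them at most once."""
    key = (surface_id, mip)
    if key not in surface_cache:
        # Expensive texture-space combine, done only when first needed;
        # temporal coherence means later frames hit the cache instead.
        surface_cache[key] = [t * l for t, l in zip(texture, lightmap)]
    return surface_cache[key]

tex = [0.5, 1.0, 0.25]
light = [1.0, 0.5, 2.0]
print(lit_surface(7, 0, tex, light))  # -> [0.5, 0.5, 0.5]
print(len(surface_cache))             # -> 1 (a second frame would reuse it)
```

A real implementation also evicts entries under memory pressure and keys on the mip level actually rasterized, so a surface re-lit at a different scale gets a fresh combine.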
External links • Quake's Lighting Model: Surface Caching [1] - an in-depth explanation by Michael Abrash
References [1] http://www.bluesnews.com/abrash/chap68.shtml
Surfel Surfel is an abbreviation of "surface element". In 3D computer graphics, the use of surfels is an alternative to polygonal modeling. An object is represented by a dense set of points or viewer-facing discs holding lighting information. Surfels are well suited to modeling dynamic geometry, because there is no need to compute topology information such as adjacency lists. Common applications are medical scanner data representation, real time rendering of particle systems, and more generally, rendering surfaces of volumetric data by first extracting the isosurface.[1]
Notes
[1] H. Pfister, M. Zwicker, J. van Baar, M. Gross, "Surfels: Surface Elements as Rendering Primitives", SIGGRAPH 2000. Available from http://graphics.ethz.ch/research/past_projects/surfels/surfels/index.html.
Suzanne Award
The Suzanne Award has been awarded annually to animators using Blender since the second Blender Conference, held in Amsterdam in 2003. The categories of the Suzanne Awards have changed repeatedly. In the following lists, the people and works printed in bold were the winners of their category for that year; the remaining entries were nominees.
2003 Suzanne Awards

Best Animation
• Andreas Goralczyk

2004 Suzanne Awards

Best Artwork
• Andreas Goralczyk (@ndy)
• Denis Castaneda (nastacc)
• Grzegory Rakoczy
• Jan Kurka
• Robert J. Tiess

Best Animation
• Chicken Chair by Bassam Kurdali
• Colin Levy
• X-Warrior & Nayman

Best Python Script
• MakeHuman Team
• Alan Dennis (RipSting)
• Jean-Michel Soler (jms)
• Anthony D'Agostino (Scorpius)
• Stefano Selleri (S68)

Best Coding Contribution
• Kester Maddock
• Alfredo de Greef
• Brecht van Lommel/Jens Ole Wund
• Matt Ebb
• Nathan Letwory
Special Achievement Blender Foundation Award
• Bart Veldhuizen

2005 Suzanne Awards

Best Animation, Original Idea or Story
• New Penguoen 2.38 by Enrico Valenza
• Plumiferos by Studio Manos Digitales
• Beagle by Jacob Kafka

Best Animation Artwork
• Esign by Chris Larkee
• Jake Rocks! by Thorsten Schlueter
• The Goat, the Boy and the Sun by Martin White
• treefrog.nature by Jason Pierce

Best Character Animation
• Cycles by Peter Haehnlein
• Laws of Motion by Robert J. Tiess
• Learning to Fly by Grzegorz Rakoczy
• Alchemy Trailer by Jason Pierce

2006 Suzanne Awards

Best Online Art Gallery
• enricoceric (Enrico Cerica)
• Ecks (Jean-Sébastien Guillemette)
• Backiz (Eric Wessels)
• dannybear (Daniel Svavarsson)
• olaf (Olaf Arnold)

Best Character Animation
• Man in Man by Sago (Sacha Goedegebure)
• Home Sweet Home by pine (Mats Holmberg)
• DrCox by BenDansie (Ben Dansie)
• Private Bob Episode 1 by LGM (Nathan Dunlap)
• Animacao Sapo by Virgilio (Virgilio Vasconcelos)
Best Animation, Original Idea or Story
• Infinitum by rocketman (Sam Brubaker)
• Asylum by MadMesh
• Nocturnes by OTO
• Mental Flesh by Klepoth (Peter Hertzberg)
• The Ogre, the Wolf, the Little Girl, and the Cake by Tagyn (Laurent)

2007 Suzanne Awards

Best Designed Short Film
• Stop by Eoin Duffy
• Snakes Can Fly by Daniel Lima (Prenudos)
• Jungle Legend Series by Virgilio Vasconcelos (Virgilio)
• The Cathedral by Sebastian Koenig (Stullidpb)
• 8 by Krzysiek Ślaziński (Mallow)

Best Character Animation
• The Dance of the Bashfull Dwarf by Juan Pablo Bouza (Jpbouza)
• Soccer Exersize by Nathan Dunlap (LGM)
• To Be or Not To Be by Pildanovak
• Pelados by Jorge Rausch (Bataraze)
• Prueba by Octavio Augusto Méndez Sánchez (Octavio)

Best Short Film
• Night of the Living Dead Pixels by Jussi Saarelma, Jere Virta & Peter Schulman
• Blood by Yu Yonghai (Harrisyu)
• Out of the Box by Andy Dolphin (AndyD)
• Alchemy by Jason Pierce (Sketchy)
• Giants by Thomas Kristov (Thomislav86)

2008 Suzanne Awards

Best Designed Short Film
• Gameland by Yohann Mepa
• Glimpse of Light by Alex Glawion
• Troféu da Casa by Daniel Pinheiro Lima & Danilo Dias Soares
• Tape-à-l'oeil by Jean-Sebastien Guillemette
• 2008 Ann Arbor Film Festival, DVD Menu Compilation (A2f2) by Peter Traylor
Best Character Animation
• Interviews From the Future by Spark Digital Entertainment
• A Sad Sad Song by Beorn Leonard
• The Long Road to Animation by Francesco Calabrese
• Richie the Gecko by Jonathan Lax
• Orion Tear by Rogério Perdiz

Best Short Film
• Hanger No. 5 by Nathan Matsuda
• Pitch by Daniel Houghton
• Kala et le mystère de la banane magique by Georges Mignot
• Pisces by Juan Carlos Camardella
• The Beast by Roland Hess

2009 Suzanne Awards

Best Designed Short Film
• Evolution by Alex Glawion
• The Ballad of the M4 Carbine by Andrew Price
• Bello Paese (a beautiful country) by Claudio Castelli
• Protein Expressions by Monica Zoppè
• Button by Reynante M. Martinez
• Us and Them by Sávio Pedro

Best Character Animation
• Dragosaurio by Claudio Andaur (malefico)
• The Dummy by Leon Beutl
• Mancandy Announces Durian by David Bolton
• Bounce to Space by Pablo Vazquez (venomgfx)
• Tiku the Clock by Sagar Funde
• Untitled Animation by Tyler Termini

Best Short Film
• Memory by Ryusuke Furuya, Junichi Yamamoto
• The Death Grind by Barath Endre
• CommandANT Trailer by Ben Resnick
• Your Planet, Brighter by Daniel Houghton
• A Polish Winter by Dwarfed Films
• Barrel by Philip Aigner
2010 Suzanne Awards

Best Designed Short Film
• John el Esquizofrenico by Juan Carlos Montes
• Zotac Trailer by Kai Kostack
• Bluff by Stephan Mayr
• Midstraeti reel by Hjalti Hjalmarsson
• Materia by Samuli Jomppanen

Best Animation
• (Untitled) by Jarred De Beerr
• (Untitled) by Savio Pedro
• Hebring III by Andi Martin
• (Untitled) by Ivam Pretti
• Curse by Leon Beutl

Best Short Film
• Lista by Pawel Lyczkowski
• The Cup by Students of Pepe-School-Land
• A very little warrior by Endre Barath
• Taste Lab by Chris Burton
• Juan Del Monte by Juan Carlos Camardella

2011 Suzanne Awards

Best Designed Short Film
• Assembly: Life in Macrospace by Jonathan Lax and Ben Simonds from Gecko Animation Ltd.
• Dikta-Goodbye by Hjalti Hjalmarsson and Bjorn Daniel Svavarson
• Vacui Spacii - par IX: Infra by Martin Eschoyez
• Concrete Babylon - Pilot by Peter Hertzberg
• Rooms by Jakub Szczesniak

Best Character Animation
• Iceland Express Ad Campaign by Studio Midstraeti
• Heaven is a half pipe by Everton Schneider
• Les Recettes Animees d'Apollo by Sarah Laufer and Daniel Salazar
• Retweet to Sender by Hjalti Hjalmarsson
• The Light At The End by Chris Burton
Best Short Film
• Babioles by Mathieu Auvray
• Happy Hour by Team Highfive
• Here comes the mobster by Fab Design
• Box by Juan Carlos Camardella
• Si Nini by Johan Tri Handoyo

2012 Suzanne Awards

Best Designed Short Film
• Quando arriva la banda by Fernando Luceri
• REVERSION by Giancarlo Ng
• Haunted by Christopher Taylor
• Dancing Ideologies by Konstantin Svechtarov
• Morphogenesis by Frederic Kettelhoit

Best Character Animation
• Park by Daniel Martinez Lara from Pepeland School
• Parkour by Juan Carlos
• The Duel by Nathan Dunlap
• Venom's Lab! 2 Trailer by Pablo Vazquez
• The True Master by Pedro Rodrigues
• Chocolove by Nikhil Salvi

Best Short Film
• X.Gift by Simone Lannuzzi
• Low Poly Rocks by Sampo Rask
• Our New World by Niklas Holmberg
• Mars Effect by Jassu Llama
• Got Milk? - Bedtime by Antoine Quairiat
• Madam Darmi's Suken Chip

2013 Suzanne Awards

Best Designed Short Film
• Black Cat by Flaky Pixel
• Le Castle Vania by Prophication
• Joni Darko Teaser by Norman Ardy
• Lo de Ribera by Juan Carlos Camardella
• Sofia by Sàvio Pedro
Best Character Animation
• HBC-00011: Blot by Manu Järvinen
• Parkour by Juan Carlos Sevillano Vinueza
• The Forbidden Apple by Lucian Muñoz
• Amin Bumper 01 by ABI Animation
• Testszene 1: Repto-Animation by AgenZasBrothers

Best Short Film
• En Passant by Chris Burton
• Caminandes: Llama Drama by Pablo Vazquez
• Supeur Dupeur by Sampo Rask
• Scale by Anna Celarek
• Yellow Ribbon by FAB Design

The official Blender Conference is held annually in October in Amsterdam and is the high point of Blender activity each year. The Suzanne Awards were initiated to inspire, encourage, and show off works created with Blender, and to highlight the power of the software.

External links
• 2005 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2005/animation-festival-2005/)
• 2006 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2006/festival-2006/)
• 2007 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2007/festival/)
• 2008 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2008/festival/)
• 2009 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2009/festival/)
• 2010 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2010/suzanne-nominations/)
• 2011 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2010/suzanne-nominations/)
• 2012 Suzanne Awards (http://suzanne.myblender.org/results/)
• 2013 Suzanne Awards (http://www.blender.org/conference/2013/suzanne-awards/)
Time-varying mesh
Time-Varying Mesh (TVM) is a sequence of polygonal mesh models reproducing a dynamic 3D scene. A TVM can be generated from synchronized videos captured by multiple cameras in a studio. Each mesh model (or frame) carries three types of information: vertex positions in a Cartesian coordinate system, vertex connectivity as triangle edges, and a color attached to each vertex. However, no structure information or explicit correspondence between frames is available: both the number of vertices and the topology change from frame to frame. Because a TVM supports free-viewpoint viewing, it has potential applications in education, CAD, heritage documentation, broadcasting, and gaming.
Timewarps

A timewarp is a tool for manipulating the temporal dimension in a hierarchically described 3D computer animation system. The term was coined by Jeff Smith and Karen Drewery in 1991.[1] Continuous curves that are normally applied to parametric modeling and rendering attributes are instead applied to the local clock value, which effectively remaps the flow of global time within the subsection of the model to which the curves are applied. The tool was first developed to help animators make minor adjustments to subsections of animated scenes that might employ dozens of related interpolation curves. Rather than adjust the timing of every curve within the subsection, a timewarp curve can be applied to the model section in question, adjusting the flow of time itself for that element with respect to the timing of the other, unaffected elements. Originally, the tool was used for minor adjustments, moving a motion forward or back in time or altering the speed of a movement. Subsequent experiments moved beyond these simpler timing adjustments and began to employ the timing curves to create more complex effects, such as continuous animation cycles, or to simulate more natural movements of large collections of models, such as flocks or crowds, by creating numerous identical copies of a single animated model and then applying small random perturbation timewarps to each of the copies, removing the impression of robotic precision from the group's movements. The tool has since become common in both 3D animation and video editing software systems.
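The core idea, remapping the local clock rather than editing every interpolation curve, can be sketched in C. The linear channel and the quadratic warp curve below are made-up examples, not the authors' implementation:

```c
/* A toy animation channel: a value moving linearly from 0 to 10 as its
   local time runs from 0 to 1. */
static double channel(double local_t)
{
    return 10.0 * local_t;
}

/* A hypothetical timewarp curve: a quadratic reparameterization that
   starts the motion slowly and lets it catch up by the end. */
static double timewarp(double global_t)
{
    return global_t * global_t;
}

/* Evaluate the channel through the warp: the warp drives the local
   clock, leaving the channel's own keys and curves untouched. */
double evaluate_warped(double global_t)
{
    return channel(timewarp(global_t));
}
```

Applying small random per-copy warps of this kind to otherwise identical animated models is what breaks up the robotic synchrony described above.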
References
[1] Jeff Smith, Karen Drewery, "Timewarps: A Temporal Reparameterization Paradigm for Parametric Animation" (http://www.citeulike.org/user/Jefficus/article/4350388)
Triangle mesh
A triangle mesh is a type of polygon mesh in computer graphics. It comprises a set of triangles (typically in three dimensions) that are connected by their common edges or corners. Many graphics software packages and hardware devices can operate more efficiently on triangles that are grouped into meshes than on a similar number of triangles presented individually, because computer graphics operations are performed on the vertices at the corners of triangles. With individual triangles, the system has to process three vertices for every triangle. In a large mesh, eight or more triangles may meet at a single vertex; by processing those shared vertices just once, it is possible to do a fraction of the work and achieve an identical result.

Example of a triangle mesh representing a dolphin.
Representation

Various methods of storing and working with a mesh in computer memory are possible. With the OpenGL and DirectX APIs there are two primary ways of passing a triangle mesh to the graphics hardware: triangle strips and index arrays.
Triangle strip

One way of sharing vertex data between triangles is the triangle strip, in which each triangle shares one complete edge with its predecessor and another with its successor. A related arrangement is the triangle fan, a set of connected triangles sharing one central vertex. Both methods handle vertices efficiently: only N + 2 vertices need to be processed in order to draw N triangles. Triangle strips are efficient, but the drawback is that it may not be obvious how, or convenient, to translate an arbitrary triangle mesh into strips.
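The N + 2 property can be checked with a small decoder. The winding-flip rule below follows the usual strip convention, and the function name is illustrative:

```c
/* Decode a triangle strip: vertices i, i+1, i+2 form triangle i, with
   the winding flipped on every odd triangle so all faces keep a
   consistent orientation. Returns the number of triangles produced. */
int strip_to_triangles(const int *strip, int n_verts, int out[][3])
{
    int n_tris = n_verts - 2;
    for (int i = 0; i < n_tris; ++i) {
        if (i % 2 == 0) {
            out[i][0] = strip[i];
            out[i][1] = strip[i + 1];
            out[i][2] = strip[i + 2];
        } else {
            /* Swap the first two indices to restore the winding. */
            out[i][0] = strip[i + 1];
            out[i][1] = strip[i];
            out[i][2] = strip[i + 2];
        }
    }
    return n_tris;
}
```

A strip of 5 vertices yields 3 triangles, so drawing N triangles really does cost only N + 2 vertices.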
Index array

See also: Face-vertex meshes

With index arrays, a mesh is represented by two separate arrays: one holding the vertices, and another holding sets of three indices into that array, each set defining a triangle. The graphics system processes the vertices first and renders the triangles afterwards, using the index sets to work on the transformed data. In OpenGL, this is supported by the glDrawElements() primitive when using a Vertex Buffer Object (VBO). With this method, any arbitrary set of triangles sharing any arbitrary number of vertices can be stored, manipulated, and passed to the graphics API without any intermediary processing.
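Conceptually, the hardware transforms each shared vertex once and then walks the index sets. The CPU-side sketch below mimics that expansion (the type and function names are illustrative):

```c
typedef struct { float x, y, z; } Vec3;

/* Expand an indexed triangle list into flat per-corner vertex data, as
   the graphics system conceptually does after transforming each shared
   vertex exactly once. n_indices must be a multiple of 3. */
void expand_indexed(const Vec3 *verts, const int *indices,
                    int n_indices, Vec3 *out)
{
    for (int i = 0; i < n_indices; ++i)
        out[i] = verts[indices[i]];
}
```

Two triangles sharing an edge need only four stored vertices here, versus six if each triangle carried its own copies.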
Vector slime
In the demoscene, vector slime refers to a class of visual effects achieved by procedural deformation of geometric shapes.
Synopsis

A geometric object exposed to vector slime is usually defined by vertices and faces in two or three dimensions. In the process of deformation, each vertex in the original shape undergoes one or more linear transformations (usually rotation or translation), defined as a function of the vertex's position in space (usually a function of the magnitude of its position vector) and of time. The desired result is an animated geometric object behaving in a harmonic way, creating some degree of illusion of physical realism. Older vector slime implementations kept old copies of the rendering results from simple vector objects in RAM and selected scan-lines from the different buffers to create a time-displacement illusion over the y-axis.
Appearance

Depending on variances in implementation, vector slime can approximate an array of physical properties. A traditional approach is to let the linear transformation vary as a smooth function of time minus the magnitude of the vector in question. This creates the illusion that a force is applied at the origin of object space (where the object is usually centered) and that the rest of the object's body reacts as a soft body, as each vertex responds to the change in force delayed by its distance from the origin. Applied to a spikeball (a sphere with extruded arms), the object can resemble the behaviour of a soft, squid-like animal; applied to a cube, it appears as a cubic piece of jelly propelled by a gyroscopic force from the inside.

Areas of Application

Although the classical vector slime algorithms are far from an attempt at correct physical modelling, the result can, under certain conditions, trick the viewer into believing that a sophisticated physical simulation is involved. The effect has therefore grown quite popular in the demoscene for creating impressive visuals at relatively low computational cost. Interactive vector slime implementations can also be found in computer games as a substitute for a more correct physical simulation algorithm.

Demos featuring vector slime
• Crystal Dream 2 by Triton [1]
• Lethal Exit by Digital [2] (possibly the first demo to use this term)
• Yuri Nation by Non Alien Nature-V [3] (possibly the first hardware vector slime)
• Shapeshifter by Excess [4]

References
[1] http://www.pouet.net/prod.php?which=462
[2] http://www.pouet.net/prod.php?which=3226
[3] http://www.pouet.net/prod.php?which=1943
[4] http://www.pouet.net/prod.php?which=6983
Vertex (geometry)

In geometry, a vertex (plural: vertices) is a special kind of point that describes the corners or intersections of geometric shapes.
Definitions

Of an angle

The vertex of an angle is the point where two rays begin or meet, where two line segments join or meet, where two lines intersect (cross), or any appropriate combination of rays, segments, and lines that results in two straight "sides" meeting at one place.
Of a polytope

A vertex is a corner point of a polygon, polyhedron, or other higher-dimensional polytope, formed by the intersection of edges, faces, or facets of the object.
A vertex of an angle is the endpoint where two line segments or lines come together.
In a polygon, a vertex is called "convex" if the internal angle of the polygon, that is, the angle formed by the two edges at the vertex, with the polygon inside the angle, is less than π radians; otherwise, it is called "concave" or "reflex". More generally, a vertex of a polyhedron or polytope is convex if the intersection of the polyhedron or polytope with a sufficiently small sphere centered at the vertex is convex, and concave otherwise. Polytope vertices are related to vertices of graphs, in that the 1-skeleton of a polytope is a graph, the vertices of which correspond to the vertices of the polytope, and in that a graph can be viewed as a 1-dimensional simplicial complex the vertices of which are the graph's vertices. However, in graph theory, vertices may have fewer than two incident edges, which is usually not allowed for geometric vertices. There is also a connection between geometric vertices and the vertices of a curve, its points of extreme curvature: in some sense the vertices of a polygon are points of infinite curvature, and if a polygon is approximated by a smooth curve there will be a point of extreme curvature near each polygon vertex. However, a smooth curve approximation to a polygon will also have additional vertices, at the points where its curvature is minimal.
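For a 2D polygon with counter-clockwise winding, the convex-versus-reflex test at a vertex reduces to the sign of a cross product; a minimal sketch (the names are illustrative):

```c
typedef struct { double x, y; } Pt;

/* Vertex b of the corner a-b-c is convex (internal angle less than pi)
   exactly when the z component of the cross product of the incoming and
   outgoing edge vectors is positive, assuming counter-clockwise
   winding; otherwise the vertex is concave (reflex). */
int is_convex_vertex(Pt a, Pt b, Pt c)
{
    double cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
    return cross > 0.0;
}
```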
Of a plane tiling

A vertex of a plane tiling or tessellation is a point where three or more tiles meet; generally, but not always, the tiles of a tessellation are polygons and the vertices of the tessellation are also vertices of its tiles. More generally, a tessellation can be viewed as a kind of topological cell complex, as can the faces of a polyhedron or polytope; the vertices of other kinds of complexes such as simplicial complexes are its zero-dimensional faces.
Principal vertex

A polygon vertex xi of a simple polygon P is a principal polygon vertex if the diagonal [x(i−1), x(i+1)] intersects the boundary of P only at x(i−1) and x(i+1). There are two types of principal vertices: ears and mouths.
Ears

A principal vertex xi of a simple polygon P is called an ear if the diagonal [x(i−1), x(i+1)] that bridges xi lies entirely in P. (See also convex polygon.)
Mouths

A principal vertex xi of a simple polygon P is called a mouth if the diagonal [x(i−1), x(i+1)] lies outside the boundary of P.
Vertex B is an ear, because the straight line between C and D is entirely inside the polygon. Vertex C is a mouth, because the straight line between A and B is entirely outside the polygon.
Vertices in computer graphics

In computer graphics, objects are often represented as triangulated polyhedra in which the object vertices are associated not only with three spatial coordinates but also with other graphical information necessary to render the object correctly, such as colors, reflectance properties, textures, and surface normals; these properties are used in rendering by a vertex shader, part of the vertex pipeline.
External links
• Weisstein, Eric W., "Polygon Vertex" [1], MathWorld.
• Weisstein, Eric W., "Polyhedron Vertex" [2], MathWorld.
• Weisstein, Eric W., "Principal Vertex" [3], MathWorld.
References
[1] http://mathworld.wolfram.com/PolygonVertex.html
[2] http://mathworld.wolfram.com/PolyhedronVertex.html
[3] http://mathworld.wolfram.com/PrincipalVertex.html
Vertex Buffer Object
A Vertex Buffer Object (VBO) is an OpenGL feature that provides methods for uploading vertex data (position, normal vector, color, etc.) to the video device for non-immediate-mode rendering. VBOs offer substantial performance gains over immediate-mode rendering, primarily because the data resides in video device memory rather than system memory and so can be rendered directly by the video device. The Vertex Buffer Object specification was standardized by the OpenGL Architecture Review Board [1] as of OpenGL version 1.5 (in 2003). Similar functionality was available before the standardization of VBOs via the Nvidia-created "Vertex Array Range" extension or ATI's "Vertex Array Object" extension.
Basic VBO functions

The following functions form the core of VBO access and manipulation.

In OpenGL 2.1:
GenBuffersARB(sizei n, uint *buffers)
    Generates a new VBO and returns its ID number as an unsigned integer. Id 0 is reserved.
BindBufferARB(enum target, uint buffer)
    Use a previously created buffer as the active VBO.
BufferDataARB(enum target, sizeiptrARB size, const void *data, enum usage)
    Upload data to the active VBO.
DeleteBuffersARB(sizei n, const uint *buffers)
    Deletes the specified number of VBOs from the supplied array of VBO ids.

In OpenGL 3.x and OpenGL 4.x:
GenBuffers(sizei n, uint *buffers)
    Generates a new VBO and returns its ID number as an unsigned integer. Id 0 is reserved.
BindBuffer(enum target, uint buffer)
    Use a previously created buffer as the active VBO.
BufferData(enum target, sizeiptrARB size, const void *data, enum usage)
    Upload data to the active VBO.
DeleteBuffers(sizei n, const uint *buffers)
    Deletes the specified number of VBOs from the supplied array of VBO ids.
Example usage in C, using OpenGL 2.1

//Initialise VBO - do only once, at start of program
//Create a variable to hold the VBO identifier
GLuint triangleVBO;

//Vertices of a triangle (counter-clockwise winding)
float data[] = {1.0, 0.0, 1.0, 0.0, 0.0, -1.0, -1.0, 0.0, 1.0};

//Create a new VBO and use the variable id to store the VBO id
glGenBuffers(1, &triangleVBO);

//Make the new VBO active
glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);

//Upload vertex data to the video device
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);

//Make the new VBO active. Repeat here in case it has changed since initialisation
glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);

//Draw triangle from VBO - do each time the window, view point or data changes
//Establish its 3 coordinates per vertex with zero stride in this array; necessary here
glVertexPointer(3, GL_FLOAT, 0, NULL);

//Establish that the array contains vertices (not normals, colours, texture coords etc.)
glEnableClientState(GL_VERTEX_ARRAY);

//Actually draw the triangle, giving the number of vertices provided
glDrawArrays(GL_TRIANGLES, 0, sizeof(data) / sizeof(float) / 3);

//Force display to be drawn now
glFlush();
Example usage in C, using OpenGL 3.x and OpenGL 4.x

Vertex Shader:

/*----------------- "exampleVertexShader.vert" -----------------*/
#version 150 // Specify which version of GLSL we are using.

// in_Position was bound to attribute index 0 ("shaderAttribute")
in vec3 in_Position;

void main()
{
    gl_Position = vec4(in_Position.x, in_Position.y, in_Position.z, 1.0);
}
/*--------------------------------------------------------------*/

Fragment Shader:

/*---------------- "exampleFragmentShader.frag" ----------------*/
#version 150 // Specify which version of GLSL we are using.

precision highp float; // Video card drivers require this line to function properly

out vec4 fragColor;

void main()
{
    fragColor = vec4(1.0, 1.0, 1.0, 1.0); // Set colour of each fragment to WHITE
}
/*--------------------------------------------------------------*/

Main OpenGL Program:

/*--------------------- Main OpenGL Program ---------------------*/
/* Create a variable to hold the VBO identifier */ GLuint triangleVBO;
/* This is a handle to the shader program */ GLuint shaderProgram;
/* These pointers will receive the contents of our shader source code files */ GLchar *vertexSource, *fragmentSource;
/* These are handles used to reference the shaders */ GLuint vertexShader, fragmentShader;
const unsigned int shaderAttribute = 0;
const unsigned int NUM_OF_VERTICES_IN_DATA = 3;

/* Vertices of a triangle (counter-clockwise winding) */
float data[3][3] = {
    {  0.0,  1.0, 0.0 },
    { -1.0, -1.0, 0.0 },
    {  1.0, -1.0, 0.0 }
};
/*---------------------- Initialise VBO - (Note: do only once, at start of program) ---------------------*/ /* Create a new VBO and use the variable "triangleVBO" to store the VBO id */ glGenBuffers(1, &triangleVBO);
/* Make the new VBO active */ glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);
/* Upload vertex data to the video device */ glBufferData(GL_ARRAY_BUFFER, NUM_OF_VERTICES_IN_DATA * 3 * sizeof(float), data, GL_STATIC_DRAW);
/* Specify that our coordinate data is going into attribute index 0(shaderAttribute), and contains three floats per vertex */ glVertexAttribPointer(shaderAttribute, 3, GL_FLOAT, GL_FALSE, 0, 0);
/* Enable attribute index 0(shaderAttribute) as being used */ glEnableVertexAttribArray(shaderAttribute);
/* Make the new VBO active. */ glBindBuffer(GL_ARRAY_BUFFER, triangleVBO); /*-------------------------------------------------------------------------------------------------------*/
/*--------------------- Load Vertex and Fragment shaders from files and compile them --------------------*/ /* Read our shaders into the appropriate buffers */ vertexSource = filetobuf("exampleVertexShader.vert"); fragmentSource = filetobuf("exampleFragmentShader.frag");
/* Assign our handles a "name" to new shader objects */ vertexShader = glCreateShader(GL_VERTEX_SHADER); fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
/* Associate the source code buffers with each handle */ glShaderSource(vertexShader, 1, (const GLchar**)&vertexSource, 0); glShaderSource(fragmentShader, 1, (const GLchar**)&fragmentSource, 0);
/* Free the temporary allocated memory */ free(vertexSource); free(fragmentSource);
/* Compile our shader objects */ glCompileShader(vertexShader); glCompileShader(fragmentShader); /*-------------------------------------------------------------------------------------------------------*/
/*-------------------- Create shader program, attach shaders to it and then link it ---------------------*/ /* Assign our program handle a "name" */ shaderProgram = glCreateProgram();
/* Attach our shaders to our program */
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
/* Bind attribute index 0 (shaderAttribute) to in_Position*/ /* "in_Position" will represent "data" array's contents in the vertex shader */ glBindAttribLocation(shaderProgram, shaderAttribute, "in_Position");
/* Link shader program*/ glLinkProgram(shaderProgram); /*-------------------------------------------------------------------------------------------------------*/
/* Set shader program as being actively used */ glUseProgram(shaderProgram);
/* Set background colour to BLACK */ glClearColor(0.0, 0.0, 0.0, 1.0);
/* Clear background with BLACK colour */ glClear(GL_COLOR_BUFFER_BIT);
/* Actually draw the triangle: the data describes GL_TRIANGLES, starting at vertex 0 and drawing 3 vertices */
glDrawArrays(GL_TRIANGLES, 0, 3);
/*---------------------------------------------------------------*/
References
[1] http://www.opengl.org/about/arb/
External links
• Vertex Buffer Object Whitepaper (http://www.opengl.org/registry/specs/ARB/vertex_buffer_object.txt)
Vertex (computer graphics)
A vertex (plural: vertices) in computer graphics is a data structure that describes a point in 2D or 3D space. Display objects are composed of arrays of flat surfaces (typically triangles), and vertices define the location and other attributes of the corners of those surfaces.
Application to object models

In computer graphics, objects are most often represented as triangulated polyhedra. Non-triangular surfaces can be converted to an array of triangles through tessellation. The vertices of the triangles are associated not only with position but also with other graphical attributes needed to render the object correctly. Such attributes can include the color at the vertex, the reflectance of the surface at the vertex, the texture coordinates of the surface at the vertex, and the normal of an approximated curved surface at the location of the vertex. These properties are used in rendering by a vertex shader or vertex pipeline. The normal can be used to determine a surface's orientation toward a light source for flat shading using Lambert's cosine law, or the orientation of each of the vertices to mimic a curved surface with Phong shading.
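The use of the normal for shading can be made concrete: by Lambert's cosine law, diffuse intensity is proportional to the cosine of the angle between the unit surface normal and the unit direction to the light, clamped at zero for back-facing surfaces (the helper names below are illustrative):

```c
typedef struct { double x, y, z; } Vec3;

static double dot3(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Lambertian diffuse term at a vertex: both vectors are assumed to be
   unit length; surfaces facing away from the light receive no light. */
double lambert_intensity(Vec3 unit_normal, Vec3 unit_to_light)
{
    double c = dot3(unit_normal, unit_to_light);
    return c > 0.0 ? c : 0.0;
}
```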
Vertex attributes

Most attributes of a vertex represent vectors in the space to be rendered. Vectors can be 1-dimensional (x), 2-dimensional (x, y), or 3-dimensional (x, y, z) and can include a fourth homogeneous coordinate (w). The following is a table of the built-in vertex attributes in the OpenGL standard.
OpenGL vertex attributes

GL attribute name    attribute defined (data value size)
gl_Vertex            Position (vec4)
gl_Normal            Normal (vec4)
gl_Color             Primary color of vertex (vec4)
gl_MultiTexCoord0    Texture coordinate of texture unit 0 (vec4)
gl_MultiTexCoord1    Texture coordinate of texture unit 1 (vec4)
gl_MultiTexCoord2    Texture coordinate of texture unit 2 (vec4)
gl_MultiTexCoord3    Texture coordinate of texture unit 3 (vec4)
gl_MultiTexCoord4    Texture coordinate of texture unit 4 (vec4)
gl_MultiTexCoord5    Texture coordinate of texture unit 5 (vec4)
gl_MultiTexCoord6    Texture coordinate of texture unit 6 (vec4)
gl_MultiTexCoord7    Texture coordinate of texture unit 7 (vec4)
gl_FogCoord          Fog Coord (float)

References
Vertex pipeline
The function of the vertex pipeline in any GPU is to take geometry data (usually supplied as vector points), process it as needed with either fixed-function operations (earlier DirectX) or a vertex shader program (later DirectX), and project all of the 3D data points in a scene onto a 2D plane for display on a computer monitor. Unneeded data can be eliminated before it travels through the rest of the rendering pipeline, cutting out extraneous work (view volume clipping and backface culling). After the vertex engine has finished with the geometry, the calculated 2D data is sent to the pixel engine for further processing such as texturing and fragment shading. As of DirectX 9c, the vertex processor can perform the following when vertex processing is programmed through the DirectX API:
• Tessellation
• Displacement mapping
• Geometry blending
• Higher-order primitives
• Point sprites
• Matrix stacks
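The projection step at the heart of the pipeline reduces to a perspective divide; the sketch below uses a single made-up focal length in place of a full projection matrix:

```c
typedef struct { double x, y; } Pt2;

/* Project a camera-space point (z > 0, camera looking down +z) onto the
   image plane: divide by depth and scale by a focal length. */
Pt2 project_point(double x, double y, double z, double focal)
{
    Pt2 p = { focal * x / z, focal * y / z };
    return p;
}
```

Doubling the depth halves the projected coordinates, which is the perspective foreshortening the pipeline produces for every vertex.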
External links
• Anandtech Article [1]
References
[1] http://www.anandtech.com/video/showdoc.aspx?i=2044&p=3
Viewing frustum
In 3D computer graphics, the viewing frustum or view frustum is the region of space in the modeled world that may appear on the screen; it is the field of view of the notional camera.[1] The exact shape of this region varies depending on what kind of camera lens is being simulated, but typically it is a frustum of a rectangular pyramid (hence the name). The planes that cut the frustum perpendicular to the viewing direction are called the near plane and the far plane. Objects closer to the camera than the near plane or beyond the far plane are not drawn. Sometimes the far plane is placed infinitely far away from the camera, so that all objects within the frustum are drawn regardless of their distance from the camera.
A view frustum.
Viewing frustum culling or view frustum culling is the process of removing objects that lie completely outside the viewing frustum from the rendering process. Rendering these objects would be a waste of time since they are not directly visible. To make culling fast, it is usually done using bounding volumes surrounding the objects rather than the objects themselves.
Definitions

VPN – the view-plane normal: a normal to the view plane.
VUV – the view-up vector: the vector on the view plane that indicates the upward direction.
VRP – the viewing reference point: a point located on the view plane, and the origin of the VRC.
PRP – the projection reference point: the point from which the image is projected; for parallel projection, the PRP is at infinity.
VRC – the viewing-reference coordinate system.

The geometry is defined by a field-of-view angle (in the 'y' direction) and an aspect ratio. Further, a set of z-planes defines the near and far bounds of the frustum.
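The containment test implied by these definitions can be sketched for the simple symmetric case. The function and its parameter names are illustrative, not from any standard API; points are assumed to be in camera space, with the camera at the origin looking down the negative z-axis:

```python
import math

def point_in_frustum(p, fovy_deg, aspect, near, far):
    """Test a camera-space point against a symmetric view frustum
    defined by a vertical field-of-view angle, an aspect ratio,
    and near/far z-planes (camera looks down -z)."""
    x, y, z = p
    depth = -z                               # distance along the view direction
    if not (near <= depth <= far):           # outside near/far planes
        return False
    half_h = depth * math.tan(math.radians(fovy_deg) / 2.0)  # frustum half-height at this depth
    half_w = half_h * aspect                                 # frustum half-width
    return abs(y) <= half_h and abs(x) <= half_w
```

For frustum culling with bounding spheres, the same test is commonly padded by the sphere radius, so an object is rejected only when its whole bounding volume lies outside.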
References
[1] http://msdn.microsoft.com/en-us/library/ff634570.aspx (Microsoft – What Is a View Frustum?)
Viewport
A viewport is a polygonal viewing region in computer graphics, or a term used for optical components. It has several definitions in different contexts:
Computing

In 3D computer graphics, the viewport is the 2D rectangle onto which a 3D scene is projected from the position of a virtual camera. A viewport is a region of the screen used to display a portion of the total image to be shown.[1] In virtual desktops, the viewport is the visible portion of a 2D area which is larger than the visualization device. In web browsers, the viewport is the visible portion of the canvas element.
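The mapping a 3D-graphics viewport performs can be sketched as the standard transform from normalized device coordinates (each axis in [-1, 1]) into a viewport rectangle. The function name and the y-up convention are assumptions for illustration:

```python
def ndc_to_viewport(ndc_x, ndc_y, vp_x, vp_y, vp_w, vp_h):
    """Map normalized device coordinates (each in [-1, 1]) into a
    viewport rectangle given by its origin (vp_x, vp_y) and size
    (vp_w, vp_h), mirroring the fixed-function viewport transform."""
    sx = vp_x + (ndc_x + 1.0) * 0.5 * vp_w   # [-1, 1] -> [vp_x, vp_x + vp_w]
    sy = vp_y + (ndc_y + 1.0) * 0.5 * vp_h   # [-1, 1] -> [vp_y, vp_y + vp_h]
    return (sx, sy)
```

Window systems differ on whether y grows upward or downward; flipping `ndc_y` adapts the sketch to a top-left origin.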
Optical components

In manufacturing, the term refers to hermetically sealed optical components, typically used for visual or broadband energy transmission into and out of vacuum systems. Single- and multi-layer coatings can be added to viewports to optimize transmission performance.
References
[1] http://msdn.microsoft.com/en-us/library/ff634571.aspx (Microsoft – What Is a Viewport?)
External links
• List of viewport sizes for mobile and tablet devices (http://i-skool.co.uk/mobile-development/web-design-for-mobiles-and-tablets-viewport-sizes/)
Virtual actor
A virtual human or digital clone is the creation or re-creation of a human being in image and voice using computer-generated imagery and sound that is often indistinguishable from the real actor. The idea was first portrayed in the 1981 film Looker, in which models had their bodies scanned digitally to create 3D computer-generated images, which were then animated for use in TV commercials. Two 1992 books used this concept: Fools by Pat Cadigan, and Et Tu, Babe by Mark Leyner.

In general, virtual humans employed in movies are known as synthespians, virtual actors, vactors, cyberstars, or "silicentric" actors. There are several legal ramifications of the digital cloning of human actors, relating to copyright and personality rights. People who have already been digitally cloned as simulations include Bill Clinton, Marilyn Monroe, Fred Astaire, Ed Sullivan, Elvis Presley, Bruce Lee, Audrey Hepburn, Anna Marie Goddard, and George Burns. Data sets of Arnold Schwarzenegger for the creation of a virtual Arnold (the head, at least) have already been made.

The name Schwarzeneggerization comes from the 1992 book Et Tu, Babe by Mark Leyner. In one scene, on pages 50–51, a character asks the shop assistant at a video store to have Arnold Schwarzenegger digitally substituted for existing actors in various works, including (amongst others) Rain Man (replacing both Tom Cruise and Dustin Hoffman), My Fair Lady (replacing Rex Harrison), Amadeus (replacing F. Murray Abraham), The Diary of Anne Frank (as Anne Frank), Gandhi (replacing Ben Kingsley), and It's a Wonderful Life (replacing James Stewart). Schwarzeneggerization is the name that Leyner gives to this process. Only ten years later, Schwarzeneggerization was close to being reality.
By 2002, Schwarzenegger, Jim Carrey, Kate Mulgrew, Michelle Pfeiffer, Denzel Washington, Gillian Anderson, and David Duchovny had all had their heads laser scanned to create digital computer models thereof.
Early history

Early computer-generated animated faces include the 1985 film Tony de Peltrie and the music video for Mick Jagger's song "Hard Woman" (from She's the Boss). The first actual human beings to be digitally duplicated were Marilyn Monroe and Humphrey Bogart, in a March 1987 film created by Nadia Magnenat Thalmann and Daniel Thalmann for the 100th anniversary of the Engineering Society of Canada. The film was created by six people over a year, and had Monroe and Bogart meeting in a café in Montreal. The characters were rendered in three dimensions, and were capable of speaking, showing emotion, and shaking hands.

In 1987, the Kleiser-Walczak Construction Company began its Synthespian ("synthetic thespian") Project, with the aim of creating "life-like figures based on the digital animation of clay models". In 1988, Tin Toy became the first entirely computer-generated film to win an Academy Award (Best Animated Short Film). In the same year, Mike the Talking Head, an animated head whose facial expression and head posture were controlled in real time by a puppeteer using a custom-built controller, was developed by Silicon Graphics and performed live at SIGGRAPH. In 1989, The Abyss, directed by James Cameron, included a computer-generated face placed onto a watery pseudopod.

In 1991, Terminator 2, also directed by Cameron, who was confident in the abilities of computer-generated effects from his experience with The Abyss, included a mixture of synthetic actors with live action, including computer models of Robert Patrick's face. The Abyss contained just one scene with photo-realistic computer graphics; Terminator 2 contained over forty such shots throughout the film. In 1997, Industrial Light and Magic worked on creating a virtual actor that was a composite of the bodily parts of several real actors. By the 21st century, virtual actors had become a reality.
The face of Brandon Lee, who had died partway through the shooting of The Crow (1994), had been digitally superimposed over the top of a body double in order to complete
those parts of the movie that had yet to be filmed. By 2001, three-dimensional computer-generated realistic humans had been used in Final Fantasy: The Spirits Within, and by 2004, a synthetic Laurence Olivier co-starred in Sky Captain and the World of Tomorrow.
Legal issues

Critics such as Stuart Klawans in the New York Times expressed worry about the loss of "the very thing that art was supposedly preserving: our point of contact with the irreplaceable, finite person". Even more problematic are the issues of copyright and personality rights. Actors have little legal control over digital clones of themselves. In the United States, for instance, they must resort to database protection laws in order to exercise what control they have (the proposed Database and Collections of Information Misappropriation Act would strengthen such laws). An actor does not own the copyright on his digital clones unless they were created by him. Robert Patrick, for example, would not have any legal control over the liquid-metal digital clone of himself that was created for Terminator 2.

The use of digital clones in the movie industry, to replicate the acting performances of a cloned person, represents a controversial aspect of these implications, as it may cause real actors to land fewer roles and put them at a disadvantage in contract negotiations, since a clone could always be used by the producers at potentially lower cost. It is also a career difficulty, since a clone could be used in roles that a real actor would never accept for various reasons. Strong identification of an actor's image with a certain type of role can harm a career, and real actors, conscious of this, pick and choose which roles they play (Bela Lugosi and Margaret Hamilton became typecast by their roles as Count Dracula and the Wicked Witch of the West, whereas Anthony Hopkins and Dustin Hoffman have played a diverse range of parts). A digital clone could be used to play, for example, an axe murderer or a prostitute, which would affect the actor's public image, and in turn affect what future casting opportunities were offered to that actor.
Both Tom Waits and Bette Midler have won actions for damages against people who employed their images in advertisements that they had refused to take part in themselves. In the USA, the use of a digital clone in advertisements is required to be accurate and truthful (section 43(a) of the Lanham Act makes deliberate confusion unlawful), since the use of a celebrity's image would be an implied endorsement. The New York District Court held that an advertisement employing a Woody Allen impersonator would violate the Act unless it contained a disclaimer stating that Allen did not endorse the product.

Other concerns include posthumous use of digital clones. Barbara Creed states that "Arnold's famous threat, 'I'll be back', may take on a new meaning". Even before Brandon Lee was digitally reanimated, the California Senate drew up the Astaire Bill, in response to lobbying from Fred Astaire's widow and the Screen Actors Guild, who were seeking to restrict the use of digital clones of Astaire. Movie studios opposed the legislation, and as of 2002 it had yet to be finalized and enacted. Several companies, including Virtual Celebrity Productions, have purchased the rights to create and use digital clones of various dead celebrities, such as Marlene Dietrich[1] and Vincent Price.
In fiction
• S1m0ne, a 2002 science fiction drama film written, produced, and directed by Andrew Niccol, starring Al Pacino.
In business

A virtual actor can also be a person who performs a role in real time when logged into a virtual world or collaborative online environment: one who represents, via an avatar, a character in a simulation or training event, behaving as if acting a part. Vactor Studio LLC is a New York-based company, but its "Vactors" (virtual actors) are located all across the US and Canada. The Vactors log into virtual-world applications from their homes or offices to participate in exercises covering an extensive range of markets, including medical, military, first responder, corporate, government, entertainment, and retail. Through their own computers, they become doctors, soldiers, EMTs, customer service
reps, victims for mass-casualty response training, or whatever the demonstration requires. Since 2005, Vactor Studio's role-players have delivered thousands of hours of professional virtual world demonstrations, training exercises, and event management services.
References
[1] Los Angeles Times / Digital Elite Inc. (http://articles.latimes.com/1999/aug/09/business/fi-64043)
Further reading
• Michael D. Scott and James N. Talbott (1997). "Titles and Characters". Scott on Multimedia Law. Aspen Publishers Online. ISBN 1-56706-333-0. A detailed discussion of the law, as it stood in 1997, relating to virtual humans and the rights held over them by real humans.
• Richard Raysman (2002). "Trademark Law". Emerging Technologies and the Law: Forms and Analysis. Law Journal Press. pp. 6–15. ISBN 1-58852-107-9. How trademark law affects digital clones of celebrities who have trademarked their personae.
External links
• Vactor Studio (http://www.vactorstudio.com/)
Virtual environment software

Virtual environment software refers to any software, program, or system that implements, manages, and controls multiple virtual environment instances. The software is installed within an organization's existing IT infrastructure and controlled from within the organization itself. From a central interface, the software creates an interactive and immersive experience for administrators and users.
Uses

Virtual environment software can be put to almost any use, from advanced military training simulators to virtual classrooms. Many virtual environments are being purposed as branding channels for products and services by enterprise corporations and non-profit groups. Virtual events and virtual trade shows have been the earliest widely accepted uses of virtual event services. More recently, virtual environment software platforms have offered enterprises the ability to connect people across the Internet, extending their market and industry reach while reducing travel-related costs and time.
Background

Providers of virtual environments have tended to focus on the early marketplace adoption of virtual events. These providers are typically software-as-a-service (SaaS) based; most have evolved from the streaming media/gaming arena and social networking applications. This early virtual event marketplace is now moving towards 3D persistent environments, in which enterprises combine e-commerce and social media as core operating systems, and is evolving into virtual environments for branding, customer acquisition, and service centers. A persistent environment enables users, visitors, and administrators to re-visit part or parts of an event or session. Information gathered by attendees and end users, typically contact information and marketing materials, is stored in a virtual briefcase.
Potential advantages

Virtual environment software has the potential to combine the benefits of both online and on-premises environments. A flexible platform allows companies to deploy the software in both environments while being able to run reports on data in both locations from a centralized interface. The advent of 'persistent environments' lends itself to rich integration with enterprise technology assets. Virtual environment software can also be applied to virtual learning environments (also called learning management systems, or LMS). In the US, universities, colleges, and similar higher-education institutions have adopted virtual learning environments to economize on time and resources and to improve course effectiveness.
Future

Virtual events, trade shows, and environments are not projected to replace physical events and interactions. Instead, they are seen as extensions and enhancements of physical events and environments, increasing lead generation and reaching a wider audience while decreasing expenses. The virtual environments industry has been projected to reach a market size in the billions of dollars.
Market availability

Virtual environment software is an alternative to bundled services. Companies known to provide virtual environment software include UBIVENT [1], Unisfair [2], and vcopious [3].
References
[1] http://www.ubivent.com/
[2] http://www.unisfair.com/
[3] http://vcopious.com/
Virtual replay
Virtual replay is a technology which allows people to see 3D animations of sporting events. The technology was widely used during the 2006 FIFA World Cup, when bbcnews.com posted highlights on its website soon after matches concluded, and users could view the 3D renderings from multiple points of view.
External links
• A page on bbcnews.com using virtual replay technology [1]
References
[1] http://news.bbc.co.uk/sport2/hi/football/world_cup_2006/5148780.stm?goalid=500251
Volume mesh

Volumetric meshes are a polygonal representation of the interior volume of an object. Unlike polygon meshes, which represent only the surface as polygons, volumetric meshes also discretize the interior structure of the object. One application of volumetric meshes is finite element analysis, which may use regular or irregular volumetric meshes to compute internal stresses and forces throughout the entire volume of an object.
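The discretized interior can be illustrated with the simplest volumetric element, the tetrahedron. This sketch (function names and the vertex/element layout are illustrative assumptions) sums element volumes the way a finite-element code integrates a quantity over a mesh:

```python
def tet_volume(a, b, c, d):
    """Volume of a tetrahedron with vertices a, b, c, d (3D points),
    via the scalar triple product |(b-a) . ((c-a) x (d-a))| / 6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    cross = [v[1] * w[2] - v[2] * w[1],     # (c-a) x (d-a)
             v[2] * w[0] - v[0] * w[2],
             v[0] * w[1] - v[1] * w[0]]
    triple = sum(u[i] * cross[i] for i in range(3))
    return abs(triple) / 6.0

def mesh_volume(vertices, tets):
    """Total volume of a tetrahedral volume mesh: 'vertices' is a list
    of 3D points, 'tets' a list of 4-tuples of vertex indices."""
    return sum(tet_volume(*(vertices[i] for i in t)) for t in tets)
```

A surface-only polygon mesh could not support this kind of per-element interior computation; the volumetric discretization is exactly what finite element analysis needs.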
Voxel

A voxel (volume element) represents a value on a regular grid in three-dimensional space. "Voxel" is a combination of "volume" and "pixel", where "pixel" is itself a combination of "picture" and "element".[1] This is analogous to a texel, which represents 2D image data in a bitmap (sometimes referred to as a pixmap). As with pixels in a bitmap, voxels themselves do not typically have their position (their coordinates) explicitly encoded along with their values. Instead, the position of a voxel is inferred from its position relative to other voxels (i.e., its position in the data structure that makes up a single volumetric image). In contrast to pixels and voxels, points and polygons are often explicitly represented by the coordinates of their vertices. A direct consequence of this difference is that polygons can efficiently represent simple 3D structures with lots of empty or homogeneously filled space, while voxels are good at representing regularly sampled spaces that are non-homogeneously filled.
A series of voxels in a stack with a single voxel shaded
Voxels are frequently used in the visualization and analysis of medical and scientific data. Some volumetric displays use voxels to describe their resolution. For example, a display might be able to show 512×512×512 voxels.
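The implicit addressing described above, where a voxel's coordinates are never stored but only implied by its offset in the data structure, can be sketched as follows (the class and its memory layout are illustrative assumptions, not a standard format):

```python
class VoxelGrid:
    """A dense voxel grid stored as one flat list; a voxel's (x, y, z)
    coordinates are implied by its offset, never stored with the value."""
    def __init__(self, nx, ny, nz, fill=0):
        self.nx, self.ny, self.nz = nx, ny, nz
        self.data = [fill] * (nx * ny * nz)

    def index(self, x, y, z):
        # x varies fastest, z slowest (row-major in x)
        return x + self.nx * (y + self.ny * z)

    def get(self, x, y, z):
        return self.data[self.index(x, y, z)]

    def set(self, x, y, z, value):
        self.data[self.index(x, y, z)] = value
```

A 512×512×512 display resolution in this scheme means a flat array of 512³ (about 134 million) samples, which is why dense voxel data grows so quickly.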
Rasterization

Another technique for voxels is raster graphics, in which every pixel column of the display is ray-cast into the scene. A typical implementation ray-casts each column starting at the bottom of the screen, using what is known as a y-buffer. When a voxel is reached that projects to a higher y-value on the display than the current y-buffer entry, it is added to the y-buffer, overriding the previous value, and is connected with the previous y-value on the screen by interpolating the color values. Outcast and other 1990s video games employed this technique for effects such as reflection and bump-mapping, and usually for terrain rendering.

Outcast's graphics engine was mainly a combination of a ray-casting (heightmap) engine, used to render the landscape, and a texture-mapping polygon engine used to render objects. The "Engine Programming" section of the game's credits in the manual has several subsections related to graphics, among them "Landscape Engine", "Polygon Engine", "Water & Shadows Engine", and "Special Effects Engine". Although Outcast is often cited as a forerunner of voxel technology, this is somewhat misleading. The game does not actually model three-dimensional volumes of voxels; instead, it models the ground as a surface, which may be seen as being made up of voxels. The ground is decorated with objects that are modeled using texture-mapped polygons. When Outcast was developed, the term "voxel engine", when applied to computer games, commonly referred to a ray-casting engine (for example, the VoxelSpace engine).
On the engine technology page of the game's website, the landscape engine is also referred to as the "Voxels engine".[2] The engine is purely software-based; it does not rely on hardware acceleration via a 3D graphics card.[3] John Carmack also experimented with voxels for the Quake III engine.[4] One problem cited by Carmack was the lack of graphics cards designed specifically for such rendering, requiring it to be done in software, which remains an issue with the technology to this day. Comanche was the first commercial flight simulation based on voxel technology. NovaLogic used the proprietary Voxel Space engine developed for the company by Kyle Freeman [5] (written entirely in assembly language) to create open landscapes.[6] This rendering technique allowed for much more detailed and realistic terrain than simulations based on vector graphics at that time.
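The y-buffer technique described above can be sketched for a single screen column. The function and its parameters (camera height, horizon row, perspective scale) are a hypothetical simplification of a Voxel Space-style heightmap renderer, not code from any shipped engine:

```python
def render_column(heights, colors, cam_h, horizon, scale, max_dist, screen_h):
    """Draw one screen column of heightmap 'terrain' front to back.
    'heights' and 'colors' are 1D samples along the ray; returns the
    column as a list indexed by screen row (0 = bottom of screen)."""
    column = [None] * screen_h
    ybuffer = 0                                  # highest screen row filled so far
    for d in range(1, max_dist):                 # march away from the camera
        h = heights[min(d, len(heights) - 1)]
        # perspective: project terrain height at distance d onto a screen row
        row = int((h - cam_h) * scale / d + horizon)
        row = max(0, min(screen_h, row))
        # fill only rows not already covered by nearer (taller-looking) terrain
        for y in range(ybuffer, row):
            column[y] = colors[min(d, len(colors) - 1)]
        ybuffer = max(ybuffer, row)
    return column
```

Because the column is filled front to back and the y-buffer only ever rises, nearer terrain automatically occludes farther terrain without any per-pixel depth test, which is what made the technique fast enough for 1990s CPUs.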
Voxel data

A voxel represents a single sample, or data point, on a regularly spaced three-dimensional grid. This data point can consist of a single piece of data, such as an opacity, or multiple pieces of data, such as a color in addition to opacity. A voxel represents only a single point on this grid, not a volume; the space between voxels is not represented in a voxel-based dataset. Depending on the type of data and the intended use of the dataset, this missing information may be reconstructed and/or approximated, e.g. via interpolation. The value of a voxel may represent various properties. In CT scans, the values are Hounsfield units, giving the opacity of material to X-rays.[7]:29 Different types of values are acquired from MRI or ultrasound.
A (smoothed) rendering of a data set of voxels for a macromolecule
Voxels can contain multiple scalar values, essentially vector (tensor) data; in the case of ultrasound scans with B-mode and Doppler data, density, and volumetric flow rate are captured as separate channels of data relating to the same voxel positions.
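The reconstruction-by-interpolation mentioned above is most commonly trilinear: the value between eight neighboring voxel samples is estimated by interpolating along each axis in turn. A minimal sketch (the `grid[z][y][x]` layout is an assumption for illustration):

```python
def trilinear(grid, x, y, z):
    """Estimate a scalar value at a fractional position (x, y, z) from
    the eight surrounding voxel samples; grid[z][y][x] holds the data."""
    x0, y0, z0 = int(x), int(y), int(z)          # lower corner of the cell
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1          # upper corner
    fx, fy, fz = x - x0, y - y0, z - z0          # fractional offsets in [0, 1)

    def lerp(a, b, t):
        return a * (1 - t) + b * t

    # interpolate along x on the four cell edges, then along y, then z
    c00 = lerp(grid[z0][y0][x0], grid[z0][y0][x1], fx)
    c10 = lerp(grid[z0][y1][x0], grid[z0][y1][x1], fx)
    c01 = lerp(grid[z1][y0][x0], grid[z1][y0][x1], fx)
    c11 = lerp(grid[z1][y1][x0], grid[z1][y1][x1], fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)
```

For multi-channel voxels (e.g. B-mode plus Doppler), each channel is simply interpolated independently with the same weights.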
While voxels provide precision and depth of reality, they typically form large data sets that are unwieldy to manage given the bandwidth of common computers. However, through efficient compression and manipulation of large data files, interactive visualization can be enabled on consumer-market computers. Other values may be useful for immediate 3D rendering, such as a surface normal vector and color.
Uses

Common uses of voxels include volumetric imaging in medicine and representation of terrain in games and simulations. Voxel terrain is used instead of a heightmap because of its ability to represent overhangs, caves, arches, and other 3D terrain features. Such concave features cannot be represented in a heightmap because only the top 'layer' of data is represented, leaving everything below it filled (the volume that would otherwise be the inside of caves, or the underside of arches or overhangs).
Visualization

A volume containing voxels can be visualized either by direct volume rendering or by the extraction of polygon isosurfaces that follow the contours of given threshold values. The marching cubes algorithm is often used for isosurface extraction, though other methods exist as well.
Computer gaming

• Planet Explorers is a 3D building game that uses voxels for rendering equipment, buildings, and terrain. Using a voxel editor, players can create their own models for weapons and buildings, and terrain can be modified as in other building games.
• C4 Engine is a game engine that uses voxels for in-game terrain and has a voxel editor in its built-in level editor.
• Miner Wars 2081 uses its own Voxel Rage engine to let the user deform the terrain of asteroids, allowing tunnels to be formed.
• Many NovaLogic games have used voxel-based rendering technology, including the Delta Force, Armored Fist, and Comanche series.
• Westwood Studios' Command & Conquer: Tiberian Sun and Command & Conquer: Red Alert 2 use voxels to render most vehicles.
• Westwood Studios' Blade Runner video game used voxels to render characters and artifacts.
• Outcast, a game made by Belgian developer Appeal, sports outdoor landscapes that are rendered by a voxel engine.
• The Comanche series, made by NovaLogic, used voxel rasterization for terrain rendering.[8]
• The video game Amok for the Sega Saturn makes use of voxels in its scenarios.
• The computer game Vangers uses voxels for its two-level terrain system.
• Master of Orion III uses voxel graphics to render space battles and solar systems. Battles displaying 1000 ships at a time rendered slowly on computers without hardware graphics acceleration.
• Sid Meier's Alpha Centauri uses voxel models to render units.
• Shattered Steel featured deforming landscapes using voxel technology.
• The Build engine first-person shooters Shadow Warrior and Blood use voxels instead of sprites as an option for many of the item pickups and scenery. Duke Nukem 3D has a fan-created pack in a similar style.
• Crysis, as well as CryEngine 2 and 3, uses a combination of heightmaps and voxels for its terrain system.
• Worms 4: Mayhem uses a voxel-based engine to simulate land deformation similar to the older 2D Worms games.
• The multi-player role-playing game Hexplore uses a voxel engine allowing the player to rotate the isometrically rendered playfield.
• The computer game Voxatron, produced by Lexaloffle, is composed and generated fully using voxels.
• Ace of Spades used Ken Silverman's Voxlap engine before being rewritten in a bespoke OpenGL engine.
• 3D Dot Game Heroes uses voxels to present retro-looking graphics.
• Vox, an upcoming voxel-based exploration/RPG game focusing on player-generated content.
• ScrumbleShip, a block-building MMO space simulator game in development, renders each in-game component and damage to those components using dozens to thousands of voxels.
• Castle Story, a castle-building real-time strategy game in development, has terrain consisting of smoothed voxels.
• Block Ops, a voxel-based first-person shooter game.
• Cube World, an indie voxel-based game with RPG elements, drawing on games such as Terraria, Diablo, The Legend of Zelda, Monster Hunter, World of Warcraft, Secret of Mana, and many others.
• EverQuest Next and EverQuest Next: Landmark, upcoming MMORPGs by Sony Online Entertainment, make extensive use of voxels for world creation as well as player-generated content.
• 7 Days to Die, a voxel-based open-world survival horror game developed by The Fun Pimps Entertainment.
• Brutal Nature, a voxel-based survival FPS that uses surface-net relaxation to render voxels as a smooth mesh.
Voxel editors

While scientific volume visualization doesn't require modifying the actual voxel data, voxel editors can be used to create art (especially 3D pixel art) and models for voxel-based games. Some editors are focused on a single approach to voxel editing, while others mix several. Common approaches include:
• Slice-based: the volume is sliced along one or more axes and the user edits each image individually using 2D raster editing tools. These generally store color information in voxels.
• Sculpture: similar to the vector counterpart but with no topology constraints. These usually store density information in voxels and lack color information.
• Building blocks: the user adds and removes blocks just like a construction-set toy.
Voxel editors for games

Many game developers use in-house editors that are not released to the public, but a few games have publicly available editors, some of them created by players.
• The slice-based, fan-made Voxel Section Editor III for Command & Conquer: Tiberian Sun and Command & Conquer: Red Alert 2.
• SLAB6 and VoxEd are sculpture-based voxel editors used by Voxlap engine games, including Voxelstein 3D and Ace of Spades.
• The official Sandbox 2 editor for CryEngine 2 games (including Crysis) supports sculpting voxel-based terrain.
• The C4 Engine and its editor support multiple-detail-level (LOD) voxel terrain by implementing the patent-free Transvoxel algorithm.
General purpose voxel editors

A few voxel editors are available that are not tied to specific games or engines. They can be used as alternatives or complements to traditional 3D vector modeling.
Extensions

A generalization of a voxel is the doxel, or dynamic voxel. This is used in the case of a 4D dataset, for example an image sequence that represents 3D space together with another dimension such as time. In this way, an image could contain 100×100×100×100 doxels, which could be seen as a series of 100 frames of a 100×100×100 volume image (the equivalent for a 3D image would be showing a 2D cross-section of the image in each frame). Although storage
and manipulation of such data requires large amounts of memory, it allows the representation and analysis of spacetime systems.
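Addressing such a 4D dataset follows the same implicit-indexing idea as voxels, with time as the slowest-varying dimension. A minimal sketch (names and layout are illustrative assumptions):

```python
def doxel_index(x, y, z, t, nx, ny, nz):
    """Flat index of a doxel (dynamic voxel) in a 4D dataset stored as
    consecutive 3D volumes of size nx*ny*nz: frame t, voxel (x, y, z)."""
    return ((t * nz + z) * ny + y) * nx + x

# A 100x100x100x100 doxel dataset is 100 frames of a 100^3 volume:
frames, nx, ny, nz = 100, 100, 100, 100
total = frames * nx * ny * nz   # 10**8 samples -- hence the memory cost
```

Each frame occupies a contiguous block of `nx * ny * nz` samples, so stepping `t` by one jumps exactly one volume forward, which is what makes frame-by-frame playback of the spacetime data cheap.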
References
[1] http://www.tomshardware.com/reviews/voxel-ray-casting,2423-3.html
[2] Engine Technology (http://web.archive.org/web/20060507235618/http://www.outcast-thegame.com/tech/paradise.htm)
[3] "Voxel terrain engine (http://www.codermind.com/articles/Voxel-terrain-engine-building-the-terrain.html)", introduction. In a coder's mind, 2005.
[4] http://www.tomshardware.com/reviews/voxel-ray-casting,2423-2.html
[5] http://patents.justia.com/inventor/kyle-g-freeman
[6] http://www.flightsim.com/vbfs/content.php?2994-NovaLogic-Awarded-Patent-For-Voxel-Space-Graphics-Engine
[7] Novelline, Robert. Squire's Fundamentals of Radiology. Harvard University Press, 5th edition, 1997. ISBN 0-674-83339-2.
[8] http://projectorgames.net/blog/?p=168
External links • Games with voxel graphics (http://www.mobygames.com/game-group/visual-technique-style-voxel-graphics) at MobyGames • Fundamentals of voxelization (http://labs.cs.sunysb.edu/labs/projects/volume/Papers/Voxel/)
Web3D

Web3D was initially the idea of fully displaying and navigating web sites using 3D. By extension, the term now refers to all interactive 3D content embedded into HTML web pages that can be viewed through a web browser. Web3D technologies usually require installing a 3D viewer (plugin) to see this kind of content. Nowadays many formats and tools are available:
• 3DMLW
• Adobe Shockwave
• Altadyn
• Burster (web plugin to play Blender content)
• Cult3D
• FancyEngine
• Java 3D
• JOGL
• LWJGL
• O3D
• Oak3D
• ShiVa
• TurnTool
• Unity
• Virtools
• VRML
• Viewpoint
• Web3D Consortium
• WebGL
• WireFusion
• X3D (extension of VRML)
• AMF – Additive Manufacturing File Format

They are mainly distinguished by five criteria:
• Simplicity (automatic installation, already-high installation rates)
• Compatibility (Windows, Mac, Unix, ...)
• Quality (performance, see frames per second, and indirectly display quality)
• Interactivity (depending on the solution and its programming opportunities, content creators have more or less freedom in creating interactivity)
• Standardization (none, "market position", by a standards organization, etc.)
External links
• Lateral Visions [1] – software company: 3D web specialist and platform developers.
• TDT3D [2] – European 3D community specialized in computer graphics and real-time 3D rendering (updated regularly).
• Web3D and Web3D Framework – Web3D software company and Web3D developers [3]
• Walkthrough Web3D online galleries, online museums, and online fairs [4]
• Paul Festa (2002-02-26). "Bringing 3D to the Web" [5]. CNET News.
• Canvas3D [6]
• Altadyn [7] – 3D online collaborative platforms
References
[1] http://lateralvisions.com
[2] http://www.tdt3d.eu/
[3] http://graphtwerk.com
[4] http://3dstellwerk.com
[5] http://www.news.com/2100-1023-844985.html
[6] http://blog.vlad1.com/canvas-3d/
[7] http://altadyn.com
the speller, Chrisminter, Chromecat, Cjrcl, Codename Lisa, CorporateM, Cremepuff222, Cyon Steve, Cyrre, Davester78, Dekisugi, Dgirardeau, Dicklyon, Dlee3d, Dobie80, Dodger, Dr. Woo, DriveDenali, Dryo, Dsavi, Dto, Dynaflow, EEPROM Eagle, ERobson, ESkog, Edward, Eiskis, Elf, Elfguy, Emal35, EncMstr, Enigma100cwu, Enquire, EpsilonSquare, ErkDemon, Erp Erpington, Euchiasmus, Extremophile, Fiftyquid, Firsfron, Forderud, Frecklefoot, Fu Kung Master, GTBacchus, Gaius Cornelius, Gal911, Genius101, Goncalopp, Greg L, GustavTheMushroom, Gvancollie, Herorev, Holdendesign, HoserHead, Hyad, Iamsouthpaw, IanManka, Im.thatoneguy, Intgr, Inthoforo, Iphonefans2009, Iridescent, JLaTondre, Jameshfisher, Jan Tomanek, JayDez, Jdm64, Jdtyler, Jncraton, JohnCD, Joshmings, Jreynaga, Jstier, Jtanadi, Juhame, Julian Herzog, K8 fan, KDS4444, KVDP, Kev Boy, Koffeinoverdos, Kotakotakota, Lambda, Lantrix, Laurent Cancé, Lead holder, Lerdthenerd, LetterRip, Licu, Lifeweaver, Lightworkdesign, LilHelpa, Litherlandsand, Lolbill58, Longhair, M.J. 
Moore-McGonigal PhD, P.Eng, Malcolmxl5, Mandarax, Marcelswiss, Markhobley, Martarius, Materialscientist, Matuhin86, Mayalld, Michael Devore, Michael b strickland, Mike Gale, Millahnna, Mlfarrell, Mojo Hand, Mr mr ben, MrOllie, NeD80, NeoKron, Nev1, Nick Drake, Nickdi2012, Nixeagle, Nopnopzero, Nutiketaiel, Oddbodz, Oicumayberight, Optigon.wings, Ouzari, Papercyborg, Parametric66, Parscale, Paul Stansifer, Pepelyankov, Phiso1, Plan, Quincy2010, Radagast83, Raffaele Megabyte, Ramu50, Rapasaurus, Raven in Orbit, Relux2007, Requestion, Rich Farmbrough, Ronz, Rtc, Ryan Postlethwaite, Samtroup, SchreiberBike, Scotttsweeney, Sendai2ci, Serioussamp, ShaunMacPherson, Skhedkar, Skinnydow, SkyWalker, Skybum, Smalljim, Snarius, Snoblomma, Sparklyindigopink, Sparkwoodand21, Speck-Made, Spg3D, Stib, Strattonbrazil, Sugarsmax, Tbsmith, Team FS3D, TheRealFennShysa, Thecrusader 440, Three1415, Thymefromti, Tim1357, Tommato, Tritos, Truthdowser, Uncle Dick, VRsim, Vdf22, Victordiaz, VitruV07, Waldir, WallaceJackson, Wcgteach, Weetoddid, Welsh, WereSpielChequers, Woohookitty, Wsultzbach, Xx3nvyxx, Yellowweasel, ZanQdo, Zarius, Zundark, Δ, 430 anonymous edits 3D computer vision Source: https://en.wikipedia.org/w/index.php?oldid=508460158 Contributors: Antonov777, Avicennasis, Cmprince, Edward, Jamietw, Jason Quinn, Lamro, Mr Sheep Measham, VQuakr, Welsh, 3 anonymous edits 3D data acquisition and object reconstruction Source: https://en.wikipedia.org/w/index.php?oldid=589766113 Contributors: Andreas Kaufmann, Auntof6, Cerebellum, Dialectric, Gaius Cornelius, Grafen, Hippietrail, KVDP, Magioladitis, Martarius, Mooneyd, Mzajac, Nick Number, Rehno Lindeque, Toffanin, Trappist the monk, XLerate, Xerti, 17 anonymous edits 3D reconstruction Source: https://en.wikipedia.org/w/index.php?oldid=590153001 Contributors: Bearcat, Cerebellum, Codename Lisa, Dancostin2003, Daniel Mietchen, Ermishin, Hu12, Iohannes Animosus, Lmatt, Malcolma, Northamerica1000, S Marshall, Sinuhet, Snek01, 5 anonymous 
edits Binary space partitioning Source: https://en.wikipedia.org/w/index.php?oldid=569902953 Contributors: Abdull, Altenmann, Amanaplanacanalpanama, Amritchowdhury, Angela, AquaGeneral, Ariadie, B4hand, Bomazi, Brucenaylor, Brutaldeluxe, Bryan Derksen, Cbraga, Cgbuff, Chan siuman, Charles Matthews, ChrisGualtieri, Chrisjohnson, CyberSkull, Cybercobra, DanielPharos, David Eppstein, Dcoetzee, Dionyziz, Dysprosia, Fredrik, Frencheigh, Gbruin, GregorB, Gyunt, Headbomb, Immibis, Immonster, Jafet, Jamesontai, Jkwchui, JohnnyMrNinja, Kdau, Kelvie, KnightRider, Kri, LOL, Leithian, LogiNevermore, M-le-mot-dit, Mdob, Michael Hardy, Mild Bill Hiccup, Miquonranger03, Noxin911, NtpNtp, NuclearFriend, Obiwhonn, Oleg Alexandrov, Operator link, Palmin, Percivall, Prikipedia, QuasarTE, RPHv, Reedbeta, Spodi, Stephan Leeds, Svick, Tabletop, Tarquin, The Anome, TreeMan100, Twri, Wiki alf, WikiLaurent, WiseWoman, Wmahan, Wonghang, Yar Kramer, Zetawoof, 71 anonymous edits Bounding interval hierarchy Source: https://en.wikipedia.org/w/index.php?oldid=527510270 Contributors: Altenmann, Czarkoff, David Eppstein, Imbcmdth, Michael Hardy, Oleg Alexandrov, Rehno Lindeque, Snoopy67, Srleffler, Welsh, 26 anonymous edits Bounding volume Source: https://en.wikipedia.org/w/index.php?oldid=582109189 Contributors: Aboeing, Aeris-chan, [email protected], Altenmann, CardinalDan, Chris the speller, DavidCary, Flamurai, Forderud, Frank Shearar, Gdr, Gene Nygaard, Interiot, Iridescent, Jafet, Jaredwf, Lambiam, LokiClock, M-le-mot-dit, Michael Hardy, Oleg Alexandrov, Oli Filth, Operativem, Pmaillot, RJHall, Reedbeta, Ryk, Sixpence, Smokris, Sterrys, T-tus, Tony1212, Tosha, Werddemer, WikHead, 45 anonymous edits Bounding volume hierarchy Source: https://en.wikipedia.org/w/index.php?oldid=560904104 Contributors: Asdfwtf, Chire, David Eppstein, Ddegirmenci, Gromobir, Henke37, Houjun8022, Imbcmdth, Kri, Magioladitis, Magog the Ogre, Mecanismo, Michael Hardy, PeteBegin, Poxelcoll, Sae1962, Schreiberx, Svick, 
Tired time, ToSter, Twri, 5 anonymous edits Box modeling Source: https://en.wikipedia.org/w/index.php?oldid=565525666 Contributors: 2112 rush, Addshore, Bluerasberry, Courcelles, David Eppstein, Derbeth, Dprust, Furrykef, Greatpoo, Johnmperry, Jonburney, Junkyardprince, Lulzmango, Magioladitis, Melligem, Metron4, Mild Bill Hiccup, Scott5114, Smith609, Some standardized rigour, Sparkit, 11 anonymous edits Catmull–Clark subdivision surface Source: https://en.wikipedia.org/w/index.php?oldid=590642293 Contributors: Ahelps, Aquilosion, Ati3414, Austin512, Bebestbe, Berland, Chase me ladies, I'm the Cavalry, Chikako, Cristiprefac, Cyp, David Eppstein, Decora, Duncan.Hull, Elmindreda, Empoor, Forderud, Furrykef, Giftlite, Goatcheese3, Gorbay, Guffed, Harmsma, Ianp5a, Irtopiste, J.delanoy, Juhame, Karmacodex, Kinkybb, Krackpipe, Kubajzz, Lomacar, Michael Hardy, Mont29, Mr mr ben, My head itches, Mysid, Mystaker1, Niceguyedc, Nicholasbishop, Oleg Alexandrov, Pablodiazgutierrez, Rasmus Faber, Sigmundv, Skybum, Smcquay, Tomruen, Willpowered, 74 anonymous edits Cloth modeling Source: https://en.wikipedia.org/w/index.php?oldid=579189625 Contributors: Alanbly, Andreas Kaufmann, Asav, Chowbok, E v popov, FusionNow, Jason Quinn, Mdd, Oicumayberight, Ouzari, Rilak, Snoktruix, UAwiki, Updatehelper, Van helsing, Wavelength, Wolfkeeper, Xeno, Zundark, 9 anonymous edits COLLADA Source: https://en.wikipedia.org/w/index.php?oldid=593344362 Contributors: 1mujin22, 3droberto, 99neurons, A03danan, ALoopingIcon, Aboeing, Altenmann, Amalas, Amckern, Anthere, Austintate, BAxelrod, CALR, Cfrs, Chowbok, Danim, DennisRobinson, Derekleungtszhei, DiThi, Dsavi, Dstary, Ehsan2004, Elf, Erwincoumans, Expertoengrout, FordGT90Concept, Frecklefoot, GDallimore, Gaius Cornelius, GeorgeLouis, Ghettoblaster, Gilliam, Grantor, Guy.hubert, Hans.dewitte, Hibou57, Iamsouthpaw, Jamelan, Jmath666, Jmdelrio1, Joecool79, JoeyPrink, John Vandenberg, Jterrace, Karada, Khalid hassani, Kjmathew, Kocio, Lansd, 
Lawrencema, Lectonar, Locos epraix, LordYavin, Manoridius, Mboldisc, Mitch Ames, Moreati, Mr mr ben,
Article Sources and Contributors MrOllie, Mugofjava, N00body, Nameless23, Nasa-verve, Nialsh, Nigosh, Now3d, OS2Warp, Pedant, Plattapuss, Pnm, Protectr, Qst, R'n'B, Racklever, Richardsan, Rksomayaji, Runtime, Rwalker, RzR, Saforrest, Saudia-cmuetc, Seb35, Shadowfire87, Skadge, Slippy37, Somno, Springsuns, Stampsm, Swcisel, TBBle, TexasAndroid, TheCuriousGnome, Thunderbrand, Todd Vierling, WaZim0, Waffleguy4, Who, Wilkswiki, XavierXerxes, Xeonx, 154 anonymous edits Computed Corpuscle Sectioning Source: https://en.wikipedia.org/w/index.php?oldid=340006013 Contributors: Freedrat, Keesiewonder, Mdd, 15 anonymous edits Computer representation of surfaces Source: https://en.wikipedia.org/w/index.php?oldid=576813380 Contributors: CapitalR, Chris the speller, Conscious, Daniel Mietchen, Davepape, Eekerz, Freakofnurture, Freeformer, Leduart, Lindosland, Master of Puppets, Mentifisto, Michael Hardy, Oleg Alexandrov, Smurrayinchester, SoledadKabocha, StuRat, Zarex, 5 anonymous edits Constructive solid geometry Source: https://en.wikipedia.org/w/index.php?oldid=593277804 Contributors: Alezk90, Altenmann, Andreas Fabri, AnonMoos, Ap, Aqua008, BenFrantzDale, Blacklemon67, Brlcad, Bruce1ee, Bv3r, Captain Sprite, Charles Matthews, ChrisGualtieri, DanielPharos, DevastatorIIC, DrYak, Drttm, Dysprosia, E Wing, Ed g2s, Fmgazette, Fredrik, Gdr, Guy Macon, HarisM, Iraytrace, Isilanes, J. 
Finkelstein, Jfmiller28, Jncraton, Karl-Henner, Kgibbs, Mate2code, MaxDZ8, Merovingian, Michael Hardy, Mikhajist, Mr mr ben, MrOllie, Mutley1989, Nimur, Oleg Alexandrov, Onna, Operativem, PavelSolin, PhennPhawcks, Pietrow, Pjvpjv, R'n'B, RJHall, Rabbabodrool, RedWolf, Rgrof, Rsrikanth05, Segv11, Seliopou, Smjg, Sopoforic, Sterrys, TexPaulus, Thumperward, TimBentley, Tomruen, Zonderr, 72 anonymous edits Conversion between quaternions and Euler angles Source: https://en.wikipedia.org/w/index.php?oldid=590726533 Contributors: Anakin101, BlindWanderer, Charles Matthews, EdJohnston, Eiserlohpp, Florent Lamiraux, Fmalan, Forderud, Gaius Cornelius, Guentherwagner, Hyacinth, Icairns, Icalanise, Incnis Mrsi, JWWalker, Jcuadros, Jemebius, Jgoppert, Jheald, JohnBlackburne, JordiGH, Juansempere, Linas, Lionelbrits, Marcofantoni84, Mjb4567, Niac2, Oleg Alexandrov, PAR, Patrick, RJHall, Radagast83, Stamcose, Steve Lovelace, ThomasV, TobyNorris, Waldir, Woohookitty, ZeroOne, 41 anonymous edits Crowd simulation Source: https://en.wikipedia.org/w/index.php?oldid=569053150 Contributors: Airplaneman, Aniboy2000, Axyzdesign, Cloderic, DMacks, Delmingo77, Dhatfield, Edward, Epipelagic, Henrycwinco, Hoibywan, JAwiki83, Jazmatician, JeroenSteenbakkers, JonHarder, Kuru, LetterRip, Manocha 97, Mariapedrosa, Masaruemoto, Mikaelbarton, Mubbasir.kapadia, Noformation, Oicumayberight, Pinethicket, Pmkpmk, R'n'B, Redress perhaps, Rodriguezmikel, Takomat, The Anome, Themat21III, UAwiki, Ushau97, Waldir, Yann.pinczondusel, Yeahy2, Zoicon5, Zzo, 42 anonymous edits Cutaway drawing Source: https://en.wikipedia.org/w/index.php?oldid=564485549 Contributors: A7N8X, Bigbluefish, Biker Biker, CommonsDelinker, DZNRKkCV, Geniac, Hmains, Liontooth, Malcolma, Mdd, Northamerica1000, Ohnoitsjamie, Pisceesumsprecan, RainbowCrane, The Thing That Should Not Be, TomAlt, Zaheen, 11 anonymous edits Demoparty Source: https://en.wikipedia.org/w/index.php?oldid=571824649 Contributors: 32X, Akilaa, Altenmann, Arjayay, 
Bobblewik, C3o, Croquemitaine2, Danhash, Dec1pher, Dennis714, Froggy, G-Hell, Gargaj, Haemo, Icedog, J.delanoy, Mandarax, Mattl, Mcewan, Nialsh, Nmrd, Nunh-huh, Rhe br, Simon Strandgaard, Srleffler, Tygerdsebat, Viznut, Vossanova, 15 anonymous edits Depth map Source: https://en.wikipedia.org/w/index.php?oldid=588584137 Contributors: Albany NY, Bluefist, Dominicos, Dusti, Hyacinth, Zachlipton Digital puppetry Source: https://en.wikipedia.org/w/index.php?oldid=585556669 Contributors: 4flattires, Aspects, BarrelProof, BradFraggle, Comrade Graham, Davidhorman, Dragice, Erianna, Eric Stratton, Faolin42, Gaius Cornelius, Hektor, Histrion, JeffJonez, JordoCo, LSmok3, LtPowers, Majorly, Martarius, Mattbr, McDoobAU93, McGeddon, Metamatic, Number 57, Rjwilmsi, SNIyer12, Smalljim, Sr7wiki, TKD, Tassedethe, Themeparkgc, Wavelength, Wine Guy, Zedpoinpoin, 76 anonymous edits Dilution of precision (computer graphics) Source: https://en.wikipedia.org/w/index.php?oldid=583925783 Contributors: BRW, David Levy, Drbreznjev, Miracle Pen, RJHall, Wavelength, Wheger, 6 anonymous edits Doo–Sabin subdivision surface Source: https://en.wikipedia.org/w/index.php?oldid=551288546 Contributors: Berland, Cuvette, Deodar, Forderud, Hagerman, Jitse Niesen, Michael Hardy, Tomruen, 6 anonymous edits Draw distance Source: https://en.wikipedia.org/w/index.php?oldid=584345228 Contributors: 25, Adso, Ar-wiki, Banus, Cawanpink, Deepomega, Diego Moya, Eldamorie, Exukvera, Goncalopp, Happycoder89, Hollingdrakepaul, KLLvr283, MuZemike, Noerrorsfound, Oswax, Scottandrewhutchins, Sct72, Snesiscool, Stewartadcock, Th1rt3en, Tomtheeditor, Vreemdst, Xhin, Y2kcrazyjoker4, ZS, 25 anonymous edits Edge loop Source: https://en.wikipedia.org/w/index.php?oldid=581222944 Contributors: Albrechtphilly, Balloonguy, Costela, Fages, Fox144112, Furrykef, G9germai, George100, Grafen, Gurch, Guy BlueSummers, J04n, Marasmusine, ProveIt, R'n'B, Scott5114, Skapur, Zundark, 13 anonymous edits Euler operator Source: 
https://en.wikipedia.org/w/index.php?oldid=582868132 Contributors: Brad7777, BradBeattie, Dina, Elkman, Havemann, InverseHypercube, Jayprich, Jitse Niesen, Ldo, Marylandwizard, Mecanismo, MetaNest, Michael Hardy, Mild Bill Hiccup, Tompw, 2 anonymous edits Explicit modeling Source: https://en.wikipedia.org/w/index.php?oldid=584269901 Contributors: AvicAWB, Bearcat, DoctorKubla, JimVC3, LadyofShalott, Malcolma, Sanjaykumarjjjj, Scotttsweeney False radiosity Source: https://en.wikipedia.org/w/index.php?oldid=566959678 Contributors: Fratrep, Kostmo, Nainsal, Visionguru, 3 anonymous edits Fiducial marker Source: https://en.wikipedia.org/w/index.php?oldid=593543061 Contributors: Ahunt, BenFrantzDale, CommonsDelinker, Edward, Electron9, Fbarw, Gilliam, Graham87, GregorB, Ianneilmacleod, InternetMeme, Kirigiri, Kootsoop, KrakatoaKatie, Moonriddengirl, Neuronz, Nick Number, Petiatil, Primaler, Sameer9812, Shanes1007, WRJF, Σ, 18 anonymous edits Fluid simulation Source: https://en.wikipedia.org/w/index.php?oldid=561134903 Contributors: Disavian, Ecstudent, EdGl, Erik9, Feline Hymnic, GSlicer, Jfmantis, Longbyte1, Macorosh, Mijagourlay, Mootzoid, NeD80, Nehalem, Nextil, Pascal666, Premeditated Chaos, Rjwilmsi, Salih, Santiglu, Sanya3, Shinyary2, Snoktruix, Stib, Strangnet, Svick, Tommy2010, 57 anonymous edits Forward kinematic animation Source: https://en.wikipedia.org/w/index.php?oldid=542508979 Contributors: Aboeing, Anetode, Bryan Derksen, Charles Matthews, Jiang, MrOllie, Nehrams2020, Onebyone, Sabry hassouna, Sam Hocevar, TimBentley, Topbanana, Wknight94, 10 anonymous edits Forward kinematics Source: https://en.wikipedia.org/w/index.php?oldid=558794311 Contributors: BigSEEN, Chaosdruid, DOSGuy, Danim, Deisysac, Frecklefoot, Hooperbloob, Jenniferskay, Jhonqb7, Kkhauser, Michael Hardy, NeD80, Nunoplopes, O keyes, Prof McCarthy, RJFJR, Rami radwan, Roy Presley, Wknight94, 12 anonymous edits Freeform surface modelling Source: 
https://en.wikipedia.org/w/index.php?oldid=592076018 Contributors: Bonasta, ChrisGualtieri, Crazycasta, Crvaught, DVD R W, Dancter, Danim, Epbr123, Flutter2009, Freeformer, GadgetSteve, Gareth Griffith-Jones, Giftlite, Greg L, Gzalazar, JuniperFuse, Kingturtle, Kostmo, Leonard G., Meredyth, Michael Hardy, Minna Sora no Shita, Mrt3366, MySchizoBuddy, NE2, Nallenscott, Nerdture, Onceler, Parametric66, RaySys, Rilak, Rjwilmsi, Safalra, SchreiberBike, Scriberius, Smashville, Sprocedato, Srice13, Tom Jenkins, VitruV07, Vladsinger, Welsh, 57 anonymous edits Geometry instancing Source: https://en.wikipedia.org/w/index.php?oldid=566269530 Contributors: Dvdjg, Genpfault, Kaathe, KrakatoaKatie, MaxDZ8, Michael Hardy, Mister fidget, NeD80, Nousernamesleft, Richie Campbell, Sneftel, TheDeadlyShoe, Uthbrian, Wayne Hardman, Xpclient, 13 anonymous edits Geometry pipelines Source: https://en.wikipedia.org/w/index.php?oldid=576563594 Contributors: Bejnar, Bumm13, Cybercobra, Eda eng, Frap, GL1zdA, Hazardous Matt, Jesse Viviano, JoJan, Joy, Joyous!, Jpbowen, R'n'B, Rilak, Robertvan1, Shaundakulbara, Stephenb, W Nowicki, 14 anonymous edits Geometry processing Source: https://en.wikipedia.org/w/index.php?oldid=560850158 Contributors: ALoopingIcon, Alanbly, Betamod, Dsajga, EpsilonSquare, Frecklefoot, Happyrabbit, JMK, Jeff3000, JennyRad, Jeodesic, Lantonov, Michael Hardy, N2e, PJY, Poobslag, RJHall, Siddhant, Sterrys, 14 anonymous edits Gimbal lock Source: https://en.wikipedia.org/w/index.php?oldid=587425348 Contributors: AzraelUK, B.d.mills, BRW, Beetstra, Bersl2, Bruyninc, Catclock, CatherineMunro, Cherkash, Chetvorno, Cjcollier, DMahalko, Daev, Damjan1980, DarthNerdious, Davidhorman, Demomoer, Duxwing, Dysprosia, Eric Le Bigot, Fabien.chraim, Faffod, Featherwinglove, Furrykef, Gail, Giftlite, Gmcrivello, GroverInfo, Herbee, Hooperbloob, Incnis Mrsi, JA.Davidson, JamesBWatson, Jarble, Jcloninger, Jesdisciple, JoOleaN, John, KSmrq, Karada, Katiecoh438, Kauffner, Kevinpurcell, 
Khazar2, Kjinsert, Kku, Korval, Krientle, Kwan3217, Linas, Lookang, LtPowers, LucasVB, MacMog, Markioffe, MathsPoetry, Michael Hardy, Mild Bill Hiccup, Minesweeper, MistyMorn, Mmeijeri, Mogism, N2e, Naddy, Nahum Reduta, Nbarth, NoNonsenseHumJock, PAR, PatheticCopyEditor, Patriotic dissent, PerryTachett, Pinethicket, Ppareit, R'n'B, Robinh, SEIBasaurus, Shai-kun, Slawekb, Soler97, Sonett72, The Anome, Thumperward, Tom Duff, Toon81, Utcursch, Waldir, Wyatt915, 84 anonymous edits Glide API Source: https://en.wikipedia.org/w/index.php?oldid=589319462 Contributors: Astronautics, Bbi5291, BoH, Brianski, BrownstoneKnockn, Comrade Graham, DRiLLA, DanielPharos, Devil Master, Doug Bell, Frap, Ganimoth, Gargaj, GreatWhiteNortherner, Helpsloose, Hyperthermia, Imroy, Jtalledo, Kingadrock, Knight of BAAWA, Lavenderbunny, Lproven, MartectX, Masterius, Maury Markowitz, Minghong, Qutezuce, RJHall, ROM SPACEKNIGHT, RekishiEJ, Rhe br, Ryanoasis, ScotXW, Swaaye, The Anome, Tim1357, Wuahn, Ysangkok, 59 anonymous
Article Sources and Contributors edits GloriaFX Source: https://en.wikipedia.org/w/index.php?oldid=585910213 Contributors: BitBus, ChrisGualtieri, Kd1live, MrX Hemicube (computer graphics) Source: https://en.wikipedia.org/w/index.php?oldid=552387990 Contributors: Favonian, Rocketmagnet, Tomruen, 3 anonymous edits Image plane Source: https://en.wikipedia.org/w/index.php?oldid=543995208 Contributors: BenFrantzDale, CesarB, Michael C Price, RJHall, Reedbeta, TheParanoidOne, 1 anonymous edits Image-based meshing Source: https://en.wikipedia.org/w/index.php?oldid=581479376 Contributors: Af1523, David Eppstein, Eboelen, Egallois, Gilliam, Iweber2003, Jitse Niesen, Michael Hardy, Shervinemami, Simpleware123, Tckma, 5 anonymous edits Inflatable icons Source: https://en.wikipedia.org/w/index.php?oldid=533506825 Contributors: Alex.muller, Dragentsheets, Erechtheus, Frogger3140, Hm2k, MrOllie, RainbowCrane, Retodon8, Robofish, Trinitresque, 3 anonymous edits Interactive Digital Centre Asia Source: https://en.wikipedia.org/w/index.php?oldid=384602136 Contributors: 3dcreator, Calaka, Fusnex, Tassedethe, 1 anonymous edits Interactive skeleton-driven simulation Source: https://en.wikipedia.org/w/index.php?oldid=582503189 Contributors: ChrisGualtieri, David Eppstein, Dineshkumar Ponnusamy, Dmr2, EagleFan, Edward, ErkDemon, Foobarnix, Hongooi, J04n, JAF1970, JeffJonez, John of Reading, Lkinkade, Materialscientist, Neelix, Pekaje, RCX, Reyk, Rilak, Robofish, ScierGuy, Smalljim, Woohookitty, 7 anonymous edits Inverse kinematics Source: https://en.wikipedia.org/w/index.php?oldid=587912156 Contributors: Abmac, Aboeing, Acdx, Anetode, ArnoldReinhold, BAxelrod, Big dut, Chaos95, Chaosdruid, Charles Matthews, Chojitsa, Closedmouth, DMahalko, Danim, Ddon, Decept404, Diego Moya, Dipankan001, Dratman, Drewnoakes, Dryea, Dvavasour, Ego White Tray, Frecklefoot, Furrykef, GTubio, Hooperbloob, Hweihe, JaGa, Jac16888, Jibjibjib, Jmainpri, K.Nevelsteen, Kingpin13, MPaetkau, Maian, Maximus Rex, 
NeD80, Oceano2012, Openmouth, Prof McCarthy, Qwertyus, RJHall, Radagast83, Rdiankov, Rich.hooper, Rilak, Roy Presley, Simeon, Slgrandson, Spoulakk, Sverdrup, Tide rolls, Tiptoety, Tomas e, Tuber, Ut40755, Van helsing, Vedantkumar, Vonstroodl, Wabbit98, Waffleguy4, 70 anonymous edits Isosurface Source: https://en.wikipedia.org/w/index.php?oldid=573152633 Contributors: Ariadacapo, Banus, Brad7777, CALR, Charles Matthews, Dergrosse, George100, Khalid hassani, Kku, Kri, Michael Hardy, Onna, Ospalh, RJHall, RedWolf, Rudolf.hellmuth, Sam Hocevar, StoatBringer, Taw, The demiurge, Thurth, Tijfo098, TortoiseWrath, 8 anonymous edits Joint constraints Source: https://en.wikipedia.org/w/index.php?oldid=458044863 Contributors: Banana04131, Chaosdruid, Cooperh, Dreadstar, EagleFan, ErkDemon, Icep, Mmernex, Paolo.dL, Ravedave, Rofthorax, Salmar, 2 anonymous edits Kinematic chain Source: https://en.wikipedia.org/w/index.php?oldid=560929876 Contributors: Andy Dingley, ArnoldReinhold, Bruyninc, Chaosdruid, CommonsDelinker, Danim, Dr.K., Gamsbart, John of Reading, Lyla1205, MattGiuca, NeD80, Prof McCarthy, Rich Farmbrough, Saimhe, Vijayant06631, Écrivain, 6 anonymous edits Lambert's cosine law Source: https://en.wikipedia.org/w/index.php?oldid=585423819 Contributors: AvicAWB, AxelBoldt, Ben Moore, BenFrantzDale, Berean Hunter, Cellocgw, Charles Matthews, Choster, Css, Dbenbenn, Deuar, Dufbug Deropa, Escientist, Gene Nygaard, GianniG46, Helicopter34234, Hhhippo, HiraV, Hugh Hudson, Inductiveload, Jcaruth123, Kri, Linas, Magioladitis, Marcosaedro, Michael Hardy, Mpfiz, Oleg Alexandrov, OptoDave, Owen, PAR, Papa November, Patrick, Pflatau, Q Science, RDBury, RJHall, Radagast83, Ramjar, Robobix, Scolobb, Seth Ilys, Srleffler, Telfordbuck, The wub, ThePI, Thorseth, Tomruen, Tøpholm, 35 anonymous edits Light stage Source: https://en.wikipedia.org/w/index.php?oldid=589319373 Contributors: LilHelpa, Redress perhaps, Tikuko, Tim1357 Light transport theory Source: 
https://en.wikipedia.org/w/index.php?oldid=580221468 Contributors: Beland, Chaoticbob, Chris the speller, Curious brain, Darrell Greenwood, DerHexer, Elgordon, Favonian, Jammer6524, Laubzega, Mattfletcher, Qutezuce, Qxz, Requestion, ResidueOfDesign, Rilak, Rjwilmsi, Srleffler, Tkgd2007, 23 anonymous edits Loop subdivision surface Source: https://en.wikipedia.org/w/index.php?oldid=546701115 Contributors: ChrisGualtieri, Egpetersen, Forderud, Michael Hardy, SimonFuhrmann, Tinucherian, Tomruen, 2 anonymous edits Low poly Source: https://en.wikipedia.org/w/index.php?oldid=570407408 Contributors: Aeusoes1, Avanu, Cerejota, Crahul, David Levy, Diego Moya, Edtion, GabrielOPadoan, InvisibleUp, Josh Parris, Luky1971, Mardus, Plasmatics, Praetor alpha, RJHall, Ronz, Shell Kinney, SilkTork, Silver seren, Smalleditor, Soc8675309, Stultitiam debello, This, that and the other, Thu, Thumperward, Vibhijain, Wbm1058, Wiz3kid, Zephyris, 13 anonymous edits Marching cubes Source: https://en.wikipedia.org/w/index.php?oldid=585232143 Contributors: Abstracte, Accelerometer, Andreas Kaufmann, Anoko moonlight, Arru, Ciphers, Clemmy, David Eppstein, Dcoetzee, Dispenser, Dormant25, Elifer, GregorB, Harro, Iweber2003, JamesBrownJr, Jimw338, Jmtrivial, Jstrater, Jtsiomb, Kri, Laserstorm, Lorensen, Mel Etitis, Michael Devore, Mikhailfranco, Oleg Alexandrov, Ptxmac, Rudolf.hellmuth, Satchmo, Sjschen, Tijfo098, Tobo, Ренат Насыров, 42 anonymous edits Mesh parameterization Source: https://en.wikipedia.org/w/index.php?oldid=481425556 Contributors: Ennetws, George100 Metaballs Source: https://en.wikipedia.org/w/index.php?oldid=564661788 Contributors: ABF, Abmac, Alansohn, Amberinie123, Anthony Appleyard, Asrghasrhiojadrhr, Capricorn42, Catgut, Chester Markel, Chochopk, Cometstyles, Danukasan, Davidhorman, Deodar, Download, Eric-Wester, Faradayplank, Felsir, Forsteri, Frap, Frecklefoot, Furrykef, Gabzuka, Gurch, I do not exist, Iron Wallaby, J.delanoy, JNW, Japanese Searobin, JeffLait, Jonnabuz, Jwz, 
Kibibu, Kku, NiTenIchiRyu, NormDor, ObfuscatePenguin, Piano non troppo, Quuxplusone, R'n'B, RDBury, RJHall, RainbowCrane, Reedbeta, SharkD, Shinjin, Spinningspark, Sterrys, T-tus, The Thing That Should Not Be, Trombodave, Viznut, Vossanova, Yuckfoo, Ренат Насыров, 55 anonymous edits Micropolygon Source: https://en.wikipedia.org/w/index.php?oldid=544010772 Contributors: Beland, David Eppstein, Dmaas, Flamurai, InnerJustice, Kimon, Moritz Moeller, RJHall, T-tus, 9 anonymous edits Morph target animation Source: https://en.wikipedia.org/w/index.php?oldid=546098496 Contributors: Ayavaron, CapitalR, David Eppstein, DavidConrad, Fama Clamosa, Fortdj33, Martarius, Oicumayberight, Robofish, Smalljim, Tregoweth, 10 anonymous edits Motion capture Source: https://en.wikipedia.org/w/index.php?oldid=592235574 Contributors: 007patrick, 1canuck2, 692351933two, 7, AJR, Abeld, Adamshand, Adashiel, Aeternus, Aff123a, Alarics, Allens, Altzinn, Amhill4, Arts-ed-web, Asteuartw, Avoided, Bacinphx, Baiji, Ballerinailina, Batman tas, Befairplaynice, Bhny, Binary, BlastOButter42, Blr246, BobKerns, Bobblewik, Bongwarrior, CanisRufus, CaptiveMotion, CarlosCoppola, Cat's Tuxedo, Cecilia-Marketing OM, Cganimation, Chenjeru, Chmod007, Chrisamichaels, Codamotion, ColorKipik, Colski, CommonsDelinker, Cornellier, Corpsedust, Csusarah, Ctbolt, Cybernerd1999, Dadofsam, Dancter, Daniel Mietchen, Desmall, Dger, Dhatfield, Dieselbub, Dman mocap, Dragice, Dvd-junkie, E Wing, Edward Z. 
Yang, Eloy, EncMstr, Eptin, Erianna, Erik, Eva gloss, Felicity4711, FenristheWolf, Fluffystar, Fortdj33, Freakofnurture, Frecklefoot, Furrykef, GM11, Game-Guru999, Geni, Geoffspear, Gios, GirDraxa, Glen, Glenmark, Gothicfilm, HCA, Hajenso, Hajor, HalfShadow, Hipocrite, II Ross II, Ian Pitchford, Ianblair23, Ilikefood, Immblueversion, Inition, Interiot, Itroll69420, J Di, J.delanoy, JBKramer, Janke, Janko, Janto, Jason Quinn, JasonAQuest, Jedi94, Jeff Merritt, JeffJonez, Jengelh, Jer4346, Jeremybirn, John Reid, Joriki, Jort227, Josh23french, Kingpin13, Kintetsubuffalo, Kittins floating in the sky yay, Kku, Koza1983, Krohneew, Leaderofearth, Leladax, Limited, Lindosland, LittleSmall, Lmtanco, LorenzoB, Lumoy, Lunzueta78, MacCool, MangaFalzy, Marasmusine, Martarius, MasterOfTheXP, Materialscientist, Maurice Carbonaro, McGeddon, Mdd4696, Melonkelon, Michael Snow, Mini mocap, Miss Manzana, Mocapservices, Mocapstudios, Modeha, Mogism, Motioncap, Motioncapture, Mounirzok, Mr MoCap, Msgarrett, Muhammad.ashim, Murata, Myf, Mysid, Nick R, NotACow, NuclearWarfare, ONEder Boy, Oakshade, Paranoid, Paul A, Pedant, Peeldog, Phobsn, Pikawil, Pixelface, Pjacobi, Plowboylifestyle, Porterjoh, Postdlf, Prasha.ina, Qwertyuiop71944, RJHall, Radagast83, Rameshmerl, Redress perhaps, RenniePet, Rich Farmbrough, Richl38, Rmctodd, Robert K S, RobertM52, Rpf 81, Rumtumtum, Ryan256, Sam Hocevar, Sapporod1965, Sasuke Sarutobi, Sbolduc, Sbowers3, Sbrockway, SchuminWeb, Sergeyy, Shinkocheng, Shmuel, SimonP, Smokizzy, Sonicsuns, Starkiller88, Stefano001, SteinAlive, Steve Pucci, T-tus, TDogg310, Tarinth, Tbhotch, Tempshill, TenPoundHammer, Tenebrae, Th1rt3en, Thatotherperson, The Cake is a Lie, Themfromspace, Thu, Tikiwont, Tmcsheery, Tomer 070, TriMesh, VampyreDark, Vanished User 8a9b4725f8376, VanishedUserABC, Veghead, Veinor, Verymadbob, Vinniereddy, Waldir, Wavelength, Wendywaterman, Wikitanvir, Wildroot, Xaviercarpentier, Zedmelon, 592 anonymous edits Newell's algorithm Source: 
https://en.wikipedia.org/w/index.php?oldid=570372988 Contributors: Andreas Kaufmann, Charles Matthews, David Eppstein, Farley13, KnightRider, Komap, RockMagnetist, TowerOfBricks, 6 anonymous edits Non-uniform rational B-spline Source: https://en.wikipedia.org/w/index.php?oldid=593692846 Contributors: *drew, ALoopingIcon, Ahellwig, Alan Parmenter, Alanbly, Alansohn, AlphaPyro, Andreas Kaufmann, Angela, Apparition11, Ati3414, BAxelrod, BMF81, Barracoon, BenFrantzDale, Berland, Buddelkiste, C0nanPayne, Cgbuff, Cgs, Commander Keane, Crahul, DMahalko, Dallben, Developer, Dhatfield, Dmmd123, Doradus, DoriSmith, Ensign beedrill, Eric Demers, Ettrig, FF2010, Forderud, Fredrik, Freeformer, Furrykef, Gargoyle888, Gea, Graue, Greg L, Happyrabbit, Hasanisawi, Hazir, HugoJacques1, HuyS3, Ian Pitchford, Ihope127, Iltseng, J04n, JFPresti, JJC1138, JohnBlackburne, Jusdafax, Kaldari, Karlhendrikse, Khunglongcon, KoenDelaere, LeTrebuchet, Lou schaefer, Lzur, Maccarthaigh d, Malarame, Mardson, MarmotteNZ, Matthijs, Mauritsmaartendejong, Maury Markowitz, Meungkim, Michael Hardy, Migilik, NPowerSoftware, Nedaim, Neostarbuck, Newbiepedian, Nichalp, Nick, Nick Pisarro, Jr., Nijun, Nintend06, Oleg Alexandrov, Orborde, Oxymoron83, Palapa, Parametric66, Pashute, Peter M
Gerdes, Pgimeno, Puchiko, Purwar, Quinacrine, Qutezuce, Radical Mallard, Rasmus Faber, Rconan, Reelrt, Regenwolke, Rfc1394, Ronz, Roundaboutyes, Sedimin, Skrapion, SlowJEEP, SmilingRob, Speck-Made, Spitfire19, Stefano.anzellotti, Stewartadcock, Strangnet, Sukesh pabba, Taejo, Tamfang, The Anome, Toolnut, Tsa1093, Uwe rossbacher, VitruV07, Vladsinger, Whaa?, WulfTheSaxon, Xcoil, Xmnemonic, Yahastu, Yousou, ZeroOne, Zoodinger.Dreyfus, Zootalures, ﻣﺎﻧﻲ, 212 anonymous edits Nonobtuse mesh Source: https://en.wikipedia.org/w/index.php?oldid=529588100 Contributors: Andreas Kaufmann, Arcenciel, Cardamon, Cobi, David Eppstein, Edward, MZMcBride, Michael Hardy, Muichon, Rich Farmbrough, 6 anonymous edits Normal (geometry) Source: https://en.wikipedia.org/w/index.php?oldid=581278108 Contributors: 16@r, 4C, Aboalbiss, Abrech, Aquishix, Arcfrk, BenFrantzDale, Chris Howard, ChrisGualtieri, D.Lazard, Daniele.tampieri, Dori, Dysprosia, Editsalot, Elembis, Epolk, Excirial, Fgnievinski, Fixentries, Frecklefoot, Furrykef, Gene Nygaard, Giftlite, Hakeem.gadi, Herbee, Ilya Voyager, InternetMeme, JasonAD, JohnBlackburne, JonathanHudgins, Jorge Stolfi, Joseph Myers, KSmrq, Kostmo, Kushal one, LOL, Lunch, Madmath789, Michael Hardy, ObscureAuthor, Oleg Alexandrov, Olegalexandrov, Paolo.dL, Patrick, Paulheath, Pazouzou, Pooven, Quanda, Quondum, R'n'B, RDBury, RJHall, RevenDS, Serpent's Choice, Skytopia, Smessing, Squash, Sterrys, Subhash15, Takomat, Vkpd11, Zvika, 52 anonymous edits Painter's algorithm Source: https://en.wikipedia.org/w/index.php?oldid=580518767 Contributors: 16@r, Andreas Kaufmann, BlastOButter42, Bryan Derksen, Cgbuff, EoGuy, Fabiob, Farley13, Feezo, Finell, Finlay McWalter, Fredrik, Frietjes, Hhanke, Jaberwocky6669, Jmabel, JohnBlackburne, KnightRider, Komap, Mickoush, Norm, Ordoon, PRMerkley, Phyte, RJHall, RadRafe, Rainwarrior, Rasmus Faber, Reedbeta, Rufous, Shai-kun, Shanes, Sreifa01, Sterrys, SteveBaker, Sverdrup, WISo, 
Whatsthatcomingoverthehill, Zapyon, 26 anonymous edits Parallax barrier Source: https://en.wikipedia.org/w/index.php?oldid=574071397 Contributors: 123GhostMonkey, Acalamari, AxelBoldt, CanadianLinuxUser, Chris the speller, ChrisGualtieri, Chrismiceli, Cmglee, Conquerist, Criffer, CuriousEric, Cwjakesteel, Dakovski, DanTheMormon, Edcolins, FireyFly, Fluffystar, Hghyux, Iain99, JMyrleFuller, JonathanMather, Jovianeye, Karlhendrikse, Ksajan16, Luvcraft, Magioladitis, Mather1, Mister Mormon, Mpa5220, Mumiemonstret, NFreak007, NetRolller 3D, Penubag, Pomte, Qsaw, RW Marloe, Reywas92, Smalljim, Some guy, Sonicdude558, Strasburger, ThomasO1989, Unicycle77, ZooFari, 46 anonymous edits Parallel rendering Source: https://en.wikipedia.org/w/index.php?oldid=562172759 Contributors: Abdull, Archwyrm, Bejnar, Bovineone, Charles Matthews, Clicketyclack, Edward, Eile, GeorgeMoney, JDG, JustinHagstrom, Khar khar, Khar khar78, M-le-mot-dit, Mandarax, Miym, RJHall, Rilak, Sandeep.chandna, SchreiberBike, ThomasHarte, Wikibarista, Youshotwhointhewhatnow, 20 anonymous edits Particle system Source: https://en.wikipedia.org/w/index.php?oldid=580492666 Contributors: Aliakakis, Ashlux, Athlord, Baron305, Bjørn, CanisRufus, Charles Matthews, Chris the speller, Darthuggla, Deadlydog, Deodar, Eekerz, Ferdzee, Fractal3, Furrykef, Gamer3D, Gracefool, Halixi72, Jay1279, Jpbowen, Jtsiomb, Ketiltrout, Kibibu, Krizas, Lesser Cartographies, LilHelpa, MarSch, MrOllie, Mrwojo, Onebyone, Oxfordwang, Philip Trueman, Rocketrod1960, Rror, Salvidrim!, Sameboat, Schmiteye, SchreiberBike, ScottDavis, SethTisue, Shanedidona, Sideris, Sterrys, SteveBaker, Sun Creator, The Merciful, Thesalus, Tjmax99, Vegaswikian, Zzuuzz, 78 anonymous edits Point cloud Source: https://en.wikipedia.org/w/index.php?oldid=589486212 Contributors: 3dscanguru, ALoopingIcon, AbsolutDan, Alsocal, Americanhero, Amin Hashem, Bensmith, CALR, Ch0c0lina, CloudNine, Cretog8, Cteutsch, Dante Alighieri, Delirium, Derick1259, Dgirardeau, 
Equendil, Herdingcats2, Hooperbloob, Iltseng, Imroy, Ldo, LucasVB, Manop, Marianika, Michael Hardy, Mike Stramba, Nathan nfm, NathanHagen, NeD80, Oleg Alexandrov, P.arashnia, PJ Geest, Poor Yorick, Qr189, RainbowCrane, Rbrusu, Rchoetzlein, Rmashhadi, Ron.swonger, Ryan Roos, Sarkadiu, Sibazyun, Staszek Lem, Stoermerjp, Thumperward, Wolfkeeper, 26 anonymous edits Polygon (computer graphics) Source: https://en.wikipedia.org/w/index.php?oldid=576263896 Contributors: Arnero, BlazeHedgehog, CALR, David Levy, Diego Moya, Forderud, Iceman444k, J04n, Jagged 85, Mardus, Michael Hardy, Navstar, Pietaster, RJHall, Reedbeta, SimonP, 3 anonymous edits Polygon mesh Source: https://en.wikipedia.org/w/index.php?oldid=593133589 Contributors: ALoopingIcon, Alcexhim, Andreas Kaufmann, Ariadacapo, ArmyRetired, BenFrantzDale, Berland, Chadernook, Chrschn, Cobaltcigs, Cobi, Cornellier, Crahul, David Eppstein, Der Golem, EthanL, Evakuate, Fnunnari, Foobaz, Forderud, Furrykef, Giftlite, Gongshow, HannesJvV, Happyrabbit, Hasanisawi, Havemann, Huttarl, Hymek, Jengelh, Jeodesic, Jirka Němec, Kevin, Kku, Kri, LilHelpa, Lobsterbake, MaSt, Mackseem, Marc-André Aßbrock, Maskurii, Me Three, Michael Hardy, Mr.stilt, Nameless23, Pnm, PraetorianFury, Quinacrine, Radical Mallard, Rchoetzlein, Reyk, Ronz, Shtamy, Siddhant, Some Old Man, SoylentGreen, Srodrig, Sterrys, Svick, Tamfang, Tetracube, Tim1357, Tom Edwards, Tom macknight, Tomas e, Tomruen, Trollzz, Victorbabkov, Waldir, Wiz3kid, Wwlin, Wykypydya, Δ, 121 anonymous edits Polygon soup Source: https://en.wikipedia.org/w/index.php?oldid=563993794 Contributors: Elektrik Shoos, Jfmantis, LaidlawFX, LeftClicker, SimenH, 1 anonymous edits Polygonal modeling Source: https://en.wikipedia.org/w/index.php?oldid=590857550 Contributors: ALoopingIcon, Alai, Aperittos, Ayavaron, BD2412, Ben pcc, Brad Halls, Burschik, Charles Matthews, Chowbok, Daniel J. 
Leivick, DanielPharos, David Eppstein, Diego Moya, Dr Gangrene, Drf5n, Edtion, EpsilonSquare, Ericbeg, Eumolpo, FMax, Forderud, Furrykef, GoingBatty, Hexicola, J.delanoy, Jbalint, Jreynaga, KenArthur, Kingoomieiii, Kostmo, Kuru, Leksanski, Lightmouse, LpGod, M-le-mot-dit, Mdd4696, Mwtoews, Nintendude, Pak21, Peter L, Peter M Gerdes, Piotrus, Pnm, RJFJR, Rchoetzlein, Ricosenna, Sterrys, TenOfAllTrades, ZS, 47 anonymous edits Pre-rendering Source: https://en.wikipedia.org/w/index.php?oldid=592968358 Contributors: Albany NY, Amalas, Bob A, CougRoyalty, CyberSkull, DanielPharos, Darkmaster01, Darthbob100, Fsantos222, Gimboid, Glacialfox, Green451, Guiltyspark, Hellknowz, Jagged 85, Jonkerz, JubalHarshaw, KyraVixen, M-le-mot-dit, MonkeyKingBar, Moriori, MrPenbrook, Prolifix - Zaretser, RememberCharlie, Revth, RockMFR, Shawnc, TVippy, Vendettax, 26 anonymous edits Precomputed Radiance Transfer Source: https://en.wikipedia.org/w/index.php?oldid=589319599 Contributors: Abstracte, Colonies Chris, Deodar, Fanra, Imroy, Red Act, SteveBaker, Tesi1700, Tim1357, WhiteMouseGary, 7 anonymous edits Procedural modeling Source: https://en.wikipedia.org/w/index.php?oldid=531334311 Contributors: Alan Liefting, Chris TC01, Daftchunk, EikeFA, Frecklefoot, Inthoforo, Jenseevinck, Joelholdsworth, Licu, ScaledLizard, Some standardized rigour, The-Wretched, 27 anonymous edits Procedural texture Source: https://en.wikipedia.org/w/index.php?oldid=591287443 Contributors: Altenmann, Besieged, CapitalR, Cargoking, D6, Dhatfield, Eflouret, Foolscreen, Gadfium, Geh, Gurch, IndigoMertel, Jacoplane, Joeybuddy96, Ken md, MaxDZ8, Michael Hardy, MoogleDan, Nezbie, Ntfs.hard, PaulBoxley, Petalochilus, RhinosoRoss, Spark, Thparkth, TimBentley, Viznut, Volfy, Wikedit, Wragge, Zundark, 22 anonymous edits Progressive meshes Source: https://en.wikipedia.org/w/index.php?oldid=551441516 Contributors: Jirka Němec, Rcsprinter123, 3 anonymous edits 3D projection Source: 
https://en.wikipedia.org/w/index.php?oldid=589766195 Contributors: AManWithNoPlan, Aekquy, Akilaa, Akulo, Alfio, Allefant, Altenmann, Angela, Aniboy2000, Baudway, BenFrantzDale, Berland, Bgwhite, Bloodshedder, Bobbygao, BrainFRZ, Bunyk, Canthusus, Charles Matthews, Cholling, Chris the speller, Ckatz, Cpl Syx, Ctachme, Cyp, Datadelay, Davidhorman, Deom, Dhatfield, Dratman, Ego White Tray, Flamurai, Froth, Furrykef, Gamer Eek, Giftlite, Heymid, Ieay4a, Jaredwf, Jovianconflict, Kevmitch, Lincher, Luckyherb, Marco Polo, Martarius, MathsIsFun, Mdd, Michael Hardy, Michaelbarreto, Miym, Mrwojo, Nbarth, Oleg Alexandrov, Omegatron, Paolo.dL, Patrick, Pearle, PhilKnight, Pickypickywiki, Plowboylifestyle, PsychoAlienDog, Que, R'n'B, RJHall, Rabiee, Raven in Orbit, Remi0o, RenniePet, Rjwilmsi, RossA, Sandeman684, Sboehringer, Schneelocke, Seet82, SharkD, Sietse Snel, Skytiger2, Speshall, Stephan Leeds, Stestagg, Tamfang, Technopat, TimBentley, Trappist the monk, Tristanreid, Twillisjr, Tyler, Unigfjkl, Van helsing, Vgergo, Waldir, Widr, Zanaq, 111 anonymous edits Projective texture mapping Source: https://en.wikipedia.org/w/index.php?oldid=482361764 Contributors: CapitalR, ChaimSanders, Cmdrjameson, Garion96, Heimstern, Hurricane111, Ian Pitchford, LeeHunter, MaxDZ8, Ms2ger, Qutezuce, Rilak, Thue, Who, 12 anonymous edits Pyramid of vision Source: https://en.wikipedia.org/w/index.php?oldid=551980986 Contributors: CatherineMunro, DavidCary, Frigo, RainbowCrane, Xavax, 2 anonymous edits Quantitative Invisibility Source: https://en.wikipedia.org/w/index.php?oldid=536997118 Contributors: C12H22O11, C6541, Dicklyon, Frencheigh, Gioto, Infovarius, Patrick, Sweat1nce, Wheger, 3 anonymous edits Quaternions and spatial rotation Source: https://en.wikipedia.org/w/index.php?oldid=592935342 Contributors: AeronBuchanan, Albmont, Ananthsaran7, ArnoldReinhold, AxelBoldt, BD2412, Ben pcc, BenFrantzDale, BenRG, Bgwhite, Bjones410, Bmju, Brews ohare, Bulee, CALR, Catskul, Ceyockey, Chadernook, 
Charles Matthews, CheesyPuffs144, Count Truthstein, Cyp, Daniel Brockman, Daniel.villegas, Darkbane, David Eppstein, Davidjholden, Denevans, Depakote, Dionyziz, Dl2000, Download, Ebelular, Edward, Endomorphic, Enosch, Eregli bob, Eugene-elgato, Fgnievinski, Fish-Face, Forderud, ForrestVoight, Fropuff, Fyrael, Gaius Cornelius, GangofOne, Genedial, Giftlite, Gj7, Gonz3d, Gutza, HenryHRich, Hyacinth, Ig0r, Incnis Mrsi, J04n, Janek Kozicki, Jemebius, Jermcb, Jheald, Jitse Niesen, JohnBlackburne, JohnPritchard, JohnnyMrNinja, Josh Triplett, Joydeep.biswas, KSmrq, Kborer, Kordas, Lambiam, LeandraVicci, Lemontea, Light current, Linas, Lkesteloot, Looxix, Lotu, Lourakis, LuisIbanez, Maksim343, ManoaChild, Markus Kuhn, MathsPoetry, Michael C Price, Michael Hardy, Mike Stramba, Mild Bill Hiccup, Mtschoen, Nayuki, Oleg Alexandrov, Onlinetexts, PAR, Paddy3118, Paolo.dL, Patrick, Patrick Gill, Patsuloi, PiBVi, Ploncomi, Pt, Quondum, RJHall, Rainwarrior, Randallbsmith, Reddi, Reddwarf2956, Rgdboer, Robinh, Ruffling, RzR, Samuel Huang, Sebsch, Short Circuit, Sigmundur, SlavMFM, Soler97, TLKeller, Tamfang, Terry Bollinger, Timo Honkasalo, Tkuvho, TobyNorris, User A1, WVhybrid, Wa03, WaysToEscape, X-Fi6, Yoderj, Zhw, Zundark, 224 anonymous edits Andreas Raab Source: https://en.wikipedia.org/w/index.php?oldid=568203152 Contributors: AtticusX, CodeZeilen, Eliotmiranda, Frank Shearar, Gbawden, Gbracha, Hroðulf, Itsmeront, LittleWink, Racklever, 39 anonymous edits
RealityEngine Source: https://en.wikipedia.org/w/index.php?oldid=593361951 Contributors: Dreadstar, Driscoll, Giraffedata, Rilak, Shieldforyoureyes, W Nowicki, 3 anonymous edits Reflection (computer graphics) Source: https://en.wikipedia.org/w/index.php?oldid=541710956 Contributors: Al Hart, Chris the speller, Dbolton, Dhatfield, Epbr123, Hom sepanta, Jeodesic, Kri, M-le-mot-dit, PowerSerj, Remag Kee, Rich Farmbrough, Siddhant, Simeon, Srleffler, 5 anonymous edits Relief mapping (computer graphics) Source: https://en.wikipedia.org/w/index.php?oldid=570483454 Contributors: ALoopingIcon, D6, Dionyziz, Editsalot, Eep², JonH, Korg, M-le-mot-dit, PeterRander, PianoSpleen, Qwyrxian, R'n'B, Scottc1988, Searchme, Simeon, Sirus20x6, Starkiller88, Vitorpamplona, Zyichen, 17 anonymous edits Retained mode Source: https://en.wikipedia.org/w/index.php?oldid=578246862 Contributors: BAxelrod, Bovineone, Chris Chittleborough, Damian Yerrick, Klassobanieras, Peter L, Simeon, SteveBaker, Uranographer, 13 anonymous edits Scene description language Source: https://en.wikipedia.org/w/index.php?oldid=574701893 Contributors: Cedar101, Descubes, Frap, Rpyle731, Sopher99 Schlick's approximation Source: https://en.wikipedia.org/w/index.php?oldid=589477369 Contributors: Alhead, AlphaPyro, Anticipation of a New Lover's Arrival, The, AySz88, BenFrantzDale, KlappCK, Kri, Shenfy, Svick, 10 anonymous edits Sculpted prim Source: https://en.wikipedia.org/w/index.php?oldid=584897473 Contributors: AySz88, Bato Brendel, Canoe1967, Colanderman, Eekerz, Erasme Beck, EthanL, Hussayn.dabbous, Iohannes Animosus, Kenchikuben, Kim SJ, LilHelpa, Mackseem, Master of Puppets, Michael Hardy, Mogism, Oleg Alexandrov, Piotrus, Radical Mallard, Remag Kee, Signpostmarv, Toussaint, WRK, WikiCLT, 31 anonymous edits Silhouette edge Source: https://en.wikipedia.org/w/index.php?oldid=519305799 Contributors: BenFrantzDale, David Levy, Forderud, Gaius Cornelius, Quibik, RJHall, Rjwilmsi, 
Wheger, 17 anonymous edits Skeletal animation Source: https://en.wikipedia.org/w/index.php?oldid=591542366 Contributors: Adjusting, AsceticRose, Avix, Batman tas, Chakmeshma, Datahaki, Davedx, David C, David Eppstein, Dhatfield, Doc glasgow, Fama Clamosa, Freakofnurture, Gabbs1, Gerbrant, Hogdotmac, Honkerpeck, Idmillington, JeffJonez, Kukini, Lindosland, Loren.wilton, Lugia2453, Marasmusine, MattGiuca, Merovingian, MrOllie, Mrwojo, Oicumayberight, Paril, Quenhitran, RlyehRising, Smalljim, SmoothPorcupine, Stefano001, VanishedUserABC, Vincent Simar, Yann.pinczondusel, Zozart, 45 anonymous edits Sketch-based modeling Source: https://en.wikipedia.org/w/index.php?oldid=593624299 Contributors: David Eppstein, EdGl, Frap, Furrykef, KenArthur, Pjrich, Ronz, Scope creep, Wavelength, 10 anonymous edits Smoothing group Source: https://en.wikipedia.org/w/index.php?oldid=532336380 Contributors: Captain panda, Pichpich, SolowandererY2K Soft body dynamics Source: https://en.wikipedia.org/w/index.php?oldid=568081997 Contributors: Aboeing, Adrian Lange, Ae-a, AwamerT, BigrTex, Chris the speller, Dialectric, Dineshkumar Ponnusamy, Distrikt, E v popov, Everton.hermann, J04n, Jacklee, Jerryobject, Jorge Stolfi, Kkmurray, Kotiwalo, LilHelpa, Michaelas10, Mmmovania, Numsgil, Phil Boswell, Remag Kee, Ricvelozo, Rilak, Ryttaren, Snoktruix, Zundark, 52 anonymous edits Solid modeling Source: https://en.wikipedia.org/w/index.php?oldid=589467339 Contributors: -19S.137.93.171, 3dscanguru, 3dscience, ALoopingIcon, Altenmann, BD2412, Bdiscoe, Betamod, Bjornwireen, Brirush, Cartoro, Charles Matthews, CharlesC, Chollapete, Cyon Steve, Danhash, Dart555, Dmmd123, Dthede, Electriccatfish2, Freeformer, Greg L, Ian.stroud, Ivokabel, Iweber2003, JFPresti, JHunterJ, Klemen Kocjancic, Kubanczyk, L.djinevski, Martarius, Michael Hardy, Mild Bill Hiccup, Mogism, MrOllie, Nddstudent, Oanjao, Ohnoitsjamie, Onna, Patstuart, PavelSolin, Radagast83, Rilak, RosaMcVey, Salvar, Schwarrx, Shadowjams, Some 
standardized rigour, Subbob, Tinss, Tortillovsky, Tosaka1, Victorbabkov, VitruV07, Welsh, Zarex, 59 anonymous edits Sparse voxel octree Source: https://en.wikipedia.org/w/index.php?oldid=545916202 Contributors: Malcolmxl5, Strafym, Tony1, Wolfkeeper, 5 anonymous edits Specularity Source: https://en.wikipedia.org/w/index.php?oldid=581501673 Contributors: Barticus88, Dori, Fluffystar, Frap, Hetar, JDspeeder1, Jh559, M-le-mot-dit, Megan1967, Mild Bill Hiccup, Nboughen, Neonstarlight, Nintend06, Oliver Lineham, Utrecht gakusei, Volfy, 5 anonymous edits Static mesh Source: https://en.wikipedia.org/w/index.php?oldid=486308748 Contributors: Axem Titanium, Drehxm, Kireclebnul, Owoc, Tomer 070, WGH, 3 anonymous edits Stereoscopic acuity Source: https://en.wikipedia.org/w/index.php?oldid=573497598 Contributors: Chris the speller, Eumolpo, FerrousCathode, Fluffystar, Gwestheimer, Hyacinth, Staticd, 2 anonymous edits Subdivision surface Source: https://en.wikipedia.org/w/index.php?oldid=572260730 Contributors: Ablewisuk, Abmac, Andreas Fabri, Ati3414, Banus, Berland, BoredTerry, Boubek, Brock256, Bubbleshooting, CapitalR, Charles Matthews, Crucificator, David Eppstein, Decora, Deodar, Feureau, Flamurai, Forderud, Furrykef, Giftlite, Husond, Khazar2, Korval, Lauciusa, Levork, Listmeister, Lomacar, MIT Trekkie, Mark viking, Moritz Moeller, MoritzMoeller, Mysid, Nczempin, Norden83, Pifthemighty, Quinacrine, Qutezuce, RJHall, Radioflux, Rasmus Faber, Romainbehar, Shorespirit, Smcquay, Surfgeom, Tabletop, The-Wretched, WorldRuler99, Xingd, 55 anonymous edits Supinfocom Source: https://en.wikipedia.org/w/index.php?oldid=562538804 Contributors: Asteuartw, Johnpacklambert, Pjrich, Sahils1512, Sahilsardessai, 5 anonymous edits Surface caching Source: https://en.wikipedia.org/w/index.php?oldid=542303130 Contributors: Amalas, AnteaterZot, AvicAWB, Brian Geppert, Forderud, Fredrik, Hephaestos, KirbyMeister, LOL, Lockley, Markb, Mika1h, Miyagawa, Resoru, Schneelocke, Thunderbrand, 
Tregoweth, 16 anonymous edits Surfel Source: https://en.wikipedia.org/w/index.php?oldid=546509531 Contributors: BenFrantzDale, Constructive editor, David Levy, Miaow Miaow, Sadads, 7 anonymous edits Suzanne Award Source: https://en.wikipedia.org/w/index.php?oldid=589715358 Contributors: Alabandit, Aliuken, BRW, Belinrahs, D6, Dany 123, Dbolton, DeadEyeArrow, Discospinster, DoctorKubla, ErkinBatu, Ethomson92, Feureau, Fraggle81, GDallimore, Haunt House, Improv, Julian Herzog, MichaelSchoenitzer, Not Accessible, Sn1per, Toussaint, 18 anonymous edits Time-varying mesh Source: https://en.wikipedia.org/w/index.php?oldid=565403029 Contributors: Alvestrand, Dana boomer, DoctorKubla, Jianfengxu, SchuminWeb, WQUlrich Timewarps Source: https://en.wikipedia.org/w/index.php?oldid=562564108 Contributors: Aeusoes1, Fortdj33, Jefficus, Mercurywoodrose, Pascal.Tesson, 1 anonymous edits Triangle mesh Source: https://en.wikipedia.org/w/index.php?oldid=548309518 Contributors: Andreas Kaufmann, Cardamon, David Eppstein, Dermeister, Ericbeg, Fixentries, Iltseng, Martabosch, Mauritsmaartendejong, Meegs, Reyk, SteveBaker, The Anome, Thumperward, Zundark, 13 anonymous edits Vector slime Source: https://en.wikipedia.org/w/index.php?oldid=582900372 Contributors: DuckersOfOutracks, GoingBatty, Meredyth, Mikkel, Rjwilmsi, Vossanova, Welsh, 9 anonymous edits Vertex (geometry) Source: https://en.wikipedia.org/w/index.php?oldid=585830807 Contributors: ABF, Aaron Kauppi, AbigailAbernathy, Aitias, Americanhero, Anyeverybody, Ataleh, Azylber, Butterscotch, CMBJ, Coopkev2, Crisis, Cronholm144, David Eppstein, DeadEyeArrow, Discospinster, DoubleBlue, Duoduoduo, Epicgenius, Escape Orbit, Fixentries, Fly by Night, Funandtrvl, Giftlite, Hvn0413, Icairns, J.delanoy, JForget, Jamesx12345, Knowz, Leuko, M.Virdee, Magioladitis, MarsRover, Martin von Gagern, Mecanismo, Mendaliv, Methecooldude, Mhaitham.shammaa, Mikayla102295, Miym, NatureA16, Orange Suede Sofa, Panscient, Petrb, Pinethicket, Pumpmeup, R'n'B, 
Racerx11, SGBailey, SchfiftyThree, Shinli256, Shyland, SimpleParadox, Squids and Chips, StaticGull, Steelpillow, Synchronism, TheWeakWilled, TimtheTarget, Tomruen, WaysToEscape, William Avery, WissensDürster, Wywin, ﻣﺎﻧﻲ, 155 anonymous edits Vertex Buffer Object Source: https://en.wikipedia.org/w/index.php?oldid=592187403 Contributors: Acdx, Allenc28, BRW, Frecklefoot, GoingBatty, Jgottula, Joy, Korval, Ng Pey Shih 07, Omgchead, Psychonaut, Red Act, Robertbowerman, Tarantulae, 32 anonymous edits Vertex (computer graphics) Source: https://en.wikipedia.org/w/index.php?oldid=592768322 Contributors: Bejnar, Fabtagon, Kri, Madanor, Panscient, Santryl, Superlynx98, 4 anonymous edits Vertex pipeline Source: https://en.wikipedia.org/w/index.php?oldid=558820325 Contributors: Arundhati bakshi, Barte, Dekart, Fernvale, Magioladitis, Mwmorph, RoyBoy, 5 anonymous edits Viewing frustum Source: https://en.wikipedia.org/w/index.php?oldid=589795858 Contributors: Archelon, AvicAWB, Craig Pemberton, Crossmr, Cyp, DavidCary, Dbchristensen, Dpv, Eep², Flamurai, Gdr, Hymek, Innercash, LarsPensjo, M-le-mot-dit, MithrandirMage, MusicScience, Nimur, Poccil, RJHall, Reedbeta, Robth, Shashank Shekhar, Torav, Welsh, Widefox, Ὁ οἶστρος, 14 anonymous edits
Viewport Source: https://en.wikipedia.org/w/index.php?oldid=590748902 Contributors: BitterSTAR, Chealer, Codename Lisa, Diego Moya, KrisRandle, Mpfproducts, NeD80, Rich Farmbrough, RobIII, SimonTrew, Singforlife, Tabor, Vsmith, Wikid77, Милан Јелисавчић, 9 anonymous edits Virtual actor Source: https://en.wikipedia.org/w/index.php?oldid=573745819 Contributors: ASU, Aqwis, BD2412, Bensin, Chowbok, Danielthalmann, Deacon of Pndapetzim, Donfbreed, DragonflySixtyseven, ErkDemon, FernoKlump, Fu Kung Master, Hughdbrown, Jabberwoch, Joseph A. Spadaro, Lenticel, LilHelpa, Martarius, Martijn Hoekstra, Mikola-Lysenko, NYKevin, Neelix, Otto4711, Piski125, Retired username, Sammy1000, Tavix, Uncle G, Vassyana, Woohookitty, Xezbeth, 25 anonymous edits Virtual environment software Source: https://en.wikipedia.org/w/index.php?oldid=583136101 Contributors: Frap, Fæ, GoingBatty, KenyonHayward, Rich Farmbrough, Sudhir h, 5 anonymous edits Virtual replay Source: https://en.wikipedia.org/w/index.php?oldid=535011126 Contributors: INVERTED, Natl1, RainbowCrane, Trivialist Volume mesh Source: https://en.wikipedia.org/w/index.php?oldid=516131719 Contributors: Anders Sandberg, Brusselandfriends, David Eppstein, Fleebo, Michael Hardy, Mr9737, Qetuth, Rchoetzlein, Rich Farmbrough, Scog, Shire Reeve Voxel Source: https://en.wikipedia.org/w/index.php?oldid=593552947 Contributors: Accounting4Taste, Alansohn, Alfio, Andreba, Andrewmu, Ariesdraco, Aursani, Axl, B-a-b, BenFrantzDale, Bendykst, Biasedeyes, Bigdavesmith, Blackberry Sorbet, BlindWanderer, Bojilov, Borek, Bornemix, Calliopejen1, Carpet, Centrx, Chris the speller, CommonsDelinker, Craig Pemberton, Cristan, Ctachme, CyberSkull, Czar, Daeval, Damian Yerrick, Dawidl, DefenceForce, Diego Moya, Dragon1394, DreamGuy, Dubyrunning, Editorfun, Erik Zachte, Everyking, Flarn2006, Fredrik, Frostedzeo, Fubar Obfusco, Furrykef, George100, Gordmoo, Gousst, Gracefool, GregorB, Hairy Dude, Haya shiloh, Hendricks266, 
Hplusplus, INCSlayer, Jaboja, Jagged 85, Jamelan, Jarble, Jedlinlau, Jedrzej s, John Nevard, Karl-Henner, KasugaHuang, Kbdank71, Kelson, Kuroboushi, Lambiam, LeeHunter, LordCazicThule, MGlosenger, Maestrosync, Marasmusine, Mindmatrix, Miterdale, Mlindstr, Moondoggy, MrOllie, MrScorch6200, Mwtoews, My Core Competency is Competency, Null Nihils, OllieFury, Omegatron, P M Yonge, PaterMcFly, Pearle, Pengo, Petr Kopač, Pine, Pleasantville, Pythagoras1, RJHall, Rajatojha, Retodon8, Roidroid, Romainhk, Ronz, Roxyflute, Rwalker, Sallison, Saltvik, Satchmo, Schizobullet, SharkD, Shentino, Simeon, Softy, Soyweiser, SpeedyGonsales, Spg3D, Stampsm, Stefanbanev, Stephen Morley, Stormwatch, SuperDuffMan, Suruena, The Anome, Thefirstfrontier, Thumperward, Thunderklaus, Tiedoxi, Tinclon, Tncomp, Tomtheeditor, Torchiest, Touchaddict, VictorAnyakin, Victordiaz, Vossman, Voxii, Waldir, Wavelength, Wernher, WhiteHatLurker, Wlievens, Woodroar, Wyrmmage, Xanzzibar, XavierXerxes, Xezbeth, ZeiP, ZeroOne, Михајло Анђелковић, פרה, 256 anonymous edits Web3D Source: https://en.wikipedia.org/w/index.php?oldid=589502342 Contributors: Barraki, Bsmweb3d, Charleshinshaw, Chrisxue815, Daniel K. Schneider, Doctor Einstein, Dtfinch, DzzD, Evanomics, FlyingPenguins, GB fan, Hjlld, Hodlipson, Jephir, KevinLefeuvre, Logtowiki, Martarius, Mayalld, Nephersir7, NiCoX06, Nospildoh, Paul A, Ronhjones, Tirkfl, Toussaint, Viethungtsn1, Waldir, Woohookitty, 23 anonymous edits
Image Sources, Licenses and Contributors
File:Glasses 800 edit.png Source: https://en.wikipedia.org/w/index.php?title=File:Glasses_800_edit.png License: Public Domain Contributors: Gilles Tran File:An early concept design of the ERIS instrument.jpg Source: https://en.wikipedia.org/w/index.php?title=File:An_early_concept_design_of_the_ERIS_instrument.jpg License: unknown Contributors: Jmencisom, 1 anonymous edits Image:Utah teapot simple 2.png Source: https://en.wikipedia.org/w/index.php?title=File:Utah_teapot_simple_2.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Dhatfield Image:Polygon face.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Polygon_face.jpg License: GNU Free Documentation License Contributors: Dale R. Kinney, Deerstop, Geierunited Image:3D Plus 3DBuilding.jpg Source: https://en.wikipedia.org/w/index.php?title=File:3D_Plus_3DBuilding.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: 2T, AnRo0002, G(x), Geitost, Hdamm, Kozuch, Manop, Metoc, Ysangkok, と あ る 白 い 猫, 2 anonymous edits file:Engine movingparts.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Engine_movingparts.jpg License: GNU Free Documentation License Contributors: Original uploader was Wapcaplet at en.wikipedia file:Dunkerque 3d.jpeg Source: https://en.wikipedia.org/w/index.php?title=File:Dunkerque_3d.jpeg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Common Good, Danhash, FSII, Rama, SharkD file:Cannonball stack with FCC unit cell.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Cannonball_stack_with_FCC_unit_cell.jpg License: GNU Free Documentation License Contributors: User:Greg L File:3D Computer Vision.jpg Source: https://en.wikipedia.org/w/index.php?title=File:3D_Computer_Vision.jpg License: Public Domain Contributors: Sergei Antonov, Alexei Antonov File:Pseudunela viatoris 3.png Source: https://en.wikipedia.org/w/index.php?title=File:Pseudunela_viatoris_3.png License: Creative 
Commons Attribution 2.5 Contributors: Timea P. Neusser, Katharina M. Jörger, Michael Schrödl File:Example of BSP tree construction - step 1.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_1.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Zahnradzacken File:Example of BSP tree construction - step 2.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_2.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Zahnradzacken File:Example of BSP tree construction - step 3.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_3.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Zahnradzacken File:Example of BSP tree construction - step 4.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_4.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Zahnradzacken File:Example of BSP tree construction - step 5.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_5.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Zahnradzacken File:Example of BSP tree construction - step 6.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_6.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Zahnradzacken File:Example of BSP tree construction - step 7.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_7.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Zahnradzacken File:Example of BSP tree construction - step 8.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_8.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Zahnradzacken File:Example of BSP tree 
construction - step 9.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_9.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Zahnradzacken File:Example of BSP tree traversal.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_traversal.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Zahnradzacken Image:BoundingBox.jpg Source: https://en.wikipedia.org/w/index.php?title=File:BoundingBox.jpg License: Creative Commons Attribution 2.0 Contributors: Bayo, Maksim, Metoc, WikipediaMaster File:Example of bounding volume hierarchy.svg Source: https://en.wikipedia.org/w/index.php?title=File:Example_of_bounding_volume_hierarchy.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Schreiberx Image:Catmull-Clark subdivision of a cube.svg Source: https://en.wikipedia.org/w/index.php?title=File:Catmull-Clark_subdivision_of_a_cube.svg License: GNU Free Documentation License Contributors: Ico83, Kilom691, Mysid, Zundark Image:Saddle pt.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Saddle_pt.jpg License: Public Domain Contributors: User:StuRat Image:spoon_wf.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Spoon_wf.jpg License: GNU Free Documentation License Contributors: Freeformer Image:spoon_uv.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Spoon_uv.jpg License: GNU Free Documentation License Contributors: Freeformer Image:spoon_fw.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Spoon_fw.jpg License: GNU Free Documentation License Contributors: Freeformer Image:spoon_fs.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Spoon_fs.jpg License: GNU Free Documentation License Contributors: Freeformer Image:spoon_sh.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Spoon_sh.jpg License: GNU Free Documentation License Contributors: Original uploader was Freeformer at 
en.wikipedia Image:spoon_rl.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Spoon_rl.jpg License: GNU Free Documentation License Contributors: Freeformer Image:spoon_fi.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Spoon_fi.jpg License: GNU Free Documentation License Contributors: User Freeformer on en.wikipedia File:Venn 0000 0001 0001 0110.png Source: https://en.wikipedia.org/w/index.php?title=File:Venn_0000_0001_0001_0110.png License: Creative Commons Attribution 3.0 Contributors: Mate2code Image:Boolean union.PNG Source: https://en.wikipedia.org/w/index.php?title=File:Boolean_union.PNG License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: User Captain Sprite on en.wikipedia Image:Boolean difference.PNG Source: https://en.wikipedia.org/w/index.php?title=File:Boolean_difference.PNG License: GNU Free Documentation License Contributors: User Captain Sprite on en.wikipedia Image:Boolean intersect.PNG Source: https://en.wikipedia.org/w/index.php?title=File:Boolean_intersect.PNG License: GNU Free Documentation License Contributors: User Captain Sprite on en.wikipedia Image:Csg tree.png Source: https://en.wikipedia.org/w/index.php?title=File:Csg_tree.png License: GNU Free Documentation License Contributors: Hawky.diddiz, Snaily, Warden, Zottie Image:Eulerangles.svg Source: https://en.wikipedia.org/w/index.php?title=File:Eulerangles.svg License: Creative Commons Attribution 3.0 Contributors: Lionel Brits Image:plane.svg Source: https://en.wikipedia.org/w/index.php?title=File:Plane.svg License: Creative Commons Attribution 3.0 Contributors: Original uploader was Juansempere at en.wikipedia. 
File:1942 Nash Ambassador X-ray.jpg Source: https://en.wikipedia.org/w/index.php?title=File:1942_Nash_Ambassador_X-ray.jpg License: Public Domain Contributors: Original uploader was CZmarlin at en.wikipedia file:Axonometric projection.svg Source: https://en.wikipedia.org/w/index.php?title=File:Axonometric_projection.svg License: Public Domain Contributors: Yuri Raysper File:Fire-setting.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Fire-setting.jpg License: Public Domain Contributors: Georgius Agricola Image:SpkFrontCutawayView.svg Source: https://en.wikipedia.org/w/index.php?title=File:SpkFrontCutawayView.svg License: GNU Free Documentation License Contributors: Original uploader was Iain at en.wikipedia File:Mercury Spacecraft.png Source: https://en.wikipedia.org/w/index.php?title=File:Mercury_Spacecraft.png License: Public Domain Contributors: Bricktop, Campani, Craigboy, Duesentrieb, Edward, Ingolfson, Mdd, Morio, Romkur, Soerfm, Stunteltje, 1 anonymous edits File:Iowa 16 inch Gun-EN.svg Source: https://en.wikipedia.org/w/index.php?title=File:Iowa_16_inch_Gun-EN.svg License: Creative Commons Attribution-Share Alike Contributors: original by Voytek S, labels and pointer line fixes by Jeff Dahl
Image Sources, Licenses and Contributors
File:Lake Washington Ship Canal Fish Ladder pamphlet 02.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Lake_Washington_Ship_Canal_Fish_Ladder_pamphlet_02.jpg License: Public Domain Contributors: US government
Image:Printer.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Printer.jpg License: Public Domain Contributors: Welleman
Image:Prius.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Prius.jpg License: GNU Free Documentation License Contributors: Pwelleman
file:Beyond - Conspiracy - 2004 - 64k intro.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Beyond_-_Conspiracy_-_2004_-_64k_intro.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: User:Gargaj
Image:Breakpoint2005 outside.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Breakpoint2005_outside.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: User:Gargaj
Image:Assembly2004-areena01.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Assembly2004-areena01.jpg License: Creative Commons Attribution-Sharealike 2.0 Contributors: User:ZeroOne
Image:Evoke 2002 3D Brillen.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Evoke_2002_3D_Brillen.jpg License: Creative Commons Attribution-Sharealike 2.0 Contributors: User:Avatar
File:Cubic Structure.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Cubic_Structure.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Dominicos
File:Cubic Frame Stucture and Floor Depth Map.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Cubic_Frame_Stucture_and_Floor_Depth_Map.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Dominicos
File:Cubic Structure and Floor Depth Map with Front and Back Delimitation.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Cubic_Structure_and_Floor_Depth_Map_with_Front_and_Back_Delimitation.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Dominicos
File:Cubic Structure with Pale Blue Fog.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Cubic_Structure_with_Pale_Blue_Fog.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Dominicos
File:Cubic Structure with Shallow Depth of Field.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Cubic_Structure_with_Shallow_Depth_of_Field.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Dominicos
Image:DooSabin mesh.png Source: https://en.wikipedia.org/w/index.php?title=File:DooSabin_mesh.png License: Public domain Contributors: Fredrik Orderud
Image:DooSabin subdivision.png Source: https://en.wikipedia.org/w/index.php?title=File:DooSabin_subdivision.png License: Public Domain Contributors: Zundark
Image:Cirrus Logic CL-GD5446 136309016 crop fiduciary.png Source: https://en.wikipedia.org/w/index.php?title=File:Cirrus_Logic_CL-GD5446_136309016_crop_fiduciary.png License: Creative Commons Attribution-Sharealike 2.0 Contributors: Clusternote, Wdwd, Wirepath
File:Waterincup.gif Source: https://en.wikipedia.org/w/index.php?title=File:Waterincup.gif License: Public Domain Contributors: UAwiki
File:Robot arm model 1.png Source: https://en.wikipedia.org/w/index.php?title=File:Robot_arm_model_1.png License: Public Domain Contributors: NeD80
File:Puma Robotic Arm - GPN-2000-001817.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Puma_Robotic_Arm_-_GPN-2000-001817.jpg License: Public Domain Contributors: NASA Dominic Hart
Image:Ff surface3 t.png Source: https://en.wikipedia.org/w/index.php?title=File:Ff_surface3_t.png License: GNU Free Documentation License Contributors: Freeformer
Image:Freeform1.gif Source: https://en.wikipedia.org/w/index.php?title=File:Freeform1.gif License: GNU Free Documentation License Contributors: DVD R W, Maksim, WikipediaMaster
Image:blend1.png Source: https://en.wikipedia.org/w/index.php?title=File:Blend1.png License: GNU Free Documentation License Contributors: Freeformer
Image:Surface modelling.svg Source: https://en.wikipedia.org/w/index.php?title=File:Surface_modelling.svg License: GNU Free Documentation License Contributors: Surface1.jpg: Maksim derivative work: Vladsinger (talk)
File:Gimbal 3 axes rotation.gif Source: https://en.wikipedia.org/w/index.php?title=File:Gimbal_3_axes_rotation.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Lookang
File:Gimbal lock still occurs with 4 axis.png Source: https://en.wikipedia.org/w/index.php?title=File:Gimbal_lock_still_occurs_with_4_axis.png License: Public Domain Contributors: Gyroscope_operation.gif: User:Kieff derivative work: DMahalko (talk)
File:Gimbal lock airplane.gif Source: https://en.wikipedia.org/w/index.php?title=File:Gimbal_lock_airplane.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Lookang
Image:no gimbal lock.png Source: https://en.wikipedia.org/w/index.php?title=File:No_gimbal_lock.png License: GNU Free Documentation License Contributors: MathsPoetry
Image:gimbal lock.png Source: https://en.wikipedia.org/w/index.php?title=File:Gimbal_lock.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: MathsPoetry
File:Automation of foundry with robot.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Automation_of_foundry_with_robot.jpg License: Public Domain Contributors: KUKA Roboter GmbH, Bachmann
File:Unreal-GlideVoodoo1flyby.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Unreal-GlideVoodoo1flyby.jpg License: unknown Contributors: User:Swaaye
File:Gloria FX.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Gloria_FX.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:VFX Gloria Fx
Image:Hemicube Unfold.gif Source: https://en.wikipedia.org/w/index.php?title=File:Hemicube_Unfold.gif License: Public Domain Contributors: Hugo Elias, original source:
File:Redlobster-icon-inflated.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Redlobster-icon-inflated.jpg License: Public Domain Contributors: Dragentsheets
File:redlobster-icon.png Source: https://en.wikipedia.org/w/index.php?title=File:Redlobster-icon.png License: Public Domain Contributors: Dragentsheets
File:Arc-welding.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Arc-welding.jpg License: Creative Commons Attribution 3.0 Contributors: Orange Indus
File:Modele cinematique corps humain.svg Source: https://en.wikipedia.org/w/index.php?title=File:Modele_cinematique_corps_humain.svg License: Public Domain Contributors: Line-drawing_of_a_human_man.svg: created by NASA (User:OldakQuill) derivative work: Cdang (talk)
Image:Isosurface on molecule.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Isosurface_on_molecule.jpg License: unknown Contributors: Kri, StoatBringer, 1 anonymous edits
File:CFD simulation showing vorticity isosurfaces behind propeller.png Source: https://en.wikipedia.org/w/index.php?title=File:CFD_simulation_showing_vorticity_isosurfaces_behind_propeller.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Citizenthom
File:ATHLETE robot climbing a hill.jpg Source: https://en.wikipedia.org/w/index.php?title=File:ATHLETE_robot_climbing_a_hill.jpg License: Public Domain Contributors: NASA/JPL
File:JSC2001-01725.jpg Source: https://en.wikipedia.org/w/index.php?title=File:JSC2001-01725.jpg License: Public Domain Contributors: Craigboy
File:SteamEngine Boulton&Watt 1784.png Source: https://en.wikipedia.org/w/index.php?title=File:SteamEngine_Boulton&Watt_1784.png License: Public Domain Contributors: Robert Henry Thurston (1839–1903)
Image:Lambert Cosine Law 1.svg Source: https://en.wikipedia.org/w/index.php?title=File:Lambert_Cosine_Law_1.svg License: Public Domain Contributors: Inductiveload
Image:Lambert Cosine Law 2.svg Source: https://en.wikipedia.org/w/index.php?title=File:Lambert_Cosine_Law_2.svg License: Public Domain Contributors: Inductiveload
Image:BSSDF01 400.svg Source: https://en.wikipedia.org/w/index.php?title=File:BSSDF01_400.svg License: GNU Free Documentation License Contributors: Jurohi (original); Pbroks13 (redraw) Original uploader was Pbroks13 at en.wikipedia
Image:BSDF05 800.png Source: https://en.wikipedia.org/w/index.php?title=File:BSDF05_800.png License: GNU Free Documentation License Contributors: User:Jurohi, User:Twisp
Image:Loop_Subdivision_Icosahedron.svg Source: https://en.wikipedia.org/w/index.php?title=File:Loop_Subdivision_Icosahedron.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Simon Fuhrmann
File:Dolphin triangle mesh.png Source: https://en.wikipedia.org/w/index.php?title=File:Dolphin_triangle_mesh.png License: Public Domain Contributors: en:User:Chrschn
Image:Normal map example.png Source: https://en.wikipedia.org/w/index.php?title=File:Normal_map_example.png License: Creative Commons Attribution-ShareAlike 1.0 Generic Contributors: Juiced lemon, Julian Herzog, Maksim, Metoc
Image:Marchingcubes-head.png Source: https://en.wikipedia.org/w/index.php?title=File:Marchingcubes-head.png License: Creative Commons Attribution-Sharealike 2.5 Contributors: Acodered, Dake, Kri, Metoc
Image:MarchingCubes.svg Source: https://en.wikipedia.org/w/index.php?title=File:MarchingCubes.svg License: GNU General Public License Contributors: Jmtrivial (talk)
Image:Metaballs.png Source: https://en.wikipedia.org/w/index.php?title=File:Metaballs.png License: Public Domain Contributors: GlydeG
Image:Metaball contact sheet.png Source: https://en.wikipedia.org/w/index.php?title=File:Metaball_contact_sheet.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: derivative work: SharkD (talk) Metaball1.jpg: PureCore Metaball2.jpg: PureCore Metaball3.jpg: PureCore Metaball4.jpg: PureCore Metaball5.jpg: PureCore Metaball7.jpg: PureCore Metaball8.jpg: PureCore Metaball9.jpg: PureCore Metaball10.jpg: PureCore
File:Sintel-face-morph.png Source: https://en.wikipedia.org/w/index.php?title=File:Sintel-face-morph.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Sintel model by Angela Guenette simplified Sintel rig by Ben Dansie render by Fama Clamosa
File:Morph-puzzle.png Source: https://en.wikipedia.org/w/index.php?title=File:Morph-puzzle.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Fama Clamosa
File:Temporal-Control-and-Hand-Movement-Efficiency-in-Skilled-Music-Performance-pone.0050901.s001.ogv Source: https://en.wikipedia.org/w/index.php?title=File:Temporal-Control-and-Hand-Movement-Efficiency-in-Skilled-Music-Performance-pone.0050901.s001.ogv License: Creative Commons Attribution 2.5 Contributors: Goebl W, Palmer C
File:Motion Capture Performers.png Source: https://en.wikipedia.org/w/index.php?title=File:Motion_Capture_Performers.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Asteuartw
File:Kistler plates.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Kistler_plates.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: D. Gordon E. Robertson
Image:MotionCapture.jpg Source: https://en.wikipedia.org/w/index.php?title=File:MotionCapture.jpg License: GNU Free Documentation License Contributors: Original uploader was T-tus at en.wikipedia. Later version(s) were uploaded by 1canuck2 at en.wikipedia.
Image:Motion capture facial.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Motion_capture_facial.jpg License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0 Contributors: Mounirzok
Image:Activemarker2.PNG Source: https://en.wikipedia.org/w/index.php?title=File:Activemarker2.PNG License: Public Domain Contributors: Original uploader was Hipocrite at en.wikipedia
Image:PrakashOutdoorMotionCapture.jpg Source: https://en.wikipedia.org/w/index.php?title=File:PrakashOutdoorMotionCapture.jpg License: Public Domain Contributors: Egon Eagle, Rameshmerl
Image:Painters_problem.png Source: https://en.wikipedia.org/w/index.php?title=File:Painters_problem.png License: GNU Free Documentation License Contributors: Bayo, Grafite, Kilom691, Maksim, Paulo Cesar-1, 1 anonymous edits
Image:NURBS 3-D surface.gif Source: https://en.wikipedia.org/w/index.php?title=File:NURBS_3-D_surface.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: Greg A L
Image:NURBstatic.svg Source: https://en.wikipedia.org/w/index.php?title=File:NURBstatic.svg License: GNU Free Documentation License Contributors: Original uploader was WulfTheSaxon at en.wikipedia.org
Image:motoryacht design i.png Source: https://en.wikipedia.org/w/index.php?title=File:Motoryacht_design_i.png License: GNU Free Documentation License Contributors: Original uploader was Freeformer at en.wikipedia Later version(s) were uploaded by McLoaf at en.wikipedia.
Image:nurbsbasisconstruct.png Source: https://en.wikipedia.org/w/index.php?title=File:Nurbsbasisconstruct.png License: GNU Free Documentation License Contributors: Mauritsmaartendejong, McLoaf, 1 anonymous edits
Image:nurbsbasislin2.png Source: https://en.wikipedia.org/w/index.php?title=File:Nurbsbasislin2.png License: GNU Free Documentation License Contributors: Mauritsmaartendejong, McLoaf, Quadell, 1 anonymous edits
Image:nurbsbasisquad2.png Source: https://en.wikipedia.org/w/index.php?title=File:Nurbsbasisquad2.png License: GNU Free Documentation License Contributors: Mauritsmaartendejong, McLoaf, Quadell, 1 anonymous edits
Image:Normal vectors2.svg Source: https://en.wikipedia.org/w/index.php?title=File:Normal_vectors2.svg License: Public Domain Contributors: Cdang, Oleg Alexandrov, 2 anonymous edits
Image:Surface normal illustration.png Source: https://en.wikipedia.org/w/index.php?title=File:Surface_normal_illustration.png License: Public Domain Contributors: Oleg Alexandrov
Image:Surface normal.png Source: https://en.wikipedia.org/w/index.php?title=File:Surface_normal.png License: Public Domain Contributors: Original uploader was Oleg Alexandrov at en.wikipedia
Image:Reflection angles.svg Source: https://en.wikipedia.org/w/index.php?title=File:Reflection_angles.svg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Arvelius, EDUCA33E, Ies
File:Painter's algorithm.svg Source: https://en.wikipedia.org/w/index.php?title=File:Painter's_algorithm.svg License: GNU Free Documentation License Contributors: Zapyon
File:Magnify-clip.png Source: https://en.wikipedia.org/w/index.php?title=File:Magnify-clip.png License: Public Domain Contributors: User:Erasoft24
File:Painters problem.svg Source: https://en.wikipedia.org/w/index.php?title=File:Painters_problem.svg License: Public Domain Contributors: Wojciech Muła
File:Parallax barrier vs lenticular screen.svg Source: https://en.wikipedia.org/w/index.php?title=File:Parallax_barrier_vs_lenticular_screen.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Cmglee
File:ParallaxBarrierCrossSection.svg Source: https://en.wikipedia.org/w/index.php?title=File:ParallaxBarrierCrossSection.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:JonathanMather
File:ParallaxBarrierPitchCorrection.png Source: https://en.wikipedia.org/w/index.php?title=File:ParallaxBarrierPitchCorrection.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:JonathanMather
File:ParallaxBarrierSwitching.svg Source: https://en.wikipedia.org/w/index.php?title=File:ParallaxBarrierSwitching.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:JonathanMather
File:ParallaxBarrierTimeMultiplexing.svg Source: https://en.wikipedia.org/w/index.php?title=File:ParallaxBarrierTimeMultiplexing.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:JonathanMather
File:StereoscopicCrosstalk.png Source: https://en.wikipedia.org/w/index.php?title=File:StereoscopicCrosstalk.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:JonathanMather
File:CrosstalkCorrection.svg Source: https://en.wikipedia.org/w/index.php?title=File:CrosstalkCorrection.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:JonathanMather
Image:particle sys fire.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Particle_sys_fire.jpg License: Public Domain Contributors: Jtsiomb
Image:particle sys galaxy.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Particle_sys_galaxy.jpg License: Public Domain Contributors: User Jtsiomb on en.wikipedia
Image:Pi-explosion.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Pi-explosion.jpg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Sameboat
Image:Particle Emitter.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Particle_Emitter.jpg License: GNU Free Documentation License Contributors: Halixi72
Image:Strand Emitter.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Strand_Emitter.jpg License: GNU Free Documentation License Contributors: Anthony62490, Halixi72, MER-C
Image:Point cloud torus.gif Source: https://en.wikipedia.org/w/index.php?title=File:Point_cloud_torus.gif License: Public Domain Contributors: User:Kieff
File:Geo-Referenced Point Cloud.JPG Source: https://en.wikipedia.org/w/index.php?title=File:Geo-Referenced_Point_Cloud.JPG License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Stoermerjp
File:mesh overview.svg Source: https://en.wikipedia.org/w/index.php?title=File:Mesh_overview.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Mesh_overview.jpg: Original uploader was Rchoetzlein at en.wikipedia derivative work: Lobsterbake (talk)
File:Vertex-Vertex Meshes (VV).png Source: https://en.wikipedia.org/w/index.php?title=File:Vertex-Vertex_Meshes_(VV).png License: Creative Commons Zero Contributors: User:Wiz3kid
File:mesh fv.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Mesh_fv.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Berland, Rchoetzlein
File:mesh we2.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Mesh_we2.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Berland, Rchoetzlein
Image:Procedural Texture.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Procedural_Texture.jpg License: GNU Free Documentation License Contributors: Gabriel VanHelsing, Lionel Allorge, Metoc, Wiksaidit
File:ECOL VSPLIT.png Source: https://en.wikipedia.org/w/index.php?title=File:ECOL_VSPLIT.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Jirka Němec
File:Perspective Transform Diagram.png Source: https://en.wikipedia.org/w/index.php?title=File:Perspective_Transform_Diagram.png License: Public Domain Contributors: Skytiger2, 1 anonymous edits
File:Pyramid of vision.svg Source: https://en.wikipedia.org/w/index.php?title=File:Pyramid_of_vision.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Xavax
File:Diagonal rotation.png Source: https://en.wikipedia.org/w/index.php?title=File:Diagonal_rotation.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: user:MathsPoetry
File:Versor action on Hurwitz quaternions.svg Source: https://en.wikipedia.org/w/index.php?title=File:Versor_action_on_Hurwitz_quaternions.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Incnis Mrsi
File:Space of rotations.png Source: https://en.wikipedia.org/w/index.php?title=File:Space_of_rotations.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Flappiefh, MathsPoetry, Phy1729, SlavMFM
File:Hypersphere of rotations.png Source: https://en.wikipedia.org/w/index.php?title=File:Hypersphere_of_rotations.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Hawky.diddiz, Incnis Mrsi, MathsPoetry, Perhelion, Phy1729
File:Andreas_and_Kathleen.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Andreas_and_Kathleen.jpg License: GNU Free Documentation License Contributors: Itsmeront
Image:SGI-re2-ge10v.jpg Source: https://en.wikipedia.org/w/index.php?title=File:SGI-re2-ge10v.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Shieldforyoureyes Dave Fischer
Image:SGI-re2-rm4.jpg Source: https://en.wikipedia.org/w/index.php?title=File:SGI-re2-rm4.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Shieldforyoureyes Dave Fischer
Image:Refl sample.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Refl_sample.jpg License: Public Domain Contributors: Lixihan
Image:Mirror2.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Mirror2.jpg License: Public Domain Contributors: Al Hart
Image:Metallic balls.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Metallic_balls.jpg License: Public Domain Contributors: AlHart
Image:Blurry reflection.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Blurry_reflection.jpg License: Public Domain Contributors: AlHart
Image:Glossy-spheres.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Glossy-spheres.jpg License: Public Domain Contributors: AlHart
File:Tao Presentations real-time 3D rendering of a scene described using its document description language.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Tao_Presentations_real-time_3D_rendering_of_a_scene_described_using_its_document_description_language.jpg License: GNU Free Documentation License Contributors: Descubes
File:Second Life Sculpted fruit small.png Source: https://en.wikipedia.org/w/index.php?title=File:Second_Life_Sculpted_fruit_small.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: http://wiki.secondlife.com/wiki/User:Yuu_Nakamichi
File:Sintel-hand.png Source: https://en.wikipedia.org/w/index.php?title=File:Sintel-hand.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Sintel model by Angela Guenette simplified Sintel rig by Ben Dansie render by Fama Clamosa
File:Two nodes as mass points connected by parallel circuit of spring and damper.svg Source: https://en.wikipedia.org/w/index.php?title=File:Two_nodes_as_mass_points_connected_by_parallel_circuit_of_spring_and_damper.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Adlange.
File:Jack-in-cube solid model, light background.gif Source: https://en.wikipedia.org/w/index.php?title=File:Jack-in-cube_solid_model,_light_background.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: Greg L
File:Regularize1.png Source: https://en.wikipedia.org/w/index.php?title=File:Regularize1.png License: Public Domain Contributors: Schwarrx
File:Cobalt Properties window.png Source: https://en.wikipedia.org/w/index.php?title=File:Cobalt_Properties_window.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Greg A L (Greg L)
Image:Specular highlight.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Specular_highlight.jpg License: GNU Free Documentation License Contributors: Original uploader was Reedbeta at en.wikipedia
Image:Howard Dolman.png Source: https://en.wikipedia.org/w/index.php?title=File:Howard_Dolman.png License: Creative Commons Zero Contributors: Gwestheimer
File:StereoSnellenImproved.png Source: https://en.wikipedia.org/w/index.php?title=File:StereoSnellenImproved.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:FerrousCathode
File:Catmull-Clark subdivision of a cube.svg Source: https://en.wikipedia.org/w/index.php?title=File:Catmull-Clark_subdivision_of_a_cube.svg License: GNU Free Documentation License Contributors: Ico83, Kilom691, Mysid, Zundark
File:Flag of France.svg Source: https://en.wikipedia.org/w/index.php?title=File:Flag_of_France.svg License: Public Domain Contributors: Anomie
File:Flag of India.svg Source: https://en.wikipedia.org/w/index.php?title=File:Flag_of_India.svg License: Public Domain Contributors: Anomie, Mifter
Image:Supinfocom-logo.jpg Source: https://en.wikipedia.org/w/index.php?title=File:Supinfocom-logo.jpg License: Free Art License Contributors: Kungfuman, Paulbe, Siward
File:Caminandes- Llama Drama - Short Movie.ogv Source: https://en.wikipedia.org/w/index.php?title=File:Caminandes-_Llama_Drama_-_Short_Movie.ogv License: Creative Commons Attribution 3.0 Contributors: Liamdavies, Russavia
File:Two rays and one vertex.png Source: https://en.wikipedia.org/w/index.php?title=File:Two_rays_and_one_vertex.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: CMBJ
File:Polygon mouths and ears.png Source: https://en.wikipedia.org/w/index.php?title=File:Polygon_mouths_and_ears.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Azylber
File:ViewFrustum.svg Source: https://en.wikipedia.org/w/index.php?title=File:ViewFrustum.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:MithrandirMage
Image:voxels.svg Source: https://en.wikipedia.org/w/index.php?title=File:Voxels.svg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Pieter Kuiper, Vossman
Image:Ribo-Voxels.png Source: https://en.wikipedia.org/w/index.php?title=File:Ribo-Voxels.png License: Creative Commons Attribution-Sharealike 2.5 Contributors: TimVickers, Vossman
License
Creative Commons Attribution-Share Alike 3.0
https://creativecommons.org/licenses/by-sa/3.0/