OGRE Manual v1.7 (’Cthugha’)
Steve Streeting
Copyright © Torus Knot Software Ltd
Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.
Table of Contents

OGRE Manual  1
1 Introduction  2
    1.1 Object Orientation - more than just a buzzword  2
    1.2 Multi-everything  3
2 The Core Objects  5
    2.1 The Root object  7
    2.2 The RenderSystem object  7
    2.3 The SceneManager object  8
    2.4 The ResourceGroupManager Object  9
    2.5 The Mesh Object  10
    2.6 Entities  11
    2.7 Materials  12
    2.8 Overlays  13
3 Scripts  16
    3.1 Material Scripts  16
        3.1.1 Techniques  21
        3.1.2 Passes  24
        3.1.3 Texture Units  45
        3.1.4 Declaring Vertex/Geometry/Fragment Programs  63
        3.1.5 Cg programs  69
        3.1.6 DirectX9 HLSL  70
        3.1.7 OpenGL GLSL  71
        3.1.8 Unified High-level Programs  76
        3.1.9 Using Vertex/Geometry/Fragment Programs in a Pass  79
        3.1.10 Vertex Texture Fetch  95
        3.1.11 Script Inheritance  96
        3.1.12 Texture Aliases  100
        3.1.13 Script Variables  104
        3.1.14 Script Import Directive  105
    3.2 Compositor Scripts  106
        3.2.1 Techniques  108
        3.2.2 Target Passes  112
        3.2.3 Compositor Passes  114
        3.2.4 Applying a Compositor  120
    3.3 Particle Scripts  121
        3.3.1 Particle System Attributes  123
        3.3.2 Particle Emitters  130
        3.3.3 Particle Emitter Attributes  131
        3.3.4 Standard Particle Emitters  135
        3.3.5 Particle Affectors  137
        3.3.6 Standard Particle Affectors  138
    3.4 Overlay Scripts  144
        3.4.1 OverlayElement Attributes  149
        3.4.2 Standard OverlayElements  153
    3.5 Font Definition Scripts  155
4 Mesh Tools  158
    4.1 Exporters  158
    4.2 XmlConverter  159
    4.3 MeshUpgrader  159
5 Hardware Buffers  160
    5.1 The Hardware Buffer Manager  160
    5.2 Buffer Usage  160
    5.3 Shadow Buffers  161
    5.4 Locking buffers  162
    5.5 Practical Buffer Tips  163
    5.6 Hardware Vertex Buffers  163
        5.6.1 The VertexData class  164
        5.6.2 Vertex Declarations  164
        5.6.3 Vertex Buffer Bindings  166
        5.6.4 Updating Vertex Buffers  167
    5.7 Hardware Index Buffers  168
        5.7.1 The IndexData class  168
        5.7.2 Updating Index Buffers  169
    5.8 Hardware Pixel Buffers  169
        5.8.1 Textures  169
        5.8.2 Updating Pixel Buffers  171
        5.8.3 Texture Types  172
        5.8.4 Pixel Formats  173
        5.8.5 Pixel boxes  174
6 External Texture Sources  176
7 Shadows  180
    7.1 Stencil Shadows  181
    7.2 Texture-based Shadows  185
    7.3 Modulative Shadows  190
    7.4 Additive Light Masking  191
8 Animation  197
    8.1 Skeletal Animation  197
    8.2 Animation State  198
    8.3 Vertex Animation  198
        8.3.1 Morph Animation  201
        8.3.2 Pose Animation  201
        8.3.3 Combining Skeletal and Vertex Animation  202
    8.4 SceneNode Animation  203
    8.5 Numeric Value Animation  203
OGRE Manual
Copyright © The OGRE Team
This work is licensed under the Creative Commons Attribution-ShareAlike 2.5 License. To view a copy of this licence, visit http://creativecommons.org/licenses/by-sa/2.5/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
1 Introduction

This chapter is intended to give you an overview of the main components of OGRE and why they have been put together that way.
1.1 Object Orientation - more than just a buzzword

The name is a dead giveaway. It says Object-Oriented Graphics Rendering Engine, and that’s exactly what it is. OK, but why? Why did I choose to make such a big deal about this?
Well, nowadays graphics engines are like any other large software system. They start small, but soon they balloon into monstrously complex beasts which just can’t all be understood at once. It’s pretty hard to manage systems of this size, and even harder to make changes to them reliably - and that’s pretty important in a field where new techniques and approaches seem to appear every other week. Designing systems around huge files full of C function calls just doesn’t cut it anymore; even if the whole thing is written by one person (not likely), they will find it hard to locate that elusive bit of code after a few months, and even harder to work out how it all fits together.
Object orientation is a very popular approach to addressing the complexity problem. A step up from decomposing your code into separate functions, it groups function and state data together in classes which are designed to represent real concepts. It allows you to hide complexity inside easily recognised packages with a conceptually simple interface, giving them the feel of ’building blocks’ which you can plug together later. You can also organise these blocks so that some of them look the same on the outside but have very different ways of achieving their objectives on the inside, again reducing the complexity for the developers, because they only have to learn one interface.
I’m not going to teach you OO here - that’s a subject for many other books - but suffice to say I’d seen enough benefits of OO in business systems that I was surprised most graphics code seemed to be written in C function style. I was interested to see whether I could apply my design experience in other types of software to an area which has long held a place in my heart: 3D graphics engines. Some people I spoke to were of the opinion that using full C++ wouldn’t be fast enough for a real-time graphics engine, but others (including me) were of the opinion that, with care, an object-oriented framework can be performant. We were right.

In summary, here are the benefits an object-oriented approach brings to OGRE:

Abstraction
    Common interfaces hide the nuances between different implementations of 3D APIs and operating systems.
Encapsulation
    There is a lot of state management and context-specific work to be done in a graphics engine. Encapsulation allows me to put the code and data nearest to where it is used, which makes the code cleaner and easier to understand, and more reliable because duplication is avoided.

Polymorphism
    The behaviour of methods changes depending on the type of object you are using, even if you only learn one interface. For example, a class specialised for managing indoor levels behaves completely differently from the standard scene manager, yet looks identical to other classes in the system and has the same methods called on it.
1.2 Multi-everything

I wanted to do more than create a 3D engine that ran on one 3D API, on one platform, with one type of scene (indoor levels are the most popular). I wanted OGRE to be able to extend to any kind of scene (yet still implement scene-specific optimisations under the surface), any platform and any 3D API.
Therefore all the ’visible’ parts of OGRE are completely independent of platform, 3D API and scene type. There are no dependencies on Windows types, no assumptions about the type of scene you are creating, and the principles of the 3D aspects are based on core maths texts rather than one particular API implementation.
Now of course somewhere OGRE has to get down to the nitty-gritty of the specifics of the platform, API and scene, but it does this in subclasses specially designed for the environment in question, which still expose the same interface as the abstract versions.
For example, there is a ’Win32Window’ class which handles all the details about rendering windows on a Win32 platform - however, the application designer only has to manipulate it via the superclass interface ’RenderWindow’, which will be the same across all platforms. Similarly, the ’SceneManager’ class looks after the arrangement of objects in the scene and their rendering sequence. Applications only have to use this interface, but there is a ’BspSceneManager’ class which optimises the scene management for indoor levels, meaning you get both performance and an easy-to-learn interface. All applications have to do is hint at the kind of scene they will be creating and let OGRE choose the most appropriate implementation - this is covered in a later tutorial.
OGRE’s object-oriented nature makes all this possible. Currently OGRE runs on Windows, Linux and Mac OS X, using plugins to drive the underlying rendering API (currently Direct3D or OpenGL). Applications use OGRE at the abstract level, thus ensuring that they automatically operate on all platforms and rendering subsystems that OGRE provides
without any need for platform or API specific code.
2 The Core Objects
Introduction

This tutorial gives you a quick summary of the core objects that you will use in OGRE and what they are used for.
A Word About Namespaces

OGRE uses a C++ feature called namespaces. This lets you put classes, enums, structures - anything, really - within a ’namespace’ scope, which is an easy way to prevent name clashes, i.e. situations where you have two things with the same name. Since OGRE is designed to be used inside other applications, I wanted to be sure that name clashes would not be a problem. Some people prefix their classes/types with a short code because some compilers don’t support namespaces, but I chose to use them because they are the ’right’ way to do it. Sorry if you have a non-compliant compiler, but hey, the C++ standard has been defined for years, so compiler writers really have no excuse anymore. If your compiler doesn’t support namespaces then it’s probably because it’s sh*t - get a better one. ;)
This means every class, type etc. should be prefixed with ’Ogre::’, e.g. ’Ogre::Camera’, ’Ogre::Vector3’, which means that if elsewhere in your application you have used a Vector3 type you won’t get name clashes. To avoid lots of extra typing you can add a ’using namespace Ogre;’ statement to your code, which means you don’t have to type the ’Ogre::’ prefix unless there is ambiguity (i.e. where you have another definition with the same name).
Overview from 10,000 feet

Shown below is a diagram of some of the core objects and where they ’sit’ in the grand scheme of things. This is not all the classes by a long shot, just a few examples
of the more significant ones to give you an idea of how it slots together.
At the very top of the diagram is the Root object. This is your ’way in’ to the OGRE system, and it’s where you tend to create the top-level objects that you need to deal with - scene managers, rendering systems, render windows, plugin loading, all the fundamental stuff. If you don’t know where to start, Root is it for almost everything, although often it will just give you another object which will actually do the detail work, since Root itself is more of an organiser and facilitator.

The majority of the rest of OGRE’s classes fall into one of three roles:

Scene Management
    This is about the contents of your scene: how it’s structured, how it’s viewed from cameras, etc. Objects in this area are responsible for giving you a natural declarative interface to the world you’re building; i.e. you don’t tell OGRE "set these render states and then render 3 polygons", you tell it "I want an object here, here and here, with these materials on them, rendered from this view", and let it get on with it.

Resource Management
    All rendering needs resources, whether it’s geometry, textures, fonts, whatever. It’s important to manage the loading, re-use and unloading of these things carefully, so that’s what classes in this area do.

Rendering
    Finally, there’s getting the visuals on the screen - this is about the lower-level end of the rendering pipeline, the specific rendering system API objects like
buffers, render states and the like, and pushing it all down the pipeline. Classes in the Scene Management subsystem use this to get their higher-level scene information onto the screen.

You’ll notice that scattered around the edge of the diagram are a number of plugins. OGRE is designed to be extended, and plugins are the usual way to go about it. Many of the classes in OGRE can be subclassed and extended, whether it’s changing the scene organisation through a custom SceneManager, adding a new render system implementation (e.g. Direct3D or OpenGL), or providing a way to load resources from another source (say from a web location or a database). Again, this is just a small smattering of the kinds of things plugins can do, but as you can see they can plug in to almost any aspect of the system. This way, OGRE isn’t just a solution for one narrowly defined problem; it can extend to pretty much anything you need it to do.
2.1 The Root object

The ’Root’ object is the entry point to the OGRE system. This object MUST be the first one to be created, and the last one to be destroyed. In the example applications I chose to make an instance of Root a member of my application object, which ensured that it was created as soon as my application object was, and deleted when the application object was deleted.
The Root object lets you configure the system, for example through the showConfigDialog() method - an extremely handy routine which performs all render system option detection and shows a dialog for the user to customise resolution, colour depth, full-screen options etc. It also sets the options the user selects so that you can initialise the system directly afterwards.
The root object is also your method for obtaining pointers to other objects in the system, such as the SceneManager, RenderSystem and various other resource managers. See below for details.
Finally, if you run OGRE in continuous rendering mode, i.e. you want to always refresh all the rendering targets as fast as possible (the norm for games and demos, but not for windowed utilities), the Root object has a method called startRendering which, when called, will enter a continuous rendering loop which will only end when all rendering windows are closed, or when any FrameListener objects indicate that they want to stop the cycle (see below for details of FrameListener objects).
2.2 The RenderSystem object

The RenderSystem object is actually an abstract class which defines the interface to the underlying 3D API. It is responsible for sending rendering operations to the API and
setting all the various rendering options. This class is abstract because all the implementation is rendering API specific - there are API-specific subclasses for each rendering API (e.g. D3DRenderSystem for Direct3D). After the system has been initialised through Root::initialise, the RenderSystem object for the selected rendering API is available via the Root::getRenderSystem() method.
However, a typical application should not normally need to manipulate the RenderSystem object directly - everything you need for rendering objects and customising settings should be available on the SceneManager, Material and other scene-oriented classes. It’s only if you want to create multiple rendering windows (completely separate windows in this case, not multiple viewports like a split-screen effect which is done via the RenderWindow class) or access other advanced features that you need access to the RenderSystem object.
For this reason I will not discuss the RenderSystem object further in these tutorials. You can assume the SceneManager handles the calls to the RenderSystem at the appropriate times.
2.3 The SceneManager object

Apart from the Root object, this is probably the most critical part of the system from the application’s point of view. Certainly it will be the object which is most used by the application. The SceneManager is in charge of the contents of the scene which is to be rendered by the engine. It is responsible for organising the contents using whatever technique it deems best, for creating and managing all the cameras, movable objects (entities), lights and materials (surface properties of objects), and for managing the ’world geometry’, which is the sprawling static geometry usually used to represent the immovable parts of a scene.
It is to the SceneManager that you go when you want to create a camera for the scene. It’s also where you go to retrieve or to remove a light from the scene. There is no need for your application to keep lists of objects, the SceneManager keeps a named set of all of the scene objects for you to access, should you need them. Look in the main documentation under the getCamera, getLight, getEntity etc methods.
The SceneManager also sends the scene to the RenderSystem object when it is time to render the scene. You never have to call the SceneManager::renderScene method directly, though - it is called automatically whenever a rendering target is asked to update.
So most of your interaction with the SceneManager is during scene setup. You’re likely to call a great number of methods (perhaps driven by some input file containing the scene data)
in order to set up your scene. You can also modify the contents of the scene dynamically during the rendering cycle if you create your own FrameListener object (see later).
Because different scene types require very different algorithmic approaches to deciding which objects get sent to the RenderSystem in order to attain good rendering performance, the SceneManager class is designed to be subclassed for different scene types. The default SceneManager object will render a scene, but it does little or no scene organisation and you should not expect the results to be high performance in the case of large scenes. The intention is that specialisations will be created for each type of scene such that under the surface the subclass will optimise the scene organisation for best performance given assumptions which can be made for that scene type. An example is the BspSceneManager which optimises rendering for large indoor levels based on a Binary Space Partition (BSP) tree.
The application using OGRE does not have to know which subclasses are available. The application simply calls Root::createSceneManager(..), passing as a parameter one of a number of scene types (e.g. ST_GENERIC, ST_INTERIOR etc). OGRE will automatically use the best SceneManager subclass available for that scene type, or default to the basic SceneManager if a specialist one is not available. This allows the developers of OGRE to add new scene specialisations later, and thus optimise previously unoptimised scene types, without user applications having to change any code.
2.4 The ResourceGroupManager Object

The ResourceGroupManager class is actually a ’hub’ for the loading of reusable resources like textures and meshes. It is the place where you define groups for your resources, so they may be unloaded and reloaded when you want. Servicing it are a number of ResourceManagers which manage the individual types of resource, like TextureManager or MeshManager. In this context, resources are sets of data which must be loaded from somewhere to provide OGRE with the data it needs.
ResourceManagers ensure that resources are only loaded once and shared throughout the OGRE engine. They also manage the memory requirements of the resources they look after. They can also search in a number of locations for the resources they need, including multiple search paths and compressed archives (ZIP files).
Most of the time you won’t interact with resource managers directly. Resource managers will be called by other parts of the OGRE system as required; for example, when you ask for a texture to be added to a Material, the TextureManager will be called for you. If you like, you can call the appropriate resource manager directly to preload resources (if for
example you want to prevent disk access later on) but most of the time it’s ok to let OGRE decide when to do it.
One thing you will want to do is to tell the resource managers where to look for resources. You do this via Root::getSingleton().addResourceLocation, which actually passes the information on to ResourceGroupManager.
Because there is only ever one instance of each resource manager in the engine, if you do want to get a reference to a resource manager, use the following syntax:

    TextureManager::getSingleton().someMethod()
    MeshManager::getSingleton().someMethod()
2.5 The Mesh Object

A Mesh object represents a discrete model - a set of geometry which is self-contained and is typically fairly small on a world scale. Mesh objects are assumed to represent movable objects; they are not used for the sprawling level geometry typically used to create backgrounds.
Mesh objects are a type of resource, and are managed by the MeshManager resource manager. They are typically loaded from OGRE’s custom object format, the ’.mesh’ format. Mesh files are usually created by exporting from a modelling tool (see Section 4.1 [Exporters], page 158) and can be manipulated through various tools (see Chapter 4 [Mesh Tools], page 158).
You can also create Mesh objects manually by calling the MeshManager::createManual method. This way you can define the geometry yourself, but this is outside the scope of this manual.
Mesh objects are the basis for the individual movable objects in the world, which are called entities (see Section 2.6 [Entities], page 11).
Mesh objects can also be animated using skeletal animation (see Section 8.1 [Skeletal Animation], page 197).
2.6 Entities

An entity is an instance of a movable object in the scene. It could be a car, a person, a dog, a shuriken, whatever. The only assumption is that it does not necessarily have a fixed position in the world.
Entities are based on discrete meshes, i.e. collections of geometry which are self-contained and typically fairly small on a world scale, which are represented by the Mesh object. Multiple entities can be based on the same mesh, since often you want to create multiple copies of the same type of object in a scene.
You create an entity by calling the SceneManager::createEntity method, giving it a name and specifying the name of the mesh object which it will be based on (e.g. ’muscleboundhero.mesh’). The SceneManager will ensure that the mesh is loaded by calling the MeshManager resource manager for you. Only one copy of the Mesh will be loaded.
Entities are not deemed to be a part of the scene until you attach them to a SceneNode (see the section below). By attaching entities to SceneNodes, you can create complex hierarchical relationships between the positions and orientations of entities. You then modify the positions of the nodes to indirectly affect the entity positions.
When a Mesh is loaded, it automatically comes with a number of materials defined. It is possible to have more than one material attached to a mesh - different parts of the mesh may use different materials. Any entity created from the mesh will automatically use the default materials. However, you can change this on a per-entity basis if you like so you can create a number of entities based on the same mesh but with different textures etc.
To understand how this works, you have to know that all Mesh objects are actually composed of SubMesh objects, each of which represents a part of the mesh using one Material. If a Mesh uses only one Material, it will only have one SubMesh.
When an Entity is created based on this Mesh, it is composed of (possibly) multiple SubEntity objects, each matching one-for-one with the SubMesh objects from the original Mesh. You can access the SubEntity objects using the Entity::getSubEntity method. Once you have a reference to a SubEntity, you can change the material it uses by calling its setMaterialName method. In this way you can make an Entity deviate from the default materials and thus create an individual-looking version of it.
2.7 Materials

The Material object controls how objects in the scene are rendered. It specifies what basic surface properties objects have, such as reflectance of colours and shininess; how many texture layers are present, what images are on them and how they are blended together; what special effects are applied, such as environment mapping; what culling mode is used; how the textures are filtered; and so on.
Materials can either be set up programmatically, by calling SceneManager::createMaterial and tweaking the settings, or by specifying them in a ’script’ which is loaded at runtime. See Section 3.1 [Material Scripts], page 16 for more info.
Basically, everything about the appearance of an object apart from its shape is controlled by the Material class.
The SceneManager class manages the master list of materials available to the scene. The list can be added to by the application by calling SceneManager::createMaterial, or by loading a Mesh (which will in turn load material properties). Whenever materials are added to the SceneManager, they start off with a default set of properties; these are defined by OGRE as the following:
• ambient reflectance = ColourValue::White (full)
• diffuse reflectance = ColourValue::White (full)
• specular reflectance = ColourValue::Black (none)
• emissive = ColourValue::Black (none)
• shininess = 0 (not shiny)
• No texture layers (& hence no textures)
• SourceBlendFactor = SBF_ONE, DestBlendFactor = SBF_ZERO (opaque)
• Depth buffer checking on
• Depth buffer writing on
• Depth buffer comparison function = CMPF_LESS_EQUAL
• Culling mode = CULL_CLOCKWISE
• Ambient lighting in scene = ColourValue(0.5, 0.5, 0.5) (mid-grey)
• Dynamic lighting enabled
• Gouraud shading mode
• Solid polygon mode
• Bilinear texture filtering
You can alter these settings by calling SceneManager::getDefaultMaterialSettings() and making the required changes to the Material which is returned.
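For example (a sketch based on the call described above; the colour chosen is arbitrary):

    // Change the defaults so new materials start off mid-grey rather than white
    Ogre::Material* defaults = sceneMgr->getDefaultMaterialSettings();
    defaults->setAmbient(0.5, 0.5, 0.5);
    defaults->setDiffuse(0.5, 0.5, 0.5, 1.0);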
Entities automatically have Materials associated with them if they use a Mesh object, since the Mesh object typically sets up its required materials on loading. You can also customise the material used by an entity as described in Section 2.6 [Entities], page 11. Just create a new Material, set it up how you like (you can copy an existing material into it using a standard assignment statement if you wish) and point the SubEntity entries at it using SubEntity::setMaterialName().
2.8 Overlays

Overlays allow you to render 2D and 3D elements on top of the normal scene contents to create effects like heads-up displays (HUDs), menu systems, status panels etc. The frame rate statistics panel which comes as standard with OGRE is an example of an overlay. Overlays can contain 2D or 3D elements. 2D elements are used for HUDs, and 3D elements can be used to create cockpits or any other 3D object which you wish to be rendered on top of the rest of the scene.
You can create overlays either through the SceneManager::createOverlay method, or you can define them in an .overlay script. In reality the latter is likely to be the most practical because it is easier to tweak (without the need to recompile the code). Note that you can define as many overlays as you like: they all start off life hidden, and you display them by calling their ’show()’ method. You can also show multiple overlays at once, and their Z order is determined by the Overlay::setZOrder() method.
Creating 2D Elements

The OverlayElement class abstracts the details of 2D elements which are added to overlays. All items which can be added to overlays are derived from this class. It is possible (and encouraged) for users of OGRE to define their own custom subclasses of OverlayElement in order to provide their own user controls. The key common features of all OverlayElements are things like size, position, basic material name etc. Subclasses extend this behaviour to include more complex properties and behaviour.
An important built-in subclass of OverlayElement is OverlayContainer. OverlayContainer is the same as an OverlayElement, except that it can contain other OverlayElements, grouping them together (allowing them to be moved together, for example) and providing them with a local coordinate origin for easier alignment.
The third important class is OverlayManager. Whenever an application wishes to create a 2D element to add to an overlay (or a container), it should call OverlayManager::createOverlayElement. The type of element you wish to create is identified by a string, the reason being that it allows plugins to register new types of OverlayElement for you to create without you having to link specifically to those libraries. For example, to create a panel (a plain rectangular area which can contain other OverlayElements) you would call OverlayManager::getSingleton().createOverlayElement("Panel", "myNewPanel");
Adding 2D Elements to the Overlay

Only OverlayContainers can be added directly to an overlay. The reason is that each level of container establishes the Z-order of the elements contained within it, so if you nest several containers, inner containers have a higher Z-order than outer ones to ensure they are displayed correctly. To add a container (such as a Panel) to the overlay, simply call Overlay::add2D.
If you wish to add child elements to that container, call OverlayContainer::addChild. Child elements can be OverlayElements or OverlayContainer instances themselves. Remember that the position of a child element is relative to the top-left corner of its parent.
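Putting these calls together (a sketch; the element names are hypothetical, and 'overlay' is assumed to have been created as described above):

    using namespace Ogre;
    OverlayManager& mgr = OverlayManager::getSingleton();
    OverlayContainer* panel = static_cast<OverlayContainer*>(
        mgr.createOverlayElement("Panel", "myNewPanel"));
    OverlayElement* text = mgr.createOverlayElement("TextArea", "myText");
    panel->addChild(text);   // positioned relative to the panel's top-left
    overlay->add2D(panel);   // only containers may be added directly
    overlay->show();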
A word about 2D coordinates

OGRE allows you to place and size elements based on 2 coordinate systems: relative and pixel based.

Pixel Mode

This mode is useful when you want to specify an exact size for your overlay items, and you don't mind if those items get smaller on the screen if you increase the screen resolution (in fact you might want this). In this mode the only way to put something in the middle or at the right or bottom of the screen reliably in any resolution is to use the aligning options, whilst in relative mode you can do it just by using the right relative coordinates. This mode is very simple: the top-left of the screen is (0,0) and the bottom-right of the screen depends on the resolution. As mentioned above, you can use the aligning options to make the horizontal and vertical coordinate origins the right, bottom or center of the screen if you want to place pixel items in these locations without knowing the resolution.

Relative Mode

This mode is useful when you want items in the overlay to be the same size on the screen no matter what the resolution. In relative mode, the top-left of the screen is (0,0) and the bottom-right is (1,1). So if you place an element at (0.5, 0.5), its top-left corner is placed exactly in the center of the screen, no matter what resolution the application is running in. The same principle applies to sizes; if you set the width of an element to 0.5, it covers half the width of the screen. Note that because the aspect ratio of the screen is typically 1.3333 : 1 (width : height), an element with dimensions (0.25, 0.25) will not be square, but it will take up exactly 1/16th of the screen in area terms. If you want square-looking areas you will have to compensate using the typical aspect ratio, e.g. use (0.1875, 0.25) instead.
Transforming Overlays

Another nice feature of overlays is being able to rotate, scroll and scale them as a whole. You can use this for zooming in / out menu systems, dropping them in from off screen and other nice effects. See the Overlay::scroll, Overlay::rotate and Overlay::scale methods for more information.

Scripting overlays

Overlays can also be defined in scripts. See Section 3.4 [Overlay Scripts], page 144 for details.

GUI systems

Overlays are only really designed for non-interactive screen elements, although you can use them as a crude GUI. For a far more complete GUI solution, we recommend CEGui (http://www.cegui.org.uk), as demonstrated in the sample Demo_Gui.
3 Scripts

OGRE drives many of its features through scripts in order to make it easier to set up. The scripts are simply plain text files which can be edited in any standard text editor, and modifying them immediately takes effect on your OGRE-based applications, without any need to recompile. This makes prototyping a lot faster. Here are the items that OGRE lets you script:
• Section 3.1 [Material Scripts], page 16
• Section 3.2 [Compositor Scripts], page 106
• Section 3.3 [Particle Scripts], page 121
• Section 3.4 [Overlay Scripts], page 144
• Section 3.5 [Font Definition Scripts], page 155
3.1 Material Scripts

Material scripts offer you the ability to define complex materials in a script which can be reused easily. Whilst you could set up all materials for a scene in code using the methods of the Material and TextureLayer classes, in practice it's a bit unwieldy. Instead you can store material definitions in text files which can then be loaded whenever required.
Loading scripts

Material scripts are loaded when resource groups are initialised: OGRE looks in all resource locations associated with the group (see Root::addResourceLocation) for files with the '.material' extension and parses them. If you want to parse files manually, use MaterialSerializer::parseScript.
It’s important to realise that materials are not loaded completely by this parsing process: only the definition is loaded, no textures or other resources are loaded. This is because it is common to have a large library of materials, but only use a relatively small subset of them in any one scene. To load every material completely in every script would therefore cause unnecessary memory overhead. You can access a ’deferred load’ Material in the normal way (MaterialManager::getSingleton().getByName()), but you must call the ’load’ method before trying to use it. Ogre does this for you when using the normal material assignment methods of entities etc.
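In code, the deferred load looks something like this (a sketch; the material name is taken from the example later in this section):

    // The definition was parsed when the resource group initialised,
    // but textures are not loaded until 'load' is called.
    Ogre::MaterialPtr mat =
        Ogre::MaterialManager::getSingleton().getByName("walls/funkywall1");
    if (!mat.isNull())
        mat->load();

As noted, entities do this for you when you assign materials normally.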
Another important factor is that material names must be unique throughout ALL scripts loaded by the system, since materials are always identified by name.
Format

Several materials may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ('{', '}'), and comments indicated by starting a line with '//' (note: no nested-form comments allowed). The general format is shown in the example below (note that to start with, we only consider fixed-function materials which don't use vertex, geometry or fragment programs; these are covered later):
// This is a comment
material walls/funkywall1
{
    // first, preferred technique
    technique
    {
        // first pass
        pass
        {
            ambient 0.5 0.5 0.5
            diffuse 1.0 1.0 1.0

            // Texture unit 0
            texture_unit
            {
                texture wibbly.jpg
                scroll_anim 0.1 0.0
                wave_xform scale sine 0.0 0.7 0.0 1.0
            }
            // Texture unit 1 (this is a multitexture pass)
            texture_unit
            {
                texture wobbly.png
                rotate_anim 0.25
                colour_op add
            }
        }
    }

    // Second technique, can be used as a fallback or LOD level
    technique
    {
        // .. and so on
    }
}

Every material in the script must be given a name, which is the line 'material <name>' before the first opening '{'. This name must be globally unique. It can include path characters (as in the example) to logically divide up your materials, and also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a string. If you include spaces in the name, it must be enclosed in double quotes.
NOTE: ':' is the delimiter for specifying a material copy in the script, so it can't be used as part of the material name.
A material can inherit from a previously defined material by using a colon ':' after the material name, followed by the name of the reference material to inherit from. You can in fact even inherit just parts of a material from others; all this is covered in Section 3.1.11 [Script Inheritence], page 96. You can also use variables in your script which can be replaced in inheriting versions; see Section 3.1.13 [Script Variables], page 104.
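As a quick illustration of the colon syntax (the material names are made up; see Section 3.1.11 for the full rules):

    material BaseWall
    {
        technique
        {
            pass
            {
                ambient 0.5 0.5 0.5
                diffuse 1.0 1.0 1.0
            }
        }
    }

    // Inherits everything from BaseWall, overriding only the diffuse colour
    material RedWall : BaseWall
    {
        technique
        {
            pass
            {
                diffuse 1.0 0.0 0.0
            }
        }
    }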
A material can be made up of many techniques (see Section 3.1.1 [Techniques], page 21); a technique is one way of achieving the effect you are looking for. You can supply more than one technique in order to provide fallback approaches where a card does not have the ability to render the preferred technique, or where you wish to define lower level-of-detail versions of the material in order to conserve rendering power when objects are more distant.
Each technique can be made up of many passes (see Section 3.1.2 [Passes], page 24); that is, a complete render of the object can be performed multiple times with different settings in order to produce composite effects. Ogre may also split the passes you have defined into many passes at runtime, if you define a pass which uses too many texture units for the card you are currently running on (note that it can only do this if you are not using a fragment program). Each pass has a number of top-level attributes such as 'ambient' to set the amount & colour of the ambient light reflected by the material. Some of these options do not apply if you are using vertex programs; see Section 3.1.2 [Passes], page 24 for more details.
Within each pass, there can be zero or many texture units in use (See Section 3.1.3 [Texture Units], page 45). These define the texture to be used, and optionally some blending operations (which use multitexturing) and texture effects.
You can also reference vertex and fragment programs (or vertex and pixel shaders, if you want to use that terminology) in a pass with a given set of parameters. Programs themselves are declared in separate .program scripts (See Section 3.1.4 [Declaring Vertex/Geometry/Fragment Programs], page 63) and are used as described in Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in a Pass], page 79.
Top-level material attributes

The outermost section of a material definition does not have a lot of attributes of its own (most of the configurable parameters are within the child sections). However, it does have some, and here they are:
lod_distances (deprecated)

This option is deprecated in favour of [lod_values], page 19.
lod_strategy

Sets the name of the LOD strategy to use. Defaults to 'Distance', which means LOD changes based on distance from the camera. Also supported is 'PixelCount', which changes LOD based on an estimate of the screen-space pixels affected.

Format: lod_strategy <name>
Default: lod_strategy Distance
lod_values

This attribute defines the values used to control the LOD transition for this material. By setting this attribute, you indicate that you want this material to alter the Technique that it uses based on some metric, such as the distance from the camera, or the approximate screen space coverage. The exact meaning of these values is determined by the option you select for [lod_strategy], page 19 - it is a list of distances for the 'Distance' strategy, and a list of pixel counts for the 'PixelCount' strategy, for example. You must give it a list of values, in order from highest LOD value to lowest LOD value, each one indicating the point at which the material will switch to the next LOD. Implicitly, all materials activate LOD index 0 for values less than the first entry, so you do not have to specify '0' at the start of the list. You must ensure that there is at least one Technique with a [lod_index], page 22 value for each value in the list (so if you specify 3 values, you must have techniques for LOD indexes 0, 1, 2 and 3). Note you must always have at least one Technique at lod_index 0.
Format: lod_values <value0> <value1> ...
Default: none
Example:
lod_strategy Distance
lod_values 300.0 600.5 1200
The above example would cause the material to use the best Technique at lod_index 0 up to a distance of 300 world units, the best from lod_index 1 from 300 up to 600, lod_index 2 from 600 to 1200, and lod_index 3 from 1200 upwards.
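Sketched as a complete script fragment (a single LOD value for brevity; the material name is made up):

    material Examples/DistanceLod
    {
        lod_strategy Distance
        lod_values 600.0

        // lod_index 0: used closer than 600 world units
        technique
        {
            lod_index 0
            pass
            {
                diffuse 1.0 1.0 1.0
            }
        }
        // lod_index 1: cheaper version used from 600 units onwards
        technique
        {
            lod_index 1
            pass
            {
                lighting off
            }
        }
    }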
receive_shadows

This attribute controls whether objects using this material can have shadows cast upon them.

Format: receive_shadows <on|off>
Default: on
Whether or not an object receives a shadow is the combination of a number of factors; see Chapter 7 [Shadows], page 180 for full details. However, this allows you to make a material opt out of receiving shadows if required. Note that transparent materials never receive shadows, so this option only has an effect on solid materials.
transparency_casts_shadows

This attribute controls whether transparent materials can cast certain kinds of shadow.

Format: transparency_casts_shadows <on|off>
Default: off

Whether or not an object casts a shadow is the combination of a number of factors; see Chapter 7 [Shadows], page 180 for full details. However, this allows you to make a transparent material cast shadows when it would otherwise not. For example, when using texture shadows, transparent materials are normally not rendered into the shadow texture because they should not block light. This flag overrides that.
set_texture_alias

This attribute associates a texture alias with a texture name.

Format: set_texture_alias <alias name> <texture name>

This attribute can be used to set the textures used in texture unit states that were inherited from another material (see Section 3.1.12 [Texture Aliases], page 100).
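For example (hypothetical names; the alias itself would be declared with texture_alias in a texture unit of the parent material, see Section 3.1.12):

    material MyWall : TemplateWall
    {
        set_texture_alias DiffuseMap brick.jpg
    }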
3.1.1 Techniques

A "technique" section in your material script encapsulates a single method of rendering an object. The simplest of material definitions only contains a single technique; however, since PC hardware varies quite greatly in its capabilities, you can only do this if you are sure that every card for which you intend to target your application will support the capabilities which your technique requires. In addition, it can be useful to define simpler ways to render a material if you wish to use material LOD, such that more distant objects use a simpler, less performance-hungry technique.
When a material is used for the first time, it is 'compiled'. That involves scanning the techniques which have been defined, and marking which of them are supportable using the current rendering API and graphics card. If no techniques are supportable, your material will render as blank white. The compilation examines a number of things, such as:
• The number of texture_unit entries in each pass. Note that if the number of texture_unit entries exceeds the number of texture units in the current graphics card, the technique may still be supportable so long as a fragment program is not being used. In this case, Ogre will split the pass which has too many entries into multiple passes for the less capable card, and the multitexture blend will be turned into a multipass blend (see [colour_op_multipass_fallback], page 58).
• Whether vertex, geometry or fragment programs are used, and if so which syntax they use (e.g. vs_1_1, ps_2_x, arbfp1 etc.)
• Other effects like cube mapping and dot3 blending
• Whether the vendor or device name of the current graphics card matches some user-specified rules
In a material script, techniques must be listed in order of preference, i.e. the earlier techniques are preferred over the later techniques. This normally means you will list your most advanced, most demanding techniques first in the script, and list fallbacks afterwards.
To help clearly identify what each technique is used for, the technique can be named, but naming is optional. Techniques not named within the script will take on a name that is the technique index number; for example, the first technique in a material is index 0, so its name would be "0" if it was not given a name in the script. The technique name must be unique within the material, or else the final technique is the resulting merge of all techniques with the same name in the material. A warning message is posted in the Ogre.log if this occurs. Named techniques can help when inheriting a material and modifying an existing technique (see Section 3.1.11 [Script Inheritence], page 96).
Format: technique name
Techniques have only a small number of attributes of their own:
• [scheme], page 22
• [lod_index], page 22 (and also see [lod_distances], page 19 in the parent material)
• [shadow_caster_material], page 23
• [shadow_receiver_material], page 23
• [gpu_vendor_rule], page 23
• [gpu_device_rule], page 23
scheme

Sets the 'scheme' this Technique belongs to. Material schemes are used to control top-level switching from one set of techniques to another. For example, you might use this to define 'high', 'medium' and 'low' complexity levels on materials to allow a user to pick a performance / quality ratio. Another possibility is that you have a fully HDR-enabled pipeline for top machines, rendering all objects using unclamped shaders, and a simpler pipeline for others; this can be implemented using schemes. The active scheme is typically controlled at a viewport level, and the active one defaults to 'Default'.
Format: scheme <name>
Example: scheme hdr
Default: scheme Default
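A sketch of the HDR example mentioned above (material name and pass contents are illustrative):

    material Examples/SchemeDemo
    {
        // used when the viewport's active scheme is 'hdr'
        technique
        {
            scheme hdr
            pass
            {
                diffuse 1.0 1.0 1.0
            }
        }
        // fallback, belongs to the 'Default' scheme implicitly
        technique
        {
            pass
            {
                diffuse 0.8 0.8 0.8
            }
        }
    }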
lod_index

Sets the level-of-detail (LOD) index this Technique belongs to.

Format: lod_index <number>
NB: valid values are 0 (highest level of detail) to 65535, although you are unlikely to need that many. You should not leave gaps in the LOD indexes between Techniques.

Example: lod_index 1

All techniques must belong to a LOD index; by default they all belong to index 0, i.e. the highest LOD. Increasing indexes denote lower levels of detail. You can (and often will) assign more than one technique to the same LOD index; what this means is that OGRE will pick the best technique of the ones listed at the same LOD index. For readability, it is advised that you list your techniques in order of LOD, then in order of preference, although the latter is the only prerequisite (OGRE determines which one is 'best' by which one is listed first). You must always have at least one Technique at lod_index 0. The distance at which a LOD level is applied is determined by the lod_values attribute of the containing material; see [lod_values], page 19 for details.
Default: lod_index 0
Techniques also contain one or more passes (and there must be at least one); see Section 3.1.2 [Passes], page 24.
shadow_caster_material

When using texture-based shadows (see Section 7.2 [Texture-based Shadows], page 185) you can specify an alternate material to use when rendering the object using this material into the shadow texture. This is like a more advanced version of using shadow_caster_vertex_program; however, note that for the moment you are expected to render the shadow in one pass, i.e. only the first pass is respected.
shadow_receiver_material

When using texture-based shadows (see Section 7.2 [Texture-based Shadows], page 185) you can specify an alternate material to use when performing the receiver shadow pass. Note that this explicit 'receiver' pass is only done when you're not using [Integrated Texture Shadows], page 189 - i.e. the shadow rendering is done separately (either as a modulative pass, or a masked light pass). This is like a more advanced version of using shadow_receiver_vertex_program and shadow_receiver_fragment_program; however, note that for the moment you are expected to render the shadow in one pass, i.e. only the first pass is respected.
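For instance, a caster override sketched as a script fragment (both material names are made up):

    material Examples/TreeLeaves
    {
        technique
        {
            // rendered into the shadow texture instead of this material;
            // only its first pass is respected
            shadow_caster_material Examples/TreeLeavesCaster
            pass
            {
                diffuse 1.0 1.0 1.0
            }
        }
    }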
gpu_vendor_rule and gpu_device_rule

Although Ogre does a good job of detecting the capabilities of graphics cards and setting the supportability of techniques from that, occasionally card-specific behaviour exists which is not necessarily detectable, and you may want to ensure that your materials go down a particular path to either use or avoid that behaviour. This is what these rules are for: you can specify matching rules so that a technique will be considered supportable only on cards from a particular vendor, or which match a device name pattern, or will be considered supported only if they don't fulfil such matches. The format of the rules is as follows:
gpu_vendor_rule <include|exclude> <vendor_name>
gpu_device_rule <include|exclude> <device_pattern> [case_sensitive]

An 'include' rule means that the technique will only be supported if one of the include rules is matched (if no include rules are provided, anything will pass). An 'exclude' rule means that the technique is considered unsupported if any of the exclude rules are matched. You can provide as many rules as you like, although <vendor_name> and <device_pattern> must obviously be unique. The valid list of <vendor_name> values is currently 'nvidia', 'ati', 'intel', 's3', 'matrox' and '3dlabs'. <device_pattern> can be any string, and you can use wildcards ('*') if you need to match variants. Here's an example:
gpu_vendor_rule include nvidia
gpu_vendor_rule include intel
gpu_device_rule exclude *950*

These rules, if all included in one technique, will mean that the technique will only be considered supported on graphics cards made by NVIDIA and Intel, and so long as the device name doesn't have '950' in it.
Note that these rules can only mark a technique ’unsupported’ when it would otherwise be considered ’supported’ judging by the hardware capabilities. Even if a technique passes these rules, it is still subject to the usual hardware support tests.
3.1.2 Passes

A pass is a single render of the geometry in question; a single call to the rendering API with a certain set of rendering properties. A technique can have between one and 16 passes, although clearly the more passes you use, the more expensive the technique will be to render.
To help clearly identify what each pass is used for, the pass can be named, but naming is optional. Passes not named within the script will take on a name that is the pass index number; for example, the first pass in a technique is index 0, so its name would be "0" if it was not given a name in the script. The pass name must be unique within the technique, or else the final pass is the resulting merge of all passes with the same name in the technique. A warning message is posted in the Ogre.log if this occurs. Named passes can help when inheriting a material and modifying an existing pass (see Section 3.1.11 [Script Inheritence], page 96).
Passes have a set of global attributes (described below), zero or more nested texture unit entries (See Section 3.1.3 [Texture Units], page 45), and optionally a reference to a vertex and / or a fragment program (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in a Pass], page 79).
Here are the attributes you can use in a 'pass' section of a .material script:
• [ambient], page 25
• [diffuse], page 26
• [specular], page 26
• [emissive], page 27
• [scene_blend], page 28
• [separate_scene_blend], page 29
• [scene_blend_op], page 30
• [separate_scene_blend_op], page 30
• [depth_check], page 30
• [depth_write], page 31
• [depth_func], page 31
• [depth_bias], page 32
• [iteration_depth_bias], page 32
• [alpha_rejection], page 32
• [alpha_to_coverage], page 33
• [light_scissor], page 33
• [light_clip_planes], page 34
• [illumination_stage], page 35
• [transparent_sorting], page 35
• [normalise_normals], page 35
• [cull_hardware], page 36
• [cull_software], page 36
• [lighting], page 37
• [shading], page 37
• [polygon_mode], page 38
• [polygon_mode_overrideable], page 38
• [fog_override], page 39
• [colour_write], page 39
• [max_lights], page 40
• [start_light], page 40
• [iteration], page 41
• [point_size], page 44
• [point_sprites], page 44
• [point_size_attenuation], page 45
• [point_size_min], page 45
• [point_size_max], page 45
Attribute Descriptions

ambient

Sets the ambient colour reflectance properties of this pass. This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.

Format: ambient (<red> <green> <blue> [<alpha>] | vertexcolour)
NB: valid colour values are between 0.0 and 1.0.
Example: ambient 0.0 0.8 0.0
The base colour of a pass is determined by how much red, green and blue light it reflects at each vertex. This property determines how much ambient light (directionless global light) is reflected. It is also possible to make the ambient reflectance track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values. The default is full white, meaning objects are completely globally illuminated. Reduce this if you want to see diffuse or specular light effects, or change the blend of colours to make the object have a base colour other than white. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.
Default: ambient 1.0 1.0 1.0 1.0
diffuse

Sets the diffuse colour reflectance properties of this pass. This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.

Format: diffuse (<red> <green> <blue> [<alpha>] | vertexcolour)
NB: valid colour values are between 0.0 and 1.0.
Example: diffuse 1.0 0.5 0.5
The base colour of a pass is determined by how much red, green and blue light it reflects at each vertex. This property determines how much diffuse light (light from instances of the Light class in the scene) is reflected. It is also possible to make the diffuse reflectance track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values. The default is full white, meaning objects reflect the maximum white light they can from Light objects. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.
Default: diffuse 1.0 1.0 1.0 1.0
specular

Sets the specular colour reflectance properties of this pass. This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.

Format: specular (<red> <green> <blue> [<alpha>] | vertexcolour) <shininess>
NB: valid colour values are between 0.0 and 1.0. Shininess can be any value greater than 0.
Example: specular 1.0 1.0 1.0 12.5
The base colour of a pass is determined by how much red, green and blue light it reflects at each vertex. This property determines how much specular light (highlights from instances of the Light class in the scene) is reflected. It is also possible to make the specular reflectance track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values. The default is to reflect no specular light. The colour of the specular highlights is determined by the colour parameters, and the size of the highlights by the separate shininess parameter. The higher the value of the shininess parameter, the sharper the highlight, i.e. the radius is smaller. Beware of using shininess values in the range of 0 to 1, since this causes the specular colour to be applied to the whole surface that has the material applied to it. When the viewing angle to the surface changes, ugly flickering will also occur when shininess is in the range of 0 to 1. Shininess values between 1 and 128 work best in both DirectX and OpenGL renderers. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.
Default: specular 0.0 0.0 0.0 0.0 0.0
emissive

Sets the amount of self-illumination an object has. This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.

Format: emissive (<red> <green> <blue> [<alpha>] | vertexcolour)
NB: valid colour values are between 0.0 and 1.0.
Example: emissive 1.0 0.0 0.0
If an object is self-illuminating, it does not need external sources to light it, ambient or otherwise. It's like the object has its own personal ambient light. Despite what the name suggests, the object doesn't act as a light source for other objects in the scene (if you want it to, you have to create a light which is centered on the object). It is also possible to make the emissive colour track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.
Default: emissive 0.0 0.0 0.0 0.0
scene_blend

Sets the kind of blending this pass has with the existing contents of the scene. Whereas the texture blending operations seen in the texture_unit entries are concerned with blending between texture layers, this blending is about combining the output of this pass as a whole with the existing contents of the rendering target. This blending therefore allows object transparency and other special effects. There are 2 formats, one using predefined blend types, the other allowing a roll-your-own approach using source and destination factors.
Format1: scene_blend <blend_type>

Example: scene_blend add

This is the simpler form, where the most commonly used blending modes are enumerated using a single parameter. Valid parameters are:

add
    The colour of the rendering output is added to the scene. Good for explosions, flares, lights, ghosts etc. Equivalent to 'scene_blend one one'.

modulate
    The colour of the rendering output is multiplied with the scene contents. Generally colours and darkens the scene; good for smoked glass, semi-transparent objects etc. Equivalent to 'scene_blend dest_colour zero'.

colour_blend
    Colour the scene based on the brightness of the input colours, but don't darken. Equivalent to 'scene_blend src_colour one_minus_src_colour'.

alpha_blend
    The alpha value of the rendering output is used as a mask. Equivalent to 'scene_blend src_alpha one_minus_src_alpha'.
Format2: scene_blend <src_factor> <dest_factor>

Example: scene_blend one one_minus_dest_alpha

This version of the method allows complete control over the blending operation, by specifying the source and destination blending factors. The resulting colour which is written to the rendering target is (texture * sourceFactor) + (scene_pixel * destFactor). Valid values for both parameters are:

one
    Constant value of 1.0
zero
    Constant value of 0.0
dest_colour
    The existing pixel colour
src_colour
    The texture pixel (texel) colour
one_minus_dest_colour
    1 - (dest_colour)
one_minus_src_colour
    1 - (src_colour)
dest_alpha
    The existing pixel alpha value
src_alpha
    The texel alpha value
one_minus_dest_alpha
    1 - (dest_alpha)
one_minus_src_alpha
    1 - (src_alpha)

Default: scene_blend one zero (opaque)

Also see [separate_scene_blend], page 29.
separate scene blend This option operates in exactly the same way as [scene blend], page 28, except that it allows you to specify the operations to perform between the rendered pixel and the frame buffer separately for colour and alpha components. By nature this option is only useful when rendering to targets which have an alpha channel which you’ll use for later processing, such as a render texture.
Format1: separate_scene_blend <colour_blend_type> <alpha_blend_type>
Example: separate_scene_blend add modulate
This example would add colour components but multiply alpha components. The blend modes available are as in [scene blend], page 28. The more advanced form is also available:
Format2: separate_scene_blend <colour_src_factor> <colour_dest_factor> <alpha_src_factor> <alpha_dest_factor>
Example: separate_scene_blend one one_minus_dest_alpha one one
Again the options available in the second format are the same as those in the second format of [scene blend], page 28.
scene blend op This directive changes the operation which is applied between the two components of the scene blending equation, which by default is 'add' (sourceFactor * source + destFactor * dest). You may change this to 'add', 'subtract', 'reverse_subtract', 'min' or 'max'.
Format: scene_blend_op <add|subtract|reverse_subtract|min|max>
Default: scene_blend_op add
separate scene blend op This directive is as scene_blend_op, except that you can set the operation for colour and alpha separately.
Format: separate_scene_blend_op <colourOp> <alphaOp>
Default: separate_scene_blend_op add add
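As an illustration of how the factor and operation directives combine, here is a sketch of a pass which blends colour as a standard alpha blend but accumulates alpha with a different operation (the factor choices are just an example, not a recommendation):

    pass
    {
        separate_scene_blend src_alpha one_minus_src_alpha one one
        separate_scene_blend_op add max
    }

With this setup the colour result is a normal alpha blend, while the alpha channel keeps the maximum of the source and destination alpha values.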
depth check Sets whether or not this pass renders with depth-buffer checking on or not.
Format: depth_check <on|off>
If depth-buffer checking is on, whenever a pixel is about to be written to the frame buffer the depth buffer is checked to see if the pixel is in front of all other pixels written at that point. If not, the pixel is not written. If depth checking is off, pixels are written no matter what has been rendered before. Also see depth func for more advanced depth check configuration.
Default: depth check on
depth write Sets whether or not this pass renders with depth-buffer writing on or not.
Format: depth_write <on|off>
If depth-buffer writing is on, whenever a pixel is written to the frame buffer the depth buffer is updated with the depth value of that new pixel, thus affecting future rendering operations if future pixels are behind this one. If depth writing is off, pixels are written without updating the depth buffer. Depth writing should normally be on but can be turned off when rendering static backgrounds or when rendering a collection of transparent objects at the end of a scene so that they overlap each other correctly.
Default: depth write on
depth func Sets the function used to compare depth values when depth checking is on.
Format: depth_func <func>
If depth checking is enabled (see depth check) a comparison occurs between the depth value of the pixel to be written and the current contents of the buffer. This comparison is normally less_equal, i.e. the pixel is written if it is closer (or at the same distance) than the current contents. The possible functions are:

always_fail
    Never writes a pixel to the render target
always_pass
    Always writes a pixel to the render target
less
    Write if (new_Z < existing_Z)
less_equal
    Write if (new_Z <= existing_Z)
equal
    Write if (new_Z == existing_Z)
not_equal
    Write if (new_Z != existing_Z)
greater_equal
    Write if (new_Z >= existing_Z)
greater
    Write if (new_Z > existing_Z)

Default: depth_func less_equal
depth bias Sets the bias applied to the depth value of this pass. Can be used to make coplanar polygons appear on top of others e.g. for decals.
Format: depth_bias <constant_bias> [<slopescale_bias>]
The final depth bias value is constant_bias * minObservableDepth + maxSlope * slopescale_bias. Slope scale biasing is relative to the angle of the polygon to the camera, which makes for a more appropriate bias value, but this is ignored on some older hardware. Constant biasing is expressed as a factor of the minimum observable depth value, so a value of 1 will nudge the depth by one 'notch' if you will. Also see [iteration depth bias], page 32
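A common use of depth bias is decals rendered over coplanar geometry; a minimal sketch (the bias values and names are illustrative starting points, not canonical settings):

    material Examples/BulletHoleDecal
    {
        technique
        {
            pass
            {
                // push the decal slightly towards the camera
                depth_bias 1 1
                scene_blend alpha_blend
                depth_write off

                texture_unit
                {
                    texture bullet_hole.png
                }
            }
        }
    }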
iteration depth bias Sets an additional bias derived from the number of times a given pass has been iterated. Operates just like [depth bias], page 32 except that it applies an additional bias factor to the base depth bias value, multiplying the provided value by the number of times this pass has been iterated before, through one of the [iteration], page 41 variants. So the first time the pass will get the depth bias value, the second time it will get depth bias + iteration depth bias, the third time it will get depth bias + iteration depth bias * 2, and so on. The default is zero.
Format: iteration_depth_bias <bias_per_iteration>
alpha rejection Sets the way the pass will use alpha to totally reject pixels from the pipeline.
Format: alpha_rejection <function> <value>
Example: alpha_rejection greater_equal 128
The function parameter can be any of the options listed in the depth_func attribute. The value parameter can theoretically be any value between 0 and 255, but is best limited to 0 or 128 for hardware compatibility.
Default: alpha_rejection always_pass
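A classic use of alpha rejection is foliage, where the alpha channel of the texture masks out the leaf shape; here is a sketch (the material and texture names are invented):

    material Examples/Foliage
    {
        technique
        {
            pass
            {
                alpha_rejection greater_equal 128
                // render both sides of each leaf quad
                cull_hardware none

                texture_unit
                {
                    texture leaves.png
                }
            }
        }
    }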
alpha to coverage Sets whether this pass will use ’alpha to coverage’, a way to multisample alpha texture edges so they blend more seamlessly with the background. This facility is typically only available on cards from around 2006 onwards, but it is safe to enable it anyway - Ogre will just ignore it if the hardware does not support it. The common use for alpha to coverage is foliage rendering and chain-link fence style textures.
Format: alpha_to_coverage <on|off>
Default: alpha_to_coverage off
light scissor Sets whether when rendering this pass, rendering will be limited to a screen-space scissor rectangle representing the coverage of the light(s) being used in this pass, derived from their attenuation ranges.
Format: light_scissor <on|off>
Default: light_scissor off
This option is usually only useful if this pass is an additive lighting pass, and is at least the second one in the technique, since areas which are not affected by the current light(s) will never need to be rendered. If there is more than one light being passed to the pass, then the scissor is defined to be the rectangle which covers all lights in screen-space. Directional lights are ignored since they are infinite.
This option does not need to be specified if you are using a standard additive shadow mode, i.e. SHADOWTYPE_STENCIL_ADDITIVE or SHADOWTYPE_TEXTURE_ADDITIVE, since it is the default behaviour to use a scissor for each additive shadow pass. However, if you're not using shadows, or you're using [Integrated Texture Shadows], page 189 where passes are specified in a custom manner, then this could be of use to you.
light clip planes Sets whether when rendering this pass, triangle setup will be limited to the clipping volume covered by the light. Directional lights are ignored, point lights clip to a cube the size of the attenuation range of the light, and spotlights clip to a pyramid bounding the spotlight angle and attenuation range.
Format: light_clip_planes <on|off>
Default: light_clip_planes off
This option will only function if there is a single non-directional light being used in this pass. If there is more than one light, or only directional lights, then no clipping will occur. If there are no lights at all then the objects won’t be rendered at all.
When using a standard additive shadow mode, i.e. SHADOWTYPE_STENCIL_ADDITIVE or SHADOWTYPE_TEXTURE_ADDITIVE, you have the option of enabling clipping for all light passes by calling SceneManager::setShadowUseLightClipPlanes regardless of this pass setting, since rendering is done light-wise anyway. This is off by default since using clip planes is not always faster - it depends on how much of the scene the light volumes cover. Generally the smaller your lights are, the more chance you'll see a benefit rather than a penalty from clipping. If you're not using shadows, or you're using [Integrated Texture Shadows], page 189 where passes are specified in a custom manner, then specify the option per-pass using this attribute. A specific note about OpenGL: user clip planes are completely ignored when you use an ARB vertex program. This means light clip planes won't help much if you use ARB vertex programs on GL, although OGRE will perform some optimisation of its own, in that if it sees that the clip volume is completely off-screen, it won't perform a render at all. When using GLSL, user clipping can be used but you have to use glClipVertex in your
shader, see the GLSL documentation for more information. In Direct3D user clip planes are always respected.
illumination stage When using an additive lighting mode (SHADOWTYPE_STENCIL_ADDITIVE or SHADOWTYPE_TEXTURE_ADDITIVE), the scene is rendered in 3 discrete stages: ambient (or pre-lighting), per-light (once per light, with shadowing) and decal (or post-lighting). Usually OGRE figures out how to categorise your passes automatically, but there are some effects you cannot achieve without manually controlling the illumination. For example specular effects are muted by the typical sequence because all textures are saved until the 'decal' stage which mutes the specular effect. Instead, you could do texturing within the per-light stage if it's possible for your material and thus add the specular on after the decal texturing, and have no post-light rendering.
If you assign an illumination stage to a pass you have to assign it to all passes in the technique, otherwise it will be ignored. Also note that whilst you can have more than one pass in each group, they cannot alternate, i.e. all ambient passes will be before all per-light passes, which will also be before all decal passes. Within their categories the passes will retain their ordering though.
Format: illumination_stage <ambient|per_light|decal>
Default: none (autodetect)
normalise normals Sets whether or not this pass renders with all vertex normals being automatically renormalised.
Format: normalise_normals <on|off>
Scaling objects causes normals to also change magnitude, which can throw off your lighting calculations. By default, the SceneManager detects this and will automatically renormalise normals for any scaled object, but this has a cost. If you’d prefer to control this manually, call SceneManager::setNormaliseNormalsOnScale(false) and then use this option on materials which are sensitive to normals being resized.
Default: normalise normals off
transparent sorting Sets if transparent textures should be sorted by depth or not.
Format: transparent_sorting <on|off|force>
By default all transparent materials are sorted such that renderables furthest away from the camera are rendered first. This is usually the desired behaviour, but in certain cases this depth sorting may be unnecessary and undesirable - for example, if you need to ensure that the rendering order does not change from one frame to the next. In this case you can set the value to 'off' to prevent sorting.
You can also use the keyword ’force’ to force transparent sorting on, regardless of other circumstances. Usually sorting is only used when the pass is also transparent, and has a depth write or read which indicates it cannot reliably render without sorting. By using ’force’, you tell OGRE to sort this pass no matter what other circumstances are present.
Default: transparent sorting on
cull hardware Sets the hardware culling mode for this pass.
Format: cull_hardware <clockwise|anticlockwise|none>
A typical way for the hardware rendering engine to cull triangles is based on the 'vertex winding' of triangles. Vertex winding refers to the direction in which the vertices are passed or indexed to in the rendering operation as viewed from the camera, and will either be clockwise or anticlockwise (that's 'counterclockwise' for you Americans out there ;). If the option 'cull_hardware clockwise' is set, all triangles whose vertices are viewed in clockwise order from the camera will be culled by the hardware. 'anticlockwise' is the reverse (obviously), and 'none' turns off hardware culling so all triangles are rendered (useful for creating 2-sided passes).
Default: cull_hardware clockwise
NB this is the same as OpenGL's default but the opposite of Direct3D's default (because Ogre uses a right-handed coordinate system like OpenGL).
cull software Sets the software culling mode for this pass.
Format: cull_software <back|front|none>
In some situations the engine will also cull geometry in software before sending it to the hardware renderer. This setting only takes effect on SceneManagers that use it (it is best used on large groups of planar world geometry rather than on movable geometry, where it would be expensive), but if used it can cull geometry before it is sent to the hardware. In this case the culling is based on whether the 'back' or 'front' of the triangle is facing the camera - this definition is based on the face normal (a vector which sticks out of the front side of the polygon perpendicular to the face). Since Ogre expects face normals to be on the anticlockwise side of the face, 'cull_software back' is the software equivalent of the 'cull_hardware clockwise' setting, which is why they are both the default. The naming is different to reflect the way the culling is done though, since most of the time face normals are pre-calculated and they don't have to be the way Ogre expects - you could set 'cull_hardware none' and completely cull in software based on your own face normals, if you have the right SceneManager which uses them.
Default: cull software back
lighting Sets whether or not dynamic lighting is used for this pass. If lighting is turned off, all objects rendered using the pass will be fully lit. This attribute has no effect if a vertex program is used.
Format: lighting <on|off>
Turning dynamic lighting off makes any ambient, diffuse, specular, emissive and shading properties for this pass redundant. When lighting is turned on, objects are lit according to their vertex normals for diffuse and specular light, and globally for ambient and emissive.
Default: lighting on
shading Sets the kind of shading which should be used for representing dynamic lighting for this pass.
Format: shading <flat|gouraud|phong>
When dynamic lighting is turned on, the effect is to generate colour values at each vertex. Whether these values are interpolated across the face (and how) depends on this setting.
flat
No interpolation takes place. Each face is shaded with a single colour determined from the first vertex in the face.
gouraud
Colour at each vertex is linearly interpolated across the face.
phong
Vertex normals are interpolated across the face, and these are used to determine colour at each pixel. Gives a more natural lighting effect but is more expensive and works better at high levels of tessellation. Not supported on all hardware.
Default: shading gouraud
polygon mode Sets how polygons should be rasterised, i.e. whether they should be filled in, or just drawn as lines or points.
Format: polygon_mode <solid|wireframe|points>
solid
    The normal situation - polygons are filled in.
wireframe
    Polygons are drawn in outline only.
points
    Only the points of each polygon are rendered.

Default: polygon_mode solid
polygon mode overrideable Sets whether or not the [polygon mode], page 38 set on this pass can be downgraded by the camera, if the camera itself is set to a lower polygon mode. If set to false, this pass will always be rendered at its own chosen polygon mode no matter what the camera says. The default is true.
Format: polygon_mode_overrideable <true|false>
fog override Tells the pass whether it should override the scene fog settings and enforce its own. Very useful for things that you don't want to be affected by fog when the rest of the scene is fogged, or vice versa. Note that this only affects fixed-function fog - the original scene fog parameters are still sent to shaders which use the fog_params parameter binding (this allows you to turn off fixed-function fog and calculate it in the shader instead; if you want to disable shader fog you can do that through shader parameters anyway).
Format: fog_override <override?> [<type> <colour> <density> <start> <end>]
Default: fog_override false
If you specify 'true' for the first parameter and you supply the rest of the parameters, you are telling the pass to use these fog settings in preference to the scene settings, whatever they might be. If you specify 'true' but provide no further parameters, you are telling this pass to never use fogging no matter what the scene says. Here is an explanation of the parameters:

type
    none = No fog, equivalent of just using 'fog_override true'
    linear = Linear fog from the <start> and <end> distances
    exp = Fog increases exponentially from the camera (fog = 1/e^(distance * density)), use <density> to control it
    exp2 = Fog increases at the square of FOG_EXP, i.e. even quicker (fog = 1/e^(distance * density)^2), use <density> to control it
colour
    Sequence of 3 floating point values from 0 to 1 indicating the red, green and blue intensities
density
    The density parameter used in the 'exp' or 'exp2' fog types. Not used in linear mode but must still be there as a placeholder
start
    The start distance from the camera of linear fog. Must still be present in other modes, even though it is not used.
end
    The end distance from the camera of linear fog. Must still be present in other modes, even though it is not used.

Example: fog_override true exp 1 1 1 0.002 100 10000
colour write Sets whether or not this pass renders with colour writing on or not.
Format: colour_write <on|off>
If colour writing is off no visible pixels are written to the screen during this pass. You might think this is useless, but if you render with colour writing off, and with very minimal other settings, you can use this pass to initialise the depth buffer before subsequently rendering other passes which fill in the colour data. This can give you significant performance boosts on some newer cards, especially when using complex fragment programs, because if the depth check fails then the fragment program is never run.
Default: colour write on
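The depth-priming idea described above can be sketched as follows (the second pass's contents are purely illustrative):

    material Examples/DepthPrimed
    {
        technique
        {
            // pass 1: fill the depth buffer only, no colour output
            pass
            {
                colour_write off
            }

            // pass 2: expensive shading, only where the depth test passes
            pass
            {
                depth_func equal
                depth_write off

                texture_unit
                {
                    texture surface.png
                }
            }
        }
    }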
start light Sets the first light which will be considered for use with this pass.
Format: start_light <number>
You can use this attribute to offset the starting point of the lights for this pass. In other words, if you set start light to 2 then the first light to be processed in that pass will be the third actual light in the applicable list. You could use this option to use different passes to process the first couple of lights versus the second couple of lights for example, or use it in conjunction with the [iteration], page 41 option to start the iteration from a given point in the list (e.g. doing the first 2 lights in the first pass, and then iterating every 2 lights from then on perhaps).
Default: start light 0
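For example, the following sketch shades the first two lights in one pass and then iterates over the remaining lights two at a time (shader references and texture units are omitted for brevity):

    pass FirstLights
    {
        max_lights 2
        // ... programs and texture units ...
    }

    pass RemainingLights
    {
        start_light 2
        iteration 1 per_n_lights 2
        // ... programs and texture units ...
    }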
max lights Sets the maximum number of lights which will be considered for use with this pass.
Format: max_lights <number>
The maximum number of lights which can be used when rendering fixed-function materials is set by the rendering system, and is typically set at 8. When you are using the programmable pipeline (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in a Pass], page 79) this limit is dependent on the program you are running, or, if you use
'iteration once_per_light' or a variant (See [iteration], page 41), it is effectively bounded only by the number of passes you are willing to use. If you are not using pass iteration, the light limit applies once for this pass. If you are using pass iteration, the light limit applies across all iterations of this pass - for example if you have 12 lights in range with an 'iteration once_per_light' setup but your max_lights is set to 4 for that pass, the pass will only iterate 4 times.
Default: max lights 8
iteration Sets whether or not this pass is iterated, i.e. issued more than once.
Format 1: iteration <once | once_per_light> [lightType]
Format 2: iteration <number> [<per_light> [lightType]]
Format 3: iteration <number> [<per_n_lights> <num_lights> [lightType]]

Examples:

iteration once
    The pass is only executed once, which is the default behaviour.
iteration once_per_light point
    The pass is executed once for each point light.
iteration 5
    The render state for the pass will be set up and then the draw call will execute 5 times.
iteration 5 per_light point
    The render state for the pass will be set up and then the draw call will execute 5 times. This will be done for each point light.
iteration 1 per_n_lights 2 point
    The render state for the pass will be set up and the draw call executed once for every 2 lights.
By default, passes are only issued once. However, if you use the programmable pipeline, or you wish to exceed the normal limits on the number of lights which are supported, you might want to use the once per light option. In this case, only light index 0 is ever used, and the pass is issued multiple times, each time with a different light in light index 0. Clearly this will make the pass more expensive, but it may be the only way to achieve certain effects such as per-pixel lighting effects which take into account 1..n lights.
Using a number instead of "once" instructs the pass to iterate more than once after the render state is setup. The render state is not changed after the initial setup so repeated draw calls are very fast and ideal for passes using programmable shaders that must iterate more than once with the same render state i.e. shaders that do fur, motion blur, special filtering.
If you use once_per_light, you should also add an ambient pass to the technique before this pass, otherwise when no lights are in range of this object it will not get rendered at all; this is important even when you have no ambient light in the scene, because you would still want the object's silhouette to appear.
The lightType parameter to the attribute only applies if you use once_per_light, per_light, or per_n_lights, and restricts the pass to being run for lights of a single type (either 'point', 'directional' or 'spot'). In the example, the pass will be run once per point light. This can be useful because when you're writing a vertex / fragment program it is a lot easier if you can assume the kind of lights you'll be dealing with. However at least point and directional lights can be dealt with in one way.
Default: iteration once
Example: Simple Fur shader material script that uses a second pass with 10 iterations to grow the fur:

// GLSL simple Fur
vertex_program GLSLDemo/FurVS glsl
{
  source fur.vert
  default_params
  {
    param_named_auto lightPosition light_position_object_space 0
    param_named_auto eyePosition camera_position_object_space
    param_named_auto passNumber pass_number
    param_named_auto multiPassNumber pass_iteration_number
    param_named furLength float 0.15
  }
}

fragment_program GLSLDemo/FurFS glsl
{
  source fur.frag
  default_params
  {
    param_named Ka float 0.2
    param_named Kd float 0.5
    param_named Ks float 0.0
    param_named furTU int 0
  }
}

material Fur
{
  technique GLSL
  {
    pass base_coat
    {
      ambient 0.7 0.7 0.7
      diffuse 0.5 0.8 0.5
      specular 1.0 1.0 1.0 1.5

      vertex_program_ref GLSLDemo/FurVS
      {
      }

      fragment_program_ref GLSLDemo/FurFS
      {
      }

      texture_unit
      {
        texture Fur.tga
        tex_coord_set 0
        filtering trilinear
      }
    }

    pass grow_fur
    {
      ambient 0.7 0.7 0.7
      diffuse 0.8 1.0 0.8
      specular 1.0 1.0 1.0 64
      depth_write off
      scene_blend src_alpha one
      iteration 10

      vertex_program_ref GLSLDemo/FurVS
      {
      }

      fragment_program_ref GLSLDemo/FurFS
      {
      }

      texture_unit
      {
        texture Fur.tga
        tex_coord_set 0
        filtering trilinear
      }
    }
  }
}

Note: use gpu program auto parameters [pass number], page 91 and [pass iteration number], page 92 to tell the vertex, geometry or fragment program the pass number and iteration number.
point size This setting allows you to change the size of points when rendering a point list, or a list of point sprites. The interpretation of this command depends on the [point size attenuation], page 45 option - if it is off (the default), the point size is in screen pixels; if it is on, it is expressed as normalised screen coordinates (1.0 is the height of the screen) when the point is at the origin.
NOTE: Some drivers have an upper limit on the size of points they support - this can even vary between APIs on the same card! Don’t rely on point sizes that cause the points to get very large on screen, since they may get clamped on some cards. Upper sizes can range from 64 to 256 pixels.
Format: point_size <size>
Default: point_size 1.0
point sprites This setting specifies whether or not hardware point sprite rendering is enabled for this pass. Enabling it means that a point list is rendered as a list of quads rather than a list of dots. It is very useful to use this option if you’re using a BillboardSet and only need to use point oriented billboards which are all of the same size. You can also use it for any other point list render.
Format: point_sprites <on|off>
Default: point_sprites off
point size attenuation Defines whether point size is attenuated with view space distance, and in what fashion. This option is especially useful when you’re using point sprites (See [point sprites], page 44) since it defines how they reduce in size as they get further away from the camera. You can also disable this option to make point sprites a constant screen size (like points), or enable it for points so they change size with distance.
You only have to provide the final 3 parameters if you turn attenuation on. The formula for attenuation is that the size of the point is multiplied by 1 / (constant + linear * dist + quadratic * d^2); therefore turning it off is equivalent to (constant = 1, linear = 0, quadratic = 0) and standard perspective attenuation is (constant = 0, linear = 1, quadratic = 0). The latter is assumed if you leave out the final 3 parameters when you specify ’on’.
Note that the resulting attenuated size is clamped to the minimum and maximum point size, see the next section.
Format: point_size_attenuation <on|off> [constant linear quadratic]
Default: point_size_attenuation off
point size min Sets the minimum point size after attenuation ([point size attenuation], page 45). For details on the size metrics, See [point size], page 44.
Format: point_size_min <size>
Default: point_size_min 0
point size max Sets the maximum point size after attenuation ([point size attenuation], page 45). For details on the size metrics, See [point size], page 44. A value of 0 means the maximum is set to the same as the max size reported by the current card.
Format: point_size_max <size>
Default: point_size_max 0
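Bringing the point rendering attributes together, a particle-style pass might be sketched like this (the material and texture names are invented):

    material Examples/Sparks
    {
        technique
        {
            pass
            {
                point_sprites on
                point_size 2
                point_size_attenuation on
                point_size_min 1
                point_size_max 64
                scene_blend add
                depth_write off

                texture_unit
                {
                    texture spark.png
                }
            }
        }
    }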
3.1.3 Texture Units Here are the attributes you can use in a ’texture unit’ section of a .material script:
Available Texture Layer Attributes
• [texture alias], page 46
• [texture], page 47
• [anim texture], page 50
• [cubic texture], page 50
• [tex coord set], page 52
• [tex address mode], page 53
• [tex border colour], page 53
• [filtering], page 54
• [max anisotropy], page 55
• [mipmap bias], page 55
• [colour op], page 55
• [colour op ex], page 56
• [colour op multipass fallback], page 58
• [alpha op ex], page 59
• [env map], page 59
• [scroll], page 60
• [scroll anim], page 60
• [rotate], page 60
• [rotate anim], page 61
• [scale], page 61
• [wave xform], page 61
• [transform], page 62
• [binding type], page 51
• [content type], page 51
You can also use a nested ’texture source’ section in order to use a special add-in as a source of texture data, See Chapter 6 [External Texture Sources], page 176 for details.
Attribute Descriptions texture alias Sets the alias name for this texture unit.
Format: texture_alias <name>
Example: texture_alias NormalMap
Setting the texture alias name is useful if this material is to be inherited by other materials and only the textures will be changed in the new material. (See Section 3.1.12 [Texture Aliases], page 100)
Default: If a texture unit has a name then the texture alias defaults to the texture unit name.
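For instance, a base material can expose an alias which derived materials override with set_texture_alias (the material and texture names here are invented):

    material Base/NormalMapped
    {
        technique
        {
            pass
            {
                texture_unit
                {
                    texture_alias NormalMap
                    texture default_normal.png
                }
            }
        }
    }

    material Rock : Base/NormalMapped
    {
        set_texture_alias NormalMap rock_normal.png
    }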
texture Sets the name of the static texture image this layer will use.
Format: texture <texturename> [<type>] [unlimited | <numMipMaps>] [alpha] [<PixelFormat>] [gamma]
Example: texture funkywall.jpg
This setting is mutually exclusive with the anim texture attribute. Note that the texture file cannot include spaces. Those of you Windows users who like spaces in filenames, please get over it and use underscores instead. The 'type' parameter allows you to specify the type of texture to create - the default is '2d', but you can override this; here's the full list:

1d
A 1-dimensional texture; that is, a texture which is only 1 pixel high. These kinds of textures can be useful when you need to encode a function in a texture and use it as a simple lookup, perhaps in a fragment program. It is important that you use this setting when you use a fragment program which uses 1-dimensional texture coordinates, since GL requires you to use a texture type that matches (D3D will let you get away with it, but you ought to plan for cross-compatibility). Your texture widths should still be a power of 2 for best compatibility and performance.
2d
The default type which is assumed if you omit it. Your texture has a width and a height, both of which should preferably be powers of 2, and if you can, make them square because this will look best on most hardware. These can be addressed with 2D texture coordinates.
3d
A 3-dimensional texture, i.e. a volume texture. Your texture has a width and a height, both of which should be powers of 2, and a depth. These can be addressed with 3D texture coordinates, i.e. through a pixel shader.
cubic
This texture is made up of 6 2D textures which are pasted around the inside of a cube. Can be addressed with 3D texture coordinates and are useful for cubic reflection maps and normal maps.
The ’numMipMaps’ option allows you to specify the number of mipmaps to generate for this texture. The default is ’unlimited’ which means mips down to 1x1 size are generated.
You can specify a fixed number (even 0) if you like instead. Note that if you use the same texture in many material scripts, the number of mipmaps generated will conform to the number specified in the first texture unit used to load the texture - so be consistent with your usage.
The ’alpha’ option allows you to specify that a single channel (luminance) texture should be loaded as alpha, rather than the default which is to load it into the red channel. This can be helpful if you want to use alpha-only textures in the fixed function pipeline. Default: none
The <PixelFormat> option allows you to specify the desired pixel format of the texture to create, which may be different to the pixel format of the texture file being loaded. Bear in mind that the final pixel format will be constrained by hardware capabilities so you may not get exactly what you ask for. The available options are:

PF_L8
    8-bit pixel format, all bits luminance.
PF_L16
    16-bit pixel format, all bits luminance.
PF_A8
    8-bit pixel format, all bits alpha.
PF_A4L4
    8-bit pixel format, 4 bits alpha, 4 bits luminance.
PF_BYTE_LA
    2 byte pixel format, 1 byte luminance, 1 byte alpha
PF_R5G6B5
    16-bit pixel format, 5 bits red, 6 bits green, 5 bits blue.
PF_B5G6R5
    16-bit pixel format, 5 bits blue, 6 bits green, 5 bits red.
PF_R3G3B2
    8-bit pixel format, 3 bits red, 3 bits green, 2 bits blue.
PF_A4R4G4B4
    16-bit pixel format, 4 bits for alpha, red, green and blue.
PF_A1R5G5B5
    16-bit pixel format, 1 bit for alpha, 5 bits for red, green and blue.
PF_R8G8B8
    24-bit pixel format, 8 bits for red, green and blue.
PF_B8G8R8
    24-bit pixel format, 8 bits for blue, green and red.
PF_A8R8G8B8
    32-bit pixel format, 8 bits for alpha, red, green and blue.
PF_A8B8G8R8
    32-bit pixel format, 8 bits for alpha, blue, green and red.
PF_B8G8R8A8
32-bit pixel format, 8 bits for blue, green, red and alpha.

PF_R8G8B8A8
32-bit pixel format, 8 bits for red, green, blue and alpha.

PF_X8R8G8B8
32-bit pixel format, 8 bits each for red, green and blue, like PF_A8R8G8B8, but the alpha will be discarded.

PF_X8B8G8R8
32-bit pixel format, 8 bits each for blue, green and red, like PF_A8B8G8R8, but the alpha will be discarded.

PF_A2R10G10B10
32-bit pixel format, 2 bits for alpha, 10 bits for red, green and blue.

PF_A2B10G10R10
32-bit pixel format, 2 bits for alpha, 10 bits for blue, green and red.

PF_FLOAT16_R
16-bit pixel format, 16 bits (float) for red.

PF_FLOAT16_RGB
48-bit pixel format, 16 bits (float) each for red, green and blue.

PF_FLOAT16_RGBA
64-bit pixel format, 16 bits (float) each for red, green, blue and alpha.

PF_FLOAT32_R
32-bit pixel format, 32 bits (float) for red.

PF_FLOAT32_RGB
96-bit pixel format, 32 bits (float) each for red, green and blue.

PF_FLOAT32_RGBA
128-bit pixel format, 32 bits (float) each for red, green, blue and alpha.

PF_SHORT_RGBA
64-bit pixel format, 16 bits each for red, green, blue and alpha.

The ’gamma’ option informs the renderer that you want the graphics hardware to perform gamma correction on the texture values as they are sampled for rendering. This is only applicable for textures which have 8-bit colour channels (e.g. PF_R8G8B8). Often, 8-bit-per-channel textures will be stored in gamma space in order to increase the precision of the darker colours (http://en.wikipedia.org/wiki/Gamma_correction), but this can throw out blending and filtering calculations since they assume linear-space colour values.
For the best quality shading, you may want to enable gamma correction so that the hardware converts the texture values to linear space for you automatically when sampling the texture, then the calculations in the pipeline can be done in a reliable linear colour space.
Chapter 3: Scripts
50
When rendering to a final 8-bit per channel display, you’ll also want to convert back to gamma space which can be done in your shader (by raising to the power 1/2.2) or you can enable gamma correction on the texture being rendered to or the render window. Note that the ’gamma’ option on textures is applied on loading the texture so must be specified consistently if you use this texture in multiple places.
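Putting the texture options above together, a texture unit that loads a gamma-space diffuse map with a fixed mipmap count and hardware gamma correction might look like this (the material and texture names are illustrative only):

```
material Examples/GammaCorrected   // hypothetical material name
{
    technique
    {
        pass
        {
            texture_unit
            {
                // 2d type, 5 mipmaps, 'gamma' so sampling happens in linear space
                texture diffuse_srgb.png 2d 5 gamma
            }
        }
    }
}
```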
anim_texture Sets the images to be used in an animated texture layer. In this case an animated texture layer means one which has multiple frames, each of which is a separate image file. There are 2 formats: one for implicitly determined image names, one for explicitly named images.

Format1 (short): anim_texture <base_name> <num_frames> <duration>

Example: anim_texture flame.jpg 5 2.5

This sets up an animated texture layer made up of 5 frames named flame_0.jpg, flame_1.jpg, flame_2.jpg etc., with an animation length of 2.5 seconds (2 fps). If the duration is set to 0, then no automatic transition takes place and frames must be changed manually in code.

Format2 (long): anim_texture <frame1> <frame2> ... <duration>

Example: anim_texture flamestart.jpg flamemore.png flameagain.jpg moreflame.jpg lastflame.tga 2.5

This sets up the same duration of animation but from 5 separately named image files. The first format is more concise, but the second is provided in case you cannot make your images conform to the naming standard required for it.

Default: none
cubic_texture Sets the images used in a cubic texture, i.e. one made up of 6 individual images making up the faces of a cube. These kinds of textures are used for reflection maps (if hardware supports cubic reflection maps) or skyboxes. There are 2 formats: a brief format expecting image names of a particular format, and a more flexible but longer format for arbitrarily named textures.
Format1 (short): cubic_texture <base_name> <combinedUVW|separateUV>
The base name in this format is something like ’skybox.jpg’, and the system will expect you to provide skybox_fr.jpg, skybox_bk.jpg, skybox_up.jpg, skybox_dn.jpg, skybox_lf.jpg, and skybox_rt.jpg for the individual faces.
Format2 (long): cubic_texture <front> <back> <left> <right> <up> <down> separateUV
In this case each face is specified explicitly, in case you don’t want to conform to the image naming standards above. You can only use this for the separateUV version, since the combinedUVW version requires a single texture name to be assigned to the combined 3D texture (see below).
In both cases the final parameter means the following:

combinedUVW
The 6 textures are combined into a single ’cubic’ texture map which is then addressed using 3D texture coordinates with U, V and W components. Necessary for reflection maps since you never know which face of the box you are going to need. Note that not all cards support cubic environment mapping.

separateUV
The 6 textures are kept separate but are all referenced by this single texture layer. One texture at a time is active (they are actually stored as 6 frames), and they are addressed using standard 2D UV coordinates. This type is good for skyboxes since only one face is rendered at a time, and this has more guaranteed hardware support on older cards.

Default: none
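As a sketch, a skybox material using the short format might look like this (the material name and image base name are illustrative; the pass attributes shown are standard skybox settings):

```
material Examples/SkyBox   // hypothetical material name
{
    technique
    {
        pass
        {
            lighting off
            depth_write off
            texture_unit
            {
                // expects skybox_fr.jpg, skybox_bk.jpg, skybox_up.jpg,
                // skybox_dn.jpg, skybox_lf.jpg and skybox_rt.jpg
                cubic_texture skybox.jpg separateUV
                tex_address_mode clamp
            }
        }
    }
}
```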
binding_type Tells this texture unit to bind to either the fragment processing unit or the vertex processing unit (for Section 3.1.10 [Vertex Texture Fetch], page 95).

Format: binding_type <vertex|fragment>
Default: binding_type fragment
content_type Tells this texture unit where it should get its content from. The default is to get texture content from a named texture, as defined with the [texture], page 47, [cubic texture], page 50, or [anim texture], page 50 attributes. However you can also pull texture information from other automated sources. The options are:

named
The default option, this derives texture content from a texture name, loaded by ordinary means from a file or having been manually created with a given name.
shadow
This option allows you to pull in a shadow texture, and is only valid when you use texture shadows and one of the ’custom sequence’ shadowing types (See Chapter 7 [Shadows], page 180). The shadow texture in question will be from the ’n’th closest light that casts shadows, unless you use light-based pass iteration or the light_start option, which may start the light index higher. When you use this option in multiple texture units within the same pass, each one references the next shadow texture. The shadow texture index is reset in the next pass, in case you want to take into account the same shadow textures again in another pass (e.g. a separate specular / gloss pass). By using this option, the correct light frustum projection is set up for you for use in fixed-function; if you use shaders, just reference the texture_viewproj_matrix auto parameter in your shader.
compositor This option allows you to reference a texture from a compositor, and is only valid when the pass is rendered within a compositor sequence. This can be either in a render_scene directive inside a compositor script, or in a general pass in a viewport that has a compositor attached. Note that this is a reference only, meaning that it does not change the render order. You must make sure that the order is reasonable for what you are trying to achieve (for example, texture pooling might cause the referenced texture to be overwritten by something else by the time it is referenced). The extra parameters for the content_type are only required for this type: the first is the name of the compositor being referenced (required); the second is the name of the texture to reference in the compositor (required); the third is the index of the texture to take, in case of an MRT (optional).

Format: content_type <type> [<compositor name>] [<texture name>] [<mrt index>]
Default: content_type named
Example: content_type compositor DepthCompositor OutputTexture
tex_coord_set Sets which texture coordinate set is to be used for this texture layer. A mesh can define multiple sets of texture coordinates; this sets which one this material uses.

Format: tex_coord_set <set_num>

Example: tex_coord_set 2

Default: tex_coord_set 0
tex_address_mode Defines what happens when texture coordinates exceed 1.0 for this texture layer. You can use the simple format to specify the addressing mode for all 3 potential texture coordinates at once, or you can use the 2/3 parameter extended format to specify a different mode per texture coordinate.

Simple Format: tex_address_mode <uvw_mode>
Extended Format: tex_address_mode <u_mode> <v_mode> [<w_mode>]

wrap
Any value beyond 1.0 wraps back to 0.0. Texture is repeated.
clamp
Values beyond 1.0 are clamped to 1.0. Texture ’streaks’ beyond 1.0 since last line of pixels is used across the rest of the address space. Useful for textures which need exact coverage from 0.0 to 1.0 without the ’fuzzy edge’ wrap gives when combined with filtering.
mirror
The texture flips at every boundary, meaning the texture is mirrored every 1.0 u or v.
border
Values outside the range [0.0, 1.0] are set to the border colour; you should also set the [tex border colour], page 53 attribute.
Default: tex_address_mode wrap
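For instance, a texture that should tile horizontally but clamp vertically could use the extended format (the texture name is illustrative only):

```
texture_unit
{
    texture runway_stripes.png   // hypothetical texture
    // u wraps (horizontal tiling), v clamps (no vertical repeat)
    tex_address_mode wrap clamp
}
```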
tex_border_colour Sets the border colour of the ’border’ texture address mode (see [tex address mode], page 53).

Format: tex_border_colour <red> <green> <blue> [<alpha>]
NB: valid colour values are between 0.0 and 1.0.
Example: tex_border_colour 0.0 1.0 0.3

Default: tex_border_colour 0.0 0.0 0.0 1.0
filtering Sets the type of texture filtering used when magnifying or minifying a texture. There are 2 formats to this attribute: the simple format, where you simply specify the name of a predefined set of filtering options, and the complex format, where you individually set the minification, magnification, and mip filters yourself.

Simple Format
Format: filtering <none|bilinear|trilinear|anisotropic>
Default: filtering bilinear

With this format, you only need to provide a single parameter, which is one of the following:

none
No filtering or mipmapping is used. This is equivalent to the complex format ’filtering point point none’.
bilinear
2x2 box filtering is performed when magnifying or reducing a texture, and a mipmap is picked from the list but no filtering is done between the levels of the mipmaps. This is equivalent to the complex format ’filtering linear linear point’.
trilinear
2x2 box filtering is performed when magnifying and reducing a texture, and the closest 2 mipmaps are filtered together. This is equivalent to the complex format ’filtering linear linear linear’.
anisotropic This is the same as ’trilinear’, except the filtering algorithm takes account of the slope of the triangle in relation to the camera rather than simply doing a 2x2 pixel filter in all cases. This makes triangles at acute angles look less fuzzy. Equivalent to the complex format ’filtering anisotropic anisotropic linear’. Note that in order for this to make any difference, you must also set the [max anisotropy], page 55 attribute too.
Complex Format
Format: filtering <minification> <magnification> <mip>
Default: filtering linear linear point

This format gives you complete control over the minification, magnification, and mip filters. Each parameter can be one of the following:
none
Nothing - only a valid option for the ’mip’ filter, since this turns mipmapping off completely. The lowest setting for min and mag is ’point’.
point
Pick the closest pixel in min or mag modes. In mip mode, this picks the closest matching mipmap.
linear
Filter a 2x2 box of pixels around the closest one. In the ’mip’ filter this enables filtering between mipmap levels.
anisotropic Only valid for min and mag modes; makes the filter compensate for camera-space slope of the triangles. Note that in order for this to make any difference, you must also set the [max anisotropy], page 55 attribute too.
max_anisotropy Sets the maximum degree of anisotropy that the renderer will try to compensate for when filtering textures. The degree of anisotropy is the ratio between the height of the texture segment visible in a screen-space region versus the width - so, for example, a floor plane, which stretches on into the distance so that the vertical texture coordinates change much faster than the horizontal ones, has a higher anisotropy than a wall which is facing you head on (which has an anisotropy of 1 if your line of sight is perfectly perpendicular to it). You should set the max_anisotropy value to something greater than 1 to begin compensating; higher values can compensate for more acute angles. The maximum value is determined by the hardware, but it is usually 8 or 16. In order for this to be used, you have to set the minification and/or the magnification [filtering], page 54 option on this texture to anisotropic.

Format: max_anisotropy <value>
Default: max_anisotropy 1
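A texture unit tuned for a ground plane viewed at shallow angles might combine the complex filtering format with a maximum anisotropy, as a sketch (the texture name and values are illustrative):

```
texture_unit
{
    texture ground.jpg                         // hypothetical texture
    filtering anisotropic anisotropic linear   // complex format: min mag mip
    max_anisotropy 8                           // compensate for acute view angles
}
```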
mipmap_bias Sets the bias value applied to the mipmapping calculation, thus allowing you to alter the decision of which level of detail of the texture to use at any distance. The bias value is applied after the regular distance calculation, and adjusts the mipmap level by 1 level for each unit of bias. Negative bias values force larger mip levels to be used; positive bias values force smaller mip levels to be used. The bias is a floating point value, so you can use values in between whole numbers for fine tuning. In order for this option to be used, your hardware has to support mipmap biasing (exposed through the render system capabilities), and your minification [filtering], page 54 has to be set to point or linear.

Format: mipmap_bias <value>
Default: mipmap_bias 0
colour_op Determines how the colour of this texture layer is combined with the one below it (or the lighting effect on the geometry if this is the first layer).
Format: colour_op <replace|add|modulate|alpha_blend>
This method is the simplest way to blend texture layers, because it requires only one parameter, gives you the most common blending types, and automatically sets up 2 blending methods: one for if single-pass multitexturing hardware is available, and another for if it is not and the blending must be achieved through multiple rendering passes. It is, however, quite limited and does not expose the more flexible multitexturing operations, simply because these can’t be automatically supported in multipass fallback mode. If you want to use the fancier options, use [colour op ex], page 56, but you’ll either have to be sure that enough multitexturing units will be available, or you should explicitly set a fallback using [colour op multipass fallback], page 58.

replace
Replace all colour with the texture, with no adjustment.

add
Add colour components together.

modulate
Multiply colour components together.

alpha_blend
Blend based on texture alpha.

Default: colour_op modulate
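For example, a detail layer multiplied over a base texture using the simple blend attribute might be declared like this (the texture names are illustrative only):

```
pass
{
    texture_unit
    {
        texture base_rock.jpg      // hypothetical base layer
    }
    texture_unit
    {
        texture detail_grime.png   // hypothetical second layer
        colour_op modulate         // multiply with the layer below
    }
}
```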
colour_op_ex This is an extended version of the [colour op], page 55 attribute which allows extremely detailed control over the blending applied between this and earlier layers. Multitexturing hardware can apply more complex blending operations than multipass blending, but you are limited to the number of texture units which are available in hardware.
Format: colour_op_ex <operation> <source1> <source2> [<manual_factor>] [<manual_colour1>] [<manual_colour2>]
Example: colour_op_ex add_signed src_manual src_current 0.5
See the IMPORTANT note below about the issues between multipass and multitexturing that using this method can create. Texture colour operations determine how the final colour of the surface appears when rendered. Texture units are used to combine colour values from various sources (e.g. the diffuse colour of the surface from lighting calculations, combined with the colour of the texture). This method allows you to specify the ’operation’ to be used, i.e. the calculation such as adds or multiplies, and which values to use as arguments,
such as a fixed value or a value from a previous calculation.
Operation options:

source1
Use source1 without modification.

source2
Use source2 without modification.

modulate
Multiply source1 and source2 together.

modulate_x2
Multiply source1 and source2 together, then by 2 (brightening).

modulate_x4
Multiply source1 and source2 together, then by 4 (brightening).

add
Add source1 and source2 together.

add_signed
Add source1 and source2, then subtract 0.5.

add_smooth
Add source1 and source2, then subtract the product.

subtract
Subtract source2 from source1.

blend_diffuse_alpha
Use the interpolated alpha value from the vertices to scale source1, then add source2 scaled by (1-alpha).

blend_texture_alpha
As blend_diffuse_alpha, but use alpha from the texture.

blend_current_alpha
As blend_diffuse_alpha, but use the current alpha from previous stages (same as blend_diffuse_alpha for the first layer).

blend_manual
As blend_diffuse_alpha, but use a constant manual alpha value specified in <manual_factor>.

dotproduct
The dot product of source1 and source2.

blend_diffuse_colour
Use the interpolated colour value from the vertices to scale source1, then add source2 scaled by (1-colour).

Source1 and source2 options:

src_current
The colour as built up from previous stages.

src_texture
The colour derived from the texture assigned to this layer.
src_diffuse
The interpolated diffuse colour from the vertices (same as ’src_current’ for the first layer).

src_specular
The interpolated specular colour from the vertices.

src_manual
The manual colour specified at the end of the command.

For example, ’modulate’ takes the colour results of the previous layer and multiplies them with the new texture being applied. Bear in mind that colours are RGB values from 0.0-1.0, so multiplying them together will result in values in the same range, ’tinted’ by the multiply. Note however that a straight multiply normally has the effect of darkening the textures; for this reason there are brightening operations like modulate_x2. Note that because of the limitations of some underlying APIs (Direct3D included), the ’texture’ argument can only be used as the first argument, not the second.
Note that the last parameter is only required if you decide to pass a value manually into the operation; hence you only need to fill it in if you use the ’blend_manual’ operation.
IMPORTANT: Ogre tries to use multitexturing hardware to blend texture layers together. However, if it runs out of texturing units (e.g. 2 on a GeForce2, 4 on a GeForce3) it has to fall back on multipass rendering, i.e. rendering the same object multiple times with different textures. This is both less efficient and there is a smaller range of blending operations which can be performed. For this reason, if you use this method you really should set the colour_op_multipass_fallback attribute to specify which effect you want to fall back on if sufficient hardware is not available (the default is just ’modulate’, which is unlikely to be what you want if you’re doing swanky blending here). If you wish to avoid having to do this, use the simpler colour_op attribute, which allows less flexible blending options but sets up the multipass fallback automatically, since it only allows operations which have direct multipass equivalents.
Default: none (colour op modulate)
colour_op_multipass_fallback Sets the multipass fallback operation for this layer, if you used colour_op_ex and not enough multitexturing hardware is available.

Format: colour_op_multipass_fallback <src_factor> <dest_factor>
Example: colour_op_multipass_fallback one one_minus_dest_alpha
Because some of the effects you can create using colour_op_ex are only supported under multitexturing hardware, if the hardware is lacking, the system must fall back on multipass rendering, which unfortunately doesn’t support as many effects. This attribute is for you to specify the fallback operation which most suits you.
The parameters are the same as in the scene_blend attribute; this is because multipass rendering IS effectively scene blending, since each layer is rendered on top of the last using the same mechanism as making an object transparent - it’s just being rendered in the same place repeatedly to get the multitexture effect. If you use the simpler (and less flexible) colour_op attribute, you don’t need to call this, as the system sets up the fallback for you.
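A sketch of pairing the two attributes as the IMPORTANT note advises (the texture name is illustrative; the fallback factors chosen here are one reasonable approximation, not the only option):

```
texture_unit
{
    texture glow.png                                // hypothetical texture
    // brighten: multiply with the previous stage, then double
    colour_op_ex modulate_x2 src_texture src_current
    // multipass fallback: dest*src + src*dest = 2*src*dest
    colour_op_multipass_fallback dest_colour src_colour
}
```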
alpha_op_ex Behaves in exactly the same way as [colour op ex], page 56, except that it determines how alpha values are combined between texture layers rather than colour values. The only difference is that the 2 manual colours at the end of colour_op_ex are just single floating-point values in alpha_op_ex.
env_map Turns on/off the texture coordinate effect that makes this layer an environment map.
Format: env_map <off|spherical|planar|cubic_reflection|cubic_normal>
Environment maps make an object look reflective by using automatic texture coordinate generation depending on the relationship between the object’s vertices or normals and the eye.
spherical
A spherical environment map. Requires a single texture which is either a fisheye lens view of the reflected scene, or some other texture which looks good as a spherical map (a texture of glossy highlights is popular especially in car sims). This effect is based on the relationship between the eye direction and the vertex normals of the object, so works best when there are a lot of gradually changing normals, i.e. curved objects.
planar
Similar to the spherical environment map, but the effect is based on the position of the vertices in the viewport rather than vertex normals. This effect is therefore useful for planar geometry (where a spherical env map would not look good because the normals are all the same) or objects without normals.
cubic_reflection
A more advanced form of reflection mapping which uses a group of 6 textures making up the inside of a cube, each of which is a view of the scene down each axis. Works extremely well in all cases but has a higher technical requirement from the card than spherical mapping. Requires that you bind a [cubic texture], page 50 to this texture unit and use the ’combinedUVW’ option.

cubic_normal
Generates 3D texture coordinates containing the camera-space normal vector from the normal information held in the vertex data. Again, full use of this feature requires a [cubic texture], page 50 with the ’combinedUVW’ option.

Default: env_map off
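A simple glossy-highlight layer using a spherical environment map might look like this (the material structure and texture names are illustrative only):

```
pass
{
    texture_unit
    {
        texture car_paint.jpg        // hypothetical base colour
    }
    texture_unit
    {
        texture spheremap_gloss.png  // hypothetical fisheye/highlight map
        env_map spherical            // auto-generate coords from normals
        colour_op add                // add highlights over the base layer
    }
}
```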
scroll Sets a fixed scroll offset for the texture.
Format: scroll <x> <y>
This method offsets the texture in this layer by a fixed amount. Useful for small adjustments without altering texture coordinates in models. However if you wish to have an animated scroll effect, see the [scroll anim], page 60 attribute.
scroll_anim Sets up an animated scroll for the texture layer. Useful for creating fixed-speed scrolling effects on a texture layer (for varying scroll speeds, see [wave xform], page 61).
Format: scroll_anim <xspeed> <yspeed>
rotate Rotates a texture to a fixed angle. This attribute changes the rotational orientation of a texture to a fixed angle, useful for fixed adjustments. If you wish to animate the rotation, see [rotate anim], page 61.
Format: rotate <angle>
The parameter is an anti-clockwise angle in degrees.
rotate_anim Sets up an animated rotation effect for this layer. Useful for creating fixed-speed rotation animations (for varying speeds, see [wave xform], page 61).
Format: rotate_anim <revs_per_second>
The parameter is a number of anti-clockwise revolutions per second.
scale Adjusts the scaling factor applied to this texture layer. Useful for adjusting the size of textures without making changes to geometry. This is a fixed scaling factor; if you wish to animate this, see [wave xform], page 61.
Format: scale <x_scale> <y_scale>
Valid scale values are greater than 0, with a scale factor of 2 making the texture twice as big in that dimension etc.
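The three fixed-adjustment attributes above can be combined in one texture unit; a sketch (the texture name and values are illustrative only):

```
texture_unit
{
    texture decal.png     // hypothetical texture
    scroll 0.25 0.0       // offset by a quarter of the texture in u
    rotate 45             // 45 degrees anti-clockwise
    scale 2 2             // texture appears at twice the size in both dimensions
}
```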
wave xform Sets up a transformation animation based on a wave function. Useful for more advanced texture layer transform effects. You can add multiple instances of this attribute to a single texture layer if you wish.
Format: wave_xform <xform_type> <wave_type> <base> <frequency> <phase> <amplitude>
Example: wave_xform scale_x sine 1.0 0.2 0.0 5.0
xform_type

scroll_x
Animate the x scroll value.

scroll_y
Animate the y scroll value.

rotate
Animate the rotate value.

scale_x
Animate the x scale value.

scale_y
Animate the y scale value.

wave_type

sine
A typical sine wave which smoothly loops between min and max values.

triangle
An angled wave which increases & decreases at constant speed, changing instantly at the extremes.

square
Max for half the wavelength, min for the rest, with an instant transition between.

sawtooth
Gradual steady increase from min to max over the period, with an instant return to min at the end.

inverse_sawtooth
Gradual steady decrease from max to min over the period, with an instant return to max at the end.

base
The base value: the minimum if amplitude > 0, the maximum if amplitude < 0.

frequency
The number of wave iterations per second, i.e. speed.

phase
Offset of the wave start.

amplitude
The size of the wave.

The range of the output of the wave will be [base, base+amplitude]. So the example above scales the texture in the x direction between 1 (normal size) and 5 along a sine wave at one cycle every 5 seconds (0.2 waves per second).
transform This attribute allows you to specify a static 4x4 transformation matrix for the texture unit, thus replacing the individual scroll, rotate and scale attributes mentioned above.
Format: transform m00 m01 m02 m03 m10 m11 m12 m13 m20 m21 m22 m23 m30 m31 m32 m33
The indexes of the 4x4 matrix values above are expressed as m<row><col>.
3.1.4 Declaring Vertex/Geometry/Fragment Programs

In order to use a vertex, geometry or fragment program in your materials (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in a Pass], page 79), you first have to define them. A single program definition can be used by any number of materials; the only prerequisite is that a program must be defined before being referenced in the pass section of a material.
The definition of a program can either be embedded in the .material script itself (in which case it must precede any references to it in the script), or if you wish to use the same program across multiple .material files, you can define it in an external .program script. You define the program in exactly the same way whether you use a .program script or a .material script, the only difference is that all .program scripts are guaranteed to have been parsed before all .material scripts, so you can guarantee that your program has been defined before any .material script that might use it. Just like .material scripts, .program scripts will be read from any location which is on your resource path, and you can define many programs in a single script.
Vertex, geometry and fragment programs can be low-level (i.e. assembler code written to the specification of a given low-level syntax such as vs_1_1 or arbfp1) or high-level, such as DirectX9 HLSL, the OpenGL Shading Language, or nVidia’s Cg language (See [High-level Programs], page 67). High-level languages give you a number of advantages, such as being able to write more intuitive code, and possibly being able to target multiple architectures in a single program (for example, the same Cg program might be able to be used in both D3D and GL, whilst the equivalent low-level programs would require separate techniques, each targeting a different API). High-level programs also allow you to use named parameters instead of simply indexed ones; although parameters are not defined here, they are used in the Pass.
Here is an example of a definition of a low-level vertex program:

vertex_program myVertexProgram asm
{
    source myVertexProgram.asm
    syntax vs_1_1
}

As you can see, that’s very simple, and defining a fragment or geometry program is exactly the same, just with vertex_program replaced with fragment_program or geometry_program, respectively. You give the program a name in the header, followed by the word ’asm’ to indicate that this is a low-level program. Inside the braces, you specify where the source is going to come from (and this is loaded from any of the resource locations as with other media), and also indicate the syntax being used. You might wonder why the syntax specification is required when many of the assembler syntaxes have a header identifying them anyway - well, the reason is that the engine needs to know what syntax the
program is in before reading it, because during compilation of the material, we want to skip programs which use an unsupportable syntax quickly, without loading the program first.
The currently supported syntaxes are:

vs_1_1
This is one of the DirectX vertex shader assembler syntaxes. Supported on cards from: ATI Radeon 8500, nVidia GeForce 3.

vs_2_0
Another one of the DirectX vertex shader assembler syntaxes. Supported on cards from: ATI Radeon 9600, nVidia GeForce FX 5 series.

vs_2_x
Another one of the DirectX vertex shader assembler syntaxes. Supported on cards from: ATI Radeon X series, nVidia GeForce 6 series.

vs_3_0
Another one of the DirectX vertex shader assembler syntaxes. Supported on cards from: ATI Radeon HD 2000+, nVidia GeForce 6 series.

arbvp1
This is the OpenGL standard assembler format for vertex programs. It’s roughly equivalent to DirectX vs_1_1.

vp20
This is an nVidia-specific OpenGL vertex shader syntax which is a superset of vs_1_1. ATI Radeon HD 2000+ also supports it.

vp30
Another nVidia-specific OpenGL vertex shader syntax. It is a superset of vs_2_0, which is supported on nVidia GeForce FX 5 series and higher. ATI Radeon HD 2000+ also supports it.

vp40
Another nVidia-specific OpenGL vertex shader syntax. It is a superset of vs_3_0, which is supported on nVidia GeForce 6 series and higher.
ps_1_1, ps_1_2, ps_1_3
DirectX pixel shader (i.e. fragment program) assembler syntax. Supported on cards from: ATI Radeon 8500, nVidia GeForce 3. NOTE: for ATI 8500, 9000, 9100 and 9200 hardware, this profile can also be used in OpenGL. The ATI 8500 to 9200 do not support arbfp1, but do support the atifs extension in OpenGL, which is very similar in function to ps_1_4 in DirectX. Ogre has a built-in ps_1_x to atifs compiler that is automatically invoked when ps_1_x is used in OpenGL on ATI hardware.

ps_1_4
DirectX pixel shader (i.e. fragment program) assembler syntax. Supported on cards from: ATI Radeon 8500, nVidia GeForce FX 5 series. NOTE: as above, for ATI 8500 to 9200 hardware this profile can also be used in OpenGL via the atifs extension.

ps_2_0
DirectX pixel shader (i.e. fragment program) assembler syntax. Supported cards: ATI Radeon 9600, nVidia GeForce FX 5 series.
ps 2 x
DirectX pixel shader (ie fragment program) assembler syntax. This is basically ps 2 0 with a higher number of instructions. Supported cards: ATI Radeon X series, nVidia GeForce FX 6 series
ps 3 0
DirectX pixel shader (ie fragment program) assembler syntax. Supported cards: ATI Radeon HD 2000+, nVidia GeForce FX6 series
ps 3 x
DirectX pixel shader (ie fragment program) assembler syntax. Supported cards: nVidia GeForce FX7 series
arbfp1
This is the OpenGL standard assembler format for fragment programs. It’s roughly equivalent to ps 2 0, which means that not all cards that support basic pixel shaders under DirectX support arbfp1 (for example neither the GeForce3 or GeForce4 support arbfp1, but they do support ps 1 1).
fp20
This is an nVidia-specific OpenGL fragment syntax which is a superset of ps 1.3. It allows you to use the ’nvparse’ format for basic fragment programs. It actually uses NV texture shader and NV register combiners to provide functionality equivalent to DirectX’s ps 1 1 under GL, but only for nVidia cards. However, since ATI cards adopted arbfp1 a little earlier than nVidia, it is mainly nVidia cards like the GeForce3 and GeForce4 that this will be useful for. You can find more information about nvparse at http://developer.nvidia.com/object/nvparse.html.
fp30
Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps 2.0, which is supported on nVidia GeForce FX 5 series and higher. ATI Radeon HD 2000+ also supports it.
fp40
Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps_3_0, and is supported on the nVidia GeForce 6 series and higher.
gpu_gp, gp4_gp
An nVidia-specific OpenGL geometry shader syntax. Supported cards: nVidia GeForce 8 series.

You can get a definitive list of the syntaxes supported by the current card by calling GpuProgramManager::getSingleton().getSupportedSyntax().
Specifying Named Constants for Assembler Shaders
Assembler shaders don’t have named constants (also called uniform parameters) because the language does not support them. However, if you decided, for example, to precompile your shaders from a high-level language down to assembler for performance or obscurity, you might still want to use the named parameters. Well, you actually can: GpuNamedConstants, which contains the named parameter mappings, has a ’save’ method which you can use to write this data to disk, and you can then reference it using the manual_named_constants directive inside your assembler program declaration, e.g.
vertex_program myVertexProgram asm
{
    source myVertexProgram.asm
    syntax vs_1_1
    manual_named_constants myVertexProgram.constants
}

In this case myVertexProgram.constants has been created by calling

    highLevelGpuProgram->getNamedConstants().save("myVertexProgram.constants");

some time earlier as preparation, from the original high-level program. Once you’ve used this directive, you can use named parameters here even though the assembler program itself has no knowledge of them.
Default Program Parameters
While defining a vertex, geometry or fragment program, you can also specify the default parameters to be used for materials which use it, unless they specifically override them. You do this by including a nested ’default_params’ section, like so:

vertex_program Ogre/CelShadingVP cg
{
    source Example_CelShading.cg
    entry_point main_vp
    profiles vs_1_1 arbvp1

    default_params
    {
        param_named_auto lightPosition light_position_object_space 0
        param_named_auto eyePosition camera_position_object_space
        param_named_auto worldViewProj worldviewproj_matrix
        param_named shininess float 10
    }
}

The syntax of the parameter definition is exactly the same as when you define parameters when using programs, See [Program Parameter Specification], page 80. Defining default parameters allows you to avoid rebinding common parameters repeatedly (clearly in the above example, all but ’shininess’ are unlikely to change between uses of the program), which makes your material declarations shorter.
Declaring Shared Parameters
Often, not every parameter you want to pass to a shader is unique to that program; perhaps you want to give the same value to a number of different programs, and to a number of different materials using those programs. Shared parameter sets allow you to define a ’holding area’ for shared parameters that can then be referenced when you need them in particular shaders, while keeping the definition of that value in one place. To define a set of shared parameters, you do this:

shared_params YourSharedParamsName
{
    shared_param_named mySharedParam1 float4 0.1 0.2 0.3 0.4
    ...
}

As you can see, you need to use the keyword ’shared_params’ and follow it with the name that you will use to identify these shared parameters. Inside the curly braces, you can define one parameter per line, in a way which is very similar to the [param_named], page 92 syntax. The format of each line is:

Format: shared_param_named <param_name> <param_type> [<[array_size]>] [<initial_values>]

The param_name must be unique within the set, and the param_type can be any one of float, float2, float3, float4, int, int2, int3, int4, matrix2x2, matrix2x3, matrix2x4, matrix3x2, matrix3x3, matrix3x4, matrix4x2, matrix4x3 and matrix4x4. The array_size option allows you to define arrays of param_type should you wish, and if present must be a number enclosed in square brackets (and note, must be separated from the param_type with whitespace). If you wish, you can also initialise the parameters by providing a list of values.
Once you have defined the shared parameters, you can reference them inside default_params and params blocks using [shared_params_ref], page 93. You can also obtain a reference to them in your code via GpuProgramManager::getSharedParameters, and update the values for all instances using them.
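For illustration, here is a hypothetical program reference block that pulls in the shared set declared above alongside an ordinary auto parameter (the program name and parameter names are invented for this sketch):

```
vertex_program_ref myVertexProgram
{
    shared_params_ref YourSharedParamsName
    param_named_auto worldViewProj worldviewproj_matrix
}
```

Any update to the shared set, whether from script or via GpuProgramManager::getSharedParameters in code, then reaches every program reference that names it.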
High-level Programs Support for high-level vertex and fragment programs is provided through plugins; this is to make sure that an application using OGRE can use as little or as much of the high-level program functionality as it likes. OGRE currently supports 3 high-level program types: Cg (Section 3.1.5 [Cg], page 69), an API- and card-independent high-level language which lets you write programs for both OpenGL and DirectX for lots of cards; DirectX 9 High-Level Shader Language (Section 3.1.6 [HLSL], page 70); and OpenGL Shading Language (Section 3.1.7 [GLSL], page 71). HLSL can only be used with the DirectX rendersystem, and GLSL can only be used with the GL rendersystem. Cg can be used with both, although experience has shown that more advanced programs, particularly fragment programs which perform a lot of texture fetches, can produce better code in the rendersystem-specific shader language.
One way to support both HLSL and GLSL is to include separate techniques in the material script, each one referencing separate programs. However, if the programs are basically the same, with the same parameters, and the techniques are complex, this can bloat your material scripts with duplication fairly quickly. Instead, if the only difference is the language of the vertex & fragment programs, you can use OGRE’s Section 3.1.8 [Unified High-level Programs], page 76 to automatically pick a program suitable for your rendersystem whilst using a single technique.
Skeletal Animation in Vertex Programs
You can implement skeletal animation in hardware by writing a vertex program which uses the per-vertex blending indices and blending weights, together with an array of world matrices (which will be provided for you by Ogre if you bind the automatic parameter ’world_matrix_array_3x4’). However, you need to communicate this support to Ogre so it does not perform skeletal animation in software for you. You do this by adding the following attribute to your vertex program definition:

includes_skeletal_animation true

When you do this, any skeletally animated entity which uses this material will forgo the usual animation blend and will expect the vertex program to do it, for both vertex positions and normals. Note that ALL submeshes must be assigned a material which implements this, and that if you combine skeletal animation with vertex animation (See Chapter 8 [Animation], page 197) then all techniques must be hardware accelerated for any to be.
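To make the blend concrete, here is an illustrative sketch (in Python, not OGRE or shader code) of the per-vertex computation a skinning vertex program performs against the world_matrix_array_3x4 palette: each referenced 3x4 bone matrix transforms the position, and the results are combined by the blend weights. All names here are invented for illustration.

```python
# p' = sum_i weight_i * (M_i * p), with M_i a 3x4 matrix (3 rows of 4 columns)
def skin_position(position, blend_indices, blend_weights, matrix_palette):
    """Blend one vertex position against a 3x4 matrix palette."""
    x, y, z = position
    out = [0.0, 0.0, 0.0]
    for idx, w in zip(blend_indices, blend_weights):
        m = matrix_palette[idx]  # rotation/scale in cols 0-2, translation in col 3
        for row in range(3):
            r = m[row]
            out[row] += w * (r[0] * x + r[1] * y + r[2] * z + r[3])
    return tuple(out)

identity_3x4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]

# Identity bones with weights summing to 1 leave the vertex unchanged.
print(skin_position((1.0, 2.0, 3.0), [0, 1], [0.25, 0.75],
                    [identity_3x4, identity_3x4]))  # -> (1.0, 2.0, 3.0)
```

The same loop, applied to the normal with the translation column ignored, is what the "positions and normals" remark above refers to.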
Morph Animation in Vertex Programs
You can implement morph animation in hardware by writing a vertex program which linearly blends between the first position keyframe (passed as the positions) and the second position keyframe (passed as the first free texture coordinate set), and by binding the animation parametric value to a parameter (which tells you how far to interpolate between the two). However, you need to communicate this support to Ogre so it does not perform morph animation in software for you. You do this by adding the following attribute to your vertex program definition:

includes_morph_animation true

When you do this, any morph-animated entity which uses this material will forgo the usual software morph and will expect the vertex program to do it. Note that if your model includes both skeletal animation and morph animation, they must both be implemented in the vertex program if either is to be hardware accelerated. Note that ALL submeshes must be assigned a material which implements this, and that if you combine skeletal animation with vertex animation (See Chapter 8 [Animation], page 197) then all techniques must be hardware accelerated for any to be.
Pose Animation in Vertex Programs
You can implement pose animation (blending between multiple poses based on weight) in a vertex program by pulling in the original vertex data (bound to position), and as many pose offset buffers as you’ve defined in your ’includes_pose_animation’ declaration, which will be in the first free texture unit upwards. You must also use the animation parametric parameter to define the starting point of the constants which will contain the pose weights; they will start at the parameter you define and fill ’n’ constants, where ’n’ is the max number of poses this shader can blend, i.e. the parameter to includes_pose_animation:

includes_pose_animation 4

Note that ALL submeshes must be assigned a material which implements this, and that if you combine skeletal animation with vertex animation (See Chapter 8 [Animation], page 197) then all techniques must be hardware accelerated for any to be.
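The pose blend itself is simpler than skinning: the shader adds each pose offset, scaled by its weight, to the base position. An illustrative sketch (Python, not OGRE or shader code; names invented):

```python
# p' = base + sum_i weight_i * offset_i, per component
def blend_poses(base_position, pose_offsets, pose_weights):
    """Apply weighted pose offsets to a base vertex position."""
    out = list(base_position)
    for offset, w in zip(pose_offsets, pose_weights):
        for c in range(3):
            out[c] += w * offset[c]
    return tuple(out)

# One pose fully on, one off: the result is base plus the first offset.
print(blend_poses((0.0, 0.0, 0.0),
                  [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)],
                  [1.0, 0.0]))  # -> (1.0, 0.0, 0.0)
```

In the real shader the offsets arrive in the texture coordinate sets and the weights in the constants starting at the animation parametric parameter, as described above.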
Vertex texture fetching in vertex programs
If your vertex program makes use of Section 3.1.10 [Vertex Texture Fetch], page 95, you should declare that with the ’uses_vertex_texture_fetch’ directive. This is enough to tell Ogre that your program uses this feature and that hardware support for it should be checked:

uses_vertex_texture_fetch true
Adjacency information in Geometry Programs
Some geometry programs require adjacency information from the geometry: a geometry shader doesn’t only get the information of the primitive it operates on, it also has access to its neighbours (in the case of lines or triangles). This directive tells Ogre to send that information to the geometry shader:

uses_adjacency_information true
Vertex Programs With Shadows When using shadows (See Chapter 7 [Shadows], page 180), the use of vertex programs can add some additional complexities, because Ogre can only automatically deal with everything when using the fixed-function pipeline. If you use vertex programs, and you are also using shadows, you may need to make some adjustments.
If you use stencil shadows, then any vertex programs which do vertex deformation can be a problem, because stencil shadows are calculated on the CPU, which does not have access to the modified vertices. If the vertex program is doing standard skeletal animation, this is ok (see section above) because Ogre knows how to replicate the effect in software, but any other vertex deformation cannot be replicated, and you will either have to accept that the shadow will not reflect this deformation, or you should turn off shadows for that object.
If you use texture shadows, then vertex deformation is acceptable; however, when rendering the object into a shadow texture (the shadow caster pass), the shadow has to be rendered in a solid colour (linked to the ambient colour for modulative shadows, black for additive shadows). You must therefore provide an alternative vertex program, so Ogre provides you with a way of specifying one to use when rendering the caster, See [Shadows and Vertex Programs], page 93.
3.1.5 Cg programs
In order to define Cg programs, you have to load Plugin_CgProgramManager.so/.dll at startup, either through plugins.cfg or through your own plugin loading code. They are very easy to define:

fragment_program myCgFragmentProgram cg
{
    source myCgFragmentProgram.cg
    entry_point main
    profiles ps_2_0 arbfp1
}

There are a few differences between this and the assembler program. To begin with, we declare that the fragment program is of type ’cg’ rather than ’asm’, which indicates that it’s a high-level program using Cg. The ’source’ parameter is the same, except this time it’s referencing a Cg source file instead of a file of assembler. Here is where things start to change. Firstly, we need to define an ’entry_point’, which is the name of a function in the Cg program which will be the first one called as part of the fragment program. Unlike assembler programs, which just run top-to-bottom, Cg programs can include multiple functions, and as such you must specify the one which starts the ball rolling. Next, instead of a fixed ’syntax’ parameter, you specify one or more ’profiles’; profiles are how Cg compiles a program down to low-level assembler. The profiles have the same names as the assembler syntax codes mentioned above; the main difference is that you can list more than one, thus allowing the program to be compiled down to more low-level syntaxes, so you can write a single high-level program which runs on both D3D and GL. You are advised to just enter the simplest profiles under which your programs can be compiled, in order to give them the maximum compatibility. The ordering also matters; if a card supports more than one syntax then the one listed first will be used.
Lastly, there is a final option called ’compile_arguments’, where you can specify arguments exactly as you would to the cgc command-line compiler, should you wish to.
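For example, a hypothetical declaration forwarding a macro definition through to cgc (the -D flag follows cgc’s standard command-line syntax; the program and symbol names here are invented):

```
fragment_program myCgFragmentProgramFast cg
{
    source myCgFragmentProgram.cg
    entry_point main
    profiles ps_2_0 arbfp1
    compile_arguments -DFAST_PATH
}
```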
3.1.6 DirectX9 HLSL
DirectX9 HLSL has a very similar language syntax to Cg but is tied to the DirectX API. The only benefit over Cg is that it requires only the DirectX 9 render system plugin, not any additional plugins. Declaring a DirectX9 HLSL program is very similar to Cg. Here’s an example:

vertex_program myHLSLVertexProgram hlsl
{
    source myHLSLVertexProgram.txt
    entry_point main
    target vs_2_0
}

As you can see, the main syntax is almost identical, except that instead of ’profiles’ with a list of assembler formats, you have a ’target’ parameter which allows a single assembler target to be specified; obviously this has to be a DirectX assembler format syntax code.
Important Matrix Ordering Note: One thing to bear in mind is that HLSL allows you to use 2 different ways to multiply a vector by a matrix - mul(v,m) or mul(m,v). The only difference between them is that the matrix is effectively transposed. You should use mul(m,v) with the matrices passed in from Ogre - this agrees with the shaders produced from tools like RenderMonkey, and is consistent with Cg too, but disagrees with the Dx9
SDK and FX Composer which use mul(v,m) - you will have to switch the parameters to mul() in those shaders.
Note that if you use the float3x4 / matrix3x4 type in your shader, bound to an OGRE auto-definition (such as bone matrices), you should use the column_major_matrices = false option (discussed below) in your program definition. This is because OGRE passes float3x4 as row-major to save constant space (3 float4’s rather than 4 float4’s with only the top 3 values used), and this option tells OGRE to pass all matrices like this, so that you can use mul(m,v) consistently for all calculations. OGRE will also tell the shader to compile in row-major form (you don’t have to set the /Zpr compile option or the #pragma pack_matrix(row_major) directive; OGRE does this for you). Note that passing bones in float4x3 form is not supported by OGRE, but you don’t need it given the above.
Advanced options

preprocessor_defines
    This allows you to define symbols which can be used inside the HLSL shader code to alter the behaviour (through #ifdef or #if clauses). Definitions are separated by ’;’ or ’,’ and may optionally have a ’=’ operator within them to specify a definition value. Those without an ’=’ will implicitly have a definition of 1.

column_major_matrices
    The default for this option is ’true’, so that OGRE passes auto-bound matrices in a form where mul(m,v) works. Setting this option to false does 2 things: it transposes auto-bound 4x4 matrices and also sets the /Zpr (row-major) option on the shader compilation. This means you can still use mul(m,v), but the matrix layout is row-major instead. This is only useful if you need to use bone matrices (float3x4) in a shader, since it saves a float4 constant for every bone involved.

optimisation_level
    Set the optimisation level, which can be one of ’default’, ’none’, ’0’, ’1’, ’2’, or ’3’. This corresponds to the /O parameter of fxc.exe, except that in ’default’ mode, optimisation is disabled in debug mode and set to 1 in release mode (fxc.exe uses 1 all the time). Unsurprisingly, the default value is ’default’. You may want to change this if you want to tweak the optimisation, for example if your shader gets so complex that it will no longer compile without some minimum level of optimisation.
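To show these options in context, here is a hypothetical HLSL program declaration combining all three (the program, source and symbol names are invented for this sketch):

```
vertex_program mySkinningVP hlsl
{
    source mySkinning.txt
    entry_point main_vp
    target vs_2_0
    preprocessor_defines MAX_BONES=24,USE_FOG
    column_major_matrices false
    optimisation_level 2
}
```

Here USE_FOG has no ’=’ and so is implicitly defined as 1, and column_major_matrices false is appropriate because the program uses a float3x4 bone matrix array.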
3.1.7 OpenGL GLSL
OpenGL GLSL has a similar language syntax to HLSL but is tied to the OpenGL API. The benefit over Cg is that it only requires the OpenGL render system plugin, not any additional plugins. Declaring an OpenGL GLSL program is similar to Cg but simpler. Here’s an example:
vertex_program myGLSLVertexProgram glsl
{
    source myGLSLVertexProgram.txt
}

In GLSL, no entry point needs to be defined, since it is always ’main()’, and there is no target definition, since GLSL source is compiled into native GPU code and not intermediate assembly.
GLSL supports the use of modular shaders. This means you can write external GLSL functions that can be used in multiple shaders.

vertex_program myExternalGLSLFunction1 glsl
{
    source myExternalGLSLfunction1.txt
}

vertex_program myExternalGLSLFunction2 glsl
{
    source myExternalGLSLfunction2.txt
}

vertex_program myGLSLVertexProgram1 glsl
{
    source myGLSLfunction.txt
    attach myExternalGLSLFunction1 myExternalGLSLFunction2
}

vertex_program myGLSLVertexProgram2 glsl
{
    source myGLSLfunction.txt
    attach myExternalGLSLFunction1
}

External GLSL functions are attached to the program that needs them by using ’attach’ and including the names of all external programs required on the same line, separated by spaces. This can be done for both vertex and fragment programs.
GLSL Texture Samplers
To pass texture unit index values from the material script to texture samplers in GLSL, use ’int’ type named parameters. See the example below.

Excerpt from the GLSL example.frag source:

varying vec2 UV;
uniform sampler2D diffuseMap;

void main(void)
{
    gl_FragColor = texture2D(diffuseMap, UV);
}

In the material script:

fragment_program myFragmentShader glsl
{
    source example.frag
}

material exampleGLSLTexturing
{
    technique
    {
        pass
        {
            fragment_program_ref myFragmentShader
            {
                param_named diffuseMap int 0
            }
            texture_unit
            {
                texture myTexture.jpg 2d
            }
        }
    }
}

An index value of 0 refers to the first texture unit in the pass, an index value of 1 refers to the second unit in the pass, and so on.
Matrix parameters
Here are some examples of passing matrices to GLSL mat2, mat3 and mat4 uniforms:

material exampleGLSLmatrixUniforms
{
    technique matrix_passing
    {
        pass examples
        {
            vertex_program_ref myVertexShader
            {
                // mat4 uniform
                param_named OcclusionMatrix matrix4x4 1 0 0 0  0 1 0 0  0 0 1 0  0 0 0 0
                // or
                param_named ViewMatrix float16 0 1 0 0  0 0 1 0  0 0 0 1  0 0 0 0

                // mat3
                param_named TextRotMatrix float9 1 0 0  0 1 0  0 0 1
            }

            fragment_program_ref myFragmentShader
            {
                // mat2 uniform
                param_named skewMatrix float4 0.5 0 -0.5 1.0
            }
        }
    }
}
Accessing OpenGL states in GLSL GLSL can access most of the GL states directly, so you do not need to pass these states through [param_named_auto], page 93 in the material script. This includes lights, material state, and all the matrices used in the OpenGL state, i.e. the model-view matrix, world-view-projection matrix, etc.
Binding vertex attributes GLSL natively supports automatic binding of the most common incoming per-vertex attributes (e.g. gl_Vertex, gl_Normal, gl_MultiTexCoord0, etc). However, there are some which are not automatically bound; these must be declared in the shader using the ’attribute’ keyword, and the vertex data is bound to them by Ogre.
In addition to the built-in attributes described in section 7.3 of the GLSL manual, Ogre supports a number of automatically bound custom vertex attributes. There are some drivers that do not behave correctly when mixing built-in vertex attributes like gl_Normal and custom vertex attributes, so for maximum compatibility you may well wish to use all custom attributes in shaders where you need at least one (e.g. for skeletal animation).

vertex
    Binds VES_POSITION, declare as ’attribute vec4 vertex;’.

normal
    Binds VES_NORMAL, declare as ’attribute vec3 normal;’.

colour
    Binds VES_DIFFUSE, declare as ’attribute vec4 colour;’.

secondary_colour
    Binds VES_SPECULAR, declare as ’attribute vec4 secondary_colour;’.

uv0 - uv7
    Binds VES_TEXTURE_COORDINATES, declare as ’attribute vec4 uv0;’. Note that uv6 and uv7 share attributes with tangent and binormal respectively, so cannot both be present.

tangent
    Binds VES_TANGENT, declare as ’attribute vec3 tangent;’.

binormal
    Binds VES_BINORMAL, declare as ’attribute vec3 binormal;’.

blendIndices
    Binds VES_BLEND_INDICES, declare as ’attribute vec4 blendIndices;’.

blendWeights
    Binds VES_BLEND_WEIGHTS, declare as ’attribute vec4 blendWeights;’.
Preprocessor definitions GLSL supports using preprocessor definitions in your code - some are defined by the implementation, but you can also define your own, say in order to use the same source code for a few different variants of the same technique. In order to use this feature, include preprocessor conditions in your GLSL code, of the kind #ifdef SYMBOL, #if SYMBOL==2, etc. Then in your program definition, use the ’preprocessor_defines’ option, following it with a string of definitions. Definitions are separated by ’;’ or ’,’ and may optionally have a ’=’ operator within them to specify a definition value. Those without an ’=’ will implicitly have a definition of 1. For example:
// in your GLSL
#ifdef CLEVERTECHNIQUE
    // some clever stuff here
#else
    // normal technique
#endif

#if NUM_THINGS==2
    // Some specific code
#else
    // something else
#endif

// in your program definition
preprocessor_defines CLEVERTECHNIQUE,NUM_THINGS=2

This way you can use the same source code but still include small variations, each one defined as a different Ogre program name but based on the same source code.
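The parsing rules for the defines string can be made concrete with a short sketch (Python, purely illustrative, not OGRE source): entries split on ’;’ or ’,’, an optional ’=value’, and an implicit value of 1 when no ’=’ is present.

```python
import re

def parse_preprocessor_defines(defines):
    """Return {symbol: value} from a string such as 'A,B=2;C=x'."""
    result = {}
    for entry in re.split(r"[;,]", defines):
        entry = entry.strip()
        if not entry:
            continue
        if "=" in entry:
            name, value = entry.split("=", 1)
            result[name.strip()] = value.strip()
        else:
            result[entry] = "1"  # no '=' means an implicit definition of 1
    return result

print(parse_preprocessor_defines("CLEVERTECHNIQUE,NUM_THINGS=2"))
# -> {'CLEVERTECHNIQUE': '1', 'NUM_THINGS': '2'}
```

The same rules apply to the HLSL preprocessor_defines option described earlier.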
GLSL Geometry shader specification
GLSL allows the same shader to run on different types of geometry primitives. In order to properly link the shaders together, you have to specify which primitives it will receive as input, which primitives it will emit, and how many vertices a single run of the shader can generate. The GLSL geometry program definition requires three additional parameters:

input_operation_type
    The operation type of the geometry that the shader will receive. Can be ’point_list’, ’line_list’, ’line_strip’, ’triangle_list’, ’triangle_strip’ or ’triangle_fan’.

output_operation_type
    The operation type of the geometry that the shader will emit. Can be ’point_list’, ’line_strip’ or ’triangle_strip’.

max_output_vertices
    The maximum number of vertices that the shader can emit. There is an upper limit for this value; it is exposed in the render system capabilities.

For example:

geometry_program Ogre/GPTest/Swizzle_GP_GLSL glsl
{
    source SwizzleGP.glsl
    input_operation_type triangle_list
    output_operation_type line_strip
    max_output_vertices 6
}
3.1.8 Unified High-level Programs
As mentioned above, it can often be useful to write both HLSL and GLSL programs to specifically target each platform, but if you do this via multiple material techniques this can cause a bloated material definition when the only difference is the program language. Well, there is another option. You can ’wrap’ multiple programs in a ’unified’ program definition, which will automatically choose one of a series of ’delegate’ programs depending on the rendersystem and hardware support.

vertex_program myVertexProgram unified
{
    delegate realProgram1
    delegate realProgram2
    ... etc
}

This works for both vertex and fragment programs, and you can list as many delegates as you like - the first one to be supported by the current rendersystem & hardware will be used as the real program. This is almost like a mini-technique system, but for a single program and with a much tighter purpose. You can only use this where the programs take all the same inputs, particularly textures and other pass / sampler state. Where the only difference between the programs is the language (or possibly the target in HLSL - you can include multiple HLSL programs with different targets in a single unified program too if you want, or indeed any number of other high-level programs), this can become a very powerful feature. For example, without this feature here’s how you’d have to define a programmable material which supported HLSL and GLSL:

vertex_program myVertexProgramHLSL hlsl
{
    source prog.hlsl
    entry_point main_vp
    target vs_2_0
}

fragment_program myFragmentProgramHLSL hlsl
{
    source prog.hlsl
    entry_point main_fp
    target ps_2_0
}

vertex_program myVertexProgramGLSL glsl
{
    source prog.vert
}

fragment_program myFragmentProgramGLSL glsl
{
    source prog.frag
    default_params
    {
        param_named tex int 0
    }
}

material SupportHLSLandGLSLwithoutUnified
{
    // HLSL technique
    technique
    {
        pass
        {
            vertex_program_ref myVertexProgramHLSL
            {
                param_named_auto worldViewProj world_view_proj_matrix
                param_named_auto lightColour light_diffuse_colour 0
                param_named_auto lightSpecular light_specular_colour 0
                param_named_auto lightAtten light_attenuation 0
            }
            fragment_program_ref myFragmentProgramHLSL
            {
            }
        }
    }
    // GLSL technique
    technique
    {
        pass
        {
            vertex_program_ref myVertexProgramGLSL
            {
                param_named_auto worldViewProj world_view_proj_matrix
                param_named_auto lightColour light_diffuse_colour 0
                param_named_auto lightSpecular light_specular_colour 0
                param_named_auto lightAtten light_attenuation 0
            }
            fragment_program_ref myFragmentProgramGLSL
            {
            }
        }
    }
}

And that’s a really small example. Everything you added to the HLSL technique, you’d have to duplicate in the GLSL technique too. So instead, here’s how you’d do it with unified program definitions:

vertex_program myVertexProgramHLSL hlsl
{
    source prog.hlsl
    entry_point main_vp
    target vs_2_0
}

fragment_program myFragmentProgramHLSL hlsl
{
    source prog.hlsl
    entry_point main_fp
    target ps_2_0
}

vertex_program myVertexProgramGLSL glsl
{
    source prog.vert
}

fragment_program myFragmentProgramGLSL glsl
{
    source prog.frag
    default_params
    {
        param_named tex int 0
    }
}

// Unified definition
vertex_program myVertexProgram unified
{
    delegate myVertexProgramGLSL
    delegate myVertexProgramHLSL
}

fragment_program myFragmentProgram unified
{
    delegate myFragmentProgramGLSL
    delegate myFragmentProgramHLSL
}

material SupportHLSLandGLSLwithUnified
{
    technique
    {
        pass
        {
            vertex_program_ref myVertexProgram
            {
                param_named_auto worldViewProj world_view_proj_matrix
                param_named_auto lightColour light_diffuse_colour 0
                param_named_auto lightSpecular light_specular_colour 0
                param_named_auto lightAtten light_attenuation 0
            }
            fragment_program_ref myFragmentProgram
            {
            }
        }
    }
}

At runtime, when myVertexProgram or myFragmentProgram are used, OGRE automatically picks a real program to delegate to based on what’s supported on the current hardware / rendersystem. If none of the delegates are supported, the entire technique referencing the unified program is marked as unsupported and the next technique in the material is checked for fallback, just like normal. As your materials get larger, and you find you need to support HLSL and GLSL specifically (or need to write multiple interface-compatible versions of a program for whatever other reason), unified programs can really help reduce duplication.
3.1.9 Using Vertex/Geometry/Fragment Programs in a Pass Within a pass section of a material script, you can reference a vertex, geometry and / or a fragment program which has been defined in a .program script (See Section 3.1.4 [Declaring Vertex/Geometry/Fragment Programs], page 63). The programs are defined separately from their usage in the pass, since the programs are very likely to be reused between many separate materials, probably across many different .material scripts, so this approach lets you define the program only once and use it many times.
As well as naming the program in question, you can also provide parameters to it. Here’s a simple example:

vertex_program_ref myVertexProgram
{
    param_indexed_auto 0 worldviewproj_matrix
    param_indexed 4 float4 10.0 0 0 0
}

In this example, we bind a vertex program called ’myVertexProgram’ (which will be defined elsewhere) to the pass, and give it 2 parameters; one is an ’auto’ parameter, meaning we do not have to supply a value as such, just a recognised code (in this case it’s the
world/view/projection matrix which is kept up to date automatically by Ogre). The second parameter is a manually specified parameter, a 4-element float. The indexes are described later.
The syntax of the link to a vertex program and to a fragment or geometry program is identical; the only difference is that ’fragment_program_ref’ and ’geometry_program_ref’ are used respectively, instead of ’vertex_program_ref’.
For many situations vertex, geometry and fragment programs are associated with each other in a pass, but this is not cast in stone. You could have a vertex program that can be used by several different fragment programs. Another situation that arises is that you can mix the fixed-function and programmable pipelines together. You could use the non-programmable vertex fixed-function pipeline and then provide a fragment_program_ref in a pass, i.e. there would be no vertex_program_ref section in the pass. The fragment program referenced in the pass must meet the requirements as defined in the related API in order to read from the outputs of the vertex fixed pipeline. You could also just have a vertex program that outputs to the fragment fixed-function pipeline.
The requirements to read from or write to the fixed-function pipeline are similar between rendering APIs (DirectX and OpenGL), but how it’s actually done in each type of shader (vertex, geometry or fragment) depends on the shader language. For HLSL (DirectX API) and the associated asm, consult MSDN at http://msdn.microsoft.com/library/. For GLSL (OpenGL), consult section 7.6 of the GLSL spec 1.1 available at http://developer.3dlabs.com/documents/index.htm. The built-in varying variables provided in GLSL allow your program to read/write the fixed-function pipeline varyings. For Cg, consult the Language Profiles section in CgUsersManual.pdf, which comes with the Cg Toolkit available at http://developer.nvidia.com/object/cg_toolkit.html. For HLSL and Cg, it’s the varying bindings that allow your shader programs to read/write the fixed-function pipeline varyings.
Parameter specification Parameters can be specified using one of the commands shown below. The same syntax is used whether you are defining a parameter just for this particular use of the program, or when specifying the [Default Program Parameters], page 66. Parameters set in the specific use of the program override the defaults.
• [param_indexed], page 80
• [param_indexed_auto], page 81
• [param_named], page 92
• [param_named_auto], page 93
• [shared_params_ref], page 93
param_indexed
This command sets the value of an indexed parameter.

Format: param_indexed <index> <type> <value>
Example: param_indexed 0 float4 10.0 0 0 0
The ’index’ is simply a number representing the position in the parameter list at which the value should be written; you should derive this from your program definition. The index is relative to the way constants are stored on the card, which is in 4-element blocks. For example, if you defined a float4 parameter at index 0, the next index would be 1. If you defined a matrix4x4 at index 0, the next usable index would be 4, since a 4x4 matrix takes up 4 indexes.
The value of ’type’ can be float4, matrix4x4, float, int4, int. Note that ’int’ parameters are only available on some more advanced program syntaxes, check the D3D or GL vertex / fragment program documentation for full details. Typically the most useful ones will be float4 and matrix4x4. Note that if you use a type which is not a multiple of 4, then the remaining values up to the multiple of 4 will be filled with zeroes for you (since GPUs always use banks of 4 floats per constant even if only one is used).
’value’ is simply a space or tab-delimited list of values which can be converted into the type you have specified.
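As a sketch, binding indexed parameters inside a program reference in a pass might look like this (the program name and constant layout here are assumptions for illustration, not part of the manual's examples):

```
vertex_program_ref myVertexProgram
{
    // a matrix4x4 occupies indexes 0-3 (four 4-element blocks)...
    param_indexed 0 matrix4x4 1 0 0 0  0 1 0 0  0 0 1 0  0 0 0 1
    // ...so the next free index is 4
    param_indexed 4 float4 10.0 0 0 0
}
```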
param_indexed_auto

This command tells Ogre to automatically update a given parameter with a derived value. This frees you from writing code to update program parameters every frame when they are always changing.

format: param_indexed_auto <index> <value_code> <extra_params>
example: param_indexed_auto 0 worldviewproj_matrix
'index' has the same meaning as in [param_indexed], page 80; note this time you do not have to specify the size of the parameter because the engine knows this already. In the example, the world/view/projection matrix is being used so this is implicitly a matrix4x4.
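For instance, a hypothetical program reference might combine several auto parameters, one of which uses the 'extra_params' field to pick a light index (the program name is a placeholder):

```
vertex_program_ref myVertexProgram
{
    param_indexed_auto 0 worldviewproj_matrix
    // the trailing '1' is an extra_params value selecting the 2nd closest light
    param_indexed_auto 4 light_position_object_space 1
}
```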
'value_code' is one of a list of recognised values:

world_matrix
    The current world matrix.
inverse_world_matrix
    The inverse of the current world matrix.
transpose_world_matrix
    The transpose of the world matrix.
inverse_transpose_world_matrix
    The inverse transpose of the world matrix.
world_matrix_array_3x4
    An array of world matrices, each represented as only a 3x4 matrix (3 rows of 4 columns), usually for doing hardware skinning. You should make enough entries available in your vertex program for the number of bones in use, i.e. an array of numBones*3 float4's.
view_matrix
    The current view matrix.
inverse_view_matrix
    The inverse of the current view matrix.
transpose_view_matrix
    The transpose of the view matrix.
inverse_transpose_view_matrix
    The inverse transpose of the view matrix.
projection_matrix
    The current projection matrix.
inverse_projection_matrix
    The inverse of the projection matrix.
transpose_projection_matrix
    The transpose of the projection matrix.
inverse_transpose_projection_matrix
    The inverse transpose of the projection matrix.
worldview_matrix
    The current world and view matrices concatenated.
inverse_worldview_matrix
    The inverse of the current concatenated world and view matrices.
transpose_worldview_matrix
    The transpose of the world and view matrices.
inverse_transpose_worldview_matrix
    The inverse transpose of the current concatenated world and view matrices.
viewproj_matrix
    The current view and projection matrices concatenated.
inverse_viewproj_matrix
    The inverse of the view & projection matrices.
transpose_viewproj_matrix
    The transpose of the view & projection matrices.
inverse_transpose_viewproj_matrix
    The inverse transpose of the view & projection matrices.
worldviewproj_matrix
    The current world, view and projection matrices concatenated.
inverse_worldviewproj_matrix
    The inverse of the world, view and projection matrices.
transpose_worldviewproj_matrix
    The transpose of the world, view and projection matrices.
inverse_transpose_worldviewproj_matrix
    The inverse transpose of the world, view and projection matrices.
texture_matrix
    The transform matrix of a given texture unit, as it would usually be seen in the fixed-function pipeline. This requires an index in the 'extra_params' field, and relates to the 'nth' texture unit of the pass in question. NB if the given index exceeds the number of texture units available for this pass, then the parameter will be set to Matrix4::IDENTITY.
render_target_flipping
    The value used to adjust the transformed y position if you are bypassing the projection matrix transform. It's -1 if the render target requires texture flipping, +1 otherwise.
vertex_winding
    Indicates what vertex winding mode the render state is in at this point; +1 for standard, -1 for inverted (e.g. when processing reflections).
light_diffuse_colour
    The diffuse colour of a given light; this requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light - note that directional lights are always first in the list and always present). NB if there are no lights this close, then the parameter will be set to black.
light_specular_colour
    The specular colour of a given light; this requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to black.
light_attenuation
    A float4 containing the 4 light attenuation variables for a given light. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. The order of the parameters is range, constant attenuation, linear attenuation, quadratic attenuation.
spotlight_params
    A float4 containing the 3 spotlight parameters and a control value. The order of the parameters is cos(inner angle / 2), cos(outer angle / 2), falloff, and the final w value is 1.0f. For non-spotlights the value is float4(1,0,0,1). This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light). If there are fewer lights than this, the details are like a non-spotlight.
light_position
    The position of a given light in world space. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. Note that this property will work with all kinds of lights, even directional lights, since the parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work consistently on both.
light_direction
    The direction of a given light in world space. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. DEPRECATED - this property only works on directional lights, and we recommend that you use light_position instead since that returns a generic 4D vector.
light_position_object_space
    The position of a given light in object space (i.e. when the object is at (0,0,0)). This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. Note that this property will work with all kinds of lights, even directional lights, since the parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work consistently on both.
light_direction_object_space
    The direction of a given light in object space (i.e. when the object is at (0,0,0)). This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. DEPRECATED, except for spotlights - for directional lights we recommend that you use light_position_object_space instead since that returns a generic 4D vector.
light_distance_object_space
    The distance of a given light from the centre of the object - this is a useful approximation to per-vertex distance calculations for relatively small objects. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes.
light_position_view_space
    The position of a given light in view space (i.e. when the camera is at (0,0,0)). This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. Note that this property will work with all kinds of lights, even directional lights, since the parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work consistently on both.
light_direction_view_space
    The direction of a given light in view space (i.e. when the camera is at (0,0,0)). This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. DEPRECATED, except for spotlights - for directional lights we recommend that you use light_position_view_space instead since that returns a generic 4D vector.
light_power
    The 'power' scaling for a given light, useful in HDR rendering. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light).
light_diffuse_colour_power_scaled
    As light_diffuse_colour, except the RGB channels of the passed colour have been pre-scaled by the light's power scaling as given by light_power.
light_specular_colour_power_scaled
    As light_specular_colour, except the RGB channels of the passed colour have been pre-scaled by the light's power scaling as given by light_power.
light_number
    When rendering, there is generally a list of lights available for use by all of the passes for a given object, and those lights may or may not be referenced in one or more passes. Sometimes it can be useful to know where in that overall list a given light (as seen from a pass) is. For example if you use 'iteration once_per_light', the pass always sees the light as index 0, but in each iteration the actual light referenced is different. This binding lets you pass through the actual index of the light in that overall list. You just need to give it a parameter of the pass-relative light number and it will map it to the overall list index.
light_diffuse_colour_array
    As light_diffuse_colour, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_specular_colour_array
    As light_specular_colour, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_diffuse_colour_power_scaled_array
    As light_diffuse_colour_power_scaled, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_specular_colour_power_scaled_array
    As light_specular_colour_power_scaled, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_attenuation_array
    As light_attenuation, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
spotlight_params_array
    As spotlight_params, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_position_array
    As light_position, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_direction_array
    As light_direction, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_position_object_space_array
    As light_position_object_space, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_direction_object_space_array
    As light_direction_object_space, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_distance_object_space_array
    As light_distance_object_space, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_position_view_space_array
    As light_position_view_space, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_direction_view_space_array
    As light_direction_view_space, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_power_array
    As light_power, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
light_count
    The total number of lights active in this pass.
light_casts_shadows
    Sets an integer parameter to 1 if the given light casts shadows, 0 otherwise. Requires a light index parameter.
ambient_light_colour
    The colour of the ambient light currently set in the scene.
surface_ambient_colour
    The ambient colour reflectance properties of the pass (See [ambient], page 25). This allows you to access this fixed-function pipeline property handily.
surface_diffuse_colour
    The diffuse colour reflectance properties of the pass (See [diffuse], page 26). This allows you to access this fixed-function pipeline property handily.
surface_specular_colour
    The specular colour reflectance properties of the pass (See [specular], page 26). This allows you to access this fixed-function pipeline property handily.
surface_emissive_colour
    The amount of self-illumination of the pass (See [emissive], page 27). This allows you to access this fixed-function pipeline property handily.
surface_shininess
    The shininess of the pass, affecting the size of specular highlights (See [specular], page 26). This allows you to bind to this fixed-function pipeline property handily.
derived_ambient_light_colour
    The derived ambient light colour, with 'r', 'g', 'b' components filled with the product of surface_ambient_colour and ambient_light_colour, respectively, and the 'a' component filled with the surface ambient alpha component.
derived_scene_colour
    The derived scene colour, with 'r', 'g' and 'b' components filled with the sum of derived_ambient_light_colour and surface_emissive_colour, respectively, and the 'a' component filled with the surface diffuse alpha component.
derived_light_diffuse_colour
    The derived light diffuse colour, with 'r', 'g' and 'b' components filled with the product of surface_diffuse_colour, light_diffuse_colour and light_power, respectively, and the 'a' component filled with the surface diffuse alpha component. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light).
derived_light_specular_colour
    The derived light specular colour, with 'r', 'g' and 'b' components filled with the product of surface_specular_colour and light_specular_colour, respectively, and the 'a' component filled with the surface specular alpha component. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to the closest light).
derived_light_diffuse_colour_array
    As derived_light_diffuse_colour, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
derived_light_specular_colour_array
    As derived_light_specular_colour, except that this populates an array of parameters with a number of lights, and the 'extra_params' field refers to the number of 'nth closest' lights to be processed. This parameter is not compatible with light-based pass iteration options but can be used for single-pass lighting.
fog_colour
    The colour of the fog currently set in the scene.
fog_params
    The parameters of the fog currently set in the scene. Packed as (exp_density, linear_start, linear_end, 1.0 / (linear_end - linear_start)).
camera_position
    The current camera's position in world space.
camera_position_object_space
    The current camera's position in object space (i.e. when the object is at (0,0,0)).
lod_camera_position
    The current LOD camera position in world space. A LOD camera is a separate camera associated with the rendering camera which allows LOD calculations to be calculated separately. The classic example is basing the LOD of the shadow texture render on the position of the main camera, not the shadow camera.
lod_camera_position_object_space
    The current LOD camera position in object space (i.e. when the object is at (0,0,0)).
time
    The current time, factored by the optional parameter (or 1.0f if not supplied).
time_0_x
    Single float time value, which repeats itself based on a "cycle time" given as an 'extra_params' field.
costime_0_x
    Cosine of time_0_x.
sintime_0_x
    Sine of time_0_x.
tantime_0_x
    Tangent of time_0_x.
time_0_x_packed
    4-element vector of time_0_x, sintime_0_x, costime_0_x, tantime_0_x.
time_0_1
    As time_0_x but scaled to [0..1].
costime_0_1
    As costime_0_x but scaled to [0..1].
sintime_0_1
    As sintime_0_x but scaled to [0..1].
tantime_0_1
    As tantime_0_x but scaled to [0..1].
time_0_1_packed
    As time_0_x_packed but all values scaled to [0..1].
time_0_2pi
    As time_0_x but scaled to [0..2*Pi].
costime_0_2pi
    As costime_0_x but scaled to [0..2*Pi].
sintime_0_2pi
    As sintime_0_x but scaled to [0..2*Pi].
tantime_0_2pi
    As tantime_0_x but scaled to [0..2*Pi].
time_0_2pi_packed
    As time_0_x_packed but scaled to [0..2*Pi].
frame_time
    The current frame time, factored by the optional parameter (or 1.0f if not supplied).
fps
    The current frames per second.
viewport_width
    The current viewport width in pixels.
viewport_height
    The current viewport height in pixels.
inverse_viewport_width
    1.0 / the current viewport width in pixels.
inverse_viewport_height
    1.0 / the current viewport height in pixels.
viewport_size
    4-element vector of viewport_width, viewport_height, inverse_viewport_width, inverse_viewport_height.
texel_offsets
    Provides details of the rendersystem-specific texture coordinate offsets required to map texels onto pixels. float4(horizontalOffset, verticalOffset, horizontalOffset / viewport width, verticalOffset / viewport height).
view_direction
    View direction vector in object space.
view_side_vector
    View local X axis.
view_up_vector
    View local Y axis.
fov
    Vertical field of view, in radians.
near_clip_distance
    Near clip distance, in world units.
far_clip_distance
    Far clip distance, in world units (may be 0 for an infinite view projection).
texture_viewproj_matrix
    Applicable to vertex programs which have been specified as the 'shadow receiver' vertex program alternative, or where a texture unit is marked as content_type shadow; this provides details of the view/projection matrix for the current shadow projector. The optional 'extra_params' entry specifies which light the projector refers to (for the case of content_type shadow where more than one shadow texture may be present in a single pass), where 0 is the default and refers to the first light referenced in this pass.
texture_viewproj_matrix_array
    As texture_viewproj_matrix, except an array of matrices is passed, up to the number that you specify as the 'extra_params' value.
texture_worldviewproj_matrix
    As texture_viewproj_matrix except it also includes the world matrix.
texture_worldviewproj_matrix_array
    As texture_worldviewproj_matrix, except an array of matrices is passed, up to the number that you specify as the 'extra_params' value.
spotlight_viewproj_matrix
    Provides a view / projection matrix which matches the set-up of a given spotlight (requires an 'extra_params' entry to indicate the light index, which must be a spotlight). Can be used to project a texture from a given spotlight.
spotlight_worldviewproj_matrix
    As spotlight_viewproj_matrix except it also includes the world matrix.
scene_depth_range
    Provides information about the depth range as viewed from the current camera being used to render. Provided as float4(minDepth, maxDepth, depthRange, 1 / depthRange).
shadow_scene_depth_range
    Provides information about the depth range as viewed from the shadow camera relating to a selected light. Requires a light index parameter. Provided as float4(minDepth, maxDepth, depthRange, 1 / depthRange).
shadow_colour
    The shadow colour (for modulative shadows) as set via SceneManager::setShadowColour.
shadow_extrusion_distance
    The shadow extrusion distance as determined by the range of a non-directional light, or set via SceneManager::setShadowDirectionalLightExtrusionDistance for directional lights.
texture_size
    Provides the texture size of the selected texture unit. Requires a texture unit index parameter. Provided as float4(width, height, depth, 1). For a 2D texture, depth is set to 1; for a 1D texture, height and depth are set to 1.
inverse_texture_size
    Provides the inverse texture size of the selected texture unit. Requires a texture unit index parameter. Provided as float4(1 / width, 1 / height, 1 / depth, 1). For a 2D texture, depth is set to 1; for a 1D texture, height and depth are set to 1.
packed_texture_size
    Provides the packed texture size of the selected texture unit. Requires a texture unit index parameter. Provided as float4(width, height, 1 / width, 1 / height). For a 3D texture, depth is ignored; for a 1D texture, height is set to 1.
pass_number
    Sets the active pass index number in a gpu parameter. The first pass in a technique has an index of 0, the second an index of 1 and so on. This is useful for multipass shaders (i.e. fur or blur shaders) that need to know what pass it is.
    By setting up the auto parameter in a [Default Program Parameters], page 66 list in a program definition, there is no requirement to set the pass number parameter in each pass and lose track. (See [fur example], page 42)
pass_iteration_number
    Useful for GPU programs that need to know what the current pass iteration number is. The first iteration of a pass is numbered 0. The last iteration number is one less than what is set for the pass iteration number. If a pass has its iteration attribute set to 5 then the last iteration number (5th execution of the pass) is 4. (See [iteration], page 41)
animation_parametric
    Useful for hardware vertex animation. For morph animation, sets the parametric value (0..1) representing the distance between the first position keyframe (bound to positions) and the second position keyframe (bound to the first free texture coordinate) so that the vertex program can interpolate between them. For pose animation, indicates a group of up to 4 parametric weight values applying to a sequence of up to 4 poses (each one bound to x, y, z and w of the constant), one for each pose. The original positions are held in the usual position buffer, and the offsets to take those positions to the pose where weight == 1.0 are in the first 'n' free texture coordinates; 'n' being determined by the value passed to includes_pose_animation. If more than 4 simultaneous poses are required, then you'll need more than 1 shader constant to hold the parametric values, in which case you should use this binding more than once, referencing a different constant entry; the second one will contain the parametrics for poses 5-8, the third for poses 9-12, and so on.
custom
    This allows you to map a custom parameter on an individual Renderable (see Renderable::setCustomParameter) to a parameter on a GPU program. It requires that you complete the 'extra_params' field with the index that was used in the Renderable::setCustomParameter call, and this will ensure that whenever this Renderable is used, it will have its custom parameter mapped in. It's very important that this parameter has been defined on all Renderables that are assigned the material that contains this automatic mapping, otherwise the process will fail.
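To tie several of the bindings above together, here is a sketch of a program reference; the program name, the constant layout, and the custom parameter index (12) are invented for illustration and must match your own program and your Renderable::setCustomParameter call:

```
vertex_program_ref myVertexProgram
{
    param_indexed_auto 0 worldviewproj_matrix
    // sine wave in [0..1] repeating every 5 seconds (extra_params = cycle time)
    param_indexed_auto 4 sintime_0_1 5
    // maps Renderable::setCustomParameter(12, ...) into constant index 5
    param_indexed_auto 5 custom 12
}
```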
param_named

This is the same as param_indexed, but uses a named parameter instead of an index. This can only be used with high-level programs which include parameter names; if you're using an assembler program then you have no choice but to use indexes. Note that you can use indexed parameters for high-level programs too, but it is less portable since if you reorder your parameters in the high-level program the indexes will change.

format: param_named <param_name> <type> <value>
example: param_named shininess float4 10.0 0 0 0

The type is required because the program is not compiled and loaded when the material script is parsed, so at this stage we have no idea what types the parameters are. Programs are only loaded and compiled when they are used, to save memory.
param_named_auto

This is the named equivalent of param_indexed_auto, for use with high-level programs.

Format: param_named_auto <param_name> <value_code> <extra_params>
Example: param_named_auto worldViewProj WORLDVIEWPROJ_MATRIX

The allowed value codes and the meaning of extra_params are detailed in [param_indexed_auto], page 81.
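A sketch of named auto parameters in a high-level program reference (the program and parameter names here are placeholders):

```
fragment_program_ref myFragmentProgram
{
    param_named_auto fogColour fog_colour
    param_named_auto eyePosition camera_position_object_space
    // extra_params index 0 = the closest light
    param_named_auto lightDiffuse light_diffuse_colour 0
}
```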
shared_params_ref

This option allows you to reference shared parameter sets as defined in [Declaring Shared Parameters], page 66.

Format: shared_params_ref <shared_set_name>
Example: shared_params_ref mySharedParams

The only required parameter is a name, which must be the name of an already defined shared parameter set. All named parameters which are present in the program that are also present in the shared parameter set will be linked, and the shared parameters used as if you had defined them locally. This is dependent on the definitions (type and array size) matching between the shared set and the program.
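As a sketch, a shared set might be declared once and then referenced from a program; the set and parameter names here are placeholders (see [Declaring Shared Parameters], page 66 for the declaration syntax):

```
shared_params MySharedParams
{
    shared_param_named globalAmbient float4 0.1 0.1 0.1 1.0
}

// later, inside a pass:
vertex_program_ref myVertexProgram
{
    shared_params_ref MySharedParams
}
```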
Shadows and Vertex Programs When using shadows (See Chapter 7 [Shadows], page 180), the use of vertex programs can add some additional complexities, because Ogre can only automatically deal with everything when using the fixed-function pipeline. If you use vertex programs, and you are also using shadows, you may need to make some adjustments.
If you use stencil shadows, then any vertex programs which do vertex deformation can be a problem, because stencil shadows are calculated on the CPU, which does not have access to the modified vertices. If the vertex program is doing standard skeletal animation, this is ok (see section above) because Ogre knows how to replicate the effect in software, but any other vertex deformation cannot be replicated, and you will either have to accept that the shadow will not reflect this deformation, or you should turn off shadows for that object.
If you use texture shadows, then vertex deformation is acceptable; however, when rendering the object into the shadow texture (the shadow caster pass), the shadow has to be rendered in a solid colour (linked to the ambient colour). You must therefore provide an alternative vertex program, so Ogre provides you with a way of specifying one to use when rendering the caster. Basically you link an alternative vertex program, using exactly the same syntax as the original vertex program link:

shadow_caster_vertex_program_ref myShadowCasterVertexProgram
{
    param_indexed_auto 0 worldviewproj_matrix
    param_indexed_auto 4 ambient_light_colour
}

When rendering a shadow caster, Ogre will automatically use the alternate program. You can bind the same or different parameters to the program - the most important thing is that you bind ambient_light_colour, since this determines the colour of the shadow in modulative texture shadows. If you don't supply an alternate program, Ogre will fall back on a fixed-function material which will not reflect any vertex deformation you do in your vertex program.
In addition, when rendering the shadow receivers with shadow textures, Ogre needs to project the shadow texture. It does this automatically in fixed function mode, but if the receivers use vertex programs, they need to have a shadow receiver program which does the usual vertex deformation, but also generates projective texture coordinates. The additional program is linked into the pass like this:

shadow_receiver_vertex_program_ref myShadowReceiverVertexProgram
{
    param_indexed_auto 0 worldviewproj_matrix
    param_indexed_auto 4 texture_viewproj_matrix
}

For the purposes of writing this alternate program, there is an automatic parameter binding of 'texture_viewproj_matrix' which provides the program with texture projection parameters. The vertex program should do its normal vertex processing, and generate texture coordinates using this matrix and place them in texture coord sets 0 and 1, since some shadow techniques use 2 texture units. The colour of the vertices output by this vertex program must always be white, so as not to affect the final colour of the rendered shadow.
When using additive texture shadows, the shadow pass render is actually the lighting render, so if you perform any fragment program lighting you also need to pull in a custom fragment program. You use the shadow_receiver_fragment_program_ref for this:

shadow_receiver_fragment_program_ref myShadowReceiverFragmentProgram
{
    param_named_auto lightDiffuse light_diffuse_colour 0
}

You should pass the projected shadow coordinates from the custom vertex program. As for textures, texture unit 0 will always be the shadow texture. Any other textures which you bind in your pass will be carried across too, but will be moved up by 1 unit to make room for the shadow texture. Therefore your shadow receiver fragment program is likely to be the same as the bare lighting pass of your normal material, except that you insert an extra texture sampler at index 0, which you will use to adjust the result by (modulating diffuse and specular components).
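Putting the shadow-related program references together, a single pass might link all the alternates like this sketch (program names are placeholders, and each named program must be declared elsewhere):

```
pass
{
    vertex_program_ref myVertexProgram
    {
        param_indexed_auto 0 worldviewproj_matrix
    }
    shadow_caster_vertex_program_ref myShadowCasterVertexProgram
    {
        param_indexed_auto 0 worldviewproj_matrix
        param_indexed_auto 4 ambient_light_colour
    }
    shadow_receiver_vertex_program_ref myShadowReceiverVertexProgram
    {
        param_indexed_auto 0 worldviewproj_matrix
        param_indexed_auto 4 texture_viewproj_matrix
    }
    shadow_receiver_fragment_program_ref myShadowReceiverFragmentProgram
    {
        param_named_auto lightDiffuse light_diffuse_colour 0
    }
}
```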
3.1.10 Vertex Texture Fetch

Introduction

More recent generations of video card allow you to perform a read from a texture in the vertex program rather than just the fragment program, as is traditional. This allows you to, for example, read the contents of a texture and displace vertices based on the intensity of the colour contained within.
Declaring the use of vertex texture fetching

Since hardware support for vertex texture fetching is not ubiquitous, you should use the uses_vertex_texture_fetch directive (see [Vertex texture fetching in vertex programs]) when declaring vertex programs which use vertex textures, so that if it is not supported, technique fallback can be enabled. This is not strictly necessary for DirectX-targeted shaders, since vertex texture fetching is only supported in vs_3_0, which can be stated as a required syntax in your shader definition, but for OpenGL (GLSL), there are cards which support GLSL but not vertex textures, so you should be explicit about your need for them.
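As a minimal sketch, a vertex program declaration using this directive might look like the following (the program name and source file are hypothetical):

```
vertex_program Example/DisplaceVP glsl
{
    source displace_vp.glsl
    // Declare that this program samples textures in the vertex stage,
    // so unsupporting hardware can trigger technique fallback
    uses_vertex_texture_fetch true
}
```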
Render system texture binding differences

Unfortunately the method for binding textures so that they are available to a vertex program is not well standardised. As at the time of writing, Shader Model 3.0 (SM3.0) hardware under DirectX9 includes 4 separate sampler bindings for the purposes of vertex textures. OpenGL, on the other hand, is able to access vertex textures in GLSL (and in assembler through NV_vertex_program_3, although this is less popular), but the textures are shared with the fragment pipeline. I expect DirectX to move to the GL model with the advent of DirectX10, since a unified shader architecture implies sharing of texture resources between the two stages. As it is right now though, we're stuck with an inconsistent situation.
To reflect this, you should use the [binding_type], page 51 attribute in a texture unit to indicate which unit you are targeting with your texture - 'fragment' (the default) or 'vertex'. For render systems that don't have separate bindings, this does nothing. But for those that do, it will ensure your texture gets bound to the right processing unit. Note that whilst DirectX9 has separate bindings for the vertex and fragment pipelines, binding a texture to the vertex processing unit still uses up a 'slot' which is then not available for use in the fragment pipeline. I didn't manage to find this documented anywhere, but the nVidia samples certainly avoid binding a texture to the same index on both vertex and fragment units, and when I tried to do it, the texture did not appear correctly in the fragment unit, whilst it did as soon as I moved it into the next unit.
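For example, a texture unit targeting the vertex stage might be declared like this sketch (the texture name is hypothetical):

```
texture_unit
{
    texture displacement_map.dds
    // Bind to the vertex pipeline rather than the (default) fragment pipeline
    binding_type vertex
}
```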
Texture format limitations

Again as at the time of writing, the types of texture you can use in a vertex program are limited to 1- or 4-component, full precision floating point formats. In code that equates to PF_FLOAT32_R or PF_FLOAT32_RGBA. No other formats are supported. In addition, the textures must be regular 2D textures (no cube or volume maps), and mipmapping and filtering are not supported, although you can perform filtering in your vertex program if you wish by sampling multiple times.
Hardware limitations

As at the time of writing (early Q3 2006), ATI do not support texture fetch in their current crop of cards (Radeon X1n00). nVidia do support it in both their 6n00 and 7n00 ranges. ATI support an alternative called 'Render to Vertex Buffer', but this is not standardised at this time and is very different in its implementation, so cannot be considered a drop-in replacement. This is the case even though the Radeon X1n00 cards claim to support vs_3_0 (which requires vertex texture fetch).
3.1.11 Script Inheritance

When creating new script objects that are only slight variations of another object, it's good to avoid copying and pasting between scripts. Script inheritance lets you do this; in this section we'll use material scripts as an example, but this applies to all scripts parsed with the script compilers in Ogre 1.6 onwards.
For example, to make a new material that is based on one previously defined, add a colon : after the new material name followed by the name of the material that is to be copied.
Format: material <NewMaterialName> : <ParentMaterialName>
The only caveat is that a parent material must have been defined/parsed prior to the child material script being parsed. The easiest way to achieve this is to either place parents at the beginning of the material script file, or to use the 'import' directive (see Section 3.1.14 [Script Import Directive], page 105). Note that inheritance is actually a copy - after scripts are loaded into Ogre, objects no longer maintain their inheritance structure. If a parent material is modified through code at runtime, the changes have no effect on child materials that were copied from it in the script.
Material copying within the script alleviates some of the drudgery of copy/paste, but the ability to identify specific techniques, passes, and texture units to modify makes material copying even more useful. By associating a name with a technique, pass, or texture unit, it can be identified directly in the child material without having to lay out all the preceding techniques, passes, or texture units. Techniques and passes can take a name, and texture units can be numbered within the material script. You can also use variables, see Section 3.1.13 [Script Variables], page 104.
Names become very useful in materials that copy from other materials. In order to override values, they must be in the correct technique, pass, texture unit etc. The script could be laid out using the sequence of techniques, passes, and texture units in the child material, but if only one parameter needs to change in, say, the 5th pass, then the four passes prior to the fifth would still have to be placed in the script:
Here is an example:

material test2 : test1
{
    technique
    {
        pass { }
        pass { }
        pass { }
        pass { }
        pass
        {
            ambient 0.5 0.7 0.3 1.0
        }
    }
}

This method is tedious for materials that only have slight variations to their parent. An easier way is to name the pass directly without listing the previous passes:
material test2 : test1
{
    technique 0
    {
        pass 4
        {
            ambient 0.5 0.7 0.3 1.0
        }
    }
}

The parent pass name must be known and the pass must be in the correct technique in order for this to work correctly. Specifying the technique name and the pass name is the best method. If the parent technique/pass are not named, then use their index values for their name, as done in the example.
Adding new Techniques or Passes to copied materials:

If a new technique or pass needs to be added to a copied material, use a unique name for the technique or pass that does not exist in the parent material. Using an index for the name that is one greater than the last index in the parent will do the same thing. The new technique/pass will be added to the end of the techniques/passes copied from the parent material.
Note: if passes or techniques aren’t given a name, they will take on a default name based on their index. For example the first pass has index 0 so its name will be 0.
Identifying Texture Units to override values

A specific texture unit state (TUS) can be given a unique name within a pass of a material so that it can be identified later in cloned materials that need to override specified texture unit states in the pass without declaring previous texture units. Using a unique name for a texture unit in a pass of a cloned material adds a new texture unit at the end of the texture unit list for the pass.
material BumpMap2 : BumpMap1
{
    technique ati8500
    {
        pass 0
        {
            texture_unit NormalMap
            {
                texture BumpyMetalNM.png
            }
        }
    }
}
Advanced Script Inheritance

Starting with Ogre 1.6, script objects can inherit from each other more generally. The previous concept of inheritance, material copying, was restricted to top-level material objects only. Now, any level of object can take advantage of inheritance (for instance, techniques, passes, and compositor targets).
material Test
{
    technique
    {
        pass : ParentPass
        {
        }
    }
}

Notice that the pass inherits from ParentPass. This allows for the creation of more fine-grained inheritance hierarchies.
Along with the more generalized inheritance system comes an important new keyword: "abstract". This keyword is used at a top-level object declaration (not inside any other object) to denote that it is not something that the compiler should actually attempt to compile, but rather that it exists only for the purpose of inheritance. For example, a material declared with the abstract keyword will never be turned into an actual usable material in the material framework. Objects which cannot be at top level in the document (like a pass), but which you would like to declare as such for inheritance purposes, must be declared with the abstract keyword.
abstract pass ParentPass
{
    diffuse 1 0 0 1
}

That declares the ParentPass object which was inherited from in the above example. Notice the abstract keyword, which informs the compiler that it should not attempt to turn this object into any sort of Ogre resource. If it did attempt to do so, it would fail, since a pass all on its own like that is not valid.
The final matching option is based on wildcards. Using the '*' character, you can make a powerful matching scheme and override multiple objects at once, even if you don't know the exact names or positions of those objects in the inherited object.
abstract technique Overrider
{
    pass *color*
    {
        diffuse 0 0 0 0
    }
}

This technique, when included in a material, will override all passes matching the wildcard "*color*" (color has to appear somewhere in the name) and turn their diffuse properties black. Their position or exact name in the inherited technique does not matter; this will match them.
3.1.12 Texture Aliases

Texture aliases are useful when only the textures used in texture units need to be specified for a cloned material. In the source material, i.e. the original material to be cloned, each texture unit can be given a texture alias name. The cloned material in the script can then specify what textures should be used for each texture alias. Note that texture aliases are a more specific version of Section 3.1.13 [Script Variables], page 104, which can be used to easily set other values.
Using texture aliases within texture units:

Format: texture_alias <name>
Default: will default to the texture unit name if not set.

texture_unit DiffuseTex
{
    texture diffuse.jpg
}

texture_alias defaults to DiffuseTex.

Example: The base material to be cloned:

material TSNormalSpecMapping
{
    technique GLSL
    {
        pass
        {
            ambient 0.1 0.1 0.1
            diffuse 0.7 0.7 0.7
            specular 0.7 0.7 0.7 128

            vertex_program_ref GLSLDemo/OffsetMappingVS
            {
                param_named_auto lightPosition light_position_object_space 0
                param_named_auto eyePosition camera_position_object_space
                param_named textureScale float 1.0
            }

            fragment_program_ref GLSLDemo/TSNormalSpecMappingFS
            {
                param_named normalMap int 0
                param_named diffuseMap int 1
                param_named fxMap int 2
            }

            // Normal map
            texture_unit NormalMap
            {
                texture defaultNM.png
                tex_coord_set 0
                filtering trilinear
            }

            // Base diffuse texture map
            texture_unit DiffuseMap
            {
                texture defaultDiff.png
                filtering trilinear
                tex_coord_set 1
            }

            // spec map for shininess
            texture_unit SpecMap
            {
                texture defaultSpec.png
                filtering trilinear
                tex_coord_set 2
            }
        }
    }
    technique HLSL_DX9
    {
        pass
        {
            vertex_program_ref FxMap_HLSL_VS
            {
                param_named_auto worldViewProj_matrix worldviewproj_matrix
                param_named_auto lightPosition light_position_object_space 0
                param_named_auto eyePosition camera_position_object_space
            }

            fragment_program_ref FxMap_HLSL_PS
            {
                param_named ambientColor float4 0.2 0.2 0.2 0.2
            }

            // Normal map
            texture_unit
            {
                texture_alias NormalMap
                texture defaultNM.png
                tex_coord_set 0
                filtering trilinear
            }

            // Base diffuse texture map
            texture_unit
            {
                texture_alias DiffuseMap
                texture defaultDiff.png
                filtering trilinear
                tex_coord_set 1
            }

            // spec map for shininess
            texture_unit
            {
                texture_alias SpecMap
                texture defaultSpec.png
                filtering trilinear
                tex_coord_set 2
            }
        }
    }
}

Note that the GLSL and HLSL techniques use the same textures. For each texture usage type, a texture alias is given that describes what the texture is used for. So the first texture unit in the GLSL technique has the same alias as the TUS in the HLSL technique, since it's the same texture being used. The same goes for the second and third texture units. For demonstration purposes, the GLSL technique makes use of texture unit naming, and therefore the texture alias name does not have to be set, since it defaults to the texture unit name. So why not use the default all the time, since it's less typing? For most situations you can. It's when you clone a material and then want to change the alias that you must use the texture_alias command in the script. You cannot change the name of a texture unit in a cloned material, so texture_alias provides a facility to assign an alias name.
Now we want to clone the material but only want to change the textures used. We could copy and paste the whole material, but if we decide to change the base material later, then we would also have to update the copied material in the script. With set_texture_alias, copying a material is very easy. set_texture_alias is specified at the top of the material definition. All techniques using the specified texture alias will be affected by set_texture_alias.
Format: set_texture_alias <alias name> <texture name>

material fxTest : TSNormalSpecMapping
{
    set_texture_alias NormalMap fxTestNMap.png
    set_texture_alias DiffuseMap fxTestDiff.png
    set_texture_alias SpecMap fxTestMap.png
}

The textures in both techniques in the child material will automatically be replaced with the new ones we want to use.
The same process can be done in code, as long as you set up the texture alias names; there is then no need to traverse technique/pass/TUS to change a texture. You just call myMaterialPtr->applyTextureAliases(myAliasTextureNameList), which will update all textures in all texture units that match the alias names in the map container reference you passed as a parameter.
You don't have to supply all the textures in the copied material.

material fxTest2 : fxTest
{
    set_texture_alias DiffuseMap fxTest2Diff.png
    set_texture_alias SpecMap fxTest2Map.png
}

Material fxTest2 only changes the diffuse and spec maps of material fxTest and uses the same normal map.
Another example:

material fxTest3 : TSNormalSpecMapping
{
    set_texture_alias DiffuseMap fxTest2Diff.png
}

fxTest3 will end up with the default textures for the normal map and spec map set up in the TSNormalSpecMapping material, but will have a different diffuse map. So your base material can define the default textures to use, and then the child materials can override specific textures.
3.1.13 Script Variables

A very powerful new feature in Ogre 1.6 is variables. Variables allow you to parameterize data in materials so that they can become more generalized. This enables greater reuse of scripts by targeting specific customization points. Using variables along with inheritance allows for huge amounts of overrides and easy object reuse.
abstract pass ParentPass
{
    diffuse $diffuse_colour
}

material Test
{
    technique
    {
        pass : ParentPass
        {
            set $diffuse_colour "1 0 0 1"
        }
    }
}

The ParentPass object declares a variable called "diffuse_colour" which is then overridden in the Test material's pass. The "set" keyword is used to set the value of that variable. The variable assignment follows lexical scoping rules, which means that the value of "1 0 0 1" is only valid inside that pass definition. Variable assignments in outer scopes carry over into inner scopes.
material Test
{
    set $diffuse_colour "1 0 0 1"
    technique
    {
        pass : ParentPass
        {
        }
    }
}

The $diffuse_colour assignment carries down through the technique and into the pass.
3.1.14 Script Import Directive

Imports are a feature introduced to remove ambiguity from script dependencies. When using scripts that inherit from each other but which are defined in separate files, errors sometimes occur because the scripts are loaded in the wrong order. Using imports removes this issue. A script which inherits from another can explicitly import its parent's definition, which ensures that no errors occur because the parent's definition was not found.
import * from "parent.material"

material Child : Parent
{
}

The material "Parent" is defined in parent.material, and the import ensures that those definitions are found properly. You can also import specific targets from within a file:

import Parent from "parent.material"

If there were other definitions in the parent.material file, they would not be imported.
Note, however, that importing does not actually cause objects in the imported script to be fully parsed and created; it just makes the definitions available for inheritance. This has a specific ramification for vertex/fragment program definitions, which must be loaded before any parameters can be specified. You should continue to put common program definitions in .program files to ensure they are fully parsed before being referenced in multiple .material files. The 'import' command just makes sure you can resolve dependencies between equivalent script definitions (e.g. material to material).
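As a sketch of this arrangement (file and program names are hypothetical), the program definition lives in a .program file, which is parsed in full before any .material files:

```
// common.program
fragment_program Example/GreyscaleFP glsl
{
    source greyscale_fp.glsl
}
```

The materials that use it then only reference it by name, so their parameter blocks can rely on the program already being loaded:

```
// effects.material
material Example/Greyscale
{
    technique
    {
        pass
        {
            fragment_program_ref Example/GreyscaleFP
            {
                param_named sceneTexture int 0
            }
        }
    }
}
```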
3.2 Compositor Scripts

The compositor framework is a subsection of the OGRE API that allows you to easily define full-screen post-processing effects. Compositor scripts offer you the ability to define compositor effects in a script which can be reused and modified easily, rather than having to use the API to define them. You still need to use code to instantiate a compositor against one of your visible viewports, but this is a much simpler process than actually defining the compositor itself.
Compositor Fundamentals

Performing post-processing effects generally involves first rendering the scene to a texture, either in addition to or instead of the main window. Once the scene is in a texture, you can then pull the scene image into a fragment program and perform operations on it by rendering it through a full-screen quad. The target of this post-processing render can be the main result (e.g. a window), or it can be another render texture so that you can perform multi-stage convolutions on the image. You can even 'ping-pong' the render back and forth between a couple of render textures to perform convolutions which require many iterations, without using a separate texture for each stage. Eventually you'll want to render the result to the final output, which you do with a full-screen quad. This might replace the whole window (so the main window doesn't need to render the scene itself), or it might be a combinational effect.
So that we can discuss how to implement these techniques efficiently, a number of definitions are required:
Compositor
    Definition of a fullscreen effect that can be applied to a user viewport. This is what you're defining when writing compositor scripts as detailed in this section.

Compositor Instance
    An instance of a compositor as applied to a single viewport. You create these based on compositor definitions, see Section 3.2.4 [Applying a Compositor], page 120.

Compositor Chain
    It is possible to enable more than one compositor instance on a viewport at the same time, with one compositor taking the results of the previous one as input. This is known as a compositor chain. Every viewport which has at least one compositor attached to it has a compositor chain. See Section 3.2.4 [Applying a Compositor], page 120.

Target
    This is a RenderTarget, i.e. the place where the result of a series of render operations is sent. A target may be the final output (and this is implicit, you don't have to declare it), or it may be an intermediate render texture, which you declare in your script with the [compositor texture], page 109 directive. A target which is not the output target has a defined size and pixel format which you can control.

Output Target
    As Target, but this is the single final result of all operations. The size and pixel format of this target cannot be controlled by the compositor since it is defined by the application using it, thus you don't declare it in your script. However, you do declare a Target Pass for it, see below.

Target Pass
    A Target may be rendered to many times in the course of a composition effect. In particular if you 'ping pong' a convolution between a couple of textures, you will have more than one Target Pass per Target. Target passes are declared in the script using a 'target' or 'target_output' section (see Section 3.2.2 [Compositor Target Passes], page 112), the latter being the final output target pass, of which there can be only one.

Pass
    Within a Target Pass, there are one or more individual Section 3.2.3 [Compositor Passes], page 114, which perform a very specific action, such as rendering the original scene (or pulling the result from the previous compositor in the chain), rendering a fullscreen quad, or clearing one or more buffers. Typically within a single target pass you will use either a 'render_scene' pass or a 'render_quad' pass, not both. Clear can be used with either type.
Loading scripts

Compositor scripts are loaded when resource groups are initialised: OGRE looks in all resource locations associated with the group (see Root::addResourceLocation) for files with the '.compositor' extension and parses them. If you want to parse files manually, use CompositorSerializer::parseScript.
Format

Several compositors may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ('{', '}') and comments indicated by starting a line with '//' (note: no nested-form comments are allowed). The general format is shown in the example below:
// This is a comment
// Black and white effect
compositor B&W
{
    technique
    {
        // Temporary textures
        texture rt0 target_width target_height PF_A8R8G8B8

        target rt0
        {
            // Render output from previous compositor (or original scene)
            input previous
        }

        target_output
        {
            // Start with clear output
            input none

            // Draw a fullscreen quad with the black and white image
            pass render_quad
            {
                // Renders a fullscreen quad with a material
                material Ogre/Compositor/BlackAndWhite
                input 0 rt0
            }
        }
    }
}

Every compositor in the script must be given a name, which appears on the 'compositor <name>' line before the first opening '{'. This name must be globally unique. It can include path characters (as in the example) to logically divide up your compositors, and also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a string. Names can include spaces but must then be surrounded by double quotes, i.e. compositor "My Name".
The major components of a compositor are the Section 3.2.1 [Compositor Techniques], page 108, the Section 3.2.2 [Compositor Target Passes], page 112 and the Section 3.2.3 [Compositor Passes], page 114, which are covered in detail in the following sections.
3.2.1 Techniques

A compositor technique is much like a material technique (Section 3.1.1 [Techniques], page 21) in that it describes one approach to achieving the effect you're looking for. A compositor definition can have more than one technique if you wish to provide some fallback should the hardware not support the technique you'd prefer to use. Techniques are evaluated for hardware support based on 2 things:

Material support
    All Section 3.2.3 [Compositor Passes], page 114 that render a fullscreen quad use a material; for the technique to be supported, all of the materials referenced must have at least one supported material technique. If they don't, the compositor technique is marked as unsupported and won't be used.
Texture format support
    This one is slightly more complicated. When you request a [compositor texture], page 109 in your technique, you request a pixel format. Not all formats are natively supported by hardware, especially the floating point formats. However, in this case the hardware will typically downgrade the requested texture format to one that it does support - with compositor effects, though, you might want to use a different approach if this is the case. So, when evaluating techniques, the compositor will first look for native support for the exact pixel format you've asked for, and will skip on to the next technique if it is not supported, thus allowing you to define other techniques with simpler pixel formats which use a different approach. If it doesn't find any techniques which are natively supported, it tries again, this time allowing the hardware to downgrade the texture format, and thus should find at least some support for what you've asked for.

As with material techniques, compositor techniques are evaluated in the order you define them in the script, so techniques declared first are preferred over those declared later.

Format: technique { }
Techniques can have the following nested elements:
• [compositor texture], page 109
• [compositor texture ref], page 111
• [compositor scheme], page 111
• [compositor logic], page 111
• 'target' and 'target_output' sections, Section 3.2.2 [Compositor Target Passes], page 112
texture

This declares a render texture for use in subsequent Section 3.2.2 [Compositor Target Passes], page 112.
Format: texture <Name> <Width> <Height> <Pixel Format> [<MRT Pixel Format2>] [<MRT Pixel FormatN>] [pooled] [gamma] [no_fsaa] [<scope>]
Here is a description of the parameters:

Name
    A name to give the render texture, which must be unique within this compositor. This name is used to reference the texture in Section 3.2.2 [Compositor Target Passes], page 112, when the texture is rendered to, and in Section 3.2.3 [Compositor Passes], page 114, when the texture is used as input to a material rendering a fullscreen quad.
Width, Height
    The dimensions of the render texture. You can either specify a fixed width and height, or you can request that the texture is based on the physical dimensions of the viewport to which the compositor is attached. The options for the latter are 'target_width', 'target_height', 'target_width_scaled <factor>' and 'target_height_scaled <factor>' - where 'factor' is the amount by which you wish to multiply the size of the main target to derive the dimensions.

Pixel Format
    The pixel format of the render texture. This affects how much memory it will take, what colour channels will be available, and what precision you will have within those channels. The available options are PF_A8R8G8B8, PF_R8G8B8A8, PF_R8G8B8, PF_FLOAT16_RGBA, PF_FLOAT16_RGB, PF_FLOAT16_R, PF_FLOAT32_RGBA, PF_FLOAT32_RGB, and PF_FLOAT32_R.

pooled
If present, this directive makes this texture ’pooled’ among compositor instances, which can save some memory.
gamma
If present, this directive means that sRGB gamma correction will be enabled on writes to this texture. You should remember to include the opposite sRGB conversion when you read this texture back in another material, such as a quad. This option will be automatically enabled if you use a render_scene pass on this texture and the viewport on which the compositor is based has sRGB write support enabled.
no_fsaa
If present, this directive disables the use of anti-aliasing on this texture. FSAA is only used if this texture is subject to a render_scene pass and FSAA was enabled on the original viewport on which this compositor is based; this option allows you to override that and disable FSAA if you wish.
scope
If present, this directive sets the scope for the texture, i.e. whether it can be accessed by other compositors using the [compositor texture ref], page 111 directive. There are three options: 'local_scope' (the default) means that only the compositor defining the texture can access it; 'chain_scope' means that the compositors after this compositor in the chain can reference its textures; and 'global_scope' means that the entire application can access the texture. This directive also affects the creation of the textures (global textures are created once and thus can't be used with the pooled directive, and can't rely on viewport size).
Example: texture rt0 512 512 PF_R8G8B8A8
Example: texture rt1 target_width target_height PF_FLOAT32_RGB
You can in fact repeat this element if you wish. If you do so, that means that this render texture becomes a Multiple Render Target (MRT), where the GPU writes to multiple textures at once. It is imperative that if you use MRT, the shaders that render to it render to ALL the targets. Not doing so can cause undefined results. It is also important to note that although you can use different pixel formats for each target in an MRT, each one should have the same total bit depth, since most cards do not support independent bit depths. If you try to use this feature on cards that do not support the number of MRTs you've asked for, the technique will be skipped (so you ought to write a fallback technique).

Example: texture mrt_output target_width target_height PF_FLOAT16_RGBA PF_FLOAT16_RGBA chain_scope
texture_ref

This declares a reference to a texture from another compositor to be used in this compositor.
Format: texture_ref <Local Name> <Reference Compositor> <Reference Texture Name>

Here is a description of the parameters:

Local Name
    A name to give the referenced texture, which must be unique within this compositor. This name is used to reference the texture in Section 3.2.2 [Compositor Target Passes], page 112, when the texture is rendered to, and in Section 3.2.3 [Compositor Passes], page 114, when the texture is used as input to a material rendering a fullscreen quad.

Reference Compositor
    The name of the compositor that we are referencing a texture from.

Reference Texture Name
    The name of the texture in the compositor that we are referencing.

Make sure that the texture being referenced is scoped accordingly (either chain or global scope) and placed accordingly during chain creation (if referencing a chain-scoped texture, the compositor must be present in the chain and placed before the compositor referencing it).

Example: texture_ref GBuffer GBufferCompositor mrt_output
scheme

This gives a compositor technique a scheme name, allowing you to manually switch between different techniques for this compositor when it is instantiated on a viewport, by calling CompositorInstance::setScheme.
Format: scheme <Name>
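For instance, a compositor might offer a default technique plus an alternative tagged with a scheme name (the scheme name 'hdr' and the compositor name here are hypothetical), with the alternative selected at runtime via CompositorInstance::setScheme. A minimal sketch:

```
compositor Example/Tint
{
    // Default technique (no scheme name)
    technique
    {
        target_output
        {
            input previous
        }
    }

    // Alternative selected with CompositorInstance::setScheme("hdr")
    technique
    {
        scheme hdr
        target_output
        {
            input previous
        }
    }
}
```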
compositor_logic

This connects a compositor with code that it requires in order to function correctly. When an instance of this compositor is created, the compositor logic will be notified and will have the chance to prepare the compositor's operation (for example, adding a listener).
Format: compositor_logic <Name>

Registration of compositor logics is done by name through CompositorManager::registerCompositorLogic.
3.2.2 Target Passes

A target pass is the action of rendering to a given target, either a render texture or the final output. You can update the same render texture multiple times by adding more than one target pass to your compositor script - this is very useful for 'ping pong' renders between a couple of render textures to perform complex convolutions that cannot be done in a single render, such as blurring.
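As a sketch of the ping-pong idea (the compositor, texture, and material names are all hypothetical), two target passes can bounce the image between a pair of render textures before the final output:

```
compositor Example/PingPongBlur
{
    technique
    {
        texture rtA target_width target_height PF_A8R8G8B8
        texture rtB target_width target_height PF_A8R8G8B8

        // First target pass: pull in the scene (or previous compositor output)
        target rtA
        {
            input previous
        }

        // Blur rtA into rtB
        target rtB
        {
            input none
            pass render_quad
            {
                material Example/BlurH
                input 0 rtA
            }
        }

        // Second target pass on rtA: blur rtB back into it
        target rtA
        {
            input none
            pass render_quad
            {
                material Example/BlurV
                input 0 rtB
            }
        }

        // Send the result to the final output
        target_output
        {
            input none
            pass render_quad
            {
                material Example/Copy
                input 0 rtA
            }
        }
    }
}
```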
There are two types of target pass: the sort that updates a render texture:

Format: target <Name> { }

...and the sort that defines the final output render:

Format: target_output { }
The contents of both are identical; the only real difference is that you can only have a single target_output entry, whilst you can have many target entries. Here are the attributes you can use in a 'target' or 'target_output' section of a .compositor script:
• [compositor target input], page 112
• [only_initial], page 113
• [visibility_mask], page 113
• [compositor lod_bias], page 113
• [material_scheme], page 114
• [compositor shadows], page 113
• Section 3.2.3 [Compositor Passes], page 114
Attribute Descriptions

input

Sets the input mode of the target, which tells the target pass what is pulled in before any of its own passes are rendered.

Format: input (none | previous)
Default: input none
none
The target will have nothing as input, all the contents of the target must be generated using its own passes. Note this does not mean the target will be empty, just no data will be pulled in. For it to truly be blank you’d need a ’clear’ pass within this target.
previous
The target will pull in the previous contents of the viewport. This will be either the original scene if this is the first compositor in the chain, or it will be the output from the previous compositor in the chain if the viewport has multiple compositors enabled.
only_initial
If set to on, this target pass will only execute once, initially, after the effect has been enabled. This can be useful for once-off renders, after which the static contents are used by the rest of the compositor.
Format: only_initial (on | off)
Default: only_initial off
visibility_mask
Sets the visibility mask for any render_scene passes performed in this target pass. This is a bitmask (although it must be specified as decimal, not hex) and maps to SceneManager::setVisibilityMask.
Format: visibility_mask <mask>
Default: visibility_mask 4294967295
lod_bias
Sets the scene LOD bias for any render_scene passes performed in this target pass. The default is 1.0; everything below that means lower quality, everything above means higher quality.
Format: lod_bias <bias>
Default: lod_bias 1.0
shadows
Sets whether shadows should be rendered during any render_scene pass performed in this target pass.
Format: shadows (on | off)
Default: shadows on
material_scheme
If set, indicates the material scheme to use for any render_scene pass. Useful for performing special-case rendering effects.
Format: material_scheme <scheme name>
Default: None
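As an illustration of these target attributes, the 'ping pong' arrangement mentioned at the start of this section could be sketched as two targets that read from each other. This is only a sketch: the texture names rt0/rt1 and the blur material names are illustrative, not part of OGRE.

target rt1
{
    input none
    pass render_quad
    {
        material Examples/Blur0
        input 0 rt0
    }
}

target rt0
{
    input none
    pass render_quad
    {
        material Examples/Blur1
        input 0 rt1
    }
}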
3.2.3 Compositor Passes

A pass is a single rendering action to be performed in a target pass.
Format: 'pass' (render_quad | clear | stencil | render_scene | render_custom) [custom name]
There are five types of pass:

clear
This kind of pass sets the contents of one or more buffers in the target to a fixed value. So this could clear the colour buffer to a fixed colour, set the depth buffer to a certain set of contents, fill the stencil buffer with a value, or any combination of the above.
stencil
This kind of pass configures stencil operations for the subsequent passes. It can set the stencil compare function, operations and reference values for you to perform your own stencil effects.
render_scene
This kind of pass performs a regular rendering of the scene. It will use the [visibility mask], page 113, [compositor lod bias], page 113, and [material scheme], page 114 from the parent target pass.

render_quad
This kind of pass renders a quad over the entire render target, using a given material. You will undoubtedly want to pull in the results of other target passes into this operation to perform fullscreen effects.

render_custom
This kind of pass is just a callback to user code for the composition pass specified in the custom name (and registered via CompositorManager::registerCustomCompositionPass); it allows the user to create custom render operations for more advanced effects. This is the only pass type that requires the custom name parameter.

Here are the attributes you can use in a 'pass' section of a .compositor script:
Available Pass Attributes
• [material], page 115
• [compositor pass input], page 115
• [compositor pass identifier], page 115
• [first render queue], page 116
• [last render queue], page 116
• [compositor pass material scheme], page 116
• [compositor clear], page 116
• [compositor stencil], page 117
material
For passes of type 'render_quad', sets the material used to render the quad. You will want to use shaders in this material to perform fullscreen effects, and use the [compositor pass input], page 115 attribute to map other texture targets into the texture bindings needed by this material.
Format: material <Name>
input
For passes of type 'render_quad', this is how you map one or more local render textures (see [compositor texture], page 109) into the material you're using to render the fullscreen quad. To bind more than one texture, repeat this attribute with different sampler indexes.
Format: input <sampler> <Name> [<MRTIndex>]
sampler
The texture sampler to set; must be a number in the range [0, OGRE_MAX_TEXTURE_LAYERS-1].
Name
The name of the local render texture to bind, as declared in [compositor texture], page 109 and rendered to in one or more Section 3.2.2 [Compositor Target Passes], page 112.
MRTIndex If the local texture that you’re referencing is a Multiple Render Target (MRT), this identifies the surface from the MRT that you wish to reference (0 is the first surface, 1 the second etc). Example: input 0 rt0
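Putting material and input together, a typical fullscreen 'render_quad' pass might look like the following sketch; the material name and the texture name rt0 are illustrative, not part of OGRE.

pass render_quad
{
    material Examples/Compositor/Sharpen
    input 0 rt0
}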
identifier
Associates a numeric identifier with the pass. This is useful for registering a listener with the compositor (CompositorInstance::addListener), and being able to identify which pass is being processed when you get events regarding it. Numbers between 0 and 2^32 are allowed.
Format: identifier <number>
Example: identifier 99945
Default: identifier 0
first_render_queue
For passes of type 'render_scene', this sets the first render queue id that is included in the render. Defaults to the value of RENDER_QUEUE_SKIES_EARLY.
Format: first_render_queue <id>
Default: first_render_queue 0
last_render_queue
For passes of type 'render_scene', this sets the last render queue id that is included in the render. Defaults to the value of RENDER_QUEUE_SKIES_LATE.
Format: last_render_queue <id>
Default: last_render_queue 95
material_scheme
If set, indicates the material scheme to use for this pass only. Useful for performing special-case rendering effects. This will overwrite any scheme set at the target scope as well.
Format: material_scheme <scheme name>
Default: None
Clear Section
For passes of type 'clear', this section defines the buffer clearing parameters.
Format: clear { }
Here are the attributes you can use in a 'clear' section of a .compositor script:
• [compositor clear buffers], page 117
• [compositor clear colour value], page 117
• [compositor clear depth value], page 117
• [compositor clear stencil value], page 117
buffers
Sets the buffers cleared by this pass.
Format: buffers [colour] [depth] [stencil] Default: buffers colour depth
colour_value
Sets the colour used to fill the colour buffer by this pass, if the colour buffer is being cleared ([compositor clear buffers], page 117).
Format: colour_value <red> <green> <blue> <alpha>
Default: colour_value 0 0 0 0
depth_value
Sets the depth value used to fill the depth buffer by this pass, if the depth buffer is being cleared ([compositor clear buffers], page 117).
Format: depth_value <depth>
Default: depth_value 1.0
stencil_value
Sets the stencil value used to fill the stencil buffer by this pass, if the stencil buffer is being cleared ([compositor clear buffers], page 117).
Format: stencil_value <value>
Default: stencil_value 0.0
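Combining these attributes, a 'clear' pass might be written as the following sketch; the specific values are illustrative.

pass clear
{
    buffers colour depth
    colour_value 0 0 0 1
    depth_value 1.0
}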
Stencil Section
For passes of type 'stencil', this section defines the stencil operation parameters.
Format: stencil { }
Here are the attributes you can use in a 'stencil' section of a .compositor script:
• [compositor stencil check], page 118
• [compositor stencil comp func], page 118
• [compositor stencil ref value], page 118
• [compositor stencil mask], page 119
• [compositor stencil fail op], page 119
• [compositor stencil depth fail op], page 119
• [compositor stencil pass op], page 120
• [compositor stencil two sided], page 120
check
Enables or disables the stencil check, thus enabling the use of the rest of the features in this section. The rest of the options in this section do nothing if the stencil check is off.
Format: check (on | off)
comp_func
Sets the function used to perform the following comparison:
(ref_value & mask) comp_func (Stencil Buffer Value & mask)

What happens as a result of this comparison will be one of 3 actions on the stencil buffer, depending on whether the test fails, succeeds but with the depth buffer check still failing, or succeeds with the depth buffer check passing too. You set the actions in the [compositor stencil fail op], page 119, [compositor stencil depth fail op], page 119 and [compositor stencil pass op], page 120 attributes respectively. If the stencil check fails, no colour or depth are written to the frame buffer.
Format: comp_func (always_fail | always_pass | less | less_equal | not_equal | greater_equal | greater)
Default: comp_func always_pass
ref_value
Sets the reference value used to compare with the stencil buffer as described in [compositor stencil comp func], page 118.
Format: ref_value <value>
Default: ref_value 0.0
mask
Sets the mask used to compare with the stencil buffer as described in [compositor stencil comp func], page 118.
Format: mask <value>
Default: mask 4294967295
fail_op
Sets what to do with the stencil buffer value if the result of the stencil comparison ([compositor stencil comp func], page 118) and the depth comparison is that both fail.
Format: fail_op (keep | zero | replace | increment | decrement | increment_wrap | decrement_wrap | invert)
Default: fail_op keep

These actions mean:

keep
Leave the stencil buffer unchanged.

zero
Set the stencil value to zero.

replace
Set the stencil value to the reference value.

increment
Add one to the stencil value, clamping at the maximum value.

decrement
Subtract one from the stencil value, clamping at 0.

increment_wrap
Add one to the stencil value, wrapping back to 0 at the maximum.

decrement_wrap
Subtract one from the stencil value, wrapping to the maximum below 0.

invert
Invert the stencil value.
depth_fail_op
Sets what to do with the stencil buffer value if the stencil comparison ([compositor stencil comp func], page 118) passes but the depth comparison fails.
Format: depth_fail_op (keep | zero | replace | increment | decrement | increment_wrap | decrement_wrap | invert)
Default: depth_fail_op keep
pass_op
Sets what to do with the stencil buffer value if both the stencil comparison ([compositor stencil comp func], page 118) and the depth comparison pass.
Format: pass_op (keep | zero | replace | increment | decrement | increment_wrap | decrement_wrap | invert)
Default: pass_op keep
two_sided
Enables or disables two-sided stencil operations, which means the inverse of the operations applies to back-facing polygons.
Format: two_sided (on | off)
Default: two_sided off
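Putting these attributes together, a 'stencil' pass might look like the following sketch; the specific values are illustrative.

pass stencil
{
    check on
    comp_func always_pass
    ref_value 1
    mask 4294967295
    pass_op replace
    two_sided off
}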
3.2.4 Applying a Compositor

Adding a compositor instance to a viewport is very simple. All you need to do is:
CompositorManager::getSingleton().addCompositor(viewport, compositorName);
Where viewport is a pointer to your viewport, and compositorName is the name of the compositor to create an instance of. By doing this, a new instance of a compositor will be added to a new compositor chain on that viewport. You can call the method multiple times to add further compositors to the chain on this viewport. By default, each compositor which is added is disabled, but you can change this state by calling:
CompositorManager::getSingleton().setCompositorEnabled(viewport, compositorName, enabledOrDisabled);
For more information on defining and using compositors, see Demo Compositor in the Samples area, together with the Examples.compositor script in the media area.
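To tie the preceding sections together, a minimal but complete compositor script might look like the sketch below; the compositor, texture and material names are illustrative and would need to exist in your own media.

compositor Examples/Simple
{
    technique
    {
        texture rt0 target_width target_height PF_R8G8B8

        target rt0
        {
            input previous
        }

        target_output
        {
            input none
            pass render_quad
            {
                material Examples/Simple/Final
                input 0 rt0
            }
        }
    }
}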
3.3 Particle Scripts

Particle scripts allow you to define particle systems to be instantiated in your code without having to hard-code the settings in your source code, allowing a very quick turnaround on any changes you make. Particle systems which are defined in scripts are used as templates, and multiple actual systems can be created from them at runtime.
Loading scripts Particle system scripts are loaded at initialisation time by the system: by default it looks in all common resource locations (see Root::addResourceLocation) for files with the ’.particle’ extension and parses them. If you want to parse files with a different extension, use the ParticleSystemManager::getSingleton().parseAllSources method with your own extension, or if you want to parse an individual file, use ParticleSystemManager::getSingleton().parseScript.
Once scripts have been parsed, your code is free to instantiate systems based on them using the SceneManager::createParticleSystem() method which can take both a name for the new system, and the name of the template to base it on (this template name is in the script).
Format
Several particle systems may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ({}), and comments indicated by starting a line with '//' (note: no nested form comments allowed). The general format is shown below in a typical example:

// A sparkly purple fountain
particle_system Examples/PurpleFountain
{
    material Examples/Flare2
    particle_width 20
    particle_height 20
    cull_each false
    quota 10000
    billboard_type oriented_self

    // Area emitter
    emitter Point
    {
        angle 15
        emission_rate 75
        time_to_live 3
        direction 0 1 0
        velocity_min 250
        velocity_max 300
        colour_range_start 1 0 0
        colour_range_end 0 0 1
    }

    // Gravity
    affector LinearForce
    {
        force_vector 0 -100 0
        force_application add
    }

    // Fader
    affector ColourFader
    {
        red -0.25
        green -0.25
        blue -0.25
    }
}
Every particle system in the script must be given a name, which is the line before the first opening '{'; in the example this is 'Examples/PurpleFountain'. This name must be globally unique. It can include path characters (as in the example) to logically divide up your particle systems, and also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a string.
A system can have top-level attributes set using the scripting commands available, such as ’quota’ to set the maximum number of particles allowed in the system. Emitters (which create particles) and affectors (which modify particles) are added as nested definitions within the script. The parameters available in the emitter and affector sections are entirely dependent on the type of emitter / affector.
For a detailed description of the core particle system attributes, see the list below:
Available Particle System Attributes
• [quota], page 123
• [particle material], page 123
• [particle width], page 124
• [particle height], page 124
• [cull each], page 124
• [billboard type], page 125
• [billboard origin], page 126
• [billboard rotation type], page 127
• [common direction], page 127
• [common up vector], page 128
• [particle renderer], page 124
• [particle sorted], page 125
• [particle localspace], page 125
• [particle point rendering], page 128
• [particle accurate facing], page 129
• [iteration interval], page 129
• [nonvisible update timeout], page 129
See also: Section 3.3.2 [Particle Emitters], page 130, Section 3.3.5 [Particle Affectors], page 137
3.3.1 Particle System Attributes

This section describes the attributes which you can set on every particle system using scripts. All attributes have default values, so all settings are optional in your script.
quota
Sets the maximum number of particles this system is allowed to contain at one time. When this limit is exhausted, the emitters will not be allowed to emit any more particles until some are destroyed (e.g. through their time_to_live running out). Note that you will almost always want to change this, since it defaults to a very low value (particle pools are only ever increased in size, never decreased).

format: quota <max particles>
example: quota 10000
default: 10
material
Sets the name of the material which all particles in this system will use. All particles in a system use the same material, although each particle can tint this material through the use of its colour property.

format: material <name>
example: material Examples/Flare
default: none (blank material)
particle_width
Sets the width of particles in world coordinates. Note that this property is absolute when billboard_type (see below) is set to 'point' or 'perpendicular_self', but is scaled by the length of the direction vector when billboard_type is 'oriented_common', 'oriented_self' or 'perpendicular_common'.

format: particle_width <width>
example: particle_width 20
default: 100
particle_height
Sets the height of particles in world coordinates. Note that this property is absolute when billboard_type (see below) is set to 'point' or 'perpendicular_self', but is scaled by the length of the direction vector when billboard_type is 'oriented_common', 'oriented_self' or 'perpendicular_common'.

format: particle_height <height>
example: particle_height 20
default: 100
cull_each
All particle systems are culled by the bounding box which contains all the particles in the system. This is normally sufficient for fairly locally constrained particle systems where most particles are either visible or not visible together. However, for those that spread particles over a wider area (e.g. a rain system), you may want to actually cull each particle individually to save on time, since it is far more likely that only a subset of the particles will be visible. You do this by setting the cull_each parameter to true.

format: cull_each <true|false>
example: cull_each true
default: false
renderer
Particle systems do not render themselves; they do it through ParticleRenderer classes. Those classes are registered with a manager in order to provide particle systems with a particular 'look'. OGRE comes configured with a default billboard-based renderer, but more can be added through plugins. Particle renderers are registered with a unique name, and you can use that name in this attribute to determine the renderer to use. The default is 'billboard'.

Particle renderers can have attributes, which can be passed by setting them on the root particle system.

format: renderer <renderer name>
default: billboard
sorted
By default, particles are not sorted. By setting this attribute to 'true', the particles will be sorted with respect to the camera, furthest first. This can make certain rendering effects look better at a small sorting expense.

format: sorted <true|false>
default: false
local_space
By default, particles are emitted into world space, such that if you transform the node to which the system is attached, it will not affect the particles (only the emitters). This tends to give the normal expected behaviour, which is to model how real-world particles travel independently from the objects they are emitted from. However, to create some effects you may want the particles to remain attached to the local space the emitter is in and to follow it directly. This option allows you to do that.

format: local_space <true|false>
default: false
billboard_type
This is actually an attribute of the 'billboard' particle renderer (the default), and is an example of passing attributes to a particle renderer by declaring them directly within the system declaration. Particles using the default renderer are rendered using billboards, which are rectangles formed by 2 triangles which rotate to face the given direction. However, there is more than one way to orient a billboard. The classic approach is for the billboard to directly face the camera: this is the default behaviour. However, this arrangement only looks good for particles which are representing something vaguely spherical, like a light flare. For more linear effects like laser fire, you actually want the particle to have an orientation of its own.
format: billboard_type <point|oriented_common|oriented_self|perpendicular_common|perpendicular_self>
example: billboard_type oriented_self
default: point

The options for this parameter are:

point
The default arrangement; this approximates spherical particles and the billboards always fully face the camera.

oriented_common
Particles are oriented around a common, typically fixed direction vector (see [common direction], page 127), which acts as their local Y axis. The billboard rotates only around this axis, giving the particle some sense of direction. Good for rainstorms, starfields etc. where the particles will be travelling in one direction; this is slightly faster than oriented_self (see below).

oriented_self
Particles are oriented around their own direction vector, which acts as their local Y axis. As the particle changes direction, the billboard reorients itself to face this way. Good for laser fire, fireworks and other 'streaky' particles that should look like they are travelling in their own direction.

perpendicular_common
Particles are perpendicular to a common, typically fixed direction vector (see [common direction], page 127), which acts as their local Z axis, with their local Y axis coplanar with the common direction and the common up vector (see [common up vector], page 128). The billboard never rotates to face the camera; you might use a double-sided material to ensure particles are never culled by back-face culling. Good for aureolas, rings etc. where the particles will be perpendicular to the ground; this is slightly faster than perpendicular_self (see below).

perpendicular_self
Particles are perpendicular to their own direction vector, which acts as their local Z axis, with their local Y axis coplanar with their own direction vector and the common up vector (see [common up vector], page 128). The billboard never rotates to face the camera; you might use a double-sided material to ensure particles are never culled by back-face culling. Good for ring stacks etc. where the particles will be perpendicular to their travelling direction.
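For example, a rain-style system using a fixed downward orientation might combine these attributes as in the sketch below; the system and material names are illustrative, not part of OGRE.

particle_system Examples/Rain
{
    material Examples/Droplet
    billboard_type oriented_common
    common_direction 0 -1 0
    cull_each true
}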
billboard_origin
Specifies the point which acts as the origin for all billboard particles; this controls the fine tuning of where a billboard particle appears in relation to its position.

format: billboard_origin <top_left|top_center|top_right|center_left|center|center_right|bottom_left|bottom_center|bottom_right>
example: billboard_origin top_right
default: center

The options for this parameter are:

top_left
The billboard origin is the top-left corner.

top_center
The billboard origin is the center of the top edge.

top_right
The billboard origin is the top-right corner.

center_left
The billboard origin is the center of the left edge.

center
The billboard origin is the center.

center_right
The billboard origin is the center of the right edge.

bottom_left
The billboard origin is the bottom-left corner.

bottom_center
The billboard origin is the center of the bottom edge.

bottom_right
The billboard origin is the bottom-right corner.
billboard_rotation_type
By default, billboard particles will rotate their texture coordinates in accordance with the particle rotation. But rotating texture coordinates has some disadvantages, e.g. the corners of the texture are lost after rotation, and the corners of the billboard are filled with unwanted texture area when using the wrap address mode or sub-texture sampling. This setting allows you to specify another rotation type.
format: billboard_rotation_type <vertex|texcoord>
example: billboard_rotation_type vertex
default: texcoord

The options for this parameter are:

vertex
Billboard particles will rotate their vertices around their facing direction in accordance with the particle rotation. Rotating vertices guarantees that the texture corners exactly match the billboard corners, thus avoiding the disadvantages mentioned above, but it takes more time to generate the vertices.

texcoord
Billboard particles will rotate their texture coordinates in accordance with the particle rotation. Rotating texture coordinates is faster than rotating vertices, but has the disadvantages mentioned above.
common_direction
Only required if [billboard type], page 125 is set to oriented_common or perpendicular_common; this vector is the common direction vector used to orient all particles in the system.

format: common_direction <x> <y> <z>
example: common_direction 0 -1 0
default: 0 0 1
See also: Section 3.3.2 [Particle Emitters], page 130, Section 3.3.5 [Particle Affectors], page 137
common_up_vector
Only required if [billboard type], page 125 is set to perpendicular_self or perpendicular_common; this vector is the common up vector used to orient all particles in the system.

format: common_up_vector <x> <y> <z>
example: common_up_vector 0 1 0
default: 0 1 0
See also: Section 3.3.2 [Particle Emitters], page 130, Section 3.3.5 [Particle Affectors], page 137
point_rendering
This is actually an attribute of the 'billboard' particle renderer (the default), and sets whether or not the BillboardSet will use point rendering rather than manually generated quads.
By default a BillboardSet is rendered by generating geometry for a textured quad in memory, taking into account the size and orientation settings, and uploading it to the video card. The alternative is to use hardware point rendering, which means that only one position needs to be sent per billboard rather than 4 and the hardware sorts out how this is rendered based on the render state.
Using point rendering is faster than generating quads manually, but is more restrictive. The following restrictions apply:
• Only the 'point' orientation type is supported
• Size and appearance of each particle is controlled by the material pass ([point size], page 44, [point size attenuation], page 45, [point sprites], page 44)
• Per-particle size is not supported (stems from the above)
• Per-particle rotation is not supported; this can only be controlled through texture unit rotation in the material definition
• Only 'center' origin is supported
• Some drivers have an upper limit on the size of points they support - this can even vary between APIs on the same card! Don't rely on point sizes that cause the point sprites to get very large on screen, since they may get clamped on some cards. Upper sizes can range from 64 to 256 pixels.

You will almost certainly want to enable both point attenuation and point sprites in your material pass if you use this option.
accurate_facing
This is actually an attribute of the 'billboard' particle renderer (the default), and sets whether or not the BillboardSet will use a slower but more accurate calculation for facing the billboard to the camera. By default it uses the camera direction, which is faster but means the billboards don't stay in the same orientation as you rotate the camera. The 'accurate_facing true' option makes the calculation based on a vector from each billboard to the camera, which means the orientation is constant even whilst the camera rotates.

format: accurate_facing on|off
default: accurate_facing off
iteration_interval
Usually particle systems are updated based on the frame rate; however, this can give variable results with more extreme frame rate ranges, particularly at lower frame rates. You can use this option to make the update frequency a fixed interval, whereby at lower frame rates, the particle update will be repeated at the fixed interval until the frame time is used up. A value of 0 means the default frame time iteration.

format: iteration_interval <secs>
example: iteration_interval 0.01
default: iteration_interval 0
nonvisible_update_timeout
Sets when the particle system should stop updating after it hasn't been visible for a while. By default, particle systems update all the time, even when not in view. This means that they are guaranteed to be consistent when they do enter view. However, this comes at a cost; updating particle systems can be expensive, especially if they are perpetual. This option lets you set a 'timeout' on the particle system, so that if it isn't visible for this amount of time, it will stop updating until it is next visible. A value of 0 disables the timeout and always updates.

format: nonvisible_update_timeout <secs>
example: nonvisible_update_timeout 10
default: nonvisible_update_timeout 0
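The two update-control attributes above might be combined as in the sketch below; the system and material names and the specific values are illustrative.

particle_system Examples/Campfire
{
    material Examples/Flame
    quota 500
    iteration_interval 0.02
    nonvisible_update_timeout 5
}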
3.3.2 Particle Emitters

Particle emitters are classified by 'type', e.g. 'Point' emitters emit from a single point whilst 'Box' emitters emit randomly from an area. New emitters can be added to Ogre by creating plugins. You add an emitter to a system by nesting another section within it, headed with the keyword 'emitter' followed by the name of the type of emitter (case sensitive). Ogre currently supports 'Point', 'Box', 'Cylinder', 'Ellipsoid', 'HollowEllipsoid' and 'Ring' emitters.
It is also possible to ’emit emitters’ - that is, have new emitters spawned based on the position of particles. See [Emitting Emitters], page 137
Particle Emitter Universal Attributes
• [angle], page 131
• [colour], page 131
• [colour range start], page 131
• [colour range end], page 131
• [direction], page 132
• [emission rate], page 132
• [position], page 132
• [velocity], page 132
• [velocity min], page 133
• [velocity max], page 133
• [time to live], page 133
• [time to live min], page 133
• [time to live max], page 133
• [duration], page 133
• [duration min], page 134
• [duration max], page 134
• [repeat delay], page 134
• [repeat delay min], page 134
• [repeat delay max], page 134
See also: Section 3.3 [Particle Scripts], page 121, Section 3.3.5 [Particle Affectors], page 137
3.3.3 Particle Emitter Attributes

This section describes the common attributes of all particle emitters. Specific emitter types may also support their own extra attributes.
angle Sets the maximum angle (in degrees) which emitted particles may deviate from the direction of the emitter (see direction). Setting this to 10 allows particles to deviate up to 10 degrees in any direction away from the emitter’s direction. A value of 180 means emit in any direction, whilst 0 means emit always exactly in the direction of the emitter.
format: angle <degrees>
example: angle 30
default: 0
colour
Sets a static colour for all particles emitted. Also see the colour_range_start and colour_range_end attributes for setting a range of colours. The format of the colour parameter is "r g b a", where each component is a value from 0 to 1, and the alpha value is optional (assumed to be 1 if not specified).

format: colour <r> <g> <b> [<a>]
example: colour 1 0 0 1
default: 1 1 1 1
colour_range_start & colour_range_end
As the 'colour' attribute, except these 2 attributes must be specified together, and indicate the range of colours available to emitted particles. The actual colour will be randomly chosen between these 2 values.

format: as colour
example (generates random colours between red and blue):
colour_range_start 1 0 0
colour_range_end 0 0 1
default: both 1 1 1 1
direction
Sets the direction of the emitter. This is relative to the SceneNode which the particle system is attached to, meaning that, as with other movable objects, changing the orientation of the node will also move the emitter.

format: direction <x> <y> <z>
example: direction 0 1 0
default: 1 0 0
emission_rate
Sets how many particles per second should be emitted. The specific emitter does not have to emit these in a continuous burst - this is a relative parameter, and the emitter may choose to emit all of the second's worth of particles every half-second, for example; the behaviour depends on the emitter. The emission rate will also be limited by the particle system's 'quota' setting.

format: emission_rate <particles per second>
example: emission_rate 50
default: 10
position
Sets the position of the emitter relative to the SceneNode the particle system is attached to.

format: position <x> <y> <z>
example: position 10 0 40
default: 0 0 0