Shader Database

The ncompileshaders.cc file is described as a command-line tool that reads shaders from the 'shaders.xml' file and compiles them. Looking at the source, you can see that it includes the following header files.

The tools/ncompileshaders.cc file:


#include "kernel/nfileserver2.h"
#include "shaderdb/nshaderdefinitionparser.h"
#include "shaderdb/nshaderwriter.h"
#include "shaderdb/nshaderparser.h"

Considering 'Generating Shaders From HLSL Fragments', which was also introduced in ShaderX, it seems likely that nshaderdefinitionparser parses shader-definition information from a file defining the shaders, and that nshaderwriter writes shaders based on the parsed information. The fact that the separate per-shader .fx files of previous releases were suddenly merged into a single shaders.fx file also supports this idea.

In real development a great many shaders are used, so a convenient way to combine and manage large numbers of shaders is needed. The currently released version of Nebula offers no solution for this, but as files like ncompileshaders.cc suggest, Radon Labs probably already uses one internally. I look forward to a clear answer on this in the next release.
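Reading between the lines, the compile step might work roughly as sketched below. This is purely speculative: ShaderDefinition, FragmentLibrary, and WriteShader are hypothetical stand-ins, not the actual shaderdb classes.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of what nshaderdefinitionparser/nshaderwriter might do:
// a shader definition names the fragments it is composed of, and the writer
// concatenates the fragment code into one effect source (a single shaders.fx).
struct ShaderDefinition {
    std::string name;                    // e.g. "Static"
    std::vector<std::string> fragments;  // fragment identifiers
};

// fragment id -> HLSL snippet (in reality these would be parsed from files)
using FragmentLibrary = std::map<std::string, std::string>;

std::string WriteShader(const ShaderDefinition& def, const FragmentLibrary& lib)
{
    std::string src;
    for (const auto& id : def.fragments) {
        auto it = lib.find(id);
        if (it != lib.end()) {
            src += it->second + "\n";
        }
    }
    src += "technique t" + def.name + " { /* passes referencing the fragments */ }\n";
    return src;
}
```

A real tool would of course emit full vertex/pixel shader functions and pass blocks; the point is only that one definition file can drive the generation of one combined effect file.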

Tragnarion has also mentioned something similar before. The following is the corresponding post from Bugzilla.


If we call the Nebula2 custom shader in the current nmaxtoolbox material editor the front-end, then what is needed is a back-end that makes it easy to compose the shaders actually used by the engine, without changing the front-end that graphic designers are comfortable working with. (Unreal 3, for example, uses shader fragments to provide artists with a tool that has a WYSIWYG interface.) The following approaches can be considered for such a back-end.

As for the overall pipeline, since FX Composer 2.0 lets users freely modify shader annotations and the like, using FX Composer 2.0 is also worth considering.

------------------------------------------------------------------------

A dynamic shader creation method based on shader fragments, suggested by Tragnarion Studios. It was originally posted on Bugzilla, where it is available as #329.




At public request, I'm releasing a clean version of the design document for the original material system developed at Tragnarion Studios. I've updated the original document (which was closely tied to the scene model as it stood some months ago) to the last code merge, but some parts still need rewriting.
Still, the concepts are there.

An updated version of the proposed design is currently at work here at Tragnarion, but the code can't be publicly released for two reasons. First, the nebula2 scene model we are using is a modified version of the open source one, and I wouldn't like to introduce additional confusion, so I've kept references to scene traversal as generic as possible. Second, it is highly tailored to the needs of our content creation team, so it would be completely useless to the community. What I'm after is your opinion on a solution to everyone's needs, not to make the public agree with our in-house solution.

What I can release, though, is a primitive and untested version of the shader generator script, based on some old version of the N2 public shader database, and using a very limited number of material attributes. It is not intended for actual use, but as an illustration of what can be accomplished with this method, and of the complexity of maintaining such a material system, which, as we have experienced, can quickly reach critical mass.

Lately, I've been pointed to several documents describing other approaches to building shader code for materials, eg.

* http://www.talula.demon.co.uk/hlsl_fragments/hlsl_fragments.html
(as published in the ShaderX 3 book)

While the approach I'm proposing is useful and was easy to implement, I think a generic solution should look more like the one proposed by Hargreaves. The use of D3DX shader fragments could also be considered. And I'm willing to hear any other ideas you'd like to propose.

Remember that this is not a ready-to-use solution, but a proposal. I'll be available for any comments, suggestions and clarifications both here and in the IRC channel (look for me as trag-ma). Remember also that I've already developed this module once, and I don't have the time to completely re-implement it for the open source community. So, there will be some work to be done, either officially by Radon Labs or as a contributed module by open source contributors, before we get this working and usable.






Nebula Materials

A "Material" in a graphics engine is the set of visual features that describe how a surface is to be rendered. This document describes the current support for materials in nebula2 and the set of requirements and improvements that a full-featured material system could provide. Also, it proposes an architecture for materials in the N2 scene model.

The Nebula2 shader system so far

The Nebula2 shader system has changed significantly in the last merge with RadonLabs code base. This section summarizes the current set of features in N2 support for materials, and how they are used from the scene renderer.

Currently, the render path acts as the only database for materials supported in Nebula. It abstracts a scene node implementing some geometry rendering (nMaterialNode) from the specific implementation of that geometry's surface (a shader file). This way, a material node in the scene simply states the identifier of the shader it intends to use to render that surface, thus decoupling it from its actual implementation; eg. the "static" shader in a nMaterialNode doesn't refer to any specific shader file or technique. It won't be until render path traversal that this is determined.

The render path contains the set of shader meta-information that maps a shader identifier ("static") to a shader implementation: the folder ("home:data/shaders/2.0"), file ("shaders:shaders.fx") and technique or techniques ("tStaticDepth", "tStaticColorShadow", "tStaticColor") that are executed to implement the "static" material. At scene traversal, geometry is sorted for optimal shader rendering, and the render path is fully traversed (passes, phases, sequences). When a specific surface (a material node) is rendered, the required shader parameters (texture maps, etc.) are assigned and the geometry is finally drawn.
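As a mental model, the indirection the render path provides can be sketched like this. The C++ names are illustrative assumptions (the real render path is XML-driven and far richer); only the mapping itself comes from the text above.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: a shader alias used by nMaterialNode ("static")
// maps to the concrete implementation -- the shader file plus the
// techniques run by the render path's passes.
struct ShaderMapping {
    std::string file;                     // e.g. "shaders:shaders.fx"
    std::vector<std::string> techniques;  // one per render-path pass/phase
};

using RenderPathDatabase = std::map<std::string, ShaderMapping>;

// Resolve an abstract shader alias to its implementation at traversal time.
// Returns nullptr when the render path knows nothing about the alias.
const ShaderMapping* Resolve(const RenderPathDatabase& db, const std::string& alias)
{
    auto it = db.find(alias);
    return (it != db.end()) ? &it->second : nullptr;
}
```

The key property is that the material node only ever stores the alias; the file and techniques are looked up late, at traversal time.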

There are a number of problems related to this approach:

* Because the shader itself doesn't provide any information on the operation it implements, some kind of meta-information needs to be maintained to be able to generate the set of values and textures required by that shader. In the "static" material case above: the DiffMap0 and BumpMap0 textures, the MatEmissive, MatDiffuse, etc. surface components, and the LightDiffuse, LightSpecular, etc. light color components. Currently, the shaders.xml file is where this information on which shader parameters are required is kept (along with some tips for rendering GUI controls and labels in a user-friendly way). Exporters and plugins also need some way to know the vertex components required by a given shader.

* Maintaining both the shader code (.fx files) _and_ the material meta-description (shaders.xml) is hard and complex. Radon Labs has done a huge amount of work encapsulating common operations in reusable methods that can be shared by different techniques. Still, the main vertex and pixel shader code must be hand-maintained, and lots of code is duplicated. A single change in the interface of one of the shared functions must be propagated to all vs* and ps* functions that use it. The use of uniform parameters (hdr, shadow) that create variations of a single shader function for free is a big step, but there is still a huge amount of code to track when a new material attribute is introduced.

* The environment factors required by the shader implementation are fixed when the shader is coded: the number and type of lights, fog, environment and shadow maps, etc. Of course, you could always fill the missing elements with default values, but then you'd hurt performance with unnecessary calculations. Although Nebula2 abstracts the material identifier ("static") from the actual shader code, it is still a one-to-one mapping when resolved. This way, all instances of the same material use the same shader when rendered.

Material Architecture

In a shader-centered engine like Nebula, materials have a different meaning than in other engines. Materials used to be sets of well-defined properties for a given surface that were handled by the graphics API in a fixed way. Using programmable shaders, materials are defined as a set of parameters and a set of shaders using them in order to render the geometry. The problem with this kind of architecture is that building shaders can quickly become a repetitive task, prone to code duplication and hard to maintain.

Another problem with surfaces described by a shader is that not only is the surface statically defined, but so are its requirements from the other parts of the scene the shader interacts with and retrieves parameters from, ie. lights, fog and other environment elements. Surfaces provided this way impose a static set of requirements on the scene around the geometry, a set that can't change during the game. As a blatant example, a shape can't be drawn lit by either 1 or 2 lights depending on how many lights there are in the stage: the number and type of lights is fixed at the time the material is defined.

The requirements for this new material system to overcome these limitations are:

* __High level of abstraction.__ Materials are described, instead of provided. A material consists of a set of attributes describing how a surface should look; later, at the scene level, specific parameters for several numerical properties will be defined just as they were until now. The exact way in which material attributes are implemented is abstracted from the material definition, and from the scene level using them.

* __Environment interaction.__ Rendering a surface requires not only a material but all environment factors influencing the shape: lights, fog, shadows, etc. A shader-oriented rendering architecture means multiple variations on the same shader when several environmental factors are combined. Thus, the material-to-shader mapping can't be done in a 1-to-1 basis.

* __Reusability and extensibility.__ Down to shader level, code for generating all required shaders from material definitions shouldn't be defined over and over, instead being reused for all cases. Also, when new material attributes are to be defined, or when new capabilities from the hardware are available, the material system should integrate new features.

The following sections describe how all of these requirements are addressed and implemented in this material system.

-= Materials, Servers, Builders and Shaders =-

The material subsystem exposes the following classes:

* __nMaterial__ is a generic attribute holder for materials. It allows for specifying an arbitrary number of named attributes in the form: ".setattribute name value". How these parameters are handled depends on the specific material builder (see below). Material attributes can be arranged in a hierarchy for organizational purposes.

* __nMaterialTree__ implements a set of shaders organized as a decision tree that can be traversed to select a given shader based on several decision values retrieved from nodes implementing environment factors. Thus, material trees introduce an additional level of indirection into the shader architecture.

* __nMaterialBuilder__ implements dynamic building of the shader(s) for a material from its set of attributes. An nMaterialBuilder can be defined for every gfx server as needed. Currently, only the nD3D9MaterialBuilder implementation is provided, based on D3D9 .fx files (but a different one could be defined to implement fixed-function materials, or use any other interface, like Cg or GLSL shaders). The material builder is responsible for composing all shaders for a given material and building the decision tree. It is the only class that accesses the internals of material trees.

* __nMaterialServer__ provides access to materials and builders, and it is the server from which shaders are retrieved by the scene. All materials must be registered with this server prior to being used from the scene. They can all be loaded at the beginning of the application main loop, or created on demand.

The relationships between these classes can be summarized as follows:

* A __nMaterialNode__ is assigned a material by its identifier.

* At resource loading, it resolves a reference to the __nMaterial__ object.

* If the __nMaterial__ doesn't contain a full shader description, it is loaded through the __nMaterialServer__, using its assigned __nMaterialBuilder__, which resolves:

** the set of scene passes needed to implement the material.
** the set of cases in the decision tree.
** the shader filename, technique and specific code for every defined case in the decision tree.

* From that moment on, nMaterialNode acts just like it did before, except that when its shader object for a pass is requested, it resolves it first using the corresponding __nMaterialShader__ for the pass, and the set of environment factors from the surrounding scene.

* When the actual shader has been selected this way, for every geometry instance, it is retrieved and used just like the one in the usual nMaterialNode: all requests will be redirected to the active shader, including setting parameters and execution.

Material Trees

The idea of material trees is to create a composite shader class that stores an array of references to other shaders. It also implements a decision tree driven by the values of specific selectors. Each node of the tree must provide an implementation, including the root, which should implement the shader for the no-factors case.
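A minimal sketch of this composite idea, assuming string selector types and values (the real nMaterialTree would use fourccs and nShader2 references; all names here are illustrative):

```cpp
#include <map>
#include <string>
#include <utility>

// Sketch of the decision-tree idea behind nMaterialTree. Each node carries a
// shader and children keyed by (selectorType, selectorValue). Selection walks
// down as long as a matching child exists, so the root covers the no-factors
// case and each extra matching selector descends one level.
struct MaterialTreeNode {
    std::string shader;  // stand-in for an nShader2 reference
    std::map<std::pair<std::string, std::string>, MaterialTreeNode> children;
};

struct MaterialTree {
    MaterialTreeNode root;
    // selector values collected from environment nodes before selection
    std::multimap<std::string, std::string> selectors;

    void SetType(const std::string& type, const std::string& value) {
        selectors.emplace(type, value);
    }

    const std::string& SelectShader() const {
        const MaterialTreeNode* node = &root;
        for (const auto& s : selectors) {
            auto it = node->children.find(s);
            if (it != node->children.end()) {
                node = &it->second;  // descend; unmatched selectors are ignored
            }
        }
        return node->shader;
    }
};
```

In a real implementation the selectors would be cleared and re-collected per render context, since the same tree serves every instance of the material.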

-= Case study =-

As an example, think of the following set of different environment factors:

 fog
 light

which could be combined to render a shape in the following six cases, each determining a shader like the one described (this is a simplified scheme):

 root               a shader implementing a material with no fog, no lights
  +1light           a shader implementing a material with no fog, 1 light
   +2lights         a shader implementing a material with no fog, 2 lights
  +fog              a shader implementing a material with fog, no lights
   +fog+1light      a shader implementing a material with fog, 1 light
    +fog+2lights    a shader implementing a material with fog, 2 lights

We have limited this case to 2 light sources, but in the general case it would be necessary to specify the type of both fog and lights. Other factors that could fit into this scheme are light/shadow maps, or even geometry deformations, which would allow reusing shaders for differently deformed surfaces.

Fortunately for us, this array of shaders will be automatically generated from a custom material definition that only tells how a surface should be drawn. In fact, if the surface specifies that no dynamic lighting will be used (as with lightmapped surfaces), all light combinations disappear from the previous decision tree. The specifics of building shaders from materials are implemented in a material builder.
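The enumeration the builder would perform for this example can be sketched as follows. EnumerateCases is a hypothetical helper, not part of the proposal; it just shows how the six cases above fall out of two factors, and how dropping a factor shrinks the list automatically.

```cpp
#include <string>
#include <vector>

// Enumerate every decision-tree case from the environment factors a material
// declares: fog on/off crossed with 0..maxLights lights. A material that
// declares no fog (or no dynamic lighting) simply contributes fewer cases.
std::vector<std::string> EnumerateCases(bool usesFog, int maxLights)
{
    std::vector<std::string> cases;
    const int fogStates = usesFog ? 2 : 1;
    for (int fog = 0; fog < fogStates; ++fog) {
        for (int lights = 0; lights <= maxLights; ++lights) {
            std::string label = fog ? "fog" : "nofog";
            label += "_" + std::to_string(lights) + "lights";
            cases.push_back(label);  // one shader variant per label
        }
    }
    return cases;
}
```

With fog and up to 2 lights this yields exactly the six cases of the tree above; a lightmapped material (no dynamic lights, maxLights = 0) yields only the fog split.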

-= Materials in the scene pipeline =-

When a scene is rendered, all nodes implementing environment factors report a generic type that identifies them (eg. light, fog, etc.) to all material surfaces, and then every material surface can select a specific shader from its internal decision tree. Once the shader is selected, all calls to the active shader to set parameters and execute shader passes can be performed as usual.

The scene pipeline for materials will be the following (distributed along the whole scene system):

    materialNode->SetMaterial(material);

    shapeNode->Attach(sceneServer, renderContext);
        materialNode->GetPasses();
            material->GetPasses();

    shapeNode->Render(sceneServer, renderContext);
        materialNode->GetShaderObject(sceneServer, renderContext);
            materialTree = material->GetMaterialTree(fourcc);
            ... get selectors from environment nodes
            materialTree->SetType(type, value);
            materialTree->SetType(type, value);
            ... and select the shader for the selector types
            shader = materialTree->SelectShader();
        materialNode->Render(sceneServer, renderContext);
            gfxServer->SetShader(shader);
   
    ... render scene lights, fog, etc.
    shader->SetInt(intParam);
    shader->SetFloat(floatParam);
    ...
    shader->Begin();
    shader->Pass();
    shader->Pass();
    ...

The process just described means the following:

* nMaterial describes the attributes for a surface in a descriptive way. It is then assigned to a nMaterialNode.

* nMaterialTree allows selecting and setting the active shader in the Graphics Server, so that active scene nodes can access it in the usual way.

* nMaterialNode can be assigned a nResourceLoader specific for materials. When all testing has been done, all shaders can be generated, and the nMaterialTree can load the shaders directly instead of re-creating them every time (which is slow).

* After the material tree is resolved, nMaterialNode can render the shader in the usual way.

-= Resolving a material tree =-

A material tree implements a set of "regular" shaders organized as a decision tree. Each node in the tree is keyed by a selector parameter and a value. When the material is resolved, it collects from the rest of the scene a set of selector values, from which it finds the node in the decision tree providing a shader that implements the set of operations for displaying the material in that specific context.

Collecting environmental factors from the rest of the scene is done prior to actual rendering, and before the node organizing geometry for optimal shader rendering collects the current shader from the decision tree. If the shader object the scene knows for a material node were the material tree itself, it would mistakenly assume that two shapes with the very same material but in different environmental contexts are the same, and so both would be drawn using the same 'primitive' shader no matter what. This is not what we want.

In order to select the right shader from the material tree, and then properly sort geometry by shader, we need access to the selected shader once it has been selected. And so, the material node exposes as its "shader object" not the material tree itself, but the selected shader __inside__ the material tree. As a first consequence, the material tree interface is greatly simplified, because it isn't necessary to wrap the rest of the shader functions: Begin, End, Pass, SetFloat, SetMatrix, etc.

The only problem is that the material node needs to resolve the material tree(s) before any geometry is drawn. That is, it must resolve the active shader for all passes. This is done through a specific GetShaderObject function that takes the usual sceneServer and renderContext parameters. It allows changing shaders for the same shape depending on its render context, and the result is stored as an attribute in the scene server, just like model transforms are. Unlike the old scene model, shaders are no longer immutable for a single node from one render context to the next.

The GetShaderObject method in nMaterialNode is responsible for traversing the scene server and collecting parameters from all kinds of environment nodes. That means that every such node needs to expose these parameters through its common shader parameter interface for the material node to read, which in turn means adding some values to the environment nodes for that purpose. Examples:

    .settype LightType pntl

    nLightNode      LightType       pntl (point), dirl (directional), ltng (lightning)
    nSpotLightNode  LightType       spot (spotlight)
    nLightNode      ProjectedMap    lmap (lightmap)
    nFogNode        FogMode         lnrf (linear), expf (exponential), xp2f (exp squared), flyr (layered)
    nShadowNode     ProjectedMap    smap (shadowmap)

The material node could know which variables are used in the material decision tree, and retrieve only the ones it needs from the node's variable context. NOTE: floh himself changed from variable contexts to a fixed shader parameter array for performance reasons. Maybe we should do the same in this case?
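The selector values in the table above ('pntl', 'spot', 'lnrf', ...) are four-character codes. A minimal sketch of the usual fourcc packing follows; the actual Nebula macro may differ, eg. in byte order:

```cpp
#include <cstdint>
#include <string>

// Pack four characters into a 32-bit code, first character in the low byte.
constexpr std::uint32_t MakeFourCC(char a, char b, char c, char d)
{
    return  static_cast<std::uint32_t>(a)
         | (static_cast<std::uint32_t>(b) << 8)
         | (static_cast<std::uint32_t>(c) << 16)
         | (static_cast<std::uint32_t>(d) << 24);
}

// Convert a selector string like "pntl" to its fourcc.
// Assumes the string has at least four characters, as all codes above do.
std::uint32_t FourCCFromString(const std::string& s)
{
    return MakeFourCC(s[0], s[1], s[2], s[3]);
}
```

This is why the script interface can take readable strings while the engine compares cheap 32-bit integers internally.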

The general case in material tree is as follows:

    for (every subpass of the current)
        * access every individual node in the scene server for the linked render contexts.
        * get all of its environment values and directly assign them to the material tree
        * the material itself will dump the ones it doesn't need.
    return selectShader();

As a first working approach that's fine, but two optimizations will be needed:

* Only passes with real options need to be resolved. eg. the depth shader will usually be a single shader, and so it doesn't need to be resolved.

* Only passes that are actually active in the scene server need to be resolved. If a material tree has shaders implementing passes for several hardware or user profiles, only the ones required by the current set of passes in the scene server need to be resolved.

-= Collecting shader parameters =-

Finally, the material node for drawing the shape will be traversed at the right time, and will be responsible for rendering all lights, etc. That could be as simple as triggering the render of all subpasses, as done in the GetShaderObject function. But here the problem gets a little more complicated.

When collecting parameters from nodes that can occur multiple times by their very nature, they must be rendered in the right order, depending on how each light type is handled in the decision tree, or whether it is handled at all. For example, if a material must take fog into account, but not lights, only nodes implementing fog must be considered. Moreover, if a shader has been built for the combination of two lights __of different types__, it is crucial to keep each one in the right place in the light parameter arrays: ModelLightPos, LightAmbient, LightDiffuse, LightSpecular. Quite surely, we don't want to mess these up.

Another strange case occurs if more lights than the shader accepts are actually influencing a shape. This means that some lights must be dismissed when traversing the scene pass. Although we can assume that no more render contexts than supported will be linked externally, they still must be assigned to the right places. They could simply report their type just as they would in Direct3D, but I don't think that's the general solution.

In the current Nebula implementation, it's the GfxServer that collects lights, and the light nodes are just containers for them. Shader parameters in the light node are then just a way to fill in the right fields of the light object. When rendered, the light node adds the light to the graphics server, which is responsible for passing the right parameters to the shader just as it does with matrices. In fact, matrices related to lights are computed in the GfxServer and passed to the current shader (although this code isn't used).

(a solution to the problem of multiple lights was proposed in this list some months ago: browse the archives for "A nShader2 extension proposal for multiple lights" and "Multi-light support")

Material Attributes

-= Describing shaders =-

This is a prototype of the kind of attributes that could be supported by a hypothetical nD3D9MaterialBuilder implementation. More attributes can be added to each specific section.

* Scene properties: Opaqueness, RenderPriority
* Deformations: skinned, swinging, billboard, ...
* Environment interaction: diffuseLighting, specularLighting, ...
* Texture maps: bumpMap, clipMap, lightMap, emissiveMap, diffuseMap, shadowMap, ...

__ Environment interaction __

These are the elements that could describe some environment types.

* Lights: point, directional, spotlight.
* Textures: Area light, shadow map.
* Fog: Linear, exponential, layered, animated.

The following material attributes would be related to the interaction of materials and environment:

* Dynamic lighting/shadowing.
* Affected by fog.
* ...


-= Building shaders =-

The ugliest part of the material subsystem is where shaders are built for a specific shader platform, eg. D3D9 FX shaders using HLSL. The specific nD3D9MaterialBuilder is to be defined in the most extensible and general fashion possible, just as shader attributes are.

Using the simple nMaterialBuilder interface, shaders for implementing materials are built in three steps:

* determine the set of passes needed, based on both the material and platform capabilities. Assign the resulting set of passes directly to the referenced nMaterial.

* for a given rendering pass, create the whole set of cases for building a decision tree. For each case, build an abstract shader description that can be used in the following step for building a specific .fx file.

* finally, build the code for every .fx file from the shader description. This step must be repeated as many times as needed for every shader in the decision tree, and should be unaware of the pass or the material it is dealing with.

A Material builder implements the previous steps like this:

1. Set of passes

This depends on both the material attributes and the gfx server. The script can access the material builder to add passes to the current material:

    builder.addpass "dept"
    builder.addpass "colr"
    builder.addpass "shdv"
    ...

2. Build a set of shader attributes

This is already a direct shader representation describing how a shader is to be built. It is very similar to attributes, but now with everything in platform-specific shader terms. It has two parts:

* Fixed shader options. These are the ones that will exist for all cases in the material tree, and could be things like:

    depthwrite
    depthfunc
    alphablend
    map stages
    ...

These options are assigned to the material builder through:
    builder.setoption depthwrite true
    builder.setoption depthfunc lessequal
    ...

* Case shader options. These are the set of cases from which the builder must build the decision tree. It's basically an array with a variable set of case variables:

    FogMode
    LightType
    LightType

And a set of case values:
    AmbientFog PointLight PointLight
    AmbientFog SpotLight PointLight
    SpotLight PointLight
    PointLight PointLight

These cases are assigned to the material builder through:

    builder.begincasevar FogMode
        builder.addcasevalue AmbientFog
        builder.begincasevar LightType
            builder.addcasevalue SpotLight
            builder.addcasevalue PointLight
            builder.begincasevar LightType
                builder.addcasevalue PointLight
            builder.endcasevar
        builder.endcasevar
    builder.endcasevar

Each of these cases goes through a set of nodes, each of which must be filled with a shader implementing all options traversed up to that point. For example, the root node must provide a shader for the case where neither fog nor lights are present. It's up to the case designer to ensure that no two cases are equal: the tree is traversed checking options always in the same order, namely the order in the case variables array.
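One plausible way to implement the begincasevar/addcasevalue/endcasevar commands is a stack of open case variables, as sketched below. CaseBuilder and CaseNode are hypothetical names, not the actual builder classes:

```cpp
#include <string>
#include <vector>

// One node per case variable: its name, the values it can take, and nested
// case variables opened while it was the innermost one.
struct CaseNode {
    std::string variable;             // e.g. "LightType"
    std::vector<std::string> values;  // e.g. {"SpotLight", "PointLight"}
    std::vector<CaseNode> children;
};

// Implements begincasevar/addcasevalue/endcasevar with a stack of the
// currently open case variables.
class CaseBuilder {
public:
    void BeginCaseVar(const std::string& var) {
        CaseNode node{var, {}, {}};
        if (stack.empty()) {
            roots.push_back(node);
            stack.push_back(&roots.back());
        } else {
            stack.back()->children.push_back(node);
            stack.push_back(&stack.back()->children.back());
        }
    }
    void AddCaseValue(const std::string& value) { stack.back()->values.push_back(value); }
    void EndCaseVar() { stack.pop_back(); }
    const std::vector<CaseNode>& Roots() const { return roots; }

private:
    std::vector<CaseNode> roots;
    std::vector<CaseNode*> stack;
};
```

Driving this with the exact script sequence from the example above reproduces the FogMode / LightType / LightType nesting shown there.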

Of course, if the current pass OR the material attributes state that some of these cases aren't relevant, they are simply omitted. eg. if the surface sets nofog, the FogMode case variable is removed from the list, but the rest of the cases are built just the same. If no ambient lighting is enabled, or this is a no-light pass (eg. depth, or emissive), light case variables are ignored just the same.

Other examples of case variables, if pass implements them:
    ProjectedMap

Case values:
    ShadowMap
    LightMap

To allow the decision tree to be traversed, all abstract shader nodes could be given a set of environment descriptors:

nLightNode
    .setenv LightType PointLight (this would be a string converted to a fourcc)
nSpotLightNode
    .setenv LightType SpotLight
nFogNode
    .setenv FogMode AmbientFog
nShadowNode
    .setenv ProjectedMap ShadowMap
nAreaLightNode
    .setenv ProjectedMap LightMap
...

3. Build the shader

This step is fed a specific case in the tree, and so it is unaware of either the render pass or the decision node it is implementing: to it, all shader attributes come from the same place, fantastically mixed:

    depthwrite
    depthfunc
    alphablend
    map stages
    ...
    linearfog
    spotlight
    pointlight

Building shaders through this organized set of steps is thus much more modular.

Material Pipeline

Materials can be completely characterized, processed and recreated at several points in the scene pipeline:

* Offline generation of material trees. This process can be performed by some offline tool (or the exporter itself) that generates shaders for all materials and cases, and saves a file for the decision tree. This way, shaders don't need to be completely recreated at startup unless they have to be. For compatibility reasons, material nodes would still support the usual .setmaterial method. The decision tree can now be directly specified through the material's script interface.

* Also, materials can be defined and created at runtime from the editor itself. With the proper interface, this will allow changing or editing the material assigned to some geometry.

Even if they are dynamically created, shaders for materials are saved to their corresponding .fx files, so that they can be recovered in case of device loss and the like. Ideally, the builder tool would allow saving both the shaders and the material, so that they don't need to be regenerated again.

-= Materials persisted =-

A nMaterialNode will accept a .setmaterial command (just like the current .setshader). A material is loaded with a complete set of shaders and a decision tree used at runtime to decide which shader to use for each geometry. The material assigned to the material node can load a material tree for each pass. This way, no nMaterialBuilder needs to be created to manufacture shaders for scenes authored in a modelling tool, as these will be generated in the export process. A material can be persisted to explicitly state the set of material trees for each pass.

When a nMaterial is assigned to the nMaterialNode, it is resolved using its path. When the material tree is assigned, it is resolved just like a shader. The following sample code shows how:

^
    # new nscenenode /usr/scene/
    # sel /usr/scene/
    new nshapenode shape0
        sel default
        # ...
        .setmaterial /usr/materials/default
    sel ..
   
    # new nroot /usr/materials
    # sel /usr/materials
    new nmaterial default
        sel default
        .beginpass "dept"
        .setshader "shaders:materials/default_dept.fx"
        .endpass
        .beginpass "colr"
        .setshader "shaders:materials/default_colr.fx"
        .beginnode "LightType" "LightType" "pntl"
        .setshader "shaders:materials/default_colr_pntl.fx"
        .endnode
        .beginnode "LightType" "LightType" "spot"
        .setshader "shaders:materials/default_colr_spot.fx"
        .endnode
        .endpass
    sel ..
^

NOTE: please keep in mind that this document is based on the old (pre-September RL merge) N2 material model. I've updated as many references as possible, but the concept of pass shader needs to be severely updated here. -MA

This way of specifying shaders for materials would be compatible with the usual way of assigning materials. The material builder, either at export time or at runtime, builds the shader files directly on disk and assigns them to the material, which is now responsible for loading them when required.

The material builder will act more like a regular builder, and instead of building a material tree, it builds the decision tree and shaders, then saves them all. The material tree is assigned to the material node just as a regular shader, but with a specific loader. The material tree will be stored as a C-like struct that can be easily parsed.
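As a purely illustrative guess at such a C-like, easily parsed form (the actual format is not specified in this document, so the line layout and field names below are assumptions):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical flat persisted form for a material tree: one node per line,
// "depth selectorType selectorValue shaderFile", trivially parsed back into
// a node list whose depth field encodes the tree structure.
struct PersistedNode {
    int depth;                  // nesting level in the decision tree
    std::string selectorType;   // e.g. "LightType" ("none" for the root)
    std::string selectorValue;  // e.g. "pntl"
    std::string shaderFile;     // e.g. "shaders:materials/default_colr_pntl.fx"
};

std::vector<PersistedNode> ParseMaterialTree(const std::string& text)
{
    std::vector<PersistedNode> nodes;
    std::istringstream in(text);
    PersistedNode n;
    while (in >> n.depth >> n.selectorType >> n.selectorValue >> n.shaderFile) {
        nodes.push_back(n);
    }
    return nodes;
}
```

A loader like this matches the "specific loader" idea above: the material node hands the file to the loader, which rebuilds the decision tree without invoking any material builder at runtime.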

-= Material database =-

Another important feature would be allowing the material builder to build a database mapping materials, with their descriptive attributes, to final shaders. This would allow the material builder to reuse the code for a shader if one with the same attributes has already been built. Of course, this is only useful for the material builder used in the exporter or the editor; no material builder will be required at all in game mode.

The material builder works only with material attributes, and when saving, it can add the just-manufactured material to a shader database, besides saving it to a material tree file.


materials.tcl.txt

by kimsama | 2007/01/26 21:43 | Nebula Croquis
