
Basic Morph Syntax #210

Closed
tparisi opened this issue Dec 20, 2013 · 54 comments
@tparisi
Contributor

tparisi commented Dec 20, 2013

Here is an example of the basic morph syntax I am proposing.

First, the animation, consisting of a channel that drives the morph from the TIME input and MORPH output parameters.

"animation_1": {
    "channels": [
        {
            "sampler": "animation_1_morph_sampler",
            "target": "BallUV3-morph"
        }
    ],

    "parameters": {
        "TIME": "accessor_0",   // float buffer w/ times, e.g. 0 0.466667 0.966667
        "MORPH": "accessor_1"   // float buffer w/ output weight values, e.g. 0 1 0
    },

    "samplers": {
        "animation_1_morph_sampler": {
            "input": "TIME",
            "interpolation": "LINEAR",
            "output": "MORPH"
        }
    }
}

Now, the morph controller. The NORMALIZED and RELATIVE methods are taken directly from the COLLADA spec. Do we need these? The default is NORMALIZED. In this example the single morph target has zero weight, i.e., at the resting position the geometry is un-morphed.

"morphs": {
    "BallUV3-morph": {
        "method": "NORMALIZED",   // one of "NORMALIZED" or "RELATIVE"
        "source": "BallUV3_geometry",
        "targets": [
            "target-BallUV3Custom_Morph"
        ],
        "weights": [
            0
        ]
    }
},
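For reference, the two COLLADA blend methods resolve a vertex attribute P from the base B, targets T_i, and weights w_i roughly as follows (my paraphrase of the COLLADA spec; worth double-checking against it):

```latex
\text{NORMALIZED:}\quad P = \Bigl(1 - \sum_i w_i\Bigr)\,B + \sum_i w_i\,T_i
\qquad
\text{RELATIVE:}\quad P = B + \sum_i w_i\,T_i
```

With NORMALIZED the targets are absolute shapes and the base is attenuated so the weights behave like a partition; with RELATIVE the targets act as weighted offsets added on top of the base.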

Finally, the instance of the morph (will appear inside a node):

"instanceMorph": {
    "morph": "BallUV3-morph"
},
@pjcozzi
Member

pjcozzi commented Dec 31, 2013

Thanks @tparisi

  • How do the generated shaders play into this? Couldn't morph targets be implemented as a targeted material parameter that is then used in the vertex shader to blend between two positions/normals/etc.?
  • Will the accessor referenced by MORPH always be scalar, e.g., INT or FLOAT? If so, why is weights in BallUV3-morph an array? Isn't it just the default weight?
  • In BallUV3-morph, what is targets? I don't see target-BallUV3Custom_Morph anywhere else in the example.
  • @fabrobinet and @RemiArnaud will know better than me if we need to include both NORMALIZED and RELATIVE, but couldn't we always do NORMALIZED and convert RELATIVE in the converter?

@fabrobinet
Contributor

I can't fully answer this until I have preliminary support in the converter.
But at least for shaders, yes, you can expect the blending to happen in the vertex shader; at least that's how I imagine it so far (combined with skinning).

@fabrobinet
Contributor

I will try to check OpenCOLLADA's morphing support this week and provide some time estimates.

@pjcozzi
Member

pjcozzi commented Nov 25, 2014

@fabrobinet post 1.0?

@fabrobinet fabrobinet modified the milestones: post 1.0, Spec 1.0 Nov 25, 2014
@fabrobinet
Contributor

yes post 1.0

@tparisi
Contributor Author

tparisi commented Nov 25, 2014

agreed

@pjcozzi pjcozzi removed this from the post 1.0 milestone Aug 27, 2015
@tparisi
Contributor Author

tparisi commented Sep 22, 2015

@pjcozzi I assume morphs and non-linear interpolation are still post-1.0? The schema implies that, so I'm wondering why you removed the "post 1.0" tag...

@pjcozzi
Member

pjcozzi commented Sep 22, 2015

Yes, still post 1.0. There is no post 1.0 tag anymore. Everything not 1.0 is post 1.0. We'll prioritize after we get the spec out.

@tparisi
Contributor Author

tparisi commented Sep 22, 2015

great. thanks


@uklumpp
Contributor

uklumpp commented Dec 6, 2016

  • If so, why is weights in BallUV3-morph an array? Isn't it just the default weight?

Yes, always scalar. I think the assumption is that you may have multiple target geometries in the targets array. You'd then have a default weight for each in the weights array.

  • In BallUV3-morph, what is targets?

A reference to one or many geometries that are blended with the base mesh in a weighted fashion, right?

  • ...couldn't we always do NORMALIZED...

I agree that supporting only the NORMALIZED method of computing the final shape should be sufficient for glTF. It made sense to offer both in COLLADA, but I don't think it's necessary here.

@emilian0
Contributor

The only advantage I see in keeping the RELATIVE mode around is more efficient transmission (displacement targets have lots of zeros, so they compress well).
Sparse storage (#820) is an alternative way to encode morph targets efficiently.

@lexaknyazev
Member

A look from the implementation side (WebGL 1.0 / ES 2.0 caps only):

We've got 16 guaranteed vertex attributes. Positions, normals, tangents, UV_0 and UV_1 (which could be packed), skin weights, and skin joints take at least 6 (if we calculate bi-tangents in shaders).

Remaining 10 could be spent on 5 morph targets (positions + normals) with weights controlled by uniforms.

So, runtime should bind needed morph targets (no more than 5), and animate/blend them via uniform updates.

Is that correct?
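A quick sanity check of that budget, as a runnable sketch (the 6-slot base layout is the assumption from the comment above, not something the spec mandates):

```javascript
// WebGL 1.0 guarantees at least 16 vertex attributes
// (MAX_VERTEX_ATTRIBS >= 16, per the OpenGL ES 2.0 minimums).
const guaranteedAttribs = 16;

// Assumed base mesh layout from the comment above: position, normal, tangent,
// UV_0 and UV_1 (possibly packed), skin weights, skin joints ~= 6 slots
// (bi-tangents derived in the shader).
const baseAttribs = 6;

// Each morph target consumes two more slots: a position delta and a normal delta.
const attribsPerTarget = 2;

const maxTargets = Math.floor((guaranteedAttribs - baseAttribs) / attribsPerTarget);
console.log(maxTargets); // 5, matching the estimate above
```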

@emilian0
Contributor

Yes, thank you for bringing up the implementation conversation.
Your numbers are sound and match this article.
A quick look at the Three.js sources seems to indicate that they only animate 4 morph targets at a time (8 when there are no normals).

@lexaknyazev
Member

With no asset-provided shaders, we need to agree on one particular layout (e.g. 4/4) or introduce more parameters.

@lexaknyazev
Member

There's a more modern approach based on transform feedback. This should be supported with WebGL 2.0 (ES 3.0).

Should we design morph targets with that in mind?

Also, to use morph targets, we need to bind them to animations, and that could be tricky: each keyframe must either contain all possible targets (and the engine will bind the top 4-5 targets with non-zero weight), or explicitly bind each target at each keyframe. One more option is to forbid mixing more than 4-5 targets in one animation channel.

@emilian0
Contributor

Yes, I believe we should design morph targets with more than one API in mind (WebGL 1, WebGL 2, and more advanced APIs as well).
I consider glTF an API-independent way to transfer 3D assets for visualization.
(Note: 4-5 active targets are probably OK in some cases, but I have worked with way more than that.)
Rather than picking a specific API and designing how to best feed it, I believe we want to consider multiple possible implementations and make sure that we can enable them.
A general approach for that would be to keep the format simple and avoid baking low-level implementation decisions into the format. Rather, let's have the rendering engine make those decisions and scale back when appropriate.
For instance, if the current environment only supports WebGL 1.0, the rendering engine will have to sort out the 4-5 "most active" targets at a given frame and animate with those (as you suggested). If the current environment supports more advanced APIs, we could enable a better experience, although performance could be another reason to scale down (perhaps 4 active targets are still fine for characters in the background, and cheaper to compute).
All of this to say: let's keep multiple implementations in mind, and let's move decisions to the rendering engine when possible (rather than baking them into the format). Do you agree?

@lexaknyazev
Member

we want to consider multiple possible implementations and make sure that we can enable them

Of course, we do. So, let's go through different morph workloads:

  • WebGL 1.0

    • When the overall per-model morph target count is 5 or less, WebGL 1.0 engines can unpack sparse arrays to full-length vertex attributes and do the morphing in GLSL. No additional per-frame sorting is needed.

    • When the overall per-model morph target count is more than 5, but the active target count is 5 or less, WebGL 1.0 engines can unpack sparse arrays to full-length vertex attributes, select the most important targets, bind them, and do the morphing in GLSL. Such per-frame CPU sorting doesn't look good to me, so maybe we could add some hints to the asset.

    • When the active per-model morph target count is more than 5, things become more expensive for WebGL 1.0 engines. One approach is to generate a vertexID attribute, store the sparse morph data in textures (this won't work everywhere; some GPUs don't support vertex texture units), and do the morphing in GLSL using texture look-ups.

    • There could also be a fully-CPU fallback with per-frame vertex buffer updates (probably the worst approach).

  • WebGL 2.0 and beyond

    • Implementations could apply an almost arbitrary number of morph targets via a transform feedback loop, or preload the needed morph targets into a uniform buffer and loop through them in GLSL.

It seems to me that there could be a noticeable performance gap in handling complex morph/skin animations when running on different runtimes.
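The fully-CPU fallback in the last WebGL 1.0 bullet above can be sketched in a few lines. This assumes the RELATIVE blending discussed in this thread (base plus weighted deltas) and flat x/y/z position arrays; both are my assumptions rather than anything the thread has pinned down:

```javascript
// CPU fallback: blend morph target deltas into base positions once per frame,
// then re-upload the result with gl.bufferData / gl.bufferSubData.
function morphPositions(base, targetDeltas, weights) {
  const out = Float32Array.from(base);
  targetDeltas.forEach((deltas, t) => {
    const w = weights[t];
    if (w === 0) return; // skip inactive targets entirely
    for (let i = 0; i < out.length; i++) {
      out[i] += w * deltas[i];
    }
  });
  return out;
}

// One vertex, two targets at weights 0.5 and 0.25:
const blended = morphPositions([0, 0, 0], [[1, 0, 0], [0, 4, 0]], [0.5, 0.25]);
console.log(Array.from(blended)); // [0.5, 1, 0]
```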

@emilian0
Contributor

Yes, performance and/or experience gap.
Thank you for the overview above; it provides valuable background for this conversation.
Now one question for you: what can we do, at the level of the glTF specification, to help engines scale their morph target implementation depending on the supported APIs, hardware capabilities, and scene complexity and layout?

@lexaknyazev
Member

First thoughts on the glTF side, maybe not 100% correct.

It should be clear how many morph targets a mesh has and what the maximum number of simultaneously active targets is.
Animations should specify used/unused targets, so the runtime can skip manual sorting.

@lexaknyazev
Member

Why do we have the morph.target.weight property? Can't the initial instance state be defined by the node.morph field?

Is it reasonable to use the same weight for all mesh primitives? Could only one primitive of the mesh have morph targets?


Let's keep only the RELATIVE mode. It implies fewer runtime computations, and a transform-feedback-based implementation would also be simpler than with NORMALIZED.


On morph's self-containedness:

  • Exclusive usage of the node.mesh field for mesh instantiation (both morphed and non-morphed) makes implementation simpler and allows "gradual" development and debugging.
  • I'd avoid extra redundancy (node.mesh/morph.source) so as not to add more conformance/validation rules.
  • morph.source is connected to morph.targets:
    • it makes no sense to use a different morph.source with the same set of targets (that would require a new morph object anyway);
    • but it could be useful to have several morph objects based on the same mesh, to use different sets of targets on different mesh instances and thus reduce runtime processing.

It looks like morph.targets is mostly a mesh-related property, while everything related to weights is bound to the "mesh instance", i.e., the node. What do you think of extending the mesh.primitive object and putting the targets data there (like accessor was extended with sparse data)?

@emilian0
Contributor

@pjcozzi re: "is target.name truly valuable?". I see little to no value in specifying target names for a runtime (not editing) format, so it makes sense to me to move it to an extension. Sounds good?

@pjcozzi
Member

pjcozzi commented Feb 22, 2017

Instead of moving name to an extension, just remove it. Applications could put something like this in an extras property if they need it.

@emilian0
Contributor

@pjcozzi re: "blending vertex colors or UVs". I am checking with our artists what their experience is with morph targets that blend UVs or vertex colors; I have none. I see how that could be useful, but I also see how it could make implementation (especially on WebGL 1.0) much harder (@lexaknyazev feel free to chime in). So, unless I hear something from our artists (or anyone objects), I suggest supporting only POSITION and NORMALS for now. Sounds good?

@emilian0
Contributor

@lexaknyazev: The idea is to have morph.target.weight set the default weights of a given morph.
node.morph.weights instead overrides that when instancing the morph. This way you can:

  • decide to instance a morph without specifying weights (so it will default to morph.target.weight)
  • or instantiate it with different weights specified in node.morph.weights

Is this OK with you, @lexaknyazev? Thanks!

@emilian0
Contributor

@lexaknyazev I am pleased we agreed on the RELATIVE mode only!

@emilian0
Contributor

@lexaknyazev I was thinking the same: extend the concept of mesh to include the case of morphable meshes. This is quite a change; I will spec it out and ping all of you for an additional pass.

@pjcozzi
Member

pjcozzi commented Feb 23, 2017

unless I hear something from our artists (or anyone objects), I suggest supporting only POSITION and NORMALS for now. Sounds good?

Yes, thanks!

@emilian0
Contributor

emilian0 commented Feb 23, 2017

@tparisi @pjcozzi @lexaknyazev, here is the update on morph targets. Please take a look and let me know what you think. Unfortunately, we don't have many iteration cycles left before tomorrow.
I believe the only "invasive" change I am suggesting concerns the animation of morph targets. Please let me know what you think and whether you have better ideas.

Morph Targets

Morph Targets are defined in glTF 2.0 as an extension to the mesh concept.
A morph target is a deformable mesh where primitives' attributes are obtained by adding the original attributes to a weighted sum of target attributes (this operation corresponds to COLLADA's RELATIVE morph-target blending method).
The targets property of a primitive is an array of targets; each target is a dictionary mapping a primitive attribute to target displacements. Currently only two attributes ('POSITION' and 'NORMAL') are supported. The size of the targets array is the same for all primitives and matches the size of the weights array; all primitives are required to list morph targets in the same order.
The weights array is optional and stores the default weight associated with each target; in the absence of animation, the primitives' attributes are resolved using these weights. When this property is absent, the default target weights are assumed to be zero.

Here is a sample JSON defining a morph target:

{
    "meshes": [
        {
            "primitives": [
                {
                    "attributes": {
                        "NORMAL": 25,
                        "POSITION": 23,
                        "TEXCOORD_0": 27
                    },
                    "indices": 21,
                    "material": 3,
                    "mode": 4,
                    "targets": [
                        {
                            "NORMAL": 35,
                            "POSITION": 33
                        },
                        {
                            "NORMAL": 45,
                            "POSITION": 43
                        }
                    ]
                }
            ]
        }
    ]
}

Instantiating a morph on a node together with skinning:

{
    "mesh": 1,
    "skeletons": [21],
    "skin": 0,
    "weights": [0.0, 0.5],
    "targets": [0, 1]
}

The (optional) weights array is only valid when the instantiated mesh is a morph target. This array specifies the weights of the instantiated morph target and therefore has the same size as the weights array of the referenced morph target.
The (optional) activeTargets array lists the indices of the most active targets (largest weights). It can be used by the engine to select the targets to bind to the vertex shader. (This should be the same as running quickselect on the weights array, which in my opinion should be fast enough.)
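The "select the most active targets" step could look like the following sketch (the function name and the sort-based selection are mine; as noted, quickselect would avoid the full O(n log n) sort):

```javascript
// Pick the indices of the k targets with the largest absolute weights,
// e.g. to decide which targets to bind to the vertex shader this frame.
function selectActiveTargets(weights, k) {
  return weights
    .map((w, index) => ({ w: Math.abs(w), index }))
    .sort((a, b) => b.w - a.w)
    .slice(0, k)
    .map(entry => entry.index);
}

console.log(selectActiveTargets([0.0, 0.5, 0.1, 0.9], 2)); // [3, 1]
```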

Animating a morph.
Animation needs to be extended to support arbitrarily sized output vectors (not only vec3/vec4; 4 active morph targets is not an acceptable limitation).
One way to do that is to add additional vector types such as VECx. Is this reasonable to you? Better ideas?

Sample of animation of both skin and morph:

    "animations": [
        {
            "name": "Animate all properties of one node with different samplers",
            "channels": [
                {
                    "sampler": 0,
                    "target": {
                        "id": 1,
                        "path": "rotation"
                    }
                },
                {
                    "sampler": 1,
                    "target": {
                        "id": 2,
                        "path": "translation"
                    }
                },
                {
                    "sampler": 2,
                    "target": {
                        "id": 1,
                        "path": "weights"
                    }
                },
                {
                    "sampler": 3,
                    "target": {
                        "id": 1,
                        "path": "activeTargets"
                    }
                }
            ]
      }]

@lexaknyazev
Member

currently only two attributes ('POSITION' and 'NORMAL') are supported

How should tangent space be reconstructed for normal maps to work with a morphed mesh? We must provide exact math there.

Here's a relevant excerpt from GPU Gems:

We ultimately chose to have our vertex shader apply five blend shapes that modified the position and normal. The vertex shader would then orthonormalize the neutral tangent against the new normal (that is, subtract the collinear elements of the new normal from the neutral tangent and then normalize) and take the cross product for the binormal.
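For concreteness, that orthonormalization step can be sketched as follows (the plain [x, y, z] arrays and the function names are mine; note the bitangent's sign depends on the handedness convention an engine uses):

```javascript
// Small vector helpers.
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const scale = (a, s) => [a[0] * s, a[1] * s, a[2] * s];
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const normalize = (a) => scale(a, 1 / Math.sqrt(dot(a, a)));

// Gram-Schmidt: remove the component of the neutral tangent that is
// collinear with the morphed normal, renormalize, then take the cross
// product for the bitangent (as in the GPU Gems quote above).
function morphedTangentFrame(neutralTangent, morphedNormal) {
  const n = normalize(morphedNormal);
  const t = normalize(sub(neutralTangent, scale(n, dot(neutralTangent, n))));
  const b = cross(n, t);
  return { normal: n, tangent: t, bitangent: b };
}
```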


All primitives are required to list morph targets in the same order.

Maybe also clarify that all primitives must have the same number of targets (do they?).


One way to do that is to add additional vector types such as VECx

Since reading and sorting will be done on CPU, we can leave them SCALAR and specify data layout depending on the number of targets.
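If I read the SCALAR idea right, a weights sampler output would hold keyframeCount × targetCount scalars, and the runtime slices out one group per keyframe. A minimal sketch (the grouping order, all weights of keyframe 0 first, is my assumption):

```javascript
// De-interleave the weights of one keyframe from a flat SCALAR output array.
function weightsAtKeyframe(samplerOutput, targetCount, keyframe) {
  const start = keyframe * targetCount;
  return samplerOutput.slice(start, start + targetCount);
}

// 3 keyframes animating 2 targets:
const samplerOutput = [0, 0,  1, 0,  0, 1];
console.log(weightsAtKeyframe(samplerOutput, 2, 1)); // [1, 0]
```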


4 active morph targets is not an acceptable limitation

I understand that, but it's almost inevitable with WebGL 1.0 (three.js supports 8 targets if they contain only positions, no normals).

With WebGL 2.0, engines could use an iterative approach via transform feedback and apply targets in batches of 4 (e.g., 12 active targets need 3 passes).

Only with OpenGL ES 3.1+ (no web equivalent yet) will it be possible to access buffer data directly from a shader and apply any number of targets in one pass.

@emilian0
Contributor

@lexaknyazev agree on the first two points.

regarding


Since reading and sorting will be done on CPU, we can leave them SCALAR and specify data layout depending on the number of targets.


That is OK; it is a little more work for the runtime, though, since it prevents decoupling between animations and morph targets (the runtime can't blend the animation curves before it knows which morph target they are for). Anyway, this simplifies the format a lot; I am on board with this.

Considering these changes, should I go ahead and make a pull request?

@emilian0
Contributor

@lexaknyazev I would like to leave tangent/binormal computation out of the draft and open for discussion. Different shader implementers seem to use different techniques to compute them, and I don't see a reason to pick the one above (aside from the fact that it was published in GPU Gems). Sounds good?

@lexaknyazev
Member

that prevents decoupling between animations and morph target (it can't blend the animation curves before it knows what morph target they are for)

I'm not following. glTF doesn't yet support any kind of animation blending. Could you provide an example of what we could lose?

I don't see a reason to pick the one above

I used it just as an example. We haven't settled on tangent-space storage for non-morphed meshes yet.

It seems like different shaders implementers use different techniques to compute them

That's fine. I think we should provide a "default" technique in a spec appendix, or at least give some hints.

@emilian0
Contributor

Great, I agree on providing a "default" technique in the appendix. By decoupling I meant: the evaluation of the morph target animation curves (linear, or Bezier in the future) requires knowledge of the destination morph target object (i.e., the number of weights). Anyway, this is a detail; I am not concerned about it. Let's roll out the draft!

emilian0 pushed a commit that referenced this issue Feb 24, 2017
…escription. It looks like the animation example is old (glTF 1.0); I am leaving it as is to avoid conflicts
This was referenced Feb 24, 2017
@emilian0
Contributor

@lexaknyazev please take a look at #852 and let me know if you have any concerns. Thanks

@pjcozzi
Member

pjcozzi commented Jun 15, 2017

Updated in #826
