Copyedit #7

Merged · merged 1 commit · Nov 26, 2017
4 changes: 2 additions & 2 deletions gltfTutorial/gltfTutorial_002_BasicGltfStructure.md
@@ -67,7 +67,7 @@ As shown in the image above, there are two types of objects that may contain suc

## Reading and managing external data

- Reading and processing a glTF asset starts with parsing the JSON structure. After the structure has been parsed, the [`buffer`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/#reference-buffer) and [`image`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/#reference-image) objects are available in the top-level `buffers` and `images` arrays, respectively. Each of these objects may refer blocks of binary data. For further processing, this data is read into memory. Usually, the data will be be stored in an array, so that they may be looked up using the same index that is used for referring to the `buffer` or `image` object that they belong to.
+ Reading and processing a glTF asset starts with parsing the JSON structure. After the structure has been parsed, the [`buffer`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/#reference-buffer) and [`image`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/#reference-image) objects are available in the top-level `buffers` and `images` arrays, respectively. Each of these objects may refer to blocks of binary data. For further processing, this data is read into memory. Usually, the data will be stored in an array so that it may be looked up using the same index that is used for referring to the `buffer` or `image` object that it belongs to.
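
A minimal sketch of this loading step, assuming Node.js and buffers whose `uri` points to an external file (data URIs and GLB are not handled; all names are illustrative):

```javascript
// Parse the glTF JSON and read the binary data of each buffer into an
// array whose indices match those of the top-level "buffers" array.
const fs = require("fs");
const path = require("path");

function loadGltfWithBuffers(gltfPath) {
  const gltf = JSON.parse(fs.readFileSync(gltfPath, "utf8"));
  const baseDir = path.dirname(gltfPath);
  // bufferData[i] contains the bytes of gltf.buffers[i]
  const bufferData = (gltf.buffers || []).map((buffer) =>
    fs.readFileSync(path.resolve(baseDir, buffer.uri))
  );
  return { gltf, bufferData };
}
```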


## Binary data in `buffers`
@@ -96,7 +96,7 @@ An [`image`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/
```javascript
   ...
}
```

- The reference is given as a URI that usually points to a PNG or JPG file. These formats significantly reduce the size of the files, so that they may efficiently be transferred over the web.In some cases, the `image` objects may not refer to an external file, but to data that is stored in a `buffer`. The details of this indirection will be explained in the [Textures, Images, and Samplers](gltfTutorial_016_TexturesImagesSamplers.md) section.
+ The reference is given as a URI that usually points to a PNG or JPG file. These formats significantly reduce the size of the files so that they may efficiently be transferred over the web. In some cases, the `image` objects may not refer to an external file, but to data that is stored in a `buffer`. The details of this indirection will be explained in the [Textures, Images, and Samplers](gltfTutorial_016_TexturesImagesSamplers.md) section.
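
For illustration, the two kinds of references could look as follows (a sketch; the file name and the `bufferView` index are made up):

```javascript
"images" : [
  {
    "uri" : "exampleTexture.png"
  },
  {
    "bufferView" : 3,
    "mimeType" : "image/png"
  }
],
```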



2 changes: 1 addition & 1 deletion gltfTutorial/gltfTutorial_003_MinimalGltfFile.md
@@ -139,7 +139,7 @@ The `buffer`, `bufferView`, and `accessor` objects provide information about the

A [`buffer`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/#reference-buffer) defines a block of raw, unstructured data with no inherent meaning. It contains a `uri`, which can either point to an external file that contains the data, or it can be a [data URI](gltfTutorial_002_BasicGltfStructure.md#binary-data-in-data-uris) that encodes the binary data directly in the JSON file.

- In the example file, the second approach is used: There is a single buffer, containing 44 bytes, and the data of a this buffer is encoded as a data URI:
+ In the example file, the second approach is used: there is a single buffer, containing 44 bytes, and the data of this buffer is encoded as a data URI:

```javascript
"buffers" : [
   ...
```
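
A minimal sketch of decoding such a data URI into raw bytes, assuming Node.js and the base64 form that glTF buffers use (the helper name is illustrative):

```javascript
function decodeDataUri(uri) {
  const prefix = "data:application/octet-stream;base64,";
  if (!uri.startsWith(prefix)) {
    throw new Error("Not a base64-encoded binary data URI");
  }
  return Buffer.from(uri.slice(prefix.length), "base64");
}

// For the example, this yields the 44 bytes of the buffer:
// decodeDataUri(gltf.buffers[0].uri).length === 44
```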
14 changes: 7 additions & 7 deletions gltfTutorial/gltfTutorial_005_BuffersBufferViewsAccessors.md
@@ -126,11 +126,11 @@ Image 5c illustrates how the raw data of a `buffer` is structured using `bufferV

### Data interleaving

- The data of the attributes that are stored in a single `bufferView` may be stored as an *Array-Of-Structures*. A single `bufferView` may, for example, contain the data for vertex positions and for vertex normals in an interleaved fashion. In this case, the `byteOffset` of an accessor defines the start of the first relevant data element for the respective attribute, and the `bufferView` defines an additional `byteStride` property. This is the number of bytes between the start of one element of its accessors, and the start of the next one. An example of how interleaved position- and a normal attributes are stored inside a `bufferView` is shown in Image 5d.
+ The data of the attributes that are stored in a single `bufferView` may be stored as an *Array-Of-Structures*. A single `bufferView` may, for example, contain the data for vertex positions and for vertex normals in an interleaved fashion. In this case, the `byteOffset` of an accessor defines the start of the first relevant data element for the respective attribute, and the `bufferView` defines an additional `byteStride` property. This is the number of bytes between the start of one element of its accessors, and the start of the next one. An example of how interleaved position and normal attributes are stored inside a `bufferView` is shown in Image 5d.

<p align="center">
<img src="images/aos.png" /><br>
<a name="aos-png"></a>Image 5d: Interleaved acessors in one buffer view
<a name="aos-png"></a>Image 5d: Interleaved acessors in one buffer view.
</p>
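
A minimal sketch of how a reader could locate element `i` of an accessor inside such an interleaved `bufferView` (assuming `byteStride` is defined; the function name is illustrative):

```javascript
function elementByteOffset(accessor, bufferView, i) {
  // Start of the first element of this attribute, relative to the buffer:
  const start = (bufferView.byteOffset || 0) + (accessor.byteOffset || 0);
  // Consecutive elements of this accessor are byteStride bytes apart:
  return start + i * bufferView.byteStride;
}
```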


@@ -143,7 +143,7 @@ An `accessor` also contains `min` and `max` properties that summarize the conten
## Sparse accessors


- With version 2.0, the concept of *sparse accessors* was introduced in glTF. This is a special representation of data that allows a very compact storage of multiple data blocks that only have few different entries. For example, when there is geometry data that contains vertex positions, then this geometry data may be used for multiple objects. This may be achieved by referring to the same `accessor` from both objects. If the vertex positions for both objects are mostly the same, but differ for few vertices, then it is not necessary to store the whole geometry data twice. Instead, it is possible to store the data only once, and use a sparse accessor to only store the vertex positions that differ for the second object.
+ With version 2.0, the concept of *sparse accessors* was introduced in glTF. This is a special representation of data that allows very compact storage of multiple data blocks that have only a few different entries. For example, when there is geometry data that contains vertex positions, this geometry data may be used for multiple objects. This may be achieved by referring to the same `accessor` from both objects. If the vertex positions for both objects are mostly the same and differ for only a few vertices, then it is not necessary to store the whole geometry data twice. Instead, it is possible to store the data only once, and use a sparse accessor to store only the vertex positions that differ for the second object.

The following is a complete glTF asset, in embedded representation, that shows an example of sparse accessors:

@@ -230,10 +230,10 @@ The result of rendering this asset is shown in Image 5e:

<p align="center">
<img src="images/simpleSparseAccessor.png" /><br>
<a name="simpleSparseAccessor-png"></a>Image 5e: The result of rendering the simple sparse accessor asset
<a name="simpleSparseAccessor-png"></a>Image 5e: The result of rendering the simple sparse accessor asset.
</p>

- The example contains two accessors. One for the indices of the mesh, and one for the vertex positions. The one that refers to the vertex positions defines an additional `accessor.sparse` property, which contains the information about the sparse data substitution that should be applied:
+ The example contains two accessors: one for the indices of the mesh, and one for the vertex positions. The one that refers to the vertex positions defines an additional `accessor.sparse` property, which contains the information about the sparse data substitution that should be applied:


```javascript
   ...
```
@@ -264,12 +264,12 @@ The example contains two accessors. One for

This `sparse` object itself defines the `count` of elements that will be affected by the substitution. The `sparse.indices` property refers to a `bufferView` that contains the indices of the elements which will be replaced. The `sparse.values` refers to a `bufferView` that contains the actual data.

- In the example, the original geometry data is stored in the `bufferView` with index 1. It describes a rectangular array of vertices. The `sparse.indices` refer to the `bufferView` with index 2, which contains are the indices `[8, 10, 12]`. The `sparse.values` refers to the `bufferView` with index 3, which contains new vertex positions, namely `[(1,2,0), (3,3,0), (5,4,0)]`. The effect of applying the corresponding substitution is shown in Image 5f:
+ In the example, the original geometry data is stored in the `bufferView` with index 1. It describes a rectangular array of vertices. The `sparse.indices` refer to the `bufferView` with index 2, which contains the indices `[8, 10, 12]`. The `sparse.values` refers to the `bufferView` with index 3, which contains new vertex positions, namely, `[(1,2,0), (3,3,0), (5,4,0)]`. The effect of applying the corresponding substitution is shown in Image 5f.


<p align="center">
<img src="images/simpleSparseAccessorDescription.png" /><br>
<a name="simpleSparseAccessorDescription-png"></a>Image 5f: The substitution that is done with the sparse accessor
<a name="simpleSparseAccessorDescription-png"></a>Image 5f: The substitution that is done with the sparse accessor.
</p>
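
A minimal sketch of applying such a substitution, here for 3D vertex positions stored as a flat array of floats (function and variable names are illustrative):

```javascript
function applySparse(basePositions, sparseIndices, sparseValues) {
  const result = basePositions.slice();
  for (let k = 0; k < sparseIndices.length; k++) {
    const target = sparseIndices[k];
    // Replace the three components of the affected vertex position:
    for (let c = 0; c < 3; c++) {
      result[3 * target + c] = sparseValues[3 * k + c];
    }
  }
  return result;
}

// With indices [8, 10, 12], the positions of vertices 8, 10, and 12 are
// replaced by (1,2,0), (3,3,0), and (5,4,0), respectively.
```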


18 changes: 9 additions & 9 deletions gltfTutorial/gltfTutorial_010_Materials.md
@@ -6,32 +6,32 @@ Previous: [Meshes](gltfTutorial_009_Meshes.md) | [Table of Contents](README.md)

The purpose of glTF is to define a transmission format for 3D assets. As shown in the previous sections, this includes information about the scene structure and the geometric objects that appear in the scene. But a glTF asset can also contain information about the *appearance* of the objects; that is, how these objects should be rendered on the screen.

- There are different possible representations for the properties of a material, and the *shading model* describes how these properties are processed. Simple shading models, like the [Phong](https://en.wikipedia.org/wiki/Phong_reflection_model) or [Blinn-Phong](https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_shading_model) are directly supported by common graphics APIs like OpenGL or WebGL. These shading models are built upon a set of basic material properties. For example, the material properties involve information about the color of diffusely reflected light (often in form of a texture), the color of specularly reflected light, and a shininess parameter. Many file formats contain exactly these parameters. For example, [Wavefront OBJ](https://en.wikipedia.org/wiki/Wavefront_.obj_file) files are combined with `MTL` files that contain this texture- and color information. Renderers can read this information and render the objects accordingly. But in order to describe more realistic materials, more sophisticated shading- and material models are required.
+ There are different possible representations for the properties of a material, and the *shading model* describes how these properties are processed. Simple shading models, like the [Phong](https://en.wikipedia.org/wiki/Phong_reflection_model) or [Blinn-Phong](https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_shading_model), are directly supported by common graphics APIs like OpenGL or WebGL. These shading models are built on a set of basic material properties. For example, the material properties involve information about the color of diffusely reflected light (often in the form of a texture), the color of specularly reflected light, and a shininess parameter. Many file formats contain exactly these parameters. For example, [Wavefront OBJ](https://en.wikipedia.org/wiki/Wavefront_.obj_file) files are combined with `MTL` files that contain this texture and color information. Renderers can read this information and render the objects accordingly. But in order to describe more realistic materials, more sophisticated shading and material models are required.
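
As a rough sketch of how such a shading model combines these properties, here is a simplified Phong computation for a single light (all names are illustrative; vectors are plain `[x, y, z]` arrays):

```javascript
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const scale = (v, s) => v.map((x) => x * s);
const add = (a, b) => a.map((x, i) => x + b[i]);
// Reflection of the light direction l about the normal n: 2(n·l)n - l
const reflect = (l, n) => add(scale(n, 2 * dot(n, l)), scale(l, -1));

// n: surface normal, l: direction to the light, v: direction to the viewer
// (all normalized); diffuseColor and specularColor are [r, g, b] arrays.
function phong(n, l, v, diffuseColor, specularColor, shininess) {
  const diffuse = Math.max(dot(n, l), 0);
  const specular = Math.pow(Math.max(dot(reflect(l, n), v), 0), shininess);
  return add(scale(diffuseColor, diffuse), scale(specularColor, specular));
}
```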

## Physically-Based Rendering (PBR)

To allow renderers to display objects with a realistic appearance under different lighting conditions, the shading model has to take the *physical* properties of the object surface into account. There are different representations of these physical material properties. One that is frequently used is the *metallic-roughness-model*. Here, the information about the object surface is encoded with three main parameters:

- - The *base color*, which is the "main" color of the object surface
- - The *metallic* value. This is a parameter that describes how much the reflective behavior of the material resembles that of a metal
- - The *roughness* value, indicating how rough the surface is, affecting the light scattering
+ - The *base color*, which is the "main" color of the object surface.
+ - The *metallic* value. This is a parameter that describes how much the reflective behavior of the material resembles that of a metal.
+ - The *roughness* value, indicating how rough the surface is, affecting the light scattering.

The metallic-roughness model is the representation that is used in glTF. Other material representations, like the *specular-glossiness-model*, are supported via extensions.

The effects of different metallic and roughness values are illustrated in this image:

<p align="center">
<img src="images/metallicRoughnessSpheres.png" /><br>
<a name="metallicRoughnessSpheres-png"></a>Image 10a: Spheres with different metallic- and roughness values
<a name="metallicRoughnessSpheres-png"></a>Image 10a: Spheres with different metallic- and roughness values.
</p>

- The base color, metallic and roughness properties may be given as single values and are then applied to the whole object. In order to assign different material properties to different parts of the object surface, these properties may also be given in form of textures. This allows modeling a wide range of real-world materials with a realistic appearance.
+ The base color, metallic, and roughness properties may be given as single values and are then applied to the whole object. In order to assign different material properties to different parts of the object surface, these properties may also be given in the form of textures. This makes it possible to model a wide range of real-world materials with a realistic appearance.

Depending on the shading model, additional effects can be applied to the object surface. These are usually given as a combination of a texture and a scaling factor:

- - An *emissive* texture describes the parts of the object surface that emit light with a certain color
- - The *occlusion* texture can be used to simulate the effect of parts of the objects self-shadowing each other
- - The *normal map* is a texture that is applied to modulate the surface normal in a way that allows simulating finer geometric details, without the cost of a higher mesh resolution.
+ - An *emissive* texture describes the parts of the object surface that emit light with a certain color.
+ - The *occlusion* texture can be used to simulate the effect of objects self-shadowing each other.
+ - The *normal map* is a texture applied to modulate the surface normal in a way that makes it possible to simulate finer geometric details without the cost of a higher mesh resolution.

glTF supports all of these additional properties, and defines sensible default values for the cases in which these properties are omitted.
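
For illustration, a material that uses these additional properties could look as follows (a sketch; the texture indices are made up):

```javascript
"materials" : [
  {
    "emissiveTexture" : { "index" : 0 },
    "emissiveFactor" : [ 1.0, 1.0, 1.0 ],
    "occlusionTexture" : { "index" : 1, "strength" : 1.0 },
    "normalTexture" : { "index" : 2, "scale" : 1.0 }
  }
],
```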

4 changes: 2 additions & 2 deletions gltfTutorial/gltfTutorial_011_SimpleMaterial.md
@@ -4,7 +4,7 @@ Previous: [Materials](gltfTutorial_010_Materials.md) | [Table of Contents](READM

The examples of glTF assets that have been given in the previous sections contained a basic scene structure and simple geometric objects. But they did not contain information about the appearance of the objects. When no such information is given, viewers are encouraged to render the objects with a "default" material. And as shown in the screenshot of the [minimal glTF file](gltfTutorial_003_MinimalGltfFile.md), depending on the light conditions in the scene, this default material causes the object to be rendered with a uniformly white or light gray color.

- This section will start with an example of a very simple material, and explain the effect of the different material properties.
+ This section will start with an example of a very simple material and explain the effect of the different material properties.

This is a minimal glTF asset with a simple material:

@@ -115,7 +115,7 @@ A new top-level array has been added to the glTF JSON to define this material: T
```javascript
   ...
],
```

- The actual definition of the [`material`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/#reference-material) here only consists of the [`pbrMetallicRoughness`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/#reference-pbrmetallicroughness) object, which defines the basic properties of a material in the *metallic-roughness-model*. (All other material properties will therefore have default values, which will be explained later). The `baseColorFactor` contains the red, green, blue and alpha components of the main color of the material - here, a bright orange color. The `metallicFactor` of 0.5 indicates that the material should have reflection characteristics that resemble that are between that of a metal and a non-metal material. The `roughnessFactor` causes the material to not be perfectly mirror-like, but instead scatter the reflected light a bit.
+ The actual definition of the [`material`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/#reference-material) here only consists of the [`pbrMetallicRoughness`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/#reference-pbrmetallicroughness) object, which defines the basic properties of a material in the *metallic-roughness-model*. (All other material properties will therefore have default values, which will be explained later.) The `baseColorFactor` contains the red, green, blue, and alpha components of the main color of the material - here, a bright orange color. The `metallicFactor` of 0.5 indicates that the material should have reflection characteristics between those of a metal and a non-metal material. The `roughnessFactor` causes the material to not be perfectly mirror-like, but instead scatter the reflected light a bit.
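
For reference, a material as described here could look as follows (a sketch; only the `metallicFactor` of 0.5 is stated explicitly in the text, and the other values are illustrative):

```javascript
"materials" : [
  {
    "pbrMetallicRoughness" : {
      "baseColorFactor" : [ 1.0, 0.5, 0.1, 1.0 ],
      "metallicFactor" : 0.5,
      "roughnessFactor" : 0.5
    }
  }
],
```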

## Assigning the material to objects

2 changes: 1 addition & 1 deletion gltfTutorial/gltfTutorial_012_TexturesImagesSamplers.md
@@ -2,7 +2,7 @@ Previous: [Simple Material](gltfTutorial_011_SimpleMaterial.md) | [Table of Cont

# Textures, Images, and Samplers

- An important aspect for the realistic appearance of objects are textures. They allow to define the main color of the objects, and other characteristics that are used in the material definition in order to precisely describe what the rendered object should look like.
+ Textures are an important aspect of giving objects a realistic appearance. They make it possible to define the main color of the objects, as well as other characteristics that are used in the material definition in order to precisely describe what the rendered object should look like.

A glTF asset may define multiple [`texture`](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0/#reference-texture) objects, which can be used as the textures of geometric objects during rendering, and which can be used to encode different material properties. Depending on the graphics API, there may be many features and settings that influence the process of texture mapping. Many of these details are beyond the scope of this tutorial. There are dedicated tutorials that explain the exact meaning of all the texture mapping parameters and settings; for example, on [webglfundamentals.org](http://webglfundamentals.org/webgl/lessons/webgl-3d-textures.html), [open.gl](https://open.gl/textures), and others. This section will only summarize how the information about textures is encoded in a glTF asset.
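
For illustration, the basic wiring of these objects could look as follows (a sketch with illustrative values; the numeric sampler constants are the standard OpenGL/WebGL enums):

```javascript
"textures" : [
  {
    "source" : 0,   // index into the "images" array
    "sampler" : 0   // index into the "samplers" array
  }
],
"images" : [
  {
    "uri" : "exampleTexture.png"
  }
],
"samplers" : [
  {
    "magFilter" : 9729,   // LINEAR
    "minFilter" : 9987,   // LINEAR_MIPMAP_LINEAR
    "wrapS" : 10497,      // REPEAT
    "wrapT" : 10497       // REPEAT
  }
],
```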
