
Major refactoring of the OCA Specification #86

Open — wants to merge 46 commits into base: master

Commits (46):
4cee8fd
chore: remove old rc version which is outdated
mitfik Jan 17, 2025
90bd1dc
chore: Archive 1.0.1 version
mitfik Jan 17, 2025
735d059
docs: Improve OCA bundle normative description
mitfik Jan 17, 2025
5dacdac
docs: Editorial changes
mitfik Jan 17, 2025
55b790d
docs: Specify non-normative vs normative parts
mitfik Jan 17, 2025
ab3c357
feat: Allow for SemVer in object type
mitfik Jan 17, 2025
d80cfb7
feat: Remove PII and classification from capture base
mitfik Jan 17, 2025
edf67d3
feat: Introduce linking to overlay to overlay
mitfik Jan 20, 2025
fc1d713
docs: remove rugby model reference
mitfik Jan 20, 2025
e2ab835
feat: remove information overlay
mitfik Jan 20, 2025
422939f
feat: remove transformation overlay
mitfik Jan 20, 2025
34bb431
feat: remove presentation overlay
mitfik Jan 20, 2025
e425402
feat: remove layout overlay
mitfik Jan 20, 2025
2cc67c1
feat: enahnce sensitive overlay to replace flagging from capture base
mitfik Jan 20, 2025
7f1d5ae
docs: remove non-normative section about basic concept
mitfik Jan 20, 2025
39e345b
docs: move conventions section to the begining
mitfik Jan 20, 2025
9dd8c13
docs: editorial changes to improve clarity
mitfik Jan 20, 2025
8077b64
docs: improve attribute name description and add ABNF
mitfik Jan 20, 2025
ded80eb
docs: remove note about ISO datetime recomendation.
mitfik Jan 20, 2025
941971c
docs: improve overlay type description and align with SemVer
mitfik Jan 20, 2025
6fbbcbe
feat: Use 639-3 for language codes
mitfik Jan 20, 2025
edef45a
chore: Fix links and improve identation
mitfik Feb 6, 2025
c4812de
feat: remove categories from label overlay
mitfik Feb 6, 2025
699e68a
feat: move conditional overlay as community overlay
mitfik Feb 6, 2025
c4969f6
chore: clarity about language common attribute
mitfik Feb 6, 2025
beed75a
chore: fix versioning in examples
mitfik Feb 6, 2025
942a5ba
chore: align description with new structure
mitfik Feb 6, 2025
b97cc78
feat: add community overlay section
mitfik Feb 6, 2025
98de0db
feat: allow for community namespace in the type
mitfik Feb 7, 2025
bfe4e56
chore: fix typos
mitfik Feb 8, 2025
9a66aa3
chore: add link to eupl1.2 license
mitfik Feb 8, 2025
d0cc8c3
feat: add d field to overlay
mitfik Feb 8, 2025
f6572dd
feat: Add calculation of SAID and fix references
mitfik Feb 8, 2025
f9fe56a
chore: Update governance to OCA WG
mitfik Feb 8, 2025
2b7aa6c
chore: fix links and styling
mitfik Feb 8, 2025
4235bf9
Revert "feat: Use 639-3 for language codes"
mitfik Feb 8, 2025
f103747
chore: improve description of overlay canonical form
mitfik Feb 8, 2025
54ad392
chore: update examples and fix styling
mitfik Feb 8, 2025
6fa7276
feat: move unit mapping to community overlay
mitfik Feb 8, 2025
852e689
chore: remove empty section
mitfik Feb 8, 2025
1aea159
chore: fix section levels
mitfik Feb 9, 2025
7d27eba
chore: fix typo
mitfik Feb 13, 2025
4c204ba
chore: wrap lines
mitfik Feb 13, 2025
7a3d187
chore: reintroduce accidentally removed type
mitfik Feb 13, 2025
ae632b0
chore: remove informative part about classification of overlays
mitfik Feb 13, 2025
f2c42d8
chore: fix language codes in examples
mitfik Feb 13, 2025
197 changes: 143 additions & 54 deletions docs/specification/README.md
@@ -6,8 +6,14 @@ description: Official OCA specification
# OCA Technical Specification

<dl>
<dt>
Version:
</dt>
<dd>
v1.0.2
</dd>
<dt>
Latest published version:
</dt>
<dd>

@@ -826,41 +832,150 @@ _Example 19. Code snippet for a Sensitive Overlay_

### Bundle

An OCA Bundle contains a set of OCA objects consisting of a Capture Base and bound Overlays. An encoded cryptographic digest of the contained objects produces a deterministic identifier for the bundle.
An OCA Bundle is a set of OCA objects which MUST include a `Capture Base` and MAY contain any number of `Overlays`. An encoded cryptographic digest of the contained objects produces a
deterministic identifier for the bundle.

The following object types are REQUIRED in any OCA bundle to preserve the minimum amount of structural, definitional, and contextual information to capture the meaning of inputted data.
#### Canonical form

- Capture base
- Character encoding overlay
- Format overlay
OCA Bundles MUST be serializable to be transferred over the network. The


Why does it have to be serializable? During the calculation of the digests, an interim, deterministic form of the data being hashed needs to be created, but that is not a reason to canonicalize the “at rest” representation of the Bundle. Much better to say that the ordering of items SHOULD NOT be relied upon. It is fighting against nature to try to force an ordering on moving data.

Contributor (author):

Not sure if I follow your question: you start with serialization and then you speak about ordering and SHOULD NOT being relied upon. If you could elaborate a bit, that would be helpful.

Generally, serialization (with specific ordering) is required to calculate the hash. As soon as that is done, the format in which you present, store, or move the bundle does not matter, as long as there is a clear way to convert it (serialize it) back to the form on which you can validate the hash. And this is what the spec describes: it tells you how the serialized version should look and in which order the attributes should be, to make sure that the hash can be calculated in a deterministic way.
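The point can be illustrated with a toy example (mine, not from the thread): two differently ordered "at rest" representations of the same object canonicalize to the same bytes, so the hash is deterministic regardless of how the bundle was stored. Note that real OCA uses a fixed attribute order (`v` first) rather than sorted keys; `sort_keys` here is just a stand-in canonicalization.

```python
import hashlib
import json

# Two "at rest" representations with different key order.
a = {"capture_base": {"attributes": {"i": "Text"}}, "v": "OCAS11JSON000646_"}
b = {"v": "OCAS11JSON000646_", "capture_base": {"attributes": {"i": "Text"}}}

def canonical(obj):
    # Toy canonical form: deterministic key ordering, no whitespace.
    # (The spec instead mandates a fixed attribute order; see below.)
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")

# Same canonical bytes, hence the same hash, regardless of storage order.
assert canonical(a) == canonical(b)
print(hashlib.sha256(canonical(a)).hexdigest() == hashlib.sha256(canonical(b)).hexdigest())  # True
```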


My main question is why is the statement in the spec? We know they are JSON objects. I assume the statement is there for some reason and I’d like to know what it is? Can it be removed from the spec?

I should have left it at that. I was guessing on the answer, but I should wait to hear the answer.

serialization algorithm MUST be deterministic and operate on the canonical form
of the Bundle, which ensures proper ordering of the attributes within OCA
Objects. The serialization algorithm consists of the following rules:

The cardinality of several overlay types, particularly the language-specific ones (Entry, Information, Label, and Meta), can be multiple depending on the number of defined supported languages.
- MUST consist of the following attributes in this order: `v`, `d`, `capture_base`, `overlays`
- `v` - version string defined per section [Bundle Version](#bundle-version)
- `d` - deterministic identifier of the bundle
- `capture_base` - the `Capture Base` object defined as per section [Capture Base](#capture-base)
- `overlays` - an array, containing all the overlays, sorted ASC by the `d` attribute
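As a rough illustration (not part of the spec), the ordering rules above could be sketched as follows; the helper name `serialize_bundle` and the assumption that overlays arrive as a list of dicts each carrying a `d` attribute are mine:

```python
import json

def serialize_bundle(version: str, digest: str, capture_base: dict, overlays: list) -> str:
    # Canonical form: attributes in the fixed order v, d, capture_base, overlays;
    # overlays sorted ascending by their "d" attribute.
    bundle = {
        "v": version,                 # version string, per the Bundle Version section
        "d": digest,                  # deterministic identifier of the bundle
        "capture_base": capture_base,
        "overlays": sorted(overlays, key=lambda o: o["d"]),  # ASC by "d"
    }
    # Python dicts preserve insertion order, so dumping without sort_keys
    # keeps the required v, d, capture_base, overlays ordering.
    return json.dumps(bundle, separators=(",", ":"))
```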

##### Bundle Version

To ensure proper versioning and identification of bundles within the OCA
Specification, we define a standardized string format for the bundle version.
This format encodes critical metadata about the bundle, allowing for consistent
interpretation and management across implementations.

*Bundle Version String Format*

The bundle version string must adhere to the following format:

```
EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis.json
├── E3SAKe0z83pfBnhhcZl19PGGKBheb35WeCJ3V6RdqwY8.json
├── Ejx0o0yuwp99vi0V-ssP6URZIXRMGj1oNKIZ1BXi4sHU.json
├── EZv1B5nNl4Rty8CXFTALhr8T6qXeO0CcKliM03sdrkRA.json
├── Eri3NLi1fr4QrKoFfTlK31KvWpwrSgGaZ0LLuWYQaZfI.json
├── EY0UZ8aYAPusaWk_TON8c20gHth2tvZs4eWh7XAfXBcY.json
├── E1mqEb4f6eOMgu5zR857WWlMUwGYwPzZgiM6sWRZkQ0M.json
├── ESEMKWoKKIf5qvngKecV-ei8MwcQc_pPWCH1FrTWajAM.json
├── EyzKEWuMs8kspj4r70_Lc8sdppnDx-hb9QqUQywjmDRY.json
├── EIGknekgJFqjgQ8ah2NwL8zNWbFrllvXVLqezgB6U3Yg.json
├── EgBxL29VsxoZso7YFirlMP334ZuC1mkel-lO7TxPxEq8.json
├── ED9PH0ZBaOci-nbnYfPgYZWGQdkyWxA-nW3REmB3vhu0.json
├── ElJEQGfAvfJEuB7JeNIcvmAPO2DIOaKkpkZyvxO-gQoc.json
├── EpW9bQGs0Lk6k5cJikN0Ep-DN6z29fwZIsbVzMBgTlWY.json
├── EIGj0LQKT9-6gCLV2QZVgi4YQZhrUl0-GKbN7sFTCSAI.json
├── EHDwC_Ucuttrsxh2NVptgBnyG4EMbG5D8QsdbeF9G9-M.json
└── meta.json
OCAS<major><minor><format><size>_
Collaborator:

Why is the size calculated? It's orthogonal to versioning. I can't think of a strong motivation to include it at this layer.

Member:

TL;DR: this is for effective streaming.

If you go beyond the HTTP protocol and merely focus on streams of bytes, consuming a whole chunk (Bundle) out of a stream is simply taking `<size>` bytes off the stream, effectively enabling the transfer of Bundles over the wire along with other chunks. Furthermore, because we precisely know where to look for particular information in the stream (that's why OCA has always had a custom canonical form and is not RFC 8785 compliant — the Bundle JSON starts with the v attribute), we can immediately decide which parser can handle this chunk. In this case, OCAS<major><minor><format> enables us to unambiguously apply the appropriate parser for further handling of this chunk.

FWIW, the <size> is CESR-Base64 encoded.

Collaborator:

Streaming = a concern for the messaging layer, not for the application layer.

Member:

And that's why we have Bundles. If there's no need for exchange, there's no need for a Bundle concept.

Collaborator (@pknowl, Feb 11, 2025):

@blelump I'm confused by your response. Message streaming information belongs inside an exchange packet, not inside a schema. As it stands, the "version" format (e.g., "v": "OCAA11JSON00714b_") contains the byte size of the messaging stream (OCAS<major><minor><format><size>_). This is in the wrong place.

OCA is solely for defining passive objects, nothing else. It is not a messaging protocol. Messaging should be defined in exchange packets, not in the data schema itself.

If you follow the Informatics Domain Model, this separation is clearly defined:
https://zenodo.org/records/14525852

Capture = Objects = Schema
Exchange = Actions = Packet

Member:

This information is valuable when the Bundle is exchanged through a continuous stream-of-bytes type of protocol instead of discrete messages, which are characteristic of the HTTP protocol.

> You want to use the Bundle as a wire format?

Yes, see below for further explanation.

> To make that actually viable the JSON serialization and encoding must be specified in detail. We haven't even specified that the Bundle itself needs to be encoded as UTF-8, let alone the subset (with/without BOM), the role of spacing, line endings etc.

Yeah, we'd need to add this information.

> Given the current specification you have to parse the JSON itself in order to extract the value from "v", and thus do anything useful with the length.

Thanks to the Bundle canonical form, we know where to look for specific bytes. We specifically know where to look for <format><size> counting from the start of the stream. Therefore, we don't need to deserialize the potentially valid JSON string to extract v.

> If we do what everyone else does and implement this on a different layer, then you can do things like reserve the first N bytes for this metadata, which enables a lot of fun stuff. I've even seen people do this by prefixing the JSON with a 16-character string.

This is precisely what we're doing when applying CESR, that is, suffixing the JSON with a sophisticatedly structured text that at first glance looks like garbage. Adding layering here, in the context of other components we use the same way and join them, that is: <some payload, i.e., OCA Bundle in JSON><attachments><a VC in JSON><attachments><JSON><attachments><JSON><attachments>, enables us to unambiguously find what type of document we're dealing with in this chain. Enveloping any of these would add more complexity; in most cases these attachments are digital signatures, therefore verifying information would first require de-enveloping. Going further, OCA primarily serves as a DDE enabler. When considering its features, we also consider the broader concept of DDE and how to integrate them effectively. At the same time, by providing universal tooling, we relax the entry point to OCA and let people join the ecosystem without the need to implement all this stuff on their own, but instead consume it and use it.


My $0.02CDN. I’d definitely like to leave off the size of the bundle in the version as it is a pain. Doable if the calculation is well-defined, but annoying at the application layer. I agree that if anyone wants to stream OCA data (which really doesn’t make sense to me), they are welcome to do that by putting a minimal wrapper / prefix that has the size. But it should be outside of the OCA specification.

I definitely agree that a digest and version at the same level as capture_base and overlays are needed. I’d like the version defined as simply a semver.

Collaborator (@pknowl, Feb 12, 2025):

The Informatics Domain Model (IDM) should be the blueprint for data-centric modeling, not the OCA Bundle itself. The OCA Bundle belongs strictly in the Object domain (passive) [i.e., no mechanics], and must maintain distinct separation from event logs (Event domain), active execution algorithms (Intelligence domain), and framed concepts (Knowledge domain).

Blurring these domain boundaries creates two major issues:

  1. Search & Discovery Breakdown

Each domain supports a distinct type of search:

a.) Attribute search (Object) → Finds structural attributes in an OCA Bundle.
b.) Field search (Event) → Queries recorded fields in an event history.
c.) Term search (Concept) → Searches by ontological terms or controlled vocabulary.
d.) Value search (Action) → Retrieves explicit exchange metadata and execution values, which may include:

  • Message size (byte length of the payload);
  • Location (where the bundle is stored/fetched);
  • Routing details (if streaming applies).

Embedding value-based search parameters in the OCA Bundle mixes passive structure (attributes) with active mechanics (values), making searches imprecise.

  2. Role-Based Access Control (RBAC) Violations

Keeping domains separate ensures granular access control:

In the case of the two domains in question (i.e., Object & Action) ...
a.) Schema Guardians may be appointed to protect structural semantics in an OCA Bundle.
b.) Packet Trackers may be appointed to track message execution in transit (message size, location, routing).

If message metadata is stored inside the OCA Bundle, Schema Guardians would have access to exchange intelligence, violating need-to-know governance.

My suggestion would be to use an envelope for message/transmission metadata, and remove the "v" attribute (Versioning, Encoding Format & Message Size) from the OCA core specification. This would ensure:
✅ Schema Bundles remain purely structural (i.e., made up of passive structural attributes).
✅ Message metadata stays in the Action domain (i.e., within packet headers).
✅ RBAC integrity is preserved.

Member:

> My $0.02CDN. I’d definitely like to leave off the size of the bundle in the version as it is a pain. Doable if the calculation is well-defined, but annoying at the application layer. I agree that if anyone wants to stream OCA data (which really doesn’t make sense to me), they are welcome to do that by putting a minimal wrapper / prefix that has the size. But it should be outside of the OCA specification.

> I definitely agree that a digest and version at the same level as capture_base and overlays are needed. I’d like the version defined as simply a semver.

@swcurran what are major and minor, at this level of specification of data containers, likely to change in practice? In essence, you don't patch something as critical as a Bundle. It always has an impact.

How about we make it optional? I mean the <format><size>, effectively relaxing the burden of having v. Ultimately, what stays in v is the OCAS<major><minor>, and <format><size> is moved into a separate attribute that comes after v in the Bundle canonical form.

Such a change should make both worlds happy.

cc: @ryanbnl

Collaborator:

> This is precisely what we're doing when applying CESR

That's an esoteric specification which, given that it makes very specific demands (requiring specific metadata to be added to a message payload), appears to break the fundamental principle of separation of concerns.

OCA is a building block and as such we should not be making assumptions about usage. That means that we can't add metadata (the size) which is only relevant to a specific niche use-case.

It's bad design.

CESR looks at first glance to be like HL7 v2 and that was a disaster.

```

_Example 20. A representation of an OCA Bundle as a ZIP file containing a Capture Base (first row), multiple Overlays, and a metafile (meta.json) that provides key-value mappings between the file names and the names of the OCA object types. Apart from the metafile, each file name directly represents the encoded cryptographic digest of the file._
Where:

See [Appendix A](#appendix-a-an-example-of-metafile-content) for more information on the content of a metafile (`meta.json` in the above example).
- `OCAS`: A fixed prefix indicating "OCA Structure". This identifies the string as conforming to the OCA Specification's versioning scheme.
- `<major>`: A single-digit integer (0-9) representing the major version of the specification. A change in the major version indicates backward-incompatible updates to the structure.
- `<minor>`: A single-digit integer (0-9) representing the minor version of the specification. A change in the minor version indicates backward-compatible updates.
- `<format>`: A string denoting the serialization format of the bundle. The supported format is `JSON` (JavaScript Object Notation).
- `<size>`: A six-digit, zero-padded integer representing the size of the object in hex notation. The size of the object is calculated with the `d` field filled with dummy characters of the same length as the eventual derived value. The dummy character is `#`, that is, ASCII 35 decimal (23 hex).
- `_`: A version string terminator.

*Example*:

A valid bundle version string:
```
OCAS11JSON000646_
```

This indicates:
- `OCAS`: it is an OCA Bundle.
- The major version is 1.
- The minor version is 1.
- The serialization format is JSON.
- The object size in base64 encoding is 646 bytes.

*Validation*

Consumers of the OCA Specification must implement validation logic to ensure the bundle version string:
- Matches the defined format and structure.
- Uses only supported serialization formats.
- Accurately represents the object's size in base64 encoding.

Validation failure must result in the rejection of the bundle as non-compliant with the specification.
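A minimal validation sketch, assuming the format described above; the regex and the hex interpretation of `<size>` are my reading of the draft (the review thread notes the draft is ambiguous between hex notation and byte counts):

```python
import re

# Sketch of bundle version string validation, e.g. "OCAS11JSON000646_".
# Assumes single-digit major/minor, JSON as the only supported format,
# and a six-character <size> field read as hex per the draft's wording.
VERSION_RE = re.compile(r"^OCAS(\d)(\d)(JSON)([0-9A-Fa-f]{6})_$")

def parse_version(v: str) -> dict:
    m = VERSION_RE.match(v)
    if m is None:
        # Per the draft: validation failure means the bundle is non-compliant.
        raise ValueError(f"non-compliant bundle version string: {v!r}")
    major, minor, fmt, size = m.groups()
    return {"major": int(major), "minor": int(minor), "format": fmt, "size": int(size, 16)}
```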


This is a big concern of mine, as I’ve expressed multiple times. Once more.

  • Please add to the spec the algorithm for creating a digest for a given “chunk” of JSON.
  • Please do not refer to the CESR/SAID spec for that, but put the algorithm in the spec. The algorithm is short, and easily defined.
    • Set the value of the digest item to a string of # characters of the length the digest will be.
    • Calculate digest = remove_padding (encode ( prefix + hash ( JCS(JSON) ) ) )
    • Note that the OCA Bundle does NOT need to be stored canonicalized — the algorithm to calculate the SAID will canonicalize the relevant JSON in doing the SAID calculation.
  • In doing that, please require that the hash and encoding algorithms used are embedded in the digest (the SAID prefix is fine, although I would prefer the more standard multiformats (multibase and multihash)).
  • Please specify the specific, and ideally very few, hashing and encoding schemes. I would recommend only sha-256 and b58btc encoding, but am fine if others are specifically allowed. Without limiting the algorithms allowed (by version of the OCA specification), it is impossible to write an OCA Consumer that handles whatever algorithms are used by producers. There are just too many options.
  • Please document the process for calculating the digests for an OCA Bundle. Notably, it must be calculated as follows:
    • Calculate the digest for the Capture Base, and set the value of its digest to the SAID.
    • For each overlay:
      • Set the capture_base value to be the digest of the capture base.
      • Calculate the digest for the overlay, and set the value of its digest to the SAID.
    • Calculate the digest for the entire OCA Bundle, and set the value of the root digest to that SAID.
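A simplified sketch of the digest recipe outlined above (my own illustration, not normative): sha-256 and unpadded base64url are stand-ins for the reviewer's `hash`/`encode`, the `E` prefix mimics but does not reproduce the exact CESR encoding, and `json.dumps` with sorted keys stands in for full RFC 8785 JCS. Overlays are assumed to be a list of dicts.

```python
import base64
import hashlib
import json

def jcs(obj: dict) -> bytes:
    # Stand-in for RFC 8785 canonicalization: sorted keys, no whitespace.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")

def said(obj: dict, field: str = "d", length: int = 44) -> str:
    # Step 1: fill the digest field with '#' placeholders of the final length.
    padded = {**obj, field: "#" * length}
    # Step 2: digest = remove_padding(encode(prefix + hash(JCS(JSON)))),
    # here simplified to an 'E' prefix plus unpadded base64url of sha-256.
    raw = hashlib.sha256(jcs(padded)).digest()
    return ("E" + base64.urlsafe_b64encode(raw).decode("ascii").rstrip("="))[:length]

def digest_bundle(bundle: dict) -> dict:
    # Capture base first, then overlays (with capture_base back-references
    # set to its digest), then the bundle root, per the steps above.
    cb = dict(bundle["capture_base"])
    cb["d"] = said(cb)
    overlays = []
    for ov in bundle["overlays"]:
        ov = dict(ov, capture_base=cb["d"])
        ov["d"] = said(ov)
        overlays.append(ov)
    out = dict(bundle, capture_base=cb, overlays=overlays)
    out["d"] = said(out)
    return out
```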


Missing from my comment above is the (I think unnecessary) calculation of the length of the OCA Bundle before calculating the digest of the entire bundle. Thus the last step I have above (Calculate the digest for the entire OCA Bundle…), with the steps:

  • Set the value of the root digest to a string of # characters the length the digest will be.
  • Determine the length of the OCA Bundle by doing this calculation: insert calculation of length of the bundle
  • Set the OCA Version string to be prefix + length of OCA Bundle + suffix (prefix and suffix are hardcoded per OCA Specification Version).

If any consumer of an OCA Bundle cares, they would need to repeat the length calculation and verify it against the length. They are unlikely to do that, because the digest verification would also fail if the OCA Bundle length has been changed.

Contributor (author):

> Please do not refer to the CESR/SAID spec for that, but put the algorithm in the spec.

Line 1043 (section: Deterministic Identifier) does exactly that. Is that not clear enough, or is anything missing?

Contributor (author):

> Please specify the specific, and ideally very few, hashing and encoding schemes. I would recommend only sha-256 and b58btc encoding, but am fine if others are specifically allowed. Without limiting the algorithms allowed (by version of the OCA specification), it is impossible to write an OCA Consumer that handles whatever algorithms are used by producers.

The point of SAID is to not enforce any algorithms, since they can be use-case specific or required to be rotated at any point in time. OCA should not enforce that for every use case. It is up to the ecosystem creator to decide what they want to use, e.g. if you are creating a verifiable credential ecosystem you can agree within the ecosystem to use only sha-256 (maybe you need something NIST approved), whereas a use case in medical care, where it needs to run on IoT devices with constrained resources, would pick blake3 for practical reasons.

Remember, OCA is not about the use case; it is about meta-semantics, which allows others to build their own use cases.


This means that there must be a spec on top of OCA for every community to use OCA, so that consumers know what cryptography they will need to include to be able to use the OCA. Since only two things need to be defined (hashing and encoding), I think it is very reasonable to pick, in the OCA spec, a finite number of options for those things — ideally just 1 for each, but several choices is fine. Use cases will not be impacted by those selections, but all implementations will be MUCH easier with those choices made. With no guardrails, a consumer has to assume a producer could use anything — or just hope they pick the right ones.


Thanks for pointing me to Line 1043 — added a comment there. Very happy to see it. Just to link what I said to this: the spec does limit the hash and encoding algorithms used per the Deterministic IDs. The permitted hash algorithms are in the CESR spec (bad idea — just put them here, even if they are the same list), and only one encoding algorithm is allowed (with no multibase to allow for a future change…). Good stuff!


*Example*:
TODO update example
```
{
  "bundle": {
    "v": "OCAS11JSON000646_",
    "d": "EKHBds6myKVIsQuT7Zr23M8Xk_gwq-2SaDRUprvqOXxa",
    "capture_base": {
      "d": "EBnF9U9XW1EqteIW0ucAR4CsTUqojvfIWkeifsLRuOUW",
      "type": "spec/capture_base/1.0",
      "attributes": {
        "d": "Text",
        "i": "Text",
        "passed": "Boolean"
      },
      "classification": ""
    },
    "overlays": {
      "character_encoding": {
        "d": "ED6Eio9KG2jHdFg3gXQpc0PX2xEI7aHnGDOpjU6VBfjs",
        "capture_base": "EBnF9U9XW1EqteIW0ucAR4CsTUqojvfIWkeifsLRuOUW",
        "type": "spec/overlays/character_encoding/1.0",
        "attribute_character_encoding": {
          "d": "utf-8",
          "i": "utf-8",
          "passed": "utf-8"
        }
      },
      "conformance": {
        "d": "EJSRe8DnLonKf6GVT_bC1QHoY0lQOG6-ldqxu7pqVCU8",
        "capture_base": "EBnF9U9XW1EqteIW0ucAR4CsTUqojvfIWkeifsLRuOUW",
        "type": "spec/overlays/conformance/1.0",
        "attribute_conformance": {
          "d": "M",
          "i": "M",
          "passed": "M"
        }
      },
      "information": [
        {
          "d": "EIBXpVvka3_4lheeajtitiafIP78Ig8LDMVX9dXpCC2l",
          "capture_base": "EBnF9U9XW1EqteIW0ucAR4CsTUqojvfIWkeifsLRuOUW",
          "type": "spec/overlays/information/1.0",
          "language": "eng",
          "attribute_information": {
            "d": "Schema digest",
            "i": "Credential Issuee",
            "passed": "Enables or disables passing"
          }
        }
      ],
      "label": [
        {
          "d": "ECZc26INzjxVbNo7-hln6xN3HW3e1r6NGDmA5ogRo6ef",
          "capture_base": "EBnF9U9XW1EqteIW0ucAR4CsTUqojvfIWkeifsLRuOUW",
          "type": "spec/overlays/label/1.0",
          "language": "eng",
          "attribute_categories": [],
          "attribute_labels": {
            "d": "Schema digest",
            "i": "Credential Issuee",
            "passed": "Passed"
          },
          "category_labels": {}
        }
      ],
      "meta": [
        {
          "d": "EOxvie-zslkGmFzVqYAzTVtO7RyFXAG8aCqE0OougnGV",
          "capture_base": "EBnF9U9XW1EqteIW0ucAR4CsTUqojvfIWkeifsLRuOUW",
          "type": "spec/overlays/meta/1.0",
          "language": "eng",
          "description": "Entrance credential",
          "name": "Entrance credential"
        }
      ]
    }
  }
}
```
_Example 20. Code snippet for an OCA Bundle._

If well-structured, the metadata in an OCA bundle can facilitate many ways for users to search for information, present results, and even manipulate and present information objects without compromising their integrity.

### Code Tables

@@ -1125,7 +1240,7 @@ Internet Assigned Numbers Authority (IANA) [https://www.iana.org/](https://www.iana.org/)
</dd>

<dt id="ref-ICAO">
[ICAO]
</dt>
<dd>

@@ -1325,29 +1440,3 @@ United Nations. Sustainable Development Goals (SDGs) [https://sdgs.un.org/goals](https://sdgs.un.org/goals)
</div>

## Appendices

### Appendix A. An example of Metafile content

```json
{
  "files": {
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] character_encoding": "E3SAKe0z83pfBnhhcZl19PGGKBheb35WeCJ3V6RdqwY8",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] conditional": "Ejx0o0yuwp99vi0V-ssP6URZIXRMGj1oNKIZ1BXi4sHU",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] conformance": "EZv1B5nNl4Rty8CXFTALhr8T6qXeO0CcKliM03sdrkRA",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] entry (en)": "Eri3NLi1fr4QrKoFfTlK31KvWpwrSgGaZ0LLuWYQaZfI",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] entry (fr)": "EY0UZ8aYAPusaWk_TON8c20gHth2tvZs4eWh7XAfXBcY",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] entry_code": "E1mqEb4f6eOMgu5zR857WWlMUwGYwPzZgiM6sWRZkQ0M",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] format": "ESEMKWoKKIf5qvngKecV-ei8MwcQc_pPWCH1FrTWajAM",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] information (en)": "EyzKEWuMs8kspj4r70_Lc8sdppnDx-hb9QqUQywjmDRY",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] information (fr)": "EIGknekgJFqjgQ8ah2NwL8zNWbFrllvXVLqezgB6U3Yg",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] label (en)": "EgBxL29VsxoZso7YFirlMP334ZuC1mkel-lO7TxPxEq8",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] label (fr)": "ED9PH0ZBaOci-nbnYfPgYZWGQdkyWxA-nW3REmB3vhu0",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] layout": "ElJEQGfAvfJEuB7JeNIcvmAPO2DIOaKkpkZyvxO-gQoc",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] meta (en)": "EpW9bQGs0Lk6k5cJikN0Ep-DN6z29fwZIsbVzMBgTlWY",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] meta (fr)": "EIGj0LQKT9-6gCLV2QZVgi4YQZhrUl0-GKbN7sFTCSAI",
    "[EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis] unit": "EHDwC_Ucuttrsxh2NVptgBnyG4EMbG5D8QsdbeF9G9-M",
    "capture_base-0": "EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis"
  },
  "root": "EVyoqPYxoPiZOneM84MN-7D0oOR03vCr5gg1hf3pxnis"
}
```