Revamped DAG-JSON decoder and unmarshaller.
This is added in a new "dagjson2" package for the time being,
but aims to replace the current dagjson package entirely,
and will take over that namespace when complete.

So far only the decoder/unmarshaller is included in this first commit;
the encoder/marshaller will follow in a subsequent commit.

This revamp is making several major strides:

- The decoding system is cleanly separated from the tree building.

- The tree building reuses the codectools token assembler systems.
  This saves a lot of code, and adds a lot of consistency.
  (By contrast, the older dagjson and dagcbor packages had similar
  outlines, but didn't actually share much code; this was annoying
  to maintain, and meant improvements to one needed to be ported
  to the other manually.  No more.)

- The token type used by this codectools system is more tightly
  associated with the IPLD Data Model.  In practice, what this means
  is links are parsed at the same stage as the rest of parsing,
  rather than being added on in an awkward "parse 1.5" stage.
  This results in much less complicated code than the old token
  system from refmt which the older dagjson package leans on.

- Budgets are more consistently woven through this system
  (note the budget parameter threaded through the decoder's Step function).

- The JSON decoder components are in their own sub-package,
  and should be relatively reusable.  Some features like string parsing
  are exported in their own right, in addition to being accessible
  via the full recursive supports-everything decoders.
  (This might not often be compelling, but -- maybe.  I myself wanted
  more reusable access to fine-grained decoder and encoder components
  when I was working on the "JST" experiment, so, I'm scratching my
  own itch here if nothing else.)
  End-users should mostly not need to see this, but library
  implementors might appreciate it.

- The codectools scratch.Reader type is used in all the decoder APIs.
  This results in good performance for either streaming io.Reader or
  already-in-memory byte slices as data sources, and does it without
  doubling the number of exported functions we need (or pushing the
  need for feature detection into every single exported function).

- The configuration system for the decoder is actually in this repo,
  and it's sanely and clearly settable while also being optional.
  Previously, if you wanted to configure dagjson, you'd have to reach
  into the refmt json package for *those* configuration structs,
  which was workable but just very confusing and gave the end-user a
  lot of different places to look before finding what they need.

- The implementations are very mindful of memory allocation efficiency.
  Almost all of the component structures carefully utilize embedding:
  ReusableUnmarshaller embeds the Decoder; the Decoder embeds the
  scratch.Reader as well as the Token it yields; etc.
  This should result in being able to produce fully usable codecs with
  a minimal number of allocations -- far fewer than the older
  implementations required.  (A short usage sketch follows this list.)
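
For a sense of the intended surface, here's a minimal usage sketch
(the basicnode import path and prototype API are assumptions about the
surrounding repo at this point in time, not part of this commit):

    package main

    import (
        "fmt"
        "strings"

        dagjson "github.com/ipld/go-ipld-prime/codec/dagjson2" // package name is dagjson
        basicnode "github.com/ipld/go-ipld-prime/node/basic"  // assumed path for basicnode
    )

    func main() {
        // Build a generic node from strict dag-json input.
        nb := basicnode.Prototype.Any.NewBuilder()
        if err := dagjson.Unmarshal(nb, strings.NewReader(`{"hello":"world"}`)); err != nil {
            panic(err)
        }
        n := nb.Build() // n is an ipld.Node holding the map {"hello":"world"}
        _ = n
        fmt.Println("ok")
    }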

Some benefits have yet to be realized, but are on the map now:

- The new Token structure also includes space for position and
  progress tracking, which we want to use to produce better errors.
  (This needs more implementation work, still, though.)

- There are several configuration options for strictness.
  These aren't all backed up by the actual implementation yet
  (I'm porting over old code fast enough to write a demo and make
  sure the whole suite of interfaces works; further work, especially
  on this strictness front, will come later), but at the very least
  these options are now documented, and several comment blocks point
  to where more work is needed.  (A configuration sketch follows
  this list.)

- The new multicodec registry is alluded to in comments here, but
  isn't implemented yet.  This is part of the longer-term goal.
  The aim is, by the end of this revamp, to be able to do something
  about #55, and approach
  https://gist.github.com/warpfork/c0200cc4d99ee36ba5ce5a612f1d1a22 .
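
And a sketch of dialing the strictness options via the DecoderConfig
added in this commit (the `into` and `input` parameters are
hypothetical placeholders; note that several of these options aren't
actually enforced yet, per the FIXMEs in json_decode.go):

    func decodeLoosely(into ipld.NodeAssembler, input io.Reader) error {
        var r dagjson.ReusableUnmarshaller
        r.SetDecoderConfig(jsontoken.DecoderConfig{
            AllowDanglingComma:  true, // not yet enforced either way
            AllowWhitespace:     true,
            AllowEscapedUnicode: true,
            ParseUtf8C8:         true, // the dag-json string behavior
        })
        r.SetInitialBudget(1 << 20)
        return r.Unmarshal(into, input)
    }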
warpfork committed Nov 14, 2020
1 parent b800484 commit 4f21398
Showing 7 changed files with 827 additions and 0 deletions.
14 changes: 14 additions & 0 deletions codec/dagjson2/doc.go
@@ -0,0 +1,14 @@
// Several groups of exported symbols are available at different levels of abstraction:
//
// - You might just want the multicodec registration! Then never deal with this package directly again.
// - You might want to use the `Encode(Node,Writer)` and `Decode(NodeAssembler,Reader)` functions directly.
// - You might want to use `ReusableEncoder` and `ReusableDecoder` types and their configuration options,
// then use their Encode and Decode methods with that additional control.
// - You might want to use the lower-level TokenReader and TokenWriter tools to process the serial data
// as a stream, without necessarily creating ipld Nodes at all.
// - (this is a stretch) You might want to use some of the individual token processing functions,
// perhaps as part of a totally new codec that just happens to share some behaviors with this one.
//
// The first three are exported from this package.
// The last two can be found in the "./token" subpackage.
package dagjson
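
A sketch of that lower-level token-stream usage (the depth-tracking
loop is illustrative rather than a shipped helper; it leans only on
the jsontoken and codectools APIs added in this commit):

package main

import (
	"fmt"
	"strings"

	"github.com/ipld/go-ipld-prime/codec/codectools"
	jsontoken "github.com/ipld/go-ipld-prime/codec/dagjson2/token"
)

func main() {
	var d jsontoken.Decoder
	d.DecoderConfig.AllowWhitespace = true // the input below contains spaces
	d.Init(strings.NewReader(`["a", 2, null]`))
	budget := 1 << 20
	depth := 0
	for {
		tok, err := d.Step(&budget)
		if err != nil {
			panic(err)
		}
		fmt.Println(tok.Kind)
		switch tok.Kind {
		case codectools.TokenKind_MapOpen, codectools.TokenKind_ListOpen:
			depth++
		case codectools.TokenKind_MapClose, codectools.TokenKind_ListClose:
			depth--
		}
		if depth == 0 {
			break // one complete value has been consumed
		}
	}
}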
55 changes: 55 additions & 0 deletions codec/dagjson2/json_unmarshaller.go
@@ -0,0 +1,55 @@
package dagjson

import (
"io"

"github.com/ipld/go-ipld-prime"
"github.com/ipld/go-ipld-prime/codec/codectools"
"github.com/ipld/go-ipld-prime/codec/dagjson2/token"
)

// Unmarshal reads data from input, parses it as DAG-JSON,
// and unfolds the data into the given NodeAssembler.
//
// The strict interpretation of DAG-JSON is used.
// Use a ReusableUnmarshaller and set its DecoderConfig if you need
// looser or otherwise customized decoding rules.
//
// This function is the same as the function found for DAG-JSON
// in the default multicodec registry.
func Unmarshal(into ipld.NodeAssembler, input io.Reader) error {
// FUTURE: consider doing a whole sync.Pool jazz around this.
r := ReusableUnmarshaller{}
r.SetDecoderConfig(jsontoken.DecoderConfig{
AllowDanglingComma: false,
AllowWhitespace: false,
AllowEscapedUnicode: false,
ParseUtf8C8: true,
})
r.SetInitialBudget(1 << 20)
return r.Unmarshal(into, input)
}

// ReusableUnmarshaller has an Unmarshal method, and also supports
// customizable DecoderConfig and resource budgets.
//
// The Unmarshal method may be used repeatedly (although not concurrently).
// Keeping a ReusableUnmarshaller around and using it repeatedly may allow
// the user to amortize some allocations (some internal buffers can be reused).
type ReusableUnmarshaller struct {
d jsontoken.Decoder

InitialBudget int
}

func (r *ReusableUnmarshaller) SetDecoderConfig(cfg jsontoken.DecoderConfig) {
r.d.DecoderConfig = cfg
}
func (r *ReusableUnmarshaller) SetInitialBudget(budget int) {
r.InitialBudget = budget
}

func (r *ReusableUnmarshaller) Unmarshal(into ipld.NodeAssembler, input io.Reader) error {
r.d.Init(input)
return codectools.TokenAssemble(into, r.d.Step, r.InitialBudget)
}
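
A sketch of the amortization pattern this enables (the blobs slice,
handle function, and basicnode builder are hypothetical stand-ins for
caller code):

var r dagjson.ReusableUnmarshaller
r.SetDecoderConfig(jsontoken.DecoderConfig{ParseUtf8C8: true})
r.SetInitialBudget(1 << 20)
for _, blob := range blobs { // blobs: some [][]byte of dag-json documents
	nb := basicnode.Prototype.Any.NewBuilder()
	if err := r.Unmarshal(nb, bytes.NewReader(blob)); err != nil {
		return err
	}
	handle(nb.Build()) // hypothetical consumer of each decoded node
}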
240 changes: 240 additions & 0 deletions codec/dagjson2/token/json_decode.go
@@ -0,0 +1,240 @@
package jsontoken

import (
"fmt"
"io"

"github.com/ipld/go-ipld-prime/codec/codectools"
"github.com/ipld/go-ipld-prime/codec/codectools/scratch"
)

type Decoder struct {
r scratch.Reader

phase decoderPhase // current phase.
stack []decoderPhase // stack of any phases that need to be popped back up to before we're done with a complete tree.
some bool // true after the first value in any context; used to decide whether a comma must precede the next value. (Doesn't need a stack, because if you're popping, it's true again.)

tok codectools.Token // we'll be yielding this repeatedly.

DecoderConfig
}

type DecoderConfig struct {
AllowDanglingComma bool // normal json: false; strict: false.
AllowWhitespace bool // normal json: true; strict: false.
AllowEscapedUnicode bool // normal json: true; strict: false.
ParseUtf8C8 bool // normal json: false; dag-json: true.
}

func (d *Decoder) Init(r io.Reader) {
d.r.Init(r)
d.phase = decoderPhase_acceptValue
d.stack = d.stack[0:0]
d.some = false
}

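// Step consumes input until one complete token is ready, then returns
// a pointer to the decoder's own token struct.  That token is reused
// and overwritten by the next call to Step, so callers must not retain it.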
func (d *Decoder) Step(budget *int) (next *codectools.Token, err error) {
switch d.phase {
case decoderPhase_acceptValue:
err = d.step_acceptValue()
case decoderPhase_acceptMapKeyOrEnd:
err = d.step_acceptMapKeyOrEnd()
case decoderPhase_acceptMapValue:
err = d.step_acceptMapValue()
case decoderPhase_acceptListValueOrEnd:
err = d.step_acceptListValueOrEnd()
}
return &d.tok, err
}

func (d *Decoder) pushPhase(newPhase decoderPhase) {
d.stack = append(d.stack, d.phase)
d.phase = newPhase
d.some = false
}

func (d *Decoder) popPhase() {
d.phase = d.stack[len(d.stack)-1]
d.stack = d.stack[:len(d.stack)-1]
d.some = true
}

type decoderPhase uint8

const (
decoderPhase_acceptValue decoderPhase = iota
decoderPhase_acceptMapKeyOrEnd
decoderPhase_acceptMapValue
decoderPhase_acceptListValueOrEnd
)
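
// For example, decoding `{"a":[true]}` moves through phases like so:
//
//	acceptValue          -> MapOpen    (push; phase becomes acceptMapKeyOrEnd)
//	acceptMapKeyOrEnd    -> String "a" (consumes the ':'; phase becomes acceptMapValue)
//	acceptMapValue       -> ListOpen   (push; phase becomes acceptListValueOrEnd)
//	acceptListValueOrEnd -> Bool true
//	acceptListValueOrEnd -> ListClose  (pop; back to acceptMapKeyOrEnd)
//	acceptMapKeyOrEnd    -> MapClose   (pop; stack empty, the value is complete)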

func (d *Decoder) readn1skippingWhitespace() (majorByte byte, err error) {
if d.DecoderConfig.AllowWhitespace {
for {
majorByte, err = d.r.Readn1()
switch majorByte {
case ' ', '\t', '\r', '\n': // continue
default:
return
}
}
} else {
majorByte, err = d.r.Readn1()
switch majorByte {
case ' ', '\t', '\r', '\n':
return 0, fmt.Errorf("whitespace not allowed by decoder configured for strictness")
default:
return
}
}
}

// The initial step, where any value is accepted and no terminators for recursives are valid.
// Only used as the very first step; all the other steps handle leaf values internally (via stepHelper_acceptValue).
func (d *Decoder) step_acceptValue() error {
majorByte, err := d.r.Readn1()
if err != nil {
return err
}
return d.stepHelper_acceptValue(majorByte)
}

// Step in midst of decoding a map, key expected up next, or end.
func (d *Decoder) step_acceptMapKeyOrEnd() error {
majorByte, err := d.readn1skippingWhitespace()
if err != nil {
return err
}
if d.some {
switch majorByte {
case '}':
d.tok.Kind = codectools.TokenKind_MapClose
d.popPhase()
return nil
case ',':
majorByte, err = d.readn1skippingWhitespace()
if err != nil {
return err
}
// and now fall through to the next switch
// FIXME: AllowDanglingComma needs a check hereabouts
}
}
switch majorByte {
case '}':
d.tok.Kind = codectools.TokenKind_MapClose
d.popPhase()
return nil
default:
// Consume a value for key.
// Given that this is JSON, this has to be a string.
err := d.stepHelper_acceptValue(majorByte)
if err != nil {
return err
}
if d.tok.Kind != codectools.TokenKind_String {
return fmt.Errorf("unexpected non-string token where expecting a map key")
}
// Now scan up to consume the colon as well, which is required next.
majorByte, err = d.readn1skippingWhitespace()
if err != nil {
return err
}
if majorByte != ':' {
return fmt.Errorf("expected colon after map key; got 0x%x", majorByte)
}
// Next up: expect a value.
d.phase = decoderPhase_acceptMapValue
d.some = true
return nil
}
}

// Step in midst of decoding a map, value expected up next.
func (d *Decoder) step_acceptMapValue() error {
majorByte, err := d.readn1skippingWhitespace()
if err != nil {
return err
}
d.phase = decoderPhase_acceptMapKeyOrEnd
return d.stepHelper_acceptValue(majorByte)
}

// Step in midst of decoding an array.
func (d *Decoder) step_acceptListValueOrEnd() error {
majorByte, err := d.readn1skippingWhitespace()
if err != nil {
return err
}
if d.some {
switch majorByte {
case ']':
d.tok.Kind = codectools.TokenKind_ListClose
d.popPhase()
return nil
case ',':
majorByte, err = d.readn1skippingWhitespace()
if err != nil {
return err
}
// and now fall through to the next switch
// FIXME: AllowDanglingComma needs a check hereabouts
}
}
switch majorByte {
case ']':
d.tok.Kind = codectools.TokenKind_ListClose
d.popPhase()
return nil
default:
d.some = true
return d.stepHelper_acceptValue(majorByte)
}
}

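// stepHelper_acceptValue dispatches on the first byte of a value:
// an opening brace or bracket yields an open token and pushes the
// corresponding phase, while scalars (null, bool, string, number)
// are consumed in full before returning.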
func (d *Decoder) stepHelper_acceptValue(majorByte byte) (err error) {
switch majorByte {
case '{':
d.tok.Kind = codectools.TokenKind_MapOpen
d.tok.Length = -1
d.pushPhase(decoderPhase_acceptMapKeyOrEnd)
return nil
case '[':
d.tok.Kind = codectools.TokenKind_ListOpen
d.tok.Length = -1
d.pushPhase(decoderPhase_acceptListValueOrEnd)
return nil
case 'n':
d.r.Readnzc(3) // FIXME must check these equal "ull"!
d.tok.Kind = codectools.TokenKind_Null
return nil
case '"':
d.tok.Kind = codectools.TokenKind_String
d.tok.Str, err = DecodeStringBody(&d.r)
if err == nil {
d.r.Readn1() // Swallow the trailing `"` (which DecodeStringBody has ensured we have).
}
return err
case 'f':
d.r.Readnzc(4) // FIXME must check these equal "alse"!
d.tok.Kind = codectools.TokenKind_Bool
d.tok.Bool = false
return nil
case 't':
d.r.Readnzc(3) // FIXME must check these equal "rue"!
d.tok.Kind = codectools.TokenKind_Bool
d.tok.Bool = true
return nil
case '-', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
// Some kind of numeric... but in json, we can't tell if it's float or int. At least, certainly not yet.
// We'll have to look ahead quite a bit more to try to differentiate. The decodeNumber function does this for us.
d.r.Unreadn1()
d.tok.Kind, d.tok.Int, d.tok.Float, err = DecodeNumber(&d.r)
return err
default:
return fmt.Errorf("Invalid byte while expecting start of value: 0x%x", majorByte)
}
}