
Releases: rijs/fullstack

v0.9.1

26 Mar 03:31

Notable Changes

Automatic Push (Basic React Example)

In an ideal world, a developer could just start writing and running code like:

import foo from './foo.js'

However, the browser would fetch ./foo.js and parse it before realising it needs ./bar.js to evaluate, and only then make another round-trip to the server for ./bar.js (repeating for each dependency in the chain). This is why ES Modules as the runtime format are not suitable in production without a co-operating server.
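For example, suppose the entry module itself depends on another module (file contents here are illustrative):

// ./foo.js — the browser only discovers this dependency after
// fetching and parsing foo.js, triggering another round-trip
import bar from './bar.js'

export default () => bar() + 1

Each level of the dependency graph costs at least one further round-trip, so deep graphs load serially.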

In terms of user experience, the current practice of bundling is too coarse-grained, not dedupable, and results in over-fetching, since far more code is loaded than is actually used. In terms of developer experience, it is purely an unnecessary cognitive and resource overhead, especially for developers who try to split bundles themselves.

In v0.9, Ripple now automatically pushes the dependencies of a resource.

In terms of user experience, this results in the theoretical upper bound for performance. If you request a resource with dependencies, only a single round-trip is made. Since we are dealing with finer-grained units (modules), we can dedupe, get more cache hits, and pull only the deltas for what we don't already have. As a user interacts with your site, the module map is progressively populated: they may, for example, have visited pages that already loaded moment and lodash, so by the time they reach a page that needs those, there will be zero network interactions, resulting in a much more app-like experience. In other words, dynamically streaming and populating the module cache as needed achieves a performance profile that static bundles fundamentally cannot, both for startup and for continued interaction with your site. It also naturally enables interleaving and progressive rendering without having to think about it.

In terms of developer experience, this completely eliminates the need for a product developer to think about the issue or spend any time on it. They never have to create bundles again or think about how best to split them, and not only do they not have to worry about performance, they can rest assured users will get the best performance that is theoretically possible. The approach also scales with size: you can hack together a few modules and just fire up a server, or you can have an enterprise application that spans many teams and a million modules, and fire up the same server. Since there is no build step between authoring modules and seeing the results, iteration is much faster too. There are none of the dependency-hell issues JavaScript developers have come to expect, and deduping is already handled by how npm installs dependencies.

Check out the minimal example here

  • In the future:
    • This currently works for CommonJS, since that's what the world is using right now. Support for ESM will also be added. This is fairly trivial to do now, but the plan was to wait for an unflagged version of Node that natively supports ESM first. Since that now seems some way off, support for ES Modules will be added sooner.
    • The primary concern is still getting things right before fine-tuning for absolute performance. After ESM, the performance screws will be tightened and an updated version of the loading-10k-modules challenge will be published.
    • Reinvestigate HTTP/2 + Push as the underlying transport mechanism. Every time I have looked at this, however, it hasn't seemed quite right, and given more recent discussions around HTTP/2 Push, it seems more people are realising it is not a panacea that will just fix everything.
    • The offline module caches resources and boots from those before any network interaction even happens, greatly improving the perceived startup time for repeat loads. This has been removed for now and will be refactored to use the new Cache API instead (also revisiting Service Workers).
    • Server-side rendering would also improve perceived startup time.

v0.8.1

19 Nov 16:06

Notable Changes

Dynamic Minimum Transpilation (Example: Realtime Hackernews)

  • Resources can now be dynamically transpiled, controlled using the transpile header
  • Transpiled versions are stored in an LRU cache per resource. The size of the cache can be controlled by transpile.limit
  • By default, the limit is set to 25 for all resources of type application/javascript
  • That means it works automatically for all components; you can opt out by setting it to 0
  • You can also opt in for resources that aren't functions. For example, in the following case we register a resource (lodash) that is an object with functions as some of its properties:
// server
const filter = require('lodash.filter')
    , map = require('lodash.map')

ripple('lodash', { filter, map }, { transpile: { limit: 25 }})

// client 
ripple.get('lodash', 'filter') // will return transpiled version 
ripple.subscribe('lodash', 'filter') // will return transpiled version + update on change 
  • Transpilation works on delta updates too, but these do not use an LRU cache. Dynamically updating part of a resource that happens to be a function seems an unusual/niche case, so at a minimum it works - the main use case optimised for is resources that are functions.
  • All updates, whether top-level or partial, will evict the transpilation cache for that resource, so subscriptions and hot reloading work too
  • Transpilation is done by buble, so bear in mind async/await will not work until this is merged.
  • There are end-to-end tests added for all these use cases
  • In the future:
    • The transpile header will be extended so you can preheat the LRU cache AOT with the transpiled versions you want when you start the server.
    • A fallback option (pending this) will allow you to specify the default transforms when support data does not exist for a particular browser/version (typically applying either no transforms or all transforms)
    • This will work for free if you GET a resource via HTTP

Commits

examples

  • [a953a0d] - feat: add hackernews, sliding-blotter

fn

  • [306e54a] - feat: set transpilation headers by default

needs

  • [a7b35e3] - feat: parallelise loading dependencies

sync

  • [5f9952a] - fix: should allow subscribing to keys as numbers
  • [bf8652a] - feat: dynamically transpile
  • [5c3ecca] - fix: always unwrap streams

v0.8.0

12 Nov 20:39

Notable Changes

  • You can now respond with a single value, promise or stream in response to requests. ripple.send on the client returns an awaitable stream (see the sketch after this list). See this blotter example for a demonstration of how powerful this can be.

  • socket.io has been dropped in favour of uws and nanosocket (see also xrs). The total size of the client is now just ~7 kB.

  • All modules are now bundled with buble + rollup instead of babel + browserify.
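As a minimal sketch of the first point (the resource name and type here are illustrative; the send signature is described in the v0.6.0 notes below):

// client (inside an async function) - send returns an awaitable stream of replies
const replies = await ripple.send('quotes', 'fetch')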

v0.7.0

22 Dec 22:24

Notable Changes

Dynamic import()

ripple.pull() is now similar to the dynamic import() proposal. The core of Ripple is like a module map - a map of URIs to resources. pull('resource') will either resolve to the resource already in the core, return the existing promise if a request has already been made, or create a new promise. Additionally, all resources automatically stream updates via the change event (note: they will be ES6 Observables in the future). This is also a useful pattern for pausing a component render to resolve dependencies (using async/await), as sketched below.
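For example, a render can await its dependencies before touching the DOM (a minimal sketch; the resource name and render shape are assumptions):

// resolves immediately if 'users' is already in the core,
// otherwise returns the in-flight or a new promise
const render = async el => {
  const users = await ripple.pull('users')
  el.textContent = users.length + ' users'
}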

Zero Boilerplate

The previous README had the following for the index.js:

const app = require('express')()
    , server = app.listen(3000)
    , ripple = require('rijs')({ server })

app.get('/*', (req, res) => res.sendFile(__dirname+'/index.html'))

Now if you don't pass a server, one will be created for you. This will be on a random port, unless you want to specify one using port.
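For example, to pin the port instead of letting one be chosen at random:

const ripple = require('rijs')({ port: 3000 })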

Furthermore, your /pages directory is now also statically served. The minimal index.js therefore becomes a single line:

const ripple = require('rijs')({ dir: __dirname })

Commits

backpressure

  • [9a11a45] - 0.1.6
  • [c024df5] - chore: workaround travis npm i issue
  • [ae5cf8a] - chore: build dist
  • [2627e68] - fix: failing tests and test promisified pull
  • [76ba325] - refactor: make pull === import() semantics
  • [fec4af5] - fix: do not block/end pull requests
  • [1be0b23] - refactor: wrap client code and match fn names

components

examples

export

features

  • [b56c19a] - chore: workaround travis npm i issue

fullstack

minimal

mintest

pages (new module)

precss

  • [27ab4ae] - chore: workaround travis npm i issue

resdir

serve

sync

v0.6.3

11 Sep 19:45

Notable Changes

  • The signature for components has changed. This should make it nicer to use arrow functions for simple components.
  • The DB & MySQL modules have been deprecated. There is no need for Ripple-specific modules for each database/service, so you can simply use any module/database/service you like.

Commits

components

delay

features

fullstack

helpers

  • [49a4c7b] - should skip over headerless changes

sync

  • [d368bbe] - feat: prefer ws over polling
  • [697f560] - fix: default to state of world if no change info
  • [9baeec8] - feat: use utilise/deb for debug logs

v0.6.2

29 Aug 00:44

Notable Changes

Web Components are dead, Long live the DOM (and Vanilla Components)

There are a few parts to the Web Components spec:

  • HTML Imports - This was dead on arrival, albeit killed not unreasonably: they wanted to see how ES6 Modules pan out first.

  • Custom Elements - This is fine. But using a custom tag has been possible since forever.

    You might think the lifecycle callbacks are new, but if you are interested in connected/disconnected/attributeChanged there are MutationObservers. Also, those are mostly useless events. The one lifecycle event that would have been super-useful to standardise is a single "render"/"update"/"draw". Actually, no thought has been given so far to how a component should update at all. Having a consistent contract across components, as per the vanilla spec for example, makes it trivial to compose completely unrelated components.

    Without a real component model, none of the Web Components specs help with building interoperable components.

  • Shadow DOM - This was initially the most exciting. Practically, the only thing this gives you is style encapsulation. That's not much, and something you can do with a BEM-esque transform (s/:host/component-name/g) for upper boundaries, and a simple > for lower boundaries (or other approaches, like inlining). But it was at least something you could progressively layer on if available. The open/closed divide has now made this a victim of design by committee, turning it into a poor man's iframe.

Since the future of "Web Components" now seems uncertain and lacking a clear direction, I've disabled the shadow module by default in rijs/fullstack (it's always been omitted in the client-only build rijs/minimal). This is very unfortunate to admit, as I've been very bullish in trying to align closely with them since the early days of the project. Thanks to the modular framework architecture however, this is as simple as commenting out one line, with no impact on applications. Chrome will now just render the same as Firefox/IE (no shadow roots, but custom tags still). I look forward to seeing how they continue to evolve, but currently they do not add any value and just degrade performance (in speed and size).

A few other notable aspects:

  • Slots: Components generally transform data to markup. The distribution algorithm is essentially markup-to-markup. This is a very unexpressive, highly opinionated solution, and authoring a component based on HTML input is akin to DOM scraping. A lot of time was wasted on this area imho, and I doubt it will catch on (contrast this approach with D3 joins for building a graph).
  • Scrapping is: This meant there was no way for a Web Component to participate in a <form>. So if you want <fancy-select>, you end up having to rewrite a lot more than expected.
  • State Propagation: The ability to deeply propagate changes has been taken away in V1 (yes, even for open shadows). Without some low-level primitive to do this efficiently, it is untenable for any framework to adopt Shadow DOM V1.

Commits

backpressure

delay

fullstack

mysql

  • [102ae81] - feat: use utilise/deb for debug logs

sync

  • [9baeec8] - feat: use utilise/deb for debug logs

upload (new module)

v0.6.0

28 Jul 22:00

Notable Changes

Cleaner Sync

The sync module (send/recv) has been hugely simplified, resulting in a more consistent paradigm for dealing with the flow of data across the stack. The plan is to make deploying realtime resources as simple as possible, similar to lambdas or now, hence there is quite a bit of inspiration from micro in this release. Here's an overview of the API:

send

Instead of stream, it is now send:

const { send } = ripple

send(sockets)(req)
  .then(replies => ..)
  • sockets: a socket, an array of sockets, a sessionID string identifying some sockets, or nothing, which would imply all connected sockets. On the client, you can only send to one socket (the server), so this is pre-bound (i.e. just send(req)).
  • req: the request object you wish to send, the name of a resource to send, an array of either of the previous two, or nothing, which would imply sending all resources. The req object can have any shape, but typically it would look like this (only name is mandatory):
send({ name, type, value })

For which you can use the shortcut:

send(name, type, value)

The tuple { key, value, type } is a standard atomic diff, a reified notion of a mutation, by which you can represent all change. An immutable log of these changes is stored for each resource and used to robustly replay and replicate state across nodes, following Kafka. This also happens to be a "request" in the REST sense, where name is the endpoint (URI), type is the method/verb (add/update/delete == PUT/PATCH/DELETE) and value is the body (key is used to make a partial modification, i.e. PUT vs PATCH).

This function returns a promise with all the replies.
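For illustration, here is a top-level update versus a keyed partial modification (the key path used here is purely illustrative):

// replace the whole value (≈ PUT)
send({ name: 'users', type: 'update', value: users })

// modify only part of the resource via key (≈ PATCH)
send({ name: 'users', type: 'update', key: '0.name', value: 'Ada' })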

Redux / Nap

Pro-tip: if sockets == ripple, you can also send requests to the same node to reuse logic in the from handler and use this in a redux/nap-like manner! This function is aliased as ripple.req = ripple.send(ripple).

const { req } = ripple

req({ name: 'store', type: 'INCREMENT' })
// or
req('store', 'INCREMENT')
  .then(..)
  .catch(..)

The req name is a hat-tip to websdk/nap which implements the same API (s/uri/name/, s/method/type/, s/body/value/) but takes a callback function instead of returning a promise.

from/to

You can define your request (from) / response (to) handlers on a resource (also per-type, or globally):

ripple('user', [], { from, to })

These always receive one req object now and default to the identity function. You can transform a req by returning something else. Returning a falsy value will ignore the request. You can also return a promise if you need to process asynchronously, and the eventual value will be used.

Typically, you will want to check the type (method/verb) and then delegate to the appropriate function (composing the return values). For example, there may be different actions you want to take on a user resource:

const from = (req, res) => 
  req.type == 'register' ? register(req, res)
: req.type == 'forgot'   ? forgot(req, res)
: req.type == 'logout'   ? logout(req, res)
: req.type == 'reset'    ? reset(req, res)
: req.type == 'login'    ? login(req, res)
                         : res(405, err('method not allowed', req.type))

You can use the res function to reply directly to a request, with any arbitrary arguments. Conventionally, Ripple sets the first parameter to the (HTTP) status code and the second to the message. This happens, for example, when a resource is not found (404), a type has not been handled (405), your custom handler threw an exception (500), or your local history for a resource is irreconcilably behind (409).

Error Handling

Your request handler can simply throw an error:

// server
function from(req, res) {
  throw new Error('WTF!!')
}

The error will be logged and then returned to the client where you can catch it.

// client
send(req)
  .then()
  .catch()

By default the response status code is 500 and the message is the error message. You can customise the status code returned by also changing the status property on the error.
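For example, to return a 403 rather than the default 500 (a sketch using the status property described above):

// server
function from(req, res) {
  const error = new Error('forbidden')
  error.status = 403
  throw error
}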

You can also respond directly instead of throwing:

res(500, 'something went wrong!')

Middleware

Middleware is handled explicitly as explained here, which is the same way to extend all Ripple modules.

Logging

The logging has been improved to be less noisy, whilst allowing further debug information (e.g. acks) to be shown by starting with DEBUG=[ri/sync].

Future Work

Future planned improvements:

  • The backpressure module neatly adds middleware to only send clients resources they need. This can be improved by tracking the subresources needed (key) instead of the whole resource.
  • Resurrect work on the hypermedia module to make traversing links across resources and exploring graph data more fluent.
  • A module that (competitively) synchronises Ripple nodes over TCP.

Commits

backpressure

  • [ed857e4] - feat: should also pull resources from is=
  • [8aa8f44] - refactor: use new simpler sync sig
  • [350499d] - chore: update header
  • [ae855cf] - chore: add colors to test commands

components

core

  • [b89f945] - should allow importing multiple resources from object
  • [fae8d8f] - chore: pull in chainable

docs

export

  • [a6ceb49] - chore: make js-beautify a regular dependency
  • [63a2251] - export object instead of array

features

  • [5534e63] - fix: update return value to be element

fn

fullstack

helpers

minimal

mysql

precss

resdir

  • [e2a8e49] - chore: add missing minimist dep
  • [db56594] - feat: should invoke loaded callback

serve

sync

v0.5.5

28 Jun 22:12

backpressure

  • [ce9fda4] - request potentially stale resources

db

  • [d75d413] - use named connections and single change event

examples

export

  • [e2f4f94] - more contained: input/output expects resource

features

  • [4f79ec7] - should invoke on shadow if present

mysql

  • [32e4e67] - disconnect hook to automatically push change and disable tests until api stable

perf

resdir

  • [ef183a7] - load from additional dirs, add loaded callback

sessions

shadow

sync

  • [f095347] - it should respond via ack if available

v0.5.4

02 May 16:57

Notable Changes

Active Change

Ripple now exposes to components the active changes that occurred to trigger the render. Only one change ({ key, value, type, time }) may be flowing through the system at a time, but since components are rAF-batched, multiple changes may occur between frames, so they are queued on the element (.change). Having direct access to these fine-grained, low-level changes allows components to better check whether a render can be skipped, as sketched below.
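A minimal sketch of a component using this (the key being checked is illustrative):

// skip the render if none of the queued changes touch 'items'
function list(data) {
  const changes = this.change || []
  if (changes.length && !changes.some(d => d.key == 'items')) return
  // ...render as usual
}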

Housekeeping

A bunch of docs have been updated (see the new homepage) and the repos have been organised better. The central repo (pemrouz/ripple) has now moved to rijs/fullstack to be co-located with the other official builds (e.g. rijs/minimal).

backpressure

  • [a5e3a5e] - docs: update readme
  • [2d33683] - should not make multiple requests for same resource
  • [b44b6e0] - add client tests and do not pull on bailed render

components

data

db

delay

docs

features

fn

helpers

hypermedia

mysql

needs

offline

perf

precss

resdir

sessions

shadow

singleton

sync

versioned

v0.5.3

14 Apr 23:50

Notable Changes

New Request-Response Semantics (without correlation IDs)

Clearly, not all changes should propagate to all nodes. The decoupled request (from) and response (to) handlers are capable of contextually blocking incoming/outgoing streams, in addition to transforming representations. However, as in the case of validation, blocking an incoming message is not always enough either - you need to respond with details of why. It is common for request-response implementations over a channel like WebSockets to hinge on a sufficiently random correlation ID. However, since we use logs as the underlying core data structure, we can leverage the index of the change (time) as a simpler and more robust UID. The implementation is conceptually simpler and literally a couple of one-liners, but as a convenience, a respond function is provided to from transformation functions to respond to particular incoming changes. On the client, you can use done to wait for a reply to a change; its implementation is also one line. Example:

// client
done(push(user)(members))
  (d => o.draw(d.invalid 
    ? (state.invalid = d.invalid) 
    : (state.invalid = false
    ,  state.confirm = true)))

// server    
function from({}, { value, type }, respond) {
  if (type !== 'add') return

  const me = ripple('user').whos(this)
      , user = validate(me, value)

  if (!user.invalid) push(user)(ripple('users'))
  respond(user)
}

Refinements to the View Layer

By one categorisation, there are two types of elements that may need to be drawn:

  • Newly added elements
  • Existing elements

By moving the .draw function from the element to Node.prototype, the approach for dealing with both of these becomes the same (call draw after updating state), without having to reference any globals (ripple.draw). This means we don't need to rely on the Custom Elements attachedCallback or the Mutation Observer polyfills (which have been removed) for the former, and all scenarios now work in all browsers (IE9 has been added to CI to reflect official support). Note that if you are using once, it will call .draw for you, so you never have to do this manually. If desired, the Mutation Observer code that invokes ripple.draw on newly added elements could be moved into a separate Ripple module.
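A minimal sketch of the unified approach (the element name and state shape are illustrative):

// the same pattern for newly added and existing elements:
// update state, then call draw on the element itself
const el = document.body.appendChild(document.createElement('user-list'))
el.state = { users: [] }
el.draw()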

Commits

components

core

data

export

  • [ead0541] - put each resource on newline for readability

sync

versioned