Collaborative Etiquette · Chat on Discord · Follow on Twitter · MIT License

EOSIO React Demux Scatter Boilerplate

This project was inspired by MonsterEOS' EOSIO Dream Stack architecture.


Architecture

Technical Specs

  • Virtualized environment.
  • Microservices architecture.
  • Out-of-box services:
    • Demux service for executing side effects and replicating data to Postgres.
    • GraphQL endpoint with Hasura for executing complex data queries with ease.
    • PGWeb instance for exploring the demux postgres database.
    • Postgres database for the dApp data.
    • Reactjs client with:
      • Scatter integration.
      • Lynx integration (WIP).
      • Material UI.
      • GraphQL Apollo client.
  • Automated code linting and testing.
  • Continuous Integration and Deployment (Travis and Netlify). CircleCI coming soon.

Note: at the moment we are not running the React client in a Docker container due to issues with hot reloading the app efficiently.

Important Disclaimer: This is a Work in Progress

Getting started

Basic knowledge of Docker, Docker Compose, EOSIO, and NodeJS is required.

Global Dependencies

Optionally

Commands

  • make start starts all containers and the reactjs app.
  • make flush stops and removes all containers and data.
  • make hasura opens the Hasura console in the browser.
  • make migrate runs Hasura migrations against the Postgres database.
  • docker-compose build builds all containers.
  • docker-compose up starts all containers.
  • docker-compose up --build rebuilds and starts all containers.
  • docker-compose exec [service_name] [bash | sh] opens bash or sh in a container.
  • docker-compose stop stops all containers.
  • docker-compose down stops and removes all containers.
  • docker-compose restart restarts all services.

See the full list of commands at https://docs.docker.com/compose/reference/

Directory Structure

.
├── docs/ .............................................. documentation files and media
├── contracts/ ......................................... eosio smart contracts
├── services/ .......................................... microservices
|   ├── demux/ ......................................... demux-js service
|   |   ├── utils/ ..................................... general utilities
|   |   ├── src/ ....................................... application business logic
|   |   ├── Dockerfile ................................. service image spec
|   |   ├── pm2.config.js .............................. process specs for pm2
|   |   ├── tsconfig.json .............................. typescript compiler config
|   |   ├── tslint.json ................................ code style rules
|   |   └── package.json ............................... service dependencies manifest
|   |
|   ├── hasura/ ........................................ graphql endpoint service
|   |   ├── migrations/ ................................ hasura postgres migrations
|   |   └── config.yaml ................................ hasura config file
|   |
|   └── frontend/ ...................................... reactjs frontend
|       ├── public/ .................................... static and public files
|       ├── src/ ....................................... reactjs views and components
|       ├── config-overrides.js ........................ configuration overrides for `cra`
|       ├── .env ....................................... environment variables
|       ├── .eslintrc .................................. code style rules
|       └── package.json ............................... service dependencies manifest
|
├── docker-compose.yaml ................................ docker compose for local dev
├── contributing.md .................................... contributing guidelines
├── license ............................................ project license
├── makefile ........................................... make tasks manifest
├── readme.md .......................................... project documentation
├── netlify.toml ....................................... netlify config file
├── .travis.yml ........................................ travis ci config file
└── .editorconfig ...................................... common text editor configs

Services

demux

Demux is a backend infrastructure pattern for sourcing blockchain events to deterministically update queryable datastores and trigger side effects.

Taking inspiration from the Flux Architecture pattern and Redux, Demux was born out of the following qualifications:

  1. A separation of concerns between how state exists on the blockchain and how it is queried by the client front-end
  2. Client front-end not solely responsible for determining derived, reduced, and/or accumulated state
  3. The ability for blockchain events to trigger new transactions, as well as other side effects outside of the blockchain
  4. The blockchain as the single source of truth for all application state

The resulting data flow is as follows:

  1. Client sends a transaction to the blockchain.
  2. Action Watcher invokes Action Reader to check for new blocks.
  3. Action Reader sees transaction in new block, parses actions.
  4. Action Watcher sends actions to Action Handler.
  5. Action Handler processes actions through Updaters and Effects.
  6. Actions run their corresponding Updaters, updating the state of the Datastore.
  7. Actions run their corresponding Effects, triggering external events.
  8. Client queries API for updated data.

Learn more at https://github.com/EOSIO/demux-js.

We recommend using EOS Local to connect your demux to an EOSIO node running on your machine.
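
To make the pattern concrete, here is a minimal sketch of how an updater and an effect might be wired together in the demux-js style. The mycontract::createpost action and the posts table are hypothetical; see services/demux/src for the handlers this boilerplate actually uses.

// Hypothetical updater/effect pair in the demux-js style; names are illustrative.

// Updaters deterministically mutate the queryable datastore (Postgres here).
const updaters = [
  {
    actionType: 'mycontract::createpost',
    apply: async (state: any, payload: any) => {
      // `state` is the datastore handle exposed by the Action Handler.
      await state.posts.insert({
        id: payload.data.id,
        author: payload.data.author,
        body: payload.data.body,
      })
    },
  },
]

// Effects trigger side effects (notifications, webhooks, new transactions)
// after the corresponding action has been applied.
const effects = [
  {
    actionType: 'mycontract::createpost',
    run: (payload: any) => {
      console.log(`new post by ${payload.data.author}`)
    },
  },
]

// The Action Handler consumes these as a handler version, and the Action
// Watcher drives it with blocks pulled from the Action Reader.
export const handlerVersion = { versionName: 'v1', updaters, effects }

The handler version is what steps 5 to 7 above execute: updaters write to the datastore, effects fire external events.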

graphql

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

There are many reasons for choosing GraphQL over other solutions; read Top 5 Reasons to Use GraphQL.

Move faster with powerful developer tools

Know exactly what data you can request from your API without leaving your editor, highlight potential issues before sending a query, and take advantage of improved code intelligence. GraphQL makes it easy to build powerful tools like GraphiQL by leveraging your API’s type system.

The GraphiQL instance on EOS Local is available at http://localhost:8088/console/api-explorer

Learn more at https://graphql.org & https://www.howtographql.com
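
As a small illustration, the snippet below sends a GraphQL query over plain HTTP and asks for exactly the fields it needs, nothing more. The endpoint path (http://localhost:8088/v1/graphql) and the posts table are assumptions for this sketch; check docker-compose.yaml and the Hasura console for the values used in your setup.

// Minimal GraphQL-over-HTTP request; no client library required.
// The endpoint URL and the `posts` table are assumptions for this example.
async function fetchPosts() {
  const response = await fetch('http://localhost:8088/v1/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      // Ask only for the fields the UI actually needs.
      query: 'query { posts { id author body } }',
    }),
  })
  const { data, errors } = await response.json()
  if (errors) throw new Error(JSON.stringify(errors))
  return data.posts
}

fetchPosts().then(posts => console.log(posts))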

hasura

Hasura GraphQL engine automatically generates your GraphQL schema and resolvers based on your tables/views in Postgres. You don’t need to write a GraphQL schema or resolvers.

The Hasura console gives you UI tools that speed up your data-modeling process and make it easy to work with your existing database. The console also automatically generates migrations and metadata files that you can edit directly and check into your version control.

Hasura GraphQL engine lets you do anything you would usually do with Postgres by giving you GraphQL over native Postgres constructs.

Learn more at https://hasura.io
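
For example, for a hypothetical posts table tracked by Hasura, the engine generates query fields with filtering, ordering and pagination arguments, plus insert/update/delete mutations, with no hand-written schema or resolvers (the table and column names below are made up):

// Shape of the schema Hasura generates for a hypothetical `posts` table.
// These documents can be sent with any GraphQL client (Apollo in this boilerplate).

const recentPostsByAuthor = `
  query RecentPosts($author: String!) {
    posts(
      where: { author: { _eq: $author } }
      order_by: { created_at: desc }
      limit: 10
    ) {
      id
      body
      created_at
    }
  }
`

const insertPost = `
  mutation InsertPost($object: posts_insert_input!) {
    insert_posts(objects: [$object]) {
      returning {
        id
      }
    }
  }
`

Tracking a new table in the console immediately exposes the matching fields, and the generated migrations end up under services/hasura/migrations.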

postgres

PostgreSQL is a powerful, open source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance.

  • Postgres has a strongly typed schema that leaves very little room for errors. You first create the schema for a table and then add rows to the table. You can also define relationships between different tables with rules so that you can store related data across several tables and avoid data duplication.

  • You can change tables in PostgreSQL without locking them for every operation. For example, you can add a column and set a default value quickly without locking the entire table; every row in the table then has the column, and your codebase stays clean without needing to check whether the column exists at every stage. Updating every row is also much quicker, since Postgres doesn't need to retrieve each row, update it, and write it back (see the sketch below).

  • Postgres also supports JSONB, which lets you create unstructured data, but with data constraint and validation functions to help ensure that JSON documents are more meaningful. The folks at Sisense have written a great blog with a detailed comparison of Postgres vs MongoDB for JSON documents.

  • The newest round of performance comparisons of PostgreSQL and MongoDB produced a near repeat of the results from the first tests that proved PostgreSQL can outperform MongoDB.
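
The sketch below exercises a few of these points with the node-postgres (pg) client: a typed schema, a JSONB column with a validation constraint, and adding a column with a default. The connection string and the posts table are assumptions for illustration; use the values from docker-compose.yaml.

import { Client } from 'pg'

// Connection values are assumptions; take the real ones from docker-compose.yaml.
const client = new Client({ connectionString: 'postgres://user:pass@localhost:5432/demux' })

async function demo() {
  await client.connect()

  // Strongly typed schema: invalid rows are rejected by the database itself.
  await client.query(`
    CREATE TABLE IF NOT EXISTS posts (
      id       bigint PRIMARY KEY,
      author   text   NOT NULL,
      metadata jsonb  NOT NULL DEFAULT '{"version": 1}'::jsonb,
      CHECK (metadata ? 'version') -- JSONB with a validation constraint
    )
  `)

  // Adding a column with a default is quick (no full table rewrite in Postgres 11+).
  await client.query('ALTER TABLE posts ADD COLUMN IF NOT EXISTS likes integer NOT NULL DEFAULT 0')

  const { rows } = await client.query('SELECT id, author, likes FROM posts LIMIT 5')
  console.log(rows)

  await client.end()
}

demo().catch(console.error)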

Learn more at https://www.postgresql.org

pgweb

Pgweb is a web-based database browser for PostgreSQL, written in Go, that works on macOS, Linux and Windows machines. The main idea behind using Go for backend development is to utilize the compiler's ability to produce zero-dependency binaries for multiple platforms. Pgweb was created as an attempt to build a very simple and portable application for working with local or remote PostgreSQL databases.

Docker Compose exposes a pgweb instance at http://localhost:8081, and also at http://pgweb.eoslocal.io through the nginx reverse proxy.

reactjs web client

In the services/frontend folder you will find a production-ready frontend with the Scatter and Lynx libraries ready for you to use.
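
The classic scatterjs-core login flow looks roughly like the sketch below. Treat it as a sketch only: the package names, network values, app name and exact API surface vary across Scatter versions, and the real integration lives in services/frontend/src.

import ScatterJS from 'scatterjs-core'
import ScatterEOS from 'scatterjs-plugin-eosjs'

ScatterJS.plugins(new ScatterEOS())

// Network values are placeholders; point them at your local or remote EOSIO node.
const network = {
  blockchain: 'eos',
  protocol: 'http',
  host: 'localhost',
  port: 8888,
  chainId: '<your-chain-id>',
}

ScatterJS.scatter.connect('react-demux-scatter').then((connected: boolean) => {
  if (!connected) return

  const scatter = ScatterJS.scatter
  scatter.getIdentity({ accounts: [network] }).then(() => {
    const account = scatter.identity.accounts.find((a: any) => a.blockchain === 'eos')
    console.log('logged in as', account.name)
  })
})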

This frontend application uses Material UI, a UI framework that allows you to build maintainable, scalable web and mobile interfaces.

Material UI

components
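
A view built with Material UI can be as small as the sketch below (the component, its props and the @material-ui/core package version are assumptions; the real views live in services/frontend/src):

import React from 'react'
import Button from '@material-ui/core/Button'
import Typography from '@material-ui/core/Typography'

// A hypothetical view component used only to illustrate Material UI usage.
const WelcomeCard: React.FC<{ onLogin: () => void }> = ({ onLogin }) => (
  <div>
    <Typography variant="h5" gutterBottom>
      EOSIO React Demux Scatter Boilerplate
    </Typography>
    <Button variant="contained" color="primary" onClick={onLogin}>
      Login with Scatter
    </Button>
  </div>
)

export default WelcomeCard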

Continuous Integration Process

  • TravisCI to run tests and code style checks.
  • Netlify for continuous delivery and creation of ephemeral test environments.

EOS Documentation & Learning Resources

Frequently Asked Questions

Why Containers?

The primary benefits of containers are efficiency and agility. Containers are orders of magnitude faster to provision, and much lighter-weight to build and define versus methods like omnibus software builds and full Virtual Machine images. Containers in a single OS are also more efficient at resource utilization than running a Hypervisor and guest OSs.

Efficiency and agility are good for everyone, but they become game-changers at scale.

Containers also give you the ability to run distinct versions of services such as EOSIO on your laptop without conflicts.

Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run. This decoupling allows container-based applications to be deployed easily and consistently, regardless of whether the target environment is a private data center, the public cloud, or even a developer’s personal laptop. Containerization provides a clean separation of concerns, as developers focus on their application logic and dependencies, while IT operations teams can focus on deployment and management without bothering with application details such as specific software versions and configurations specific to the app.

For those coming from virtualized environments, containers are often compared with virtual machines (VMs). You might already be familiar with VMs: a guest operating system such as Linux or Windows runs on top of a host operating system with virtualized access to the underlying hardware. Like virtual machines, containers allow you to package your application together with libraries and other dependencies, providing isolated environments for running your software services. The similarities end there, however: containers offer a far more lightweight unit for developers and IT Ops teams to work with, carrying a myriad of benefits.

Learn more at https://cloud.google.com/containers/

Why Database Migrations?

  • It enables a rock-solid deployment process because you do exactly the same thing when updating your local database, your development database, your QA database, your acceptance database and your production database. It's always the same process, and it can be automated.
  • You can easily bring a (CI) database to the point you want by loading a baseline backup and running all migration scripts up to a certain point.
  • If you do it right, you get database versioning and change documentation included.
  • The approach encourages small changes at a time, leading to less risky deployments.
  • It enables and empowers continuous integration because you can easily transport your functional state to different data sets (e.g. test data).
  • You know exactly what's happening. That is arguably the greatest benefit of all, because it gives you confidence that what you're delivering will work. It also gives you enormous flexibility and lets you solve any kind of challenge, even and especially ones that need specific business logic.

Learn more at https://dev.to/pesse/one-does-not-simply-update-a-database--migration-based-database-development-527d