This project was inspired by MonsterEOS' EOSIO Dream Stack architecture.
Table of Contents generated with DocToc
- Architecture
- Technical Specs
- Getting started
- Commands
- Directory Structure
- Services
- Continuous Integration Process
- EOS Documentation & Learning Resources
- Frequently Asked Questions
- Contributing
- About EOS Costa Rica
- License
- Contributors
- Virtualized environment.
- Microservices architecture.
- Out-of-the-box services:
  - Demux service for executing side effects and replicating blockchain data to Postgres.
  - GraphQL endpoint with Hasura for executing complex data queries with ease.
  - PGWeb instance for exploring the Demux Postgres database.
  - Postgres database for the dApp data.
  - React.js client with:
    - Scatter integration.
    - Lynx integration (WIP).
    - Material UI.
    - GraphQL Apollo client.
- Automated code linting and testing.
- Continuous Integration and Deployment (Travis and Netlify); CircleCI coming soon.
Note: at the moment we are not running the React client in a Docker container, due to issues with hot reloading the app efficiently.
Important Disclaimer: This is a Work in Progress
Basic knowledge of Docker, Docker Compose, EOSIO, and Node.js is required.
Global Dependencies
- Docker https://docs.docker.com/install/. Allocate at least 6GB of RAM to Docker (Docker -> Preferences -> Advanced -> Memory -> 6GB or above).
- Hasura CLI https://docs.hasura.io/1.0/graphql/manual/hasura-cli/install-hasura-cli.html
Optionally
- Install Node.js v11 on your machine. We recommend using n or nvm, together with avn, to manage multiple Node.js versions on your computer.
- Yarn https://yarnpkg.com/lang/en/docs/install/.
- `make start`: starts all containers and the React.js app.
- `make flush`: stops and removes all containers and data.
- `make hasura`: opens the Hasura console in the browser.
- `make migrate`: runs Hasura migrations against the Postgres database.
- `docker-compose build`: builds all containers.
- `docker-compose up`: starts all containers.
- `docker-compose up --build`: rebuilds and starts all containers.
- `docker-compose exec [service_name] [bash | sh]`: opens bash or sh in a container.
- `docker-compose stop`: stops all containers.
- `docker-compose down`: stops and removes all containers.
- `docker-compose restart`: restarts all services.
See the full list here https://docs.docker.com/compose/reference/
```
.
├── docs/ .............................................. documentation files and media
├── contracts/ ......................................... eosio smart contracts
├── services/ .......................................... microservices
│   ├── demux/ ......................................... demux-js service
│   │   ├── utils/ ..................................... general utilities
│   │   ├── src/ ....................................... application biz logic
│   │   ├── Dockerfile ................................. service image spec
│   │   ├── pm2.config.js .............................. process specs for pm2
│   │   ├── tsconfig.json .............................. typescript config
│   │   ├── tslint.json ................................ code style rules
│   │   └── package.json ............................... service dependencies manifest
│   │
│   ├── hasura/ ........................................ graphql endpoint service
│   │   ├── migrations/ ................................ hasura postgres migrations
│   │   └── config.yaml ................................ hasura config file
│   │
│   └── frontend/ ...................................... reactjs frontend
│       ├── public/ .................................... static and public files
│       ├── src/ ....................................... reactjs views and components
│       ├── config-overrides.js ........................ configuration overrides for `cra`
│       ├── .env ....................................... environment variables
│       ├── .eslintrc .................................. code style rules
│       └── package.json ............................... service dependencies manifest
│
├── docker-compose.yaml ................................ docker compose for local dev
├── contributing.md .................................... contributing guidelines
├── license ............................................ project license
├── makefile ........................................... make tasks manifest
├── readme.md .......................................... project documentation
├── netlify.toml ....................................... netlify config file
├── .travis.yml ........................................ travis ci config file
└── .editorconfig ...................................... common text editor configs
```
Demux is a backend infrastructure pattern for sourcing blockchain events to deterministically update queryable datastores and trigger side effects.
Taking inspiration from the Flux Architecture pattern and Redux, Demux was born out of the following qualifications:
- A separation of concerns between how state exists on the blockchain and how it is queried by the client front-end
- Client front-end not solely responsible for determining derived, reduced, and/or accumulated state
- The ability for blockchain events to trigger new transactions, as well as other side effects outside of the blockchain
- The blockchain as the single source of truth for all application state
The resulting data flow is:
1. Client sends a transaction to the blockchain.
2. Action Watcher invokes the Action Reader to check for new blocks.
3. Action Reader sees the transaction in a new block and parses its actions.
4. Action Watcher sends the actions to the Action Handler.
5. Action Handler processes the actions through Updaters and Effects.
6. Actions run their corresponding Updaters, updating the state of the Datastore.
7. Actions run their corresponding Effects, triggering external events.
8. Client queries the API for updated data.
Learn more at https://github.com/EOSIO/demux-js.
We recommend using EOS Local to connect your Demux service to an EOSIO node running on your machine.
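To make the pattern concrete, here is a minimal sketch of Updaters and Effects wired into a watcher, loosely following the demux-js README. It assumes the `demux` and `demux-eos` npm packages; the `eosio.token::transfer` action and the `ObjectActionHandler` import are illustrative, not code shipped in this repository.

```ts
// A minimal sketch, assuming the demux and demux-eos packages; the
// eosio.token::transfer action and ObjectActionHandler are illustrative.
import { BaseActionWatcher } from "demux"
import { NodeosActionReader } from "demux-eos"
import { ObjectActionHandler } from "./ObjectActionHandler" // hypothetical handler implementation

// Updaters deterministically apply each action to the datastore state
const updaters = [
  {
    actionType: "eosio.token::transfer",
    apply: async (state: any, payload: any) => {
      state.totalTransfers = (state.totalTransfers || 0) + 1
    },
  },
]

// Effects run non-deterministic side effects (logs, emails, new transactions)
const effects = [
  {
    actionType: "eosio.token::transfer",
    run: (payload: any) => {
      console.log("transfer seen:", payload.data)
    },
  },
]

const actionHandler = new ObjectActionHandler([{ versionName: "v1", updaters, effects }])
const actionReader = new NodeosActionReader("http://localhost:8888", 0) // local EOSIO node
const watcher = new BaseActionWatcher(actionReader, actionHandler, 500) // poll every 500ms
watcher.watch()
```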
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
There are many reasons for choosing GraphQL over other solutions; read Top 5 Reasons to Use GraphQL.
Move faster with powerful developer tools
Know exactly what data you can request from your API without leaving your editor, highlight potential issues before sending a query, and take advantage of improved code intelligence. GraphQL makes it easy to build powerful tools like GraphiQL by leveraging your API's type system.
The GraphiQL instance on EOS Local is available at http://localhost:8088/console/api-explorer
Learn more at https://graphql.org & https://www.howtographql.com
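As a taste of what this looks like in practice, any HTTP client can POST a query document to the GraphQL endpoint. In the sketch below, the `users` table and the `/v1alpha1/graphql` path are assumptions for illustration; check the Hasura console for the actual endpoint and schema.

```ts
// Hypothetical query against a Hasura-generated schema; the users table
// and the endpoint path are assumptions, not part of this boilerplate.
const query = `
  query RecentUsers {
    users(limit: 5, order_by: { created_at: desc }) {
      id
      name
    }
  }
`

fetch("http://localhost:8088/v1alpha1/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
})
  .then(res => res.json())
  .then(({ data, errors }) => console.log(data, errors))
```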
Hasura GraphQL engine automatically generates your GraphQL schema and resolvers based on your tables/views in Postgres. You don't need to write a GraphQL schema or resolvers.
The Hasura console gives you UI tools that speed up your data-modeling process and make it easier to work with your existing database. The console also automatically generates migrations and metadata files that you can edit directly and check into your version control.
Hasura GraphQL engine lets you do anything you would usually do with Postgres by giving you GraphQL over native Postgres constructs.
Learn more at https://hasura.io
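For instance, once a table exists in Postgres, Hasura exposes ready-made mutations for it without any hand-written resolver code. The sketch below shows the shape of such an auto-generated mutation for a hypothetical `users` table (the table and field names are assumptions):

```ts
// Sketch of a mutation Hasura would auto-generate for a hypothetical
// users table; no schema or resolver code is written by hand.
const addUser = `
  mutation AddUser($name: String!) {
    insert_users(objects: [{ name: $name }]) {
      affected_rows
      returning { id name }
    }
  }
`
```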
PostgreSQL is a powerful, open source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance.
- Postgres has a strongly typed schema that leaves very little room for errors. You first create the schema for a table and then add rows to it. You can also define relationships between tables, with rules, so that you can store related data across several tables and avoid data duplication.
- You can change tables in PostgreSQL without having to lock them for every operation. For example, you can add a column and set a default value quickly without locking the entire table. This ensures that every row in a table has the column and keeps your codebase clean, with no need to check whether the column exists at every stage. It is also much quicker to update every row, since Postgres doesn't need to retrieve each row, update it, and write it back.
- Postgres also supports JSONB, which lets you store unstructured data, but with data constraint and validation functions to help ensure that JSON documents are more meaningful (see the sketch below). The folks at Sisense have written a great blog post with a detailed comparison of Postgres vs MongoDB for JSON documents.
- The newest round of performance comparisons between PostgreSQL and MongoDB produced a near repeat of the results from the first tests, which showed that PostgreSQL can outperform MongoDB.
Learn more at https://www.postgresql.org
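To make the JSONB point concrete, here is a minimal sketch using the node-postgres (`pg`) driver; the `events` table, the connection string, and the CHECK constraint are illustrative assumptions, not part of this repository's schema.

```ts
// A minimal JSONB sketch, assuming the node-postgres (pg) driver; the
// events table and connection string are illustrative only.
import { Client } from "pg"

async function main() {
  const client = new Client({ connectionString: "postgres://user:pass@localhost:5432/eoslocal" })
  await client.connect()

  // JSONB column with a validation constraint: every payload must carry an "account" key
  await client.query(`
    CREATE TABLE IF NOT EXISTS events (
      id serial PRIMARY KEY,
      payload jsonb NOT NULL CHECK (payload ? 'account')
    )
  `)

  // pg serializes plain objects to JSON for jsonb parameters
  await client.query("INSERT INTO events (payload) VALUES ($1)", [
    { account: "eoslocal", action: "transfer" },
  ])

  // ->> extracts a JSON field as text, so unstructured data stays queryable
  const { rows } = await client.query(
    "SELECT payload->>'account' AS account FROM events WHERE payload->>'action' = $1",
    ["transfer"],
  )
  console.log(rows)

  await client.end()
}

main().catch(console.error)
```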
Pgweb is a web-based database browser for PostgreSQL, written in Go, that works on OSX, Linux, and Windows machines. The main idea behind using Go for backend development is to leverage the compiler's ability to produce zero-dependency binaries for multiple platforms. Pgweb was created as an attempt to build a very simple and portable application for working with local or remote PostgreSQL databases.
Docker Compose exposes a pgweb instance at http://localhost:8081 and also through http://pgweb.eoslocal.io via the nginx reverse proxy.
In the services/frontend folder you will find a production-ready frontend with the Scatter and Lynx libraries ready for you to use.
This frontend application uses Material UI, a UI framework that allows you to build maintainable, scalable web and mobile interfaces.
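As an illustration of the Scatter integration, here is a minimal connection sketch assuming the scatterjs-core and scatterjs-plugin-eosjs packages; the app name, network values, and chain id are placeholders you would replace with your own.

```ts
// A minimal Scatter connection sketch; app name, host, and chainId are placeholders.
import ScatterJS from "scatterjs-core"
import ScatterEOS from "scatterjs-plugin-eosjs"

ScatterJS.plugins(new ScatterEOS())

const network = {
  blockchain: "eos",
  protocol: "http",
  host: "localhost", // e.g. a local EOS Local node
  port: 8888,
  chainId: "<your-chain-id>",
}

ScatterJS.scatter.connect("eoslocal-demo").then((connected: boolean) => {
  if (!connected) {
    console.error("Scatter is not running")
    return
  }
  // Ask the user for an identity (account) on the configured network
  ScatterJS.scatter.getIdentity({ accounts: [network] }).then(() => {
    const account = ScatterJS.scatter.identity.accounts
      .find((a: any) => a.blockchain === "eos")
    console.log("Logged in as", account.name)
  })
})
```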
- react-app-rewired for tweaking the `create-react-app` configuration without ejecting.
- reach-router for a more accessible router.
- State management with rematch, to apply `redux` best practices without all the boilerplate (see the sketch after this list).
- react-apollo, the React Apollo client.
- material-ui.
- scatter-js.
- eoslynx integration.
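To show what "redux without the boilerplate" means, here is a minimal rematch model sketch; the `count` model is an invented example, not one shipped in this repo.

```ts
// A minimal rematch model sketch; the count model is illustrative only.
import { init } from "@rematch/core"

const count = {
  state: 0,
  reducers: {
    // pure functions describing how state changes
    increment: (state: number, payload: number) => state + payload,
  },
  effects: (dispatch: any) => ({
    // async side effects that dispatch reducers when done
    async incrementAsync(payload: number) {
      await new Promise(resolve => setTimeout(resolve, 1000))
      dispatch.count.increment(payload)
    },
  }),
}

const store = init({ models: { count } })
store.dispatch.count.increment(1) // no action types or switch statements needed
```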
- TravisCI to run tests and code style checks.
- Netlify for continuous delivery and the creation of ephemeral test environments.
The primary benefits of containers are efficiency and agility. Containers are orders of magnitude faster to provision, and much lighter-weight to build and define versus methods like omnibus software builds and full Virtual Machine images. Containers in a single OS are also more efficient at resource utilization than running a Hypervisor and guest OSs.
Efficiency and agility are good for everyone, but they become game-changers at scale.
It also gives you the ability to run distinct versions of the different services, like EOSIO, on your laptop without conflicts.
Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run. This decoupling allows container-based applications to be deployed easily and consistently, regardless of whether the target environment is a private data center, the public cloud, or even a developer's personal laptop. Containerization provides a clean separation of concerns, as developers focus on their application logic and dependencies, while IT operations teams can focus on deployment and management without bothering with application details such as specific software versions and configurations specific to the app.
For those coming from virtualized environments, containers are often compared with virtual machines (VMs). You might already be familiar with VMs: a guest operating system such as Linux or Windows runs on top of a host operating system with virtualized access to the underlying hardware. Like virtual machines, containers allow you to package your application together with libraries and other dependencies, providing isolated environments for running your software services. As you'll see below, however, the similarities end here, as containers offer a far more lightweight unit for developers and IT Ops teams to work with, carrying a myriad of benefits.
Learn more at https://cloud.google.com/containers/
- It enables a rock-solid deployment process, because you are doing exactly the same thing when updating your local database, your development database, your QA database, your acceptance database, and your production database. It's always the same process, and it can be automated.
- You can easily bring a (CI) database to the point you want by loading a baseline backup and running all migration scripts up to a certain point.
- If you do it right, you get database versioning and change documentation included.
- The approach encourages small changes at a time, leading to less risky deployments.
- It enables and empowers continuous integration, because you can easily transport your functional state to different data sets (e.g. test data).
- You know exactly what's happening. That's, in my opinion, the greatest benefit of all, because it gives you confidence that what you're delivering will work. It also gives you enormous flexibility and lets you solve any kind of challenge, even and especially ones which need specific business logic.
Learn more at https://dev.to/pesse/one-does-not-simply-update-a-database--migration-based-database-development-527d