SmallBen
is a small and simple persistent scheduling library that combines cron with a persistence layer. This means that jobs added to the scheduler persist across runs. As of now, the only supported persistence layer is gorm.
Features:
- simple, both to use and to maintain
- relies on well-known libraries, adding just a thin layer on top of them
- supports prometheus metrics
This library can be thought of, somehow, as a (much) simpler version of Java's Quartz.
APIs are almost finalized, but they should not be considered stable until a v1 release.
A Job is the central struct of this library. A Job contains, among others, the following fields, which must be specified by the user.
- ID: unique identifier of each job
- GroupID: unique identifier useful to group jobs together
- SuperGroupID: unique identifier useful to group groups of jobs together; for instance, it can be used to model different users. The semantics are left to the user (see the sketch after this list).
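For illustration, here is a sketch of how the grouping fields might be used to model two jobs belonging to the same user. The FooJob implementation and the remaining Job fields are introduced later in this README; all concrete values are made up.

// A sketch: two jobs belonging to the same user (same SuperGroupID),
// but grouped differently.
backupJob := smallben.Job{
    ID:             10,
    GroupID:        1,  // e.g., the "backups" group
    SuperGroupID:   42, // e.g., user 42
    CronExpression: "@every 1h",
    Job:            &FooJob{},
    JobInput:       make(map[string]interface{}),
}
reportJob := smallben.Job{
    ID:             11,
    GroupID:        2,  // a different group...
    SuperGroupID:   42, // ...for the same user
    CronExpression: "@every 24h",
    Job:            &FooJob{},
    JobInput:       make(map[string]interface{}),
}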
The concrete execution logic of a Job
is wrapped in the CronJob
interface, which is defined as follows.
// CronJob is the interface jobs have to implement.
// It contains a single method, `Run`.
type CronJob interface {
    Run(input CronJobInput)
}
The Run method takes as input a struct of type CronJobInput, which is defined as follows.
// CronJobInput is the input passed to the Run function.
type CronJobInput struct {
    // JobID is the ID of the current job.
    JobID int64
    // GroupID is the GroupID of the current job.
    GroupID int64
    // SuperGroupID is the SuperGroupID of the current job.
    SuperGroupID int64
    // CronExpression is the interval of execution, as specified on job creation.
    CronExpression string
    // OtherInputs contains the other inputs of the job.
    OtherInputs map[string]interface{}
}
In practice, each (implementation of) CronJob receives as input a bunch of data containing some information about the job itself. In particular, OtherInputs is a map that can contain arbitrary data needed by the job; a usage sketch follows the list below.
Since jobs are persisted using gob serialization, it is important to:
- register the concrete types implementing CronJob (see below)
- pay attention to updates to the code, since they might break serialization/deserialization.
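As a minimal sketch of how OtherInputs can be consumed, a job might read its data back with a type assertion. The GreetJob type and the "name" key are made up for illustration; they are not part of the library.

import (
    "fmt"

    "github.com/nbena/smallben"
)

// GreetJob is a hypothetical CronJob implementation.
type GreetJob struct{}

func (g *GreetJob) Run(input smallben.CronJobInput) {
    // OtherInputs values are stored as interface{},
    // so a type assertion is needed to recover them.
    if name, ok := input.OtherInputs["name"].(string); ok {
        fmt.Printf("hello, %s\n", name)
    }
}

Like any other CronJob implementation, GreetJob would also need to be registered with gob, as shown below.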
The first thing to do is to configure the persistent storage. The gorm-backed storage is called RepositoryGorm, and it is created by passing in two structs:
- gorm.Dialector
- gorm.Config
import (
    "github.com/nbena/smallben"
    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

dialector := postgres.Open("host=localhost dbname=postgres port=5432 user=postgres password=postgres")

repo, _ := smallben.NewRepositoryGorm(&smallben.RepositoryGormConfig{
    Dialector: dialector,
    Config:    gorm.Config{},
})
The second thing to do is to define an implementation of the CronJob
interface.
import (
    "fmt"

    "github.com/nbena/smallben"
)

type FooJob struct{}

func (f *FooJob) Run(input smallben.CronJobInput) {
    fmt.Printf("You are calling me, my ID is: %d, my GroupID is: %d, my SuperGroupID is: %d\n",
        input.JobID, input.GroupID, input.SuperGroupID)
}
Now, this implementation must be registered to make gob encoding work. A good place to do it is the init() function.
import (
"encoding/gob"
)
func init() {
    gob.Register(&FooJob{})
}
The third thing to do is to actually create a Job, which we will later submit to SmallBen. Other than ID, GroupID, and SuperGroupID, the following fields must be specified.
- CronExpression: specifies the execution interval, following the format used by cron
- Job: specifies the actual implementation of CronJob to execute
- JobInput: specifies other inputs to pass to the CronJob implementation. They will be available at input.OtherInputs, and they are static, i.e., modifications to them are not persisted.
// Create a Job struct. No builder-style API.
job := smallben.Job{
    ID:           1,
    GroupID:      1,
    SuperGroupID: 1,
    // executed every 5 seconds
    CronExpression: "@every 5s",
    Job:            &FooJob{},
    JobInput:       make(map[string]interface{}),
}
The fourth thing to do is to actually create the scheduler, by passing in the storage interface and a configuration struct. The latter allows setting some options of cron, and configures the logger, which must implement logr. For instance, the example below uses zapr.
import (
    "github.com/go-logr/zapr"
    "go.uber.org/zap"

    "github.com/nbena/smallben"
)

// create the Zap logger
zapLogger, _ := zap.NewProduction()
// which is then wrapped by zapr, providing
// the compatibility layer with logr.
logger := zapr.NewLogger(zapLogger)

config := smallben.Config{
    // use default options for the scheduler
    SchedulerConfig: &smallben.SchedulerConfig{},
    Logger:          logger,
}

// create the repository as shown above, then
// create the scheduler, passing in the storage.
scheduler := smallben.New(repo, &config)
Next, the scheduler must be started. Starting the scheduler makes it fetch, from the storage, all the Jobs that must be executed.
err := scheduler.Start()
At this point, our Job can be added to the scheduler. All operations are done in batches and are protected by a lock.
err := scheduler.AddJobs([]smallben.Job{job})
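Putting the previous steps together, a minimal program might look like the following sketch. Error handling is kept deliberately minimal, the connection string is the same made-up one used above, and the FooJob type from above is assumed to be defined in the same package.

package main

import (
    "encoding/gob"
    "log"

    "github.com/go-logr/zapr"
    "go.uber.org/zap"
    "gorm.io/driver/postgres"
    "gorm.io/gorm"

    "github.com/nbena/smallben"
)

func init() {
    // register the CronJob implementation for gob
    gob.Register(&FooJob{})
}

func main() {
    // storage
    dialector := postgres.Open("host=localhost dbname=postgres port=5432 user=postgres password=postgres")
    repo, err := smallben.NewRepositoryGorm(&smallben.RepositoryGormConfig{
        Dialector: dialector,
        Config:    gorm.Config{},
    })
    if err != nil {
        log.Fatal(err)
    }

    // logger + scheduler
    zapLogger, _ := zap.NewProduction()
    config := smallben.Config{
        SchedulerConfig: &smallben.SchedulerConfig{},
        Logger:          zapr.NewLogger(zapLogger),
    }
    scheduler := smallben.New(repo, &config)
    if err = scheduler.Start(); err != nil {
        log.Fatal(err)
    }

    // submit a job
    job := smallben.Job{
        ID:             1,
        GroupID:        1,
        SuperGroupID:   1,
        CronExpression: "@every 5s",
        Job:            &FooJob{},
        JobInput:       make(map[string]interface{}),
    }
    if err = scheduler.AddJobs([]smallben.Job{job}); err != nil {
        log.Fatal(err)
    }

    // block forever, letting the scheduler run
    select {}
}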
That's all.
Other than adding Jobs, other operations can be performed on a SmallBen (i.e., scheduler) instance, as sketched after this list.
- DeleteJobs: permanently deletes a batch of jobs
- PauseJobs: pauses the execution of a batch of jobs, without actually deleting them from the storage
- ResumeJobs: resumes the execution of a batch of jobs
- UpdateSchedule: updates the execution interval of a batch of jobs
- ListJobs: lists jobs, according to some criteria
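The exact signatures of these methods are not shown here; purely as an illustration, and assuming they accept batches of job IDs (an assumption to verify against the actual API), usage might look like the following.

// Illustrative sketch only: the real method signatures may differ,
// as they are not shown in this README; batches of job IDs are assumed.
ids := []int64{1}

// temporarily stop executing these jobs, keeping them in the storage
_ = scheduler.PauseJobs(ids)

// start executing them again
_ = scheduler.ResumeJobs(ids)

// remove them permanently from the storage
_ = scheduler.DeleteJobs(ids)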
Simplicity. This library is extremely simple: simple to use, and simple to write and maintain. New features will be added to the core library only if this aspect is preserved.
Storage. The only supported storage is gorm. Using an ORM and an RDBMS might seem like overkill, but thanks to gorm the code stays quite simple, and thanks to the RDBMS the persisted data are quite safe. Furthermore, for quick experiments it is still possible to use sqlite.
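For instance, assuming the same RepositoryGormConfig shown above, an sqlite-backed repository might be created with gorm's sqlite driver; the database file name here is made up.

import (
    "gorm.io/driver/sqlite"
    "gorm.io/gorm"

    "github.com/nbena/smallben"
)

// open (or create) a local sqlite database file,
// handy for quick experiments without a running RDBMS
repo, _ := smallben.NewRepositoryGorm(&smallben.RepositoryGormConfig{
    Dialector: sqlite.Open("smallben.db"),
    Config:    gorm.Config{},
})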
Other storage. The functionalities exposed by the gorm-backed storage, in fact, implement an interface called Repository, which is public. The SmallBen struct works with that interface, so it would be quite easy to add more backends, if needed.
Deployment. In the scripts directory there are the necessary files to start a dockerized postgres instance for this library. Just one table is needed. For a quicker deployment, one might consider using SQLite.