Easy logging package for all your logging needs.
- Log to multiple endpoints at once
- Support for STDOUT, Elasticsearch, and databases (MySQL, PostgreSQL, SQLite), with more coming soon
- Easy syntax
- Fail-over reporting (if one endpoint fails, the failure is reported to the other endpoints)
Install loghandler via pip:
pip install loghandler
In your code, import LogHandler and initialize it:
from loghandler import LogHandler

logger = LogHandler({
    "log_level": "DEBUG",
    "outputs": [
        {
            "type": "STDOUT"
        }
    ]
})
You can now log messages to all your outputs via:
logger.log('fatal', Exception("Something went HORRIBLY wrong"))
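A common pattern is logging a caught exception; the sketch below would produce the "division by zero" entries shown in the endpoint examples later in this README ("fatal" is the only level name this README demonstrates, so treat other level names as assumptions):

try:
    1 / 0
except ZeroDivisionError as exc:
    # The exception object is passed directly, as in the example above
    logger.log("fatal", exc)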
The following endpoints are currently in the works and will be supported soon.
- logstash
- sentry
All endpoints accept a few global settings, described below; a combined example follows the list.
log_level
: For the output it's applied to, this overrides the global log level
report_error
: If an output fails to send a message, the error is reported to the other outputs. Defaults to True. (Turning this off is not recommended.)
retry_after
: Defines how long (in seconds) an output should wait before retrying. Defaults to 15.
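For example, an output entry combining all three settings might look like the sketch below. The values are illustrative, and placing these keys alongside "type" (rather than inside "config") is an assumption based on their description as per-output settings:

{
    "type": "STDOUT",
    "log_level": "ERROR",  # Assumed placement: overrides the global log_level for this output only
    "report_error": True,  # The default
    "retry_after": 30  # Seconds
}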
To use STDOUT as a log endpoint, add the following to your outputs array.
{
    "type": "STDOUT"
}
To use Elasticsearch as a log endpoint, add the following to your outputs array.
{
    "type": "elasticsearch",
    "config": {
        "hosts": ["https://your-es-host.com:9243"],
        "ssl": True,
        "verify_certs": True,
        "refresh": "wait_for",  # Must be either "true", "false" or "wait_for"
        "index": "your-index",  # Index will be created if it doesn't exist
        "api_key": ("your-api-key-id", "your-api-key-secret")
    }
}
Next time something is logged, you should see something like the following in your index:
{
    "_index" : "logs",
    "_type" : "_doc",
    "_id" : "some-id",
    "_score" : 1.0,
    "_source" : {
        "timestamp" : "2021-11-05T04:16:25.250206",
        "level" : "DEBUG",
        "hostname" : "YOUR-HOSTNAME",
        "message" : "division by zero",
        "occurred_at" : {
            "path" : "/somepath/test.py",
            "line" : 22
        }
    }
}
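To spot-check these entries, you can search the index with the official elasticsearch Python client. This is a minimal sketch, assuming the elasticsearch-py 8.x client and reusing the host, index, and API key from the config above:

from elasticsearch import Elasticsearch

# Connection details mirror the elasticsearch output config above
es = Elasticsearch(
    "https://your-es-host.com:9243",
    api_key=("your-api-key-id", "your-api-key-secret"),
)
resp = es.search(index="your-index", query={"match": {"message": "division by zero"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["level"], hit["_source"]["message"])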
Table Structure
All database endpoints (SQLite, MySQL, PostgreSQL) log to a table with the following structure (SQLAlchemy; db_config is the endpoint's "config" dict):

from sqlalchemy import Column, DateTime, Integer, MetaData, String, Table, Text

metadata = MetaData()
Table(
    db_config["table_name"],
    metadata,
    Column("id", Integer, primary_key=True),
    Column("message", Text),
    Column("level", String),
    Column("origin", String),
    Column("timestamp", DateTime),
)
To use SQLite as a log endpoint, add the following to your outputs array.
{
    "type": "sqlite",
    "config": {
        "table_name": "logs",  # Will be created if it doesn't exist
        "db_path": "/path/to/db.sqlite"  # Will be created if it doesn't exist
    }
}
Next time something is logged, you should see something like the following in your table:
('division by zero', 'DEBUG', '/somepath/test.py:31', '2021-11-07 01:27:24.755989')
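You can verify the rows with Python's built-in sqlite3 module; this sketch assumes the db_path from the config above and the column names from the table structure. The same SELECT works for the MySQL and PostgreSQL endpoints with your preferred DB-API driver:

import sqlite3

# Path and column names taken from the sqlite config and table structure above
conn = sqlite3.connect("/path/to/db.sqlite")
for row in conn.execute("SELECT message, level, origin, timestamp FROM logs"):
    print(row)
conn.close()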
To use MySQL as a log endpoint, add the following to your outputs array.
{
    "type": "mysql",
    "config": {
        "table_name": "logs",
        "connection_string": "root:example@localhost:3306/example_db"  # user:password@host:port/database
    }
}
Next time something is logged, you should see something like the following in your table:
division by zero | DEBUG | /somepath/test.py:22 | 2021-11-07 01:46:58
To use PostgreSQL as a log endpoint, add the following to your outputs array.
{
    "type": "pgsql",
    "config": {
        "table_name": "logs",
        "connection_string": "postgres:postgres@localhost:5432/example"  # user:password@host:port/database
    }
}
Next time something is logged, you should see something like the following in your table:
division by zero | DEBUG | /somepath/test.py:22 | 2021-11-07 01:46:58