This project lets you DBC-decode CAN data from your CANedge into physical values - and push the data into an InfluxDB database. From here, the data can be displayed via your own customized, open source Grafana dashboard.
For the full step-by-step guide to setting up your dashboard, see the CANedge intro.
- easily load MF4 log files from local disk or S3 server
- fetch data from hardcoded time period - or automate with dynamic periods
- DBC-decode data and optionally extract specific signals
- optionally resample data to specific frequency
- write the data to your own InfluxDB time series database
We recommend installing Python 3.7 for Windows (32 bit/64 bit) or Linux. Once installed, download and unzip the repository, then navigate to the folder containing the `requirements.txt` file.
In your explorer path, type `cmd` and hit enter to open your command prompt.
Next, enter the below and hit enter to install script dependencies:
pip install -r requirements.txt
Tip: Watch this video walkthrough of the above.
- Download this repository incl. the J1939 data and demo DBC
- In `inputs.py`, add your InfluxDB details, then run `python main.py` via the command line
Note: If you use a free InfluxDB Cloud account, the sample data will be removed after a period (as it is >30 days old).
- Local disk: Add your own data next to the scripts as per the SD structure: `LOG/<device_ID>/<session>/<split>.MF4`
- S3 server: Add your S3 server details in `inputs.py` and set `s3 = True`
- In `inputs.py`, update the DBC path list and the device list to match yours (see the illustrative sketch after this list)
- Optionally modify the signal filters or resampling frequency
- On the 1st run, the script will process data starting from `default_start` (you may want to modify this)
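To make the above concrete, an illustrative `inputs.py` could look as below - the variable names and values are examples only and may differ from the actual file in your version of the repository:

```python
# inputs.py - illustrative sketch only; names/values are examples
devices = ["LOG/958D2219"]                # device list (device IDs incl. LOG/ prefix)
dbc_paths = ["dbc_files/j1939_demo.dbc"]  # DBC path list (example path)
signals = []                              # optional signal filter, e.g. ["EngineSpeed"]
res = "1S"                                # optional resampling frequency (pandas offset alias)
default_start = "2020-01-01 00:00:00"     # start time used on the 1st run
s3 = False                                # set to True to load log files from your S3 server
dynamic = True                            # only process log files newer than data in InfluxDB
```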
There are multiple ways to automate the script execution.
One approach is via periodic execution, triggered e.g. by Windows Task Scheduler or Linux cron jobs. By default, the script is 'dynamic', meaning that it will only process log files that have not yet been added to the InfluxDB database. It achieves this by fetching the 'most recent' timestamp (across signals) for each device in InfluxDB, and then only fetches log files that contain data newer than this timestamp.
If no timestamps are found in InfluxDB for a device, `default_start` is used. The same applies if `dynamic = False` is set. If the script is e.g. temporarily unable to connect to InfluxDB, no log files will be listed for processing.
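As an illustration of the principle (the actual script's internals may differ), the sketch below fetches the most recent timestamp for a device via the influxdb-client library and a Flux query - the URL, token, org and bucket are placeholders:

```python
from influxdb_client import InfluxDBClient

# Placeholders - use your own InfluxDB details
client = InfluxDBClient(url="https://your-influxdb:8086", token="<token>", org="<org>")

# Flux query: return the last record per signal for the device 'measurement'
query = """
from(bucket: "<bucket>")
  |> range(start: 0)
  |> filter(fn: (r) => r._measurement == "958D2219")
  |> last()
"""

tables = client.query_api().query(query)
timestamps = [record.get_time() for table in tables for record in table.records]

# The most recent timestamp across signals - fall back to default_start if none exist
last_time = max(timestamps) if timestamps else None
```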
For details on setting up Windows Task Scheduler, see the CANedge Intro guide for browser dashboards.
Another approach is to use event-based triggers, e.g. via AWS Lambda functions. We provide a detailed description of setting up AWS Lambda functions in the `aws_lambda_example/` subfolder.
If you need to handle encrypted log files, you can provide a passwords dictionary object with a structure similar to the `passwords.json` file used in the CANedge MF4 converters. The object can be provided e.g. as below (or via environment variables):
pw = {"default": "password"} # hardcoded
pw = json.load(open("passwords.json")) # from local JSON file
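If you prefer environment variables, one hypothetical approach (the variable name is an example) is:

```python
import json
import os

# Hypothetical environment variable holding the passwords JSON string
pw = json.loads(os.environ["CANEDGE_PASSWORDS"])
```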
If you wish to test the script using old data, you can change the timestamps so that the data is 'rebaselined' to today, minus an offset number of days. This is useful e.g. if you want to use the InfluxDB Cloud Starter, which will delete data that is older than 30 days. To rebaseline your data to start today minus 2 days, simply add `days_offset=2` in the `ProcessData` initialization.
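For example (the remaining arguments are placeholders - match them to your own `main.py`):

```python
# Rebaseline data to start today minus 2 days (other arguments are placeholders)
proc = ProcessData(fs, db_list, signals, days_offset=2)
```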
By default, summary information is printed as part of the processing. You can pass `verbose=False` as an input argument to `list_log_files`, `SetupInflux` and `ProcessData` to avoid this.
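For example (the argument lists are illustrative placeholders):

```python
# Suppress summary printouts (other arguments are placeholders)
log_files = list_log_files(fs, devices, start_times, verbose=False)
influx = SetupInflux(influx_url, token, org_id, influx_bucket, verbose=False)
proc = ProcessData(fs, db_list, signals, verbose=False)
```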
If you need to delete data in InfluxDB that you e.g. uploaded as part of a test, you can use the `delete_influx(name)` function from the `SetupInflux` class. Call it by passing the name of the 'measurement' to delete (i.e. the device ID):
influx.delete_influx("958D2219")
If your log files contain data from two CAN channels, you may need to adjust the script in case you have duplicate signal names across both channels - for example, if you're extracting the signal `EngineSpeed` from both channels.
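One hypothetical way to handle this is to rename the signals per channel before writing the data, as sketched below (assuming the decoded data is available as one pandas DataFrame per channel, with a column per signal):

```python
import pandas as pd

# Hypothetical decoded DataFrames, one per CAN channel
df_can1 = pd.DataFrame({"EngineSpeed": [1000.0, 1010.0]})
df_can2 = pd.DataFrame({"EngineSpeed": [500.0, 505.0]})

# Suffix signal names with the channel to avoid duplicates in InfluxDB
df_can1 = df_can1.add_suffix("_can1")  # EngineSpeed -> EngineSpeed_can1
df_can2 = df_can2.add_suffix("_can2")  # EngineSpeed -> EngineSpeed_can2
```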
If you need to perform more advanced data processing, you may find useful functions and examples in the api-examples library under `data-processing/`.
In particular, see the guide in that repository for including transport protocol handling for UDS, J1939 or NMEA 2000 fast packets.
You can add tags to your data when using InfluxDB. This effectively adds additional dimensions to your data that you can e.g. use to color timeseries based on events, or to further segment your queries when visualizing the data. The `utils_db.py` module contains a basic example via the `add_signal_tags` function, which you can use as an outset for building your own logic.
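As a generic illustration (not necessarily how `add_signal_tags` implements it), the influxdb-client library lets you mark selected DataFrame columns as tags when writing:

```python
import pandas as pd
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

# Example DataFrame: EngineSpeed as a field, 'event' as a tag dimension
df = pd.DataFrame(
    {"EngineSpeed": [1000.0, 1010.0], "event": ["idle", "accelerating"]},
    index=pd.to_datetime(["2023-01-01 00:00:00", "2023-01-01 00:00:01"], utc=True),
)

# Placeholders - use your own InfluxDB details
client = InfluxDBClient(url="https://your-influxdb:8086", token="<token>", org="<org>")
client.write_api(write_options=SYNCHRONOUS).write(
    bucket="<bucket>",
    record=df,
    data_frame_measurement_name="958D2219",  # measurement = device ID
    data_frame_tag_columns=["event"],        # 'event' becomes an InfluxDB tag
)
```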
Note that if you use the paid InfluxDB Cloud and a paid S3 server, we recommend that you monitor usage early on during your tests to ensure that no unexpected costs occur.