FAQ
How do I migrate data from Harvest 1.6 to 2.0?
There is currently no tool to migrate data from Harvest 1.6 to 2.0. The most common workaround is to run 1.6 and 2.0 in parallel until the 1.6 data expires under your normal retention policy, and then fully cut over to 2.0.
Technically, it’s possible to take a Graphite DB, extract the data, and send it to a Prometheus DB, but it’s not an area we’ve invested in. If you want to explore that option, check out promtool, which supports importing, but it’s probably not worth the effort.
Is there a way to allow per SVM level user views? I need to offer 1 tenant per SVM. Can I limit visibility to specific SVMs? Is there an SVM dashboard available?
You can do this with Grafana. Harvest can provide the labels for SVMs. The pieces are there but need to be put together.
Grafana templates support the $__user variable for making pre-selections and decisions. Combine that with metadata mapping each user to their SVMs, and you can build SVM-specific dashboards.
There is a German service provider who is doing this. They have service managers responsible for a set of customers – and only want to see the data/dashboards of their corresponding customers.
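As a rough sketch of the first piece, you can list the SVM labels a poller publishes and feed them into a Grafana template variable. The metric name, label values, and exporter output below are hypothetical; in practice you would curl your poller's /metrics endpoint instead of using a hard-coded string:

```shell
# Hypothetical Harvest exporter output (two tenant SVMs)
metrics='volume_size_total{svm="svm_tenant_a",volume="vol1"} 100
volume_size_total{svm="svm_tenant_b",volume="vol2"} 200'

# Extract the distinct SVM labels a Grafana variable could offer
printf '%s\n' "$metrics" | sed -n 's/.*svm="\([^"]*\)".*/\1/p' | sort -u
```

In Grafana, the same list is usually produced with a label_values(...) variable query, filtered per user via the $__user mapping described above.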
What permissions does Harvest need to talk to ONTAP?
Permissions, authentication, role based security, and creating a Harvest user are covered here.
How do I make Harvest collect additional ONTAP counters?
Instead of modifying the out-of-the-box templates in the conf/ directory, it is better to create your own custom templates following these instructions.
How are capacity and other metrics calculated by Harvest?
Each collector has its own way of collecting and post-processing metrics. Check the documentation of each individual collector (usually under section #Metrics). Capacity and hardware-related metrics are collected by the Zapi collector, which emits metrics as they are, without any additional calculation. Performance metrics are collected by the ZapiPerf collector, and the final values are calculated from the delta of two consecutive polls.
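As a toy illustration of that delta calculation (the counter values and timestamps below are made up, not Harvest internals):

```shell
# Counter value and timestamp (seconds) from two consecutive polls
c1=1000; t1=0
c2=1600; t2=60
# The published performance metric is the per-second rate over the interval
echo $(( (c2 - c1) / (t2 - t1) ))
```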
How do I tag ONTAP volumes with metadata and surface that data in Harvest?
See the volume tagging issue and volume tagging via sub-templates.
How do I relate ONTAP REST endpoints to ZAPI APIs and attributes?
Please refer to the ONTAPI to REST API mapping document.
How much disk space is required by Prometheus?
This depends on the collectors you've added, # of nodes monitored, cardinality of labels, # instances, retention, ingest rate, etc. A good approximation is to curl your Harvest exporter and count the number of samples that it publishes and then feed that information into a Prometheus sizing formula.
Prometheus stores an average of 1-2 bytes per sample. To plan the capacity of a Prometheus server, you can use the rough formula:
needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample
A rough approximation is outlined at https://devops.stackexchange.com/questions/9298/how-to-calculate-disk-space-required-by-prometheus-v2-2
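Putting the formula to work with a made-up sample count (substitute the number you get from curling your own poller's /metrics endpoint; the port below is a placeholder):

```shell
# Hypothetical sample count; in practice count the non-comment lines:
#   curl -s http://localhost:12990/metrics | grep -vc '^#'
samples=50000
retention_seconds=$((15 * 24 * 3600))  # 15-day retention
scrape_interval=60                     # seconds between Prometheus scrapes
bytes_per_sample=2                     # upper end of Prometheus's 1-2 bytes/sample
# needed_disk_space = retention * (samples / interval) * bytes_per_sample
echo $(( retention_seconds * (samples / scrape_interval) * bytes_per_sample ))
```

With these inputs that comes to roughly 2 GB of disk.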
In Grafana, why do I see more results from topk than I asked for?
Topk is one of Prometheus's out-of-the-box aggregation operators, and is used to calculate the largest k elements by sample value.
Depending on the time range you select, Prometheus will often return more results than you asked for. That's because Prometheus is picking the topk for each time in the graph. In other words, different time series are the topk at different times in the graph. When you use a large duration, there are often many time series.
This is a limitation of Prometheus and can be mitigated by:
- reducing the time range to a smaller duration that includes fewer topk results - something like a five to ten minute range works well for most of Harvest's charts
- using the panel's table, which lists the current topk rows, to disambiguate the additional series shown in the chart
Additional details: here, here, and here
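A toy sketch of why this happens: with k=1, each evaluation step picks its own winner, so the union over the range contains more than one series. The series names and values below are invented:

```shell
# topk(1, ...) at timestamp t1: series A wins
printf 'A 10\nB 3\n' | sort -k2 -rn | head -1
# topk(1, ...) at timestamp t2: series B wins, so a range query
# over [t1, t2] returns two distinct series even though k=1
printf 'A 4\nB 9\n' | sort -k2 -rn | head -1
```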
How do I switch Harvest containers to use NetApp's image registry?
Harvest images are published to both NetApp's (cr.netapp.io) and Docker's (hub.docker.com) image registry. By default, cr.netapp.io is used.
Replace all instances of rahulguptajss/harvest:latest with cr.netapp.io/harvest:latest:
- Edit your docker-compose file and make those replacements, or regenerate the compose file using the --image cr.netapp.io/harvest:latest option
- Update any shell or Ansible scripts that also use those images
After making these changes, stop your containers, pull the new images, and restart.
You can verify that you're using the cr.netapp.io images like so:
Before
docker image ls -a
REPOSITORY TAG IMAGE ID CREATED SIZE
rahulguptajss/harvest latest 80061bbe1c2c 10 days ago 85.4MB <=== no registry prefix in the REPOSITORY column means the image came from DockerHub
prom/prometheus v2.33.1 e528f02c45a6 3 weeks ago 204MB
grafana/grafana 8.3.4 4a34578e4374 5 weeks ago 274MB
Pull image from cr.netapp.io
docker pull cr.netapp.io/harvest
Using default tag: latest
latest: Pulling from harvest
Digest: sha256:6ff88153812ebb61e9dd176182bf8a792cde847748c5654d65f4630e61b1f3ae
Status: Image is up to date for cr.netapp.io/harvest:latest
cr.netapp.io/harvest:latest
Notice that the IMAGE ID for both images is identical, since they are the same image.
docker image ls -a
REPOSITORY TAG IMAGE ID CREATED SIZE
cr.netapp.io/harvest latest 80061bbe1c2c 10 days ago 85.4MB <== Harvest image from cr.netapp.io
rahulguptajss/harvest latest 80061bbe1c2c 10 days ago 85.4MB
prom/prometheus v2.33.1 e528f02c45a6 3 weeks ago 204MB
grafana/grafana 8.3.4 4a34578e4374 5 weeks ago 274MB
grafana/grafana latest 1d60b4b996ad 2 months ago 275MB
prom/prometheus latest c10e9cbf22cd 3 months ago 194MB
We can now remove the image pulled from DockerHub:
docker image rm rahulguptajss/harvest
Untagged: rahulguptajss/harvest:latest
Untagged: rahulguptajss/harvest@sha256:6ff88153812ebb61e9dd176182bf8a792cde847748c5654d65f4630e61b1f3ae
docker image ls -a
REPOSITORY TAG IMAGE ID CREATED SIZE
cr.netapp.io/harvest latest 80061bbe1c2c 10 days ago 85.4MB
prom/prometheus v2.33.1 e528f02c45a6 3 weeks ago 204MB
grafana/grafana 8.3.4 4a34578e4374 5 weeks ago 274MB
What ports does Harvest use?
The default ports are shown in the following diagram.
- Harvest's pollers use ZAPI or REST to communicate with ONTAP on port 443
- Each poller exposes the Prometheus port defined in your harvest.yml file
- Prometheus scrapes each poller-exposed Prometheus port (promPort1, promPort2, promPort3)
- Prometheus's default port is 9090
- Grafana's default port is 3000
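As an illustrative harvest.yml fragment showing where those poller ports come from (the exporter name, poller name, address, and port number below are placeholders; check your own harvest.yml for the real values):

```yaml
# Hypothetical harvest.yml: each poller gets its own Prometheus exporter port
Exporters:
  prometheus1:
    exporter: Prometheus
    port: 12990          # promPort1 in the diagram
Pollers:
  cluster-01:
    datacenter: dc-01
    addr: 10.0.0.10      # ONTAP is reached over port 443
    exporters:
      - prometheus1
```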