
[receiver/windowsperfcountersreceiver] Update how metrics are established #8376

Merged Mar 21, 2022 (26 commits)
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -20,6 +20,7 @@

### 🛑 Breaking changes 🛑

- `windowsperfcountersreceiver`: Added metrics configuration (#8376)
Member


Looks like I can't change this PR, so I can't fix this for you, but this now needs to be moved up to the "unreleased" section.

Contributor Author


Fixed

- `mongodbatlasreceiver`: rename mislabeled attribute `memory_state` to correct `disk_status` on partition disk metrics (#7747)
- `mongodbatlasreceiver`: Correctly set initial lookback for querying mongodb atlas api (#8246)
- `nginxreceiver`: instrumentation name updated from `otelcol/nginx` to `otelcol/nginxreceiver` (#8255)
70 changes: 49 additions & 21 deletions receiver/windowsperfcountersreceiver/README.md
@@ -6,9 +6,6 @@ interface](https://docs.microsoft.com/en-us/windows/win32/perfctrs/using-the-pdh
It is based on the [Telegraf Windows Performance Counters Input
Plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters).

Metrics will be generated with names and labels that match the performance
counter path, i.e.

- `Memory\Committed Bytes`
- `Processor\% Processor Time`, with a datapoint for each `Instance` label = (`_Total`, `1`, `2`, `3`, ... )

@@ -25,11 +22,27 @@ be configured:
```yaml
windowsperfcounters:
collection_interval: <duration> # default = "1m"
metric_metadata:
- metric_name: <metric name>
description: <description>
unit: <unit type>
gauge:
value_type: <int or double>
- metric_name: <metric name>
description: <description>
unit: <unit type>
sum:
value_type: <int or double>
aggregation: <cumulative or delta>
monotonic: <true or false>
perfcounters:
- object: <object name>
instances: [<instance name>]*
counters:
- <counter name>
- counter_name: <counter name>
metric_name: <metric name>
attributes:
<key>: <value>
```
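For reference, each object/instance/counter triple in this configuration corresponds to a PDH counter path of the form `\Object(Instance)\Counter` (the instance part is omitted for single-instance objects). A minimal Go sketch of that mapping; the `buildCounterPath` helper is hypothetical, for illustration only, and is not part of the receiver:

```go
package main

import "fmt"

// buildCounterPath illustrates how the object, instance, and counter
// fields of the receiver configuration combine into a PDH counter path.
func buildCounterPath(object, instance, counter string) string {
	if instance == "" {
		// Single-instance objects have no "(instance)" segment.
		return fmt.Sprintf(`\%s\%s`, object, counter)
	}
	return fmt.Sprintf(`\%s(%s)\%s`, object, instance, counter)
}

func main() {
	fmt.Println(buildCounterPath("Memory", "", "Committed Bytes"))
	fmt.Println(buildCounterPath("Processor", "_Total", "% Processor Time"))
}
```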

*Note `instances` can have several special values depending on the type of
@@ -50,63 +63,78 @@ If you would like to scrape some counters at a different frequency than others,
you can configure multiple `windowsperfcounters` receivers with different
`collection_interval` values. For example:

```yaml
receivers:
windowsperfcounters/memory:
metric_metadata:
- metric_name: bytes.committed
description: the number of bytes committed to memory
unit: By
gauge:
value_type: int
collection_interval: 30s
perfcounters:
- object: Memory
counters:
- Committed Bytes
- counter_name: Committed Bytes
metric_name: bytes.committed

windowsperfcounters/processor:
collection_interval: 1m
metric_metadata:
- metric_name: processor.time
description: active and idle time of the processor
unit: "%"
gauge:
value_type: double
perfcounters:
- object: "Processor"
instances: "*"
counters:
- "% Processor Time"
- counter_name: "% Processor Time"
metric_name: processor.time
attributes:
state: active
- object: "Processor"
instances: [1, 2]
counters:
- "% Idle Time"
- counter_name: "% Idle Time"
metric_name: processor.time
attributes:
state: idle

service:
pipelines:
metrics:
receivers: [windowsperfcounters/memory, windowsperfcounters/processor]
```

### Changing metric format
### Defining metric format

To report metrics in the desired output format, it's recommended you use this
receiver with the [metrics transform
processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/metricstransformprocessor).
To report metrics in the desired output format, define a metric in `metric_metadata` and reference it from the given counter, along with any applicable attributes.

e.g. To output the `Memory/Committed Bytes` counter as a metric with the name
`system.memory.usage`:
`bytes.committed`:

```yaml
receivers:
windowsperfcounters:
metric_metadata:
- metric_name: bytes.committed
description: the number of bytes committed to memory
unit: By
gauge:
value_type: int
collection_interval: 30s
perfcounters:
- object: Memory
counters:
- Committed Bytes

processors:
metricstransformprocessor:
transforms:
- metric_name: "Memory/Committed Bytes"
action: update
new_name: system.memory.usage

service:
pipelines:
metrics:
receivers: [windowsperfcounters]
processors: [metricstransformprocessor]
```

## Recommended configuration for common applications
92 changes: 84 additions & 8 deletions receiver/windowsperfcountersreceiver/config.go
@@ -25,14 +25,40 @@ import (
type Config struct {
scraperhelper.ScraperControllerSettings `mapstructure:",squash"`

PerfCounters []PerfCounterConfig `mapstructure:"perfcounters"`
MetricMetaData []MetricConfig `mapstructure:"metric_metadata"`
PerfCounters []PerfCounterConfig `mapstructure:"perfcounters"`
}

// PerfCounterConfig defines configuration for a perf counter object.
type PerfCounterConfig struct {
Object string `mapstructure:"object"`
Instances []string `mapstructure:"instances"`
Counters []string `mapstructure:"counters"`
Object string `mapstructure:"object"`
Instances []string `mapstructure:"instances"`
Counters []CounterConfig `mapstructure:"counters"`
}

// MetricConfig defines the configuration for a metric to be created.
type MetricConfig struct {
MetricName string `mapstructure:"metric_name"`
Unit string `mapstructure:"unit"`
Description string `mapstructure:"description"`
Gauge GaugeMetric `mapstructure:"gauge"`
Sum SumMetric `mapstructure:"sum"`
}

type GaugeMetric struct {
ValueType string `mapstructure:"value_type"`
}

type SumMetric struct {
ValueType string `mapstructure:"value_type"`
Aggregation string `mapstructure:"aggregation"`
Monotonic bool `mapstructure:"monotonic"`
}

type CounterConfig struct {
MetricName string `mapstructure:"metric_name"`
CounterName string `mapstructure:"counter_name"`
Attributes map[string]string `mapstructure:"attributes"`
}
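The link between a `CounterConfig` and its `MetricConfig` is purely by `metric_name`. A standalone sketch of that lookup, mirroring the matching performed in `Validate` below; the `findMetric` helper and the trimmed struct fields are illustrative only:

```go
package main

import "fmt"

// Trimmed-down copies of the receiver's config structs, for illustration.
type MetricConfig struct {
	MetricName  string
	Unit        string
	Description string
}

type CounterConfig struct {
	MetricName  string
	CounterName string
}

// findMetric returns the metric definition a counter refers to, or nil
// when the counter names a metric that was never defined.
func findMetric(metrics []MetricConfig, c CounterConfig) *MetricConfig {
	for i := range metrics {
		if metrics[i].MetricName == c.MetricName {
			return &metrics[i]
		}
	}
	return nil
}

func main() {
	defs := []MetricConfig{{MetricName: "bytes.committed", Unit: "By", Description: "committed bytes"}}
	c := CounterConfig{MetricName: "bytes.committed", CounterName: "Committed Bytes"}
	if m := findMetric(defs, c); m != nil {
		fmt.Println("counter", c.CounterName, "maps to metric", m.MetricName)
	}
}
```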

func (c *Config) Validate() error {
@@ -46,23 +72,73 @@ func (c *Config) Validate() error {
errs = multierr.Append(errs, fmt.Errorf("must specify at least one perf counter"))
}

if len(c.MetricMetaData) == 0 {
errs = multierr.Append(errs, fmt.Errorf("must specify at least one metric"))
}

for _, metric := range c.MetricMetaData {
if metric.MetricName == "" {
errs = multierr.Append(errs, fmt.Errorf("a metric does not include a name"))
continue
}
if metric.Description == "" {
errs = multierr.Append(errs, fmt.Errorf("metric %q does not include a description", metric.MetricName))
}
if metric.Unit == "" {
errs = multierr.Append(errs, fmt.Errorf("metric %q does not include a unit", metric.MetricName))
}

if (metric.Gauge == GaugeMetric{}) && (metric.Sum == SumMetric{}) {
errs = multierr.Append(errs, fmt.Errorf("metric %q does not include a metric definition", metric.MetricName))
} else if (metric.Gauge != GaugeMetric{}) {
if metric.Gauge.ValueType == "" {
errs = multierr.Append(errs, fmt.Errorf("gauge metric %q does not include a value type", metric.MetricName))
} else if metric.Gauge.ValueType != "int" && metric.Gauge.ValueType != "double" {
errs = multierr.Append(errs, fmt.Errorf("gauge metric %q includes an invalid value type", metric.MetricName))
}
} else if (metric.Sum != SumMetric{}) {
if metric.Sum.ValueType == "" {
errs = multierr.Append(errs, fmt.Errorf("sum metric %q does not include a value type", metric.MetricName))
} else if metric.Sum.ValueType != "int" && metric.Sum.ValueType != "double" {
errs = multierr.Append(errs, fmt.Errorf("sum metric %q includes an invalid value type", metric.MetricName))
}
if metric.Sum.Aggregation == "" {
errs = multierr.Append(errs, fmt.Errorf("sum metric %q does not include an aggregation", metric.MetricName))
} else if metric.Sum.Aggregation != "cumulative" && metric.Sum.Aggregation != "delta" {
errs = multierr.Append(errs, fmt.Errorf("sum metric %q includes an invalid aggregation", metric.MetricName))
}
}
}

var perfCounterMissingObjectName bool
for _, pc := range c.PerfCounters {
if pc.Object == "" {
perfCounterMissingObjectName = true
continue
}

if len(pc.Counters) == 0 {
errs = multierr.Append(errs, fmt.Errorf("perf counter for object %q does not specify any counters", pc.Object))
}

for _, counter := range pc.Counters {
foundMatchingMetric := false
for _, metric := range c.MetricMetaData {
if counter.MetricName == metric.MetricName {
foundMatchingMetric = true
}
}
if !foundMatchingMetric {
errs = multierr.Append(errs, fmt.Errorf("perf counter for object %q includes an undefined metric", pc.Object))
}
}

for _, instance := range pc.Instances {
if instance == "" {
errs = multierr.Append(errs, fmt.Errorf("perf counter for object %q includes an empty instance", pc.Object))
break
}
}

if len(pc.Counters) == 0 {
errs = multierr.Append(errs, fmt.Errorf("perf counter for object %q does not specify any counters", pc.Object))
}
}

if perfCounterMissingObjectName {
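The gauge-vs-sum branching in `Validate` relies on Go's comparable-struct semantics: a config block that is absent from the YAML decodes to the type's zero value, so comparing against `GaugeMetric{}` or `SumMetric{}` detects whether it was set. A minimal standalone sketch of the idea, with the `metricKind` helper invented for illustration:

```go
package main

import "fmt"

// Trimmed-down copies of the receiver's metric-type structs.
type GaugeMetric struct {
	ValueType string
}

type SumMetric struct {
	ValueType   string
	Aggregation string
	Monotonic   bool
}

// metricKind reports which (if any) metric definition is set, using
// comparison against the struct zero values, as Validate does.
func metricKind(g GaugeMetric, s SumMetric) string {
	switch {
	case g == (GaugeMetric{}) && s == (SumMetric{}):
		return "none"
	case g != (GaugeMetric{}):
		return "gauge"
	default:
		return "sum"
	}
}

func main() {
	fmt.Println(metricKind(GaugeMetric{ValueType: "int"}, SumMetric{}))
	fmt.Println(metricKind(GaugeMetric{}, SumMetric{ValueType: "double", Aggregation: "delta"}))
	fmt.Println(metricKind(GaugeMetric{}, SumMetric{}))
}
```

Note this only works because both structs contain exclusively comparable fields; adding a slice or map field would make `==` a compile error.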
88 changes: 83 additions & 5 deletions receiver/windowsperfcountersreceiver/config_test.go
@@ -43,10 +43,30 @@ func TestLoadConfig(t *testing.T) {

r0 := cfg.Receivers[config.NewComponentID(typeStr)]
defaultConfigSingleObject := factory.CreateDefaultConfig()
defaultConfigSingleObject.(*Config).PerfCounters = []PerfCounterConfig{{Object: "object", Counters: []string{"counter"}}}

counterConfig := CounterConfig{
CounterName: "counter1",
MetricName: "metric",
}
defaultConfigSingleObject.(*Config).PerfCounters = []PerfCounterConfig{{Object: "object", Counters: []CounterConfig{counterConfig}}}
defaultConfigSingleObject.(*Config).MetricMetaData = []MetricConfig{
{
MetricName: "metric",
Description: "desc",
Unit: "1",
Gauge: GaugeMetric{
ValueType: "double",
},
},
}

assert.Equal(t, defaultConfigSingleObject, r0)

counterConfig2 := CounterConfig{
CounterName: "counter2",
MetricName: "metric2",
}

r1 := cfg.Receivers[config.NewComponentIDWithName(typeStr, "customname")].(*Config)
expectedConfig := &Config{
ScraperControllerSettings: scraperhelper.ScraperControllerSettings{
@@ -56,11 +76,29 @@
PerfCounters: []PerfCounterConfig{
{
Object: "object1",
Counters: []string{"counter1"},
Counters: []CounterConfig{counterConfig},
},
{
Object: "object2",
Counters: []string{"counter1", "counter2"},
Counters: []CounterConfig{counterConfig, counterConfig2},
},
},
MetricMetaData: []MetricConfig{
{
MetricName: "metric",
Description: "desc",
Unit: "1",
Gauge: GaugeMetric{
ValueType: "double",
},
},
{
MetricName: "metric2",
Description: "desc",
Unit: "1",
Gauge: GaugeMetric{
ValueType: "double",
},
},
},
}
@@ -82,6 +120,15 @@ func TestLoadConfig_Error(t *testing.T) {
noObjectNameErr = "must specify object name for all perf counters"
noCountersErr = `perf counter for object "%s" does not specify any counters`
emptyInstanceErr = `perf counter for object "%s" includes an empty instance`
undefinedMetricErr = `perf counter for object "%s" includes an undefined metric`
missingMetricName = `a metric does not include a name`
missingMetricDesc = `metric "%s" does not include a description`
missingMetricUnit = `metric "%s" does not include a unit`
missingMetricMetricType = `metric "%s" does not include a metric definition`
missingGaugeValueType = `gauge metric "%s" does not include a value type`
missingSumValueType = `sum metric "%s" does not include a value type`
missingSumAggregation = `sum metric "%s" does not include an aggregation`
missingMetrics = `must specify at least one metric`
)

testCases := []testCase{
Expand Down Expand Up @@ -110,15 +157,46 @@ func TestLoadConfig_Error(t *testing.T) {
cfgFile: "config-emptyinstance.yaml",
expectedErr: fmt.Sprintf("%s: %s", errorPrefix, fmt.Sprintf(emptyInstanceErr, "object")),
},
{
name: "EmptyMetricDescription",
cfgFile: "config-missingmetricdescription.yaml",
expectedErr: fmt.Sprintf("%s: %s", errorPrefix, fmt.Sprintf(missingMetricDesc, "metric")),
},
{
name: "EmptyMetricUnit",
cfgFile: "config-missingmetricunit.yaml",
expectedErr: fmt.Sprintf("%s: %s", errorPrefix, fmt.Sprintf(missingMetricUnit, "metric")),
},
{
name: "EmptyMetricMetricType",
cfgFile: "config-missingdatatype.yaml",
expectedErr: fmt.Sprintf("%s: %s", errorPrefix, fmt.Sprintf(missingMetricMetricType, "metric")),
},
{
name: "EmptyMetricName",
cfgFile: "config-missingmetricname.yaml",
expectedErr: fmt.Sprintf("%s: %s; %s", errorPrefix, missingMetricName, fmt.Sprintf(undefinedMetricErr, "object")),
},
{
name: "EmptySumValueType",
cfgFile: "config-missingsumvaluetype.yaml",
expectedErr: fmt.Sprintf("%s: %s", errorPrefix, fmt.Sprintf(missingSumValueType, "metric")),
},
{
name: "EmptySumAggregation",
cfgFile: "config-missingsumaggregation.yaml",
expectedErr: fmt.Sprintf("%s: %s", errorPrefix, fmt.Sprintf(missingSumAggregation, "metric")),
},
{
name: "AllErrors",
cfgFile: "config-allerrors.yaml",
expectedErr: fmt.Sprintf(
"%s: %s; %s; %s; %s",
"%s: %s; %s; %s; %s; %s",
errorPrefix,
negativeCollectionIntervalErr,
fmt.Sprintf(emptyInstanceErr, "object"),
missingMetrics,
fmt.Sprintf(noCountersErr, "object"),
fmt.Sprintf(emptyInstanceErr, "object"),
noObjectNameErr,
),
},