
Need more network port details to give Ethernet parity with FC #472

Closed
jgasher opened this issue Aug 27, 2021 · 7 comments · Fixed by #1231
Comments


jgasher commented Aug 27, 2021

Is your feature request related to a problem? Please describe.
When building dashboards for Ethernet interfaces, there aren't as many details available as for FC ports. Unfortunately, the nic_common object doesn't have the same counters available as the fcp_port object.

Describe the solution you'd like
Use the net-port-get-iter API to gather the following metrics:
[administrative-duplex]
[administrative-flowcontrol]
[administrative-speed]
[broadcast-domain]
[ifgrp-distribution-function]
[ifgrp-mode]
[ifgrp-node]
[ifgrp-port]
[ignore-health-status]
[ipspace]
[is-administrative-auto-negotiate]
[is-administrative-up]
[is-operational-auto-negotiate]
[link-status]
[mtu]
[mtu-admin]
[node]
[operational-duplex]
[operational-flowcontrol]
[operational-speed]
[port]
[vlan-id]
[vlan-node]
[vlan-port]

jgasher added the `feature` (New feature or request) label on Aug 27, 2021
@vgratian (Contributor) commented:

Hi @jgasher, thanks for bringing this up. Feel free to create and add a template for the zapi collector on your own. Check the documentation for some guidance and examples.

You just have to translate your list of metrics into a subtemplate that Harvest understands (in Harvest templates, a `^` prefix exports a counter as a label and `^^` marks an instance key). E.g. I think something like this should work:

name:            NicPort
query:            net-port-get-iter
object:           nic_port

counters:
  net-port-info:
    - ^administrative-duplex
    - ^administrative-flowcontrol
    - administrative-speed
    - ^broadcast-domain
    - ^ifgrp-distribution-function
    - ^ifgrp-mode
    - ^ifgrp-node
    - ^ifgrp-port
    - ^ignore-health-status
    - ^ipspace
    - ^is-administrative-auto-negotiate
    - ^is-administrative-up
    - ^is-operational-auto-negotiate
    - ^link-status
    - mtu
    - mtu-admin
    - ^^node
    - ^operational-duplex
    - ^operational-flowcontrol
    - operational-speed
    - ^^port
    - ^vlan-id
    - ^vlan-node
    - ^vlan-port

@vgratian (Contributor) commented:

I tested with a more refined subtemplate. Created conf/zapi/cdot/9.8.0/net_port.yaml:

name:                     NetPort
query:                     net-port-get-iter
object:                    net_port

counters:
  net-port-info:
    - ^administrative-duplex                   => admin_duplex
    - ^administrative-flowcontrol              => admin_flowcontrol
    - ^administrative-speed                    => admin_speed
    - ^broadcast-domain                        => broadcast
    - ^ifgrp-distribution-function             => ifgrp_func
    - ^ifgrp-mode                              => ifgrp_mode
    - ^ifgrp-node                              => ifgrp_node
    - ^ifgrp-port                              => ifgrp_port
    - ^ignore-health-status                    => ignore_health
    - ^ipspace                                 => ipspace
    - ^is-administrative-auto-negotiate        => admin_auto_negotiate
    - ^is-administrative-up                    => admin_up
    - ^is-operational-auto-negotiate           => op_auto_negotiate
    - ^link-status                             => status
    - mtu                                      => port_mtu
    - mtu-admin                                => port_admin_mtu
    - ^^node                                   => node
    - ^operational-duplex                      => op_duplex
    - ^operational-flowcontrol                 => op_flowcontrol
    - ^operational-speed                       => op_speed
    - ^^port                                   => port
    - ^vlan-id                                 => vlan_id
    - ^vlan-node                               => vlan_node
    - ^vlan-port                               => vlan_port

plugins:
  LabelAgent:
    value_mapping: status status up `0`

export_options:
  include_all_labels: true

Started the poller; here is example output:

net_port_status{datacenter="MUCCBC",cluster="jamaica",broadcast="FabricPool",status="up",node="jamaica-01",ignore_health="false",ipspace="FabricPool",admin_up="true",port="a0b-59",vlan_id="59",vlan_node="jamaica-01",vlan_port="a0b"} 0
net_port_port_mtu{datacenter="MUCCBC",cluster="jamaica",ignore_health="false",ipspace="Default",admin_flowcontrol="full",op_auto_negotiate="true",op_speed="1000",broadcast="Cluster-MGMT_v59",admin_auto_negotiate="true",admin_up="true",admin_speed="auto",status="up",node="jamaica-01",op_duplex="full",op_flowcontrol="full",port="e0M",admin_duplex="auto"} 1500
net_port_port_admin_mtu{datacenter="MUCCBC",cluster="jamaica",ignore_health="false",ipspace="Default",admin_flowcontrol="full",op_auto_negotiate="true",op_speed="1000",broadcast="Cluster-MGMT_v59",admin_auto_negotiate="true",admin_up="true",admin_speed="auto",status="up",node="jamaica-01",op_duplex="full",op_flowcontrol="full",port="e0M",admin_duplex="auto"} 1500
net_port_status{datacenter="MUCCBC",cluster="jamaica",ignore_health="false",ipspace="Default",admin_flowcontrol="full",op_auto_negotiate="true",op_speed="1000",broadcast="Cluster-MGMT_v59",admin_auto_negotiate="true",admin_up="true",admin_speed="auto",status="up",node="jamaica-01",op_duplex="full",op_flowcontrol="full",port="e0M",admin_duplex="auto"} 0
net_port_port_mtu{datacenter="MUCCBC",cluster="jamaica",admin_flowcontrol="full",ignore_health="false",ipspace="Default",op_auto_negotiate="false",op_duplex="half",admin_duplex="auto",admin_speed="auto",admin_auto_negotiate="true",admin_up="true",status="down",node="jamaica-01",port="e11d"} 1500
net_port_port_admin_mtu{datacenter="MUCCBC",cluster="jamaica",admin_flowcontrol="full",ignore_health="false",ipspace="Default",op_auto_negotiate="false",op_duplex="half",admin_duplex="auto",admin_speed="auto",admin_auto_negotiate="true",admin_up="true",status="down",node="jamaica-01",port="e11d"} 1500
net_port_status{datacenter="MUCCBC",cluster="jamaica",admin_flowcontrol="full",ignore_health="false",ipspace="Default",op_auto_negotiate="false",op_duplex="half",admin_duplex="auto",admin_speed="auto",admin_auto_negotiate="true",admin_up="true",status="down",node="jamaica-01",port="e11d"} 0
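The exported status metric above carries the port state in its label set, so a downstream consumer can recover which ports are down by parsing the labels. A minimal sketch in Python (the regex and label names are based only on the sample output above, not on any Harvest internals):

```python
import re

def parse_labels(line: str) -> dict:
    """Extract the label set from one Prometheus exposition-format line."""
    m = re.search(r"\{(.*)\}", line)
    if not m:
        return {}
    return dict(re.findall(r'(\w+)="([^"]*)"', m.group(1)))

# Abbreviated copy of one exported line from the example output above.
sample = ('net_port_status{datacenter="MUCCBC",cluster="jamaica",'
          'status="down",node="jamaica-01",port="e11d"} 0')

labels = parse_labels(sample)
down_ports = [labels["port"]] if labels.get("status") == "down" else []
print(down_ports)  # ['e11d']
```

In practice you would let Prometheus scrape the poller and filter on the `status` label in PromQL instead, but the sketch shows the label structure the template produces.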

jgasher (Author) commented Aug 31, 2021 via email


ruanruijuan commented Aug 31, 2021

@jgasher As we are migrating from ZAPI collection to the REST API (issue #59), the following attributes from the net-port-get-iter ZAPI have no REST API equivalent. We went through the REST API gap analysis with various ONTAP teams for Unified Manager and were told that: 1) ONTAP has done most of the REST API development, and not every ZAPI attribute will have a REST API equivalent; 2) we would need to justify why certain REST API gaps should be closed; 3) ONTAP has one last release, 9.11, in which to close the remaining gaps (if they agree with them).

administrative-duplex
administrative-flowcontrol
administrative-speed - it is being deprecated in the CLI
ignore-health-status
is-administrative-auto-negotiate
is-operational-auto-negotiate
mtu-admin
operational-duplex - default is always full-duplex
operational-flowcontrol

How important is it for you (or your respective customers) to include these attributes? If we add them to Harvest now, chances are there will be no data for them once we switch over to the REST API.
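For context, the REST side of this comparison would be queried along these lines. A hedged sketch: the `/api/network/ethernet/ports` endpoint exists in the ONTAP REST API, but the exact field names available on a given cluster depend on the ONTAP version, so treat the field list below as an assumption to verify against your cluster's `/api/docs`:

```python
from urllib.parse import urlencode

def ethernet_ports_url(cluster: str, fields: list) -> str:
    """Build a REST query URL for ONTAP Ethernet port details.

    Endpoint path is from the ONTAP REST API; the field names passed
    in are assumptions to check against your ONTAP version.
    """
    query = urlencode({"fields": ",".join(fields)})
    return f"https://{cluster}/api/network/ethernet/ports?{query}"

url = ethernet_ports_url("jamaica", ["node.name", "speed", "mtu", "state"])
print(url)
```

Comparing the fields returned by that endpoint against the ZAPI list above is how one would confirm which attributes are genuinely missing on the REST side.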

jgasher (Author) commented Aug 31, 2021 via email

@rahulguptajss (Contributor) commented:

The REST collector has the netport template enabled by default, while the ZAPI collector has it disabled. They should be consistent.

@rahulguptajss (Contributor) commented:

Verified in 22.11.
