BMWG R. V. Rosa, Ed.
Internet-Draft C. E. Rothenberg
Intended status: Informational UNICAMP
Expires: 11 April 2021 M. P. Peuster
H. K. Karl
UPB
8 October 2020
Methodology for VNF Benchmarking Automation
draft-rosa-bmwg-vnfbench-06
Abstract
This document describes a common methodology for the automated
benchmarking of Virtualized Network Functions (VNFs) executed on
general-purpose hardware. Specific cases of automated benchmarking
methodologies for particular VNFs can be derived from this document.
An open source reference implementation is reported as a running code
embodiment of the proposed automated benchmarking methodology.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on 11 April 2021.
Copyright Notice
Copyright (c) 2020 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents (https://trustee.ietf.org/
license-info) in effect on the date of publication of this document.
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document. Code Components
extracted from this document must include Simplified BSD License text
as described in Section 4.e of the Trust Legal Provisions and are
provided without warranty as described in the Simplified BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3
2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4
3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
4. Considerations . . . . . . . . . . . . . . . . . . . . . . . 6
4.1. VNF Assessment Methods . . . . . . . . . . . . . . . . . 7
4.2. Benchmarking Stages . . . . . . . . . . . . . . . . . . . 7
4.3. Architectural Framework . . . . . . . . . . . . . . . . . 8
4.4. Scenarios . . . . . . . . . . . . . . . . . . . . . . . . 10
4.5. Phases of a Benchmarking Test . . . . . . . . . . . . . . 11
4.5.1. Phase I: Deployment . . . . . . . . . . . . . . . . . 11
4.5.2. Phase II: Configuration . . . . . . . . . . . . . . . 11
4.5.3. Phase III: Execution . . . . . . . . . . . . . . . . 12
4.5.4. Phase IV: Result . . . . . . . . . . . . . . . . . . 12
5. Methodology . . . . . . . . . . . . . . . . . . . . . . . . . 12
5.1. VNF Benchmarking Descriptor (VNF-BD) . . . . . . . . . . 13
5.2. VNF Performance Profile (VNF-PP) . . . . . . . . . . . . 13
5.3. VNF Benchmarking Report (VNF-BR) . . . . . . . . . . . . 14
5.4. Procedures . . . . . . . . . . . . . . . . . . . . . . . 15
5.4.1. Plan . . . . . . . . . . . . . . . . . . . . . . . . 15
5.4.2. Realization . . . . . . . . . . . . . . . . . . . . . 16
5.4.3. Summary . . . . . . . . . . . . . . . . . . . . . . . 18
6. Particular Cases . . . . . . . . . . . . . . . . . . . . . . 18
6.1. Capacity . . . . . . . . . . . . . . . . . . . . . . . . 18
6.2. Redundancy . . . . . . . . . . . . . . . . . . . . . . . 19
6.3. Isolation . . . . . . . . . . . . . . . . . . . . . . . . 19
6.4. Failure Handling . . . . . . . . . . . . . . . . . . . . 19
6.5. Elasticity and Flexibility . . . . . . . . . . . . . . . 19
6.6. Handling Configurations . . . . . . . . . . . . . . . . . 20
6.7. White Box VNF . . . . . . . . . . . . . . . . . . . . . . 20
7. Open Source Reference Implementation . . . . . . . . . . . . 20
7.1. Gym . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
7.2. Related work: tng-bench . . . . . . . . . . . . . . . . . 21
8. Security Considerations . . . . . . . . . . . . . . . . . . . 22
9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 23
10. YANG Modules . . . . . . . . . . . . . . . . . . . . . . . . 23
10.1. VNF-Benchmarking Descriptor . . . . . . . . . . . . . . 23
10.2. VNF Performance Profile . . . . . . . . . . . . . . . . 34
10.3. VNF Benchmarking Report . . . . . . . . . . . . . . . . 41
11. Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . 46
12. References . . . . . . . . . . . . . . . . . . . . . . . . . 46
12.1. Normative References . . . . . . . . . . . . . . . . . . 46
12.2. Informative References . . . . . . . . . . . . . . . . . 47
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 49
1. Introduction
In [RFC8172], the Benchmarking Methodology Working Group (BMWG)
presented considerations for the benchmarking of VNFs and their
infrastructure. In line with that motivation, the following aspects
reinforce and justify the need for VNF benchmarking: (i) pre-
deployment infrastructure dimensioning to realize associated VNF
performance profiles; (ii) comparison with physical network
functions; (iii) and output results for analytical VNF development.
Even though many methodologies already described by the BMWG, e.g.,
self-contained black-box benchmarking, can be applied to VNF
benchmarking scenarios, further considerations have to be made.
This is because VNFs, which are software components, might not have
strict and clear execution boundaries and depend on underlying
virtualization environment parameters as well as on management and
orchestration decisions [ETS14a].
The enabling technologies brought by the advent of Software Defined
Networking (SDN) and Network Functions Virtualization (NFV) have
fostered the disaggregation of VNFs and benchmarking tools, making
their Application Programming Interfaces (APIs) open and
programmable. This process has occurred mostly through: (i) the
decoupling of network function control and data planes; (ii) the
development of VNFs as multi-layer, distributed software components;
(iii) and the availability of multiple underlying hardware
abstractions that VNFs can utilize.
Building on SDN and NFV enabling technologies, a myriad of
benchmarking tools have been created to facilitate the active
stimulus and the passive monitoring of a VNF via diverse software
abstraction layers, providing a wide variety of abstractions for the
benchmarking mechanisms used in the formulation of a VNF
benchmarking methodology. By establishing this disaggregation of a
VNF benchmarking setup, the abstracted VNF benchmarking mechanisms
become programmable: their underlying technologies can be executed
by means of well-defined parameters, producing reports of
standardized metrics.
Making the execution of a VNF benchmarking methodology programmable
enables a richer apparatus for the benchmarking of a VNF and
consequently facilitates a high-fidelity assessment of its
behaviour. Estimating the behaviour of a VNF depends on three
correlated factors:
Internal configuration: Each use case of the VNF might define
specific settings for it to work properly, and each VNF might
expose its own specific settings to be configured.
Hardware and software execution environment: The myriad of
capabilities offered by execution environments can combine, in a
large diversity of manners, with the possible internal software
arrangements of each VNF.
Network workload specificities: Depending on the use case, a VNF
might be placed in different settings, operating under varied
traffic profiles and being required to deliver a specific
performance behavior.
The role of a VNF benchmarking methodology consists in defining how
to tackle the diversity of settings imposed by the factors listed
above in order to extract performance metrics associated with
particular VNF packet processing behaviors. The sample space for
testing such a diversity of settings can be extensively large,
turning manual benchmarking experiments prohibitively expensive.
Indeed, portability, as an intrinsic characteristic of VNFs, allows
them to be deployed in multiple execution environments, enabling
benchmarking setups in a myriad of settings. Thus, the
establishment of a methodology for VNF benchmarking automation is of
utmost importance.
Accordingly, the flexible, software-based nature of VNFs can and
should be exploited to fully automate the entire benchmarking
methodology end-to-end. This is an inherent need to align VNF
benchmarking with the agile methods enabled by the concept of
Network Functions Virtualization (NFV) [ETS14e]. More specifically,
it allows: (i) the development of agile, performance-focused DevOps
methodologies for Continuous Integration and Delivery (CI/CD) of
VNFs; (ii) the creation of on-demand VNF test descriptors for
upcoming execution environments; (iii) the path for precise
analytics over automated catalogues of VNF performance profiles;
(iv) and run-time mechanisms to assist VNF lifecycle
orchestration/management workflows, e.g., automated resource
dimensioning based on benchmarking insights.
2. Terminology
Common benchmarking terminology contained in this document is derived
from [RFC1242]. The reader is assumed to be familiar with the
terminology as defined in the European Telecommunications Standards
Institute (ETSI) NFV document [ETS14b]. Some of these terms, and
others commonly used in this document, are defined below.
NFV: Network Function Virtualization - the principle of separating
network functions from the hardware they run on by using virtual
hardware abstraction.
VNF: Virtualized Network Function - a software-based network
function. A VNF can be either represented by a single entity or
be composed of a set of smaller, interconnected software
components, called VNF components (VNFCs) [ETS14d]. Such VNFs
are also called composed VNFs.
VNFC: Virtualized Network Function Component - a software component
that implements (parts of) the VNF functionality. A VNF can
consist of a single VNFC or multiple, interconnected VNFCs
[ETS14d].
VNFD: Virtualised Network Function Descriptor - configuration
template that describes a VNF in terms of its deployment and
operational behaviour, and is used in the process of VNF on-
boarding and managing the life cycle of a VNF instance.
NS: Network Service - a collection of interconnected VNFs forming an
end-to-end service. The interconnection is often done using
chaining of functions.
VNF Benchmarking Descriptor (VNF-BD) -- contains all the definitions
and requirements to deploy, configure, execute, and reproduce VNF
benchmarking tests. A VNF-BD is defined by the developer of a VNF
benchmarking methodology and serves as input to the execution of an
automated benchmarking methodology.
VNF Performance Profile (VNF-PP) -- contains, in a well-defined
structure, all the measured metrics resulting from the execution of
the automated VNF benchmarking tests defined by a specific VNF-BD.
Additionally, it might contain recordings of the configuration
parameters used during the execution of the benchmarking setup.
VNF Benchmarking Report (VNF-BR) -- contains the definition of the
inputs and outputs of an automated VNF benchmarking methodology.
The inputs define the necessary VNF-BD and a respective list of
variables, referencing VNF-BD fields, that must be utilized to
define the sample space of the VNF benchmarking settings. The
outputs consist of a list of entries, each one containing one of
the combinations of the sampled variables from the inputs, the
input VNF-BD parsed with that combination of variables, and the
VNF-PP obtained from the automated realization of the parsed
VNF-BD. A VNF-BR might also contain the settings of the
orchestrator platform that realizes the instantiation of the
benchmarking setup to enable the VNF-BD fulfilment.
3. Scope
This document assumes VNFs as black boxes when defining their
benchmarking methodologies. White box approaches are addressed as a
particular case under the proper considerations of internal VNF
instrumentation, discussed later in this document.
This document outlines a methodology for VNF benchmarking,
specifically addressing its automation, without limiting the
automated process to a specific benchmarking case or infrastructure.
The document addresses state-of-the-art work on VNF benchmarking from
scientific publications and current developments in other
standardization bodies (e.g., [ETS14c], [ETS19f] and [RFC8204])
wherever possible.
Whenever utilizing the specifications of this document, a particular
automated VNF benchmarking methodology must be described in a clear
and objective manner following four basic principles:
* Comparability: The output of a benchmarking test shall be simple
to understand and process, in a human-readable format, coherent,
and easily reusable (e.g., inputs for analytic applications).
* Repeatability: A benchmarking setup shall be comprehensively
defined through a flexible design model that can be interpreted
and executed by the testing platform repeatedly, while still
supporting customization.
* Configurability: Open interfaces and extensible messaging models
shall be available between benchmarking components for flexible
composition of a benchmarking test descriptor and environment
configurations.
* Interoperability: A benchmarking test shall be portable to
different environments, using lightweight components whenever
possible.
4. Considerations
VNF benchmarking considerations are defined in [RFC8172].
Additionally, VNF pre-deployment testing considerations are well
explored in [ETS14c]. Further, ETSI provides test specifications for
networking benchmarks and measurement methods for NFV infrastructure
in [ETS19f], which complements the presented work on VNF benchmarking
methodologies.
4.1. VNF Assessment Methods
Following ETSI's model in [ETS14c], we distinguish three methods for
a VNF evaluation:
Benchmarking: Where parameters (e.g., CPU, memory, storage) are
provided and the corresponding performance metrics (e.g., latency,
throughput) are obtained. Note, such evaluations might create
multiple reports, for example, with minimal latency or maximum
throughput results.
Verification: Both parameters and performance metrics are provided
and a stimulus verifies if the given association is correct or
not.
Dimensioning: Performance metrics are provided and the corresponding
parameters are obtained. Note, multiple deployments may be
required, or, if possible, the underlying allocated resources need
to be dynamically altered.
Note: Verification and Dimensioning can be reduced to Benchmarking.
4.2. Benchmarking Stages
The realization of an automated benchmarking methodology can be
divided into three stages:
Trial: Is a single process or iteration to obtain VNF performance
metrics from benchmarking measurements. A Test MUST always run
multiple Trials to get statistical confidence about the obtained
measurements.
Test: Defines unique structural and functional parameters (e.g.,
configurations, resource assignment) for benchmarked components to
perform one or multiple Trials. Each Test must be executed
following a particular benchmarking scenario composed by a Method.
Proper measures must be taken to ensure statistical validity
(e.g., independence across Trials of generated load patterns).
Method: Consists of one or more Tests to benchmark a VNF. A Method
can explicitly list ranges of parameter values for the
configuration of a benchmarking scenario and its components. Each
value of such a range is to be realized in a Test. I.e., Methods
can define parameter studies.
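As a non-normative illustration of these stages, the following
Python sketch (with hypothetical function names and metrics) shows
how a Method that lists parameter ranges expands into Tests, each
repeating multiple Trials to gain statistical confidence about the
measurements.

   import itertools
   import statistics

   def run_trial(test_params):
       # Placeholder: drive Probers/Listeners and return measured metrics.
       return {"latency_ms": 1.0}

   def run_test(test_params, trials):
       # A Test fixes structural/functional parameters and repeats Trials.
       samples = [run_trial(test_params)["latency_ms"] for _ in range(trials)]
       return {"params": test_params,
               "latency_ms_mean": statistics.mean(samples),
               "latency_ms_stdev": statistics.stdev(samples)}

   def run_method(parameter_ranges, trials=10):
       # A Method lists parameter ranges; each value combination is one Test.
       names = sorted(parameter_ranges)
       combos = itertools.product(*(parameter_ranges[n] for n in names))
       return [run_test(dict(zip(names, c)), trials) for c in combos]

   results = run_method({"vcpus": [1, 2, 4], "rate_mbps": [100, 1000]},
                        trials=5)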
4.3. Architectural Framework
A VNF benchmarking architectural framework, shown in Figure 1,
establishes the arrangement of the essential components and control
interfaces, explained below, that realize the automation of a VNF
benchmarking methodology.
+---------------+
| Manager |
Control | (Coordinator) |
Interfaces +---+-------+---+
+---------+-----------+ +-------------------+
| | |
| | +--------------------+ |
| | | System Under Test | |
| | | | |
| | | +-----------------+| |
| +--+--------+ | | VNF || |
| | | | | || |
| | | | | +----+ +----+ || |
| | <===> |VNFC|...|VNFC| || |
| | | | | +----+ +----+ || |
| | Monitor(s)| | +----.---------.--+| |
+-----+---+ |{listeners}| | : : | +-----+----+
| Agent(s)| | | | +----^---------V--+| | Agent(s)|
|(Sender) | | <===> Execution || |(Receiver)|
| | | | | | Environment || | |
|{Probers}| +-----------+ | | || |{Probers} |
+-----.---+ | +----.---------.--+| +-----.----+
: +------^---------V---+ :
V : : :
:.................>.........: :........>..:
Stimulus Traffic Flow
Figure 1: A VNF Benchmarking Architectural Framework
Virtualized Network Function (VNF) -- consists of one or more
software components, so called VNF components (VNFC), adequate for
performing a network function according to allocated virtual
resources and satisfied requirements in an execution environment.
A VNF can demand particular settings for benchmarking
specifications, demonstrating variable performance based on
available virtual resource parameters and configured enhancements
targeting specific technologies (e.g., NUMA, SR-IOV, CPU-Pinning).
Execution Environment -- defines a virtualized and controlled
composition of capabilities necessary for the execution of a VNF.
An execution environment stands as a general-purpose level of
virtualization with abstracted resources available for one or more
VNFs. It can also define specific technology qualifications,
resulting in viable settings for enhancing the performance of VNFs
and satisfying their particular enhancement requirements. An
execution environment must be defined with the proper
virtualization technologies feasible for the allocation of a VNF.
The means to programmatically control the execution environment
capabilities must be well defined for its life cycle management.
Agent (Active Prospection) -- executes active stimulus using
Probers to benchmark and collect network and system performance
metrics. A single Agent can perform localized benchmarks in
execution environments (e.g., stress tests on CPU, memory, storage
Input/Output) or can generate stimulus traffic towards a VNF that
itself terminates the traffic and where, for example, one-way
latency is evaluated. The interaction among two or more Agents
enables the generation and collection of end-to-end metrics (e.g.,
frame loss rate, latency) measured from stimulus traffic flowing
through a VNF. An Agent can be defined by a physical or virtual
network function, and it must provide programmable interfaces for
its life cycle management.
Prober -- defines an abstraction layer for a software or hardware
tool able to generate stimulus traffic to a VNF or perform
stress tests on execution environments. Probers might be
specific or generic to an execution environment or a VNF. For
an Agent, a Prober must provide programmable interfaces for its
life cycle management, e.g., configuration of operational
parameters, execution of stimulus, parsing of extracted
metrics, and debugging options. Specific Probers might be
developed to abstract and to realize the description of
particular VNF benchmarking methodologies.
Monitor (Passive Prospection) -- when possible, is instantiated
inside the System Under Test, the VNF and/or the execution
environment, to perform passive monitoring, using Listeners, for
the extraction of metrics while the Agents' stimuli take place.
Monitors observe particular properties according to the execution
environment and VNF capabilities, i.e., exposed passive monitoring
interfaces. Multiple Listeners can be executed at once, in
synchrony with a Prober's stimulus on a SUT. A Monitor can be
defined as a virtualized network function, and it must provide
programmable interfaces for its life cycle management.
Listener -- defines one or more software interfaces for the
extraction of metrics monitored in a target VNF and/or
execution environment. A Listener must provide programmable
interfaces for its life cycle management workflows, e.g.,
configuration of operational parameters, execution of passive
monitoring captures, parsing of extracted metrics, and
debugging options (also see [ETS19g]). Varied methods of
passive performance monitoring might be implemented as a
Listener, depending on the interfaces exposed by the VNF and/or
the execution environment.
Manager -- performs (i) the discovery of available Agents and
Monitors and their respective features (i.e., available Probers/
Listeners and their execution environment capabilities), (ii) the
coordination and synchronization of activities of Agents and
Monitors to perform a benchmarking Test, and (iii) the collection,
processing and aggregation of all VNF benchmarking (active and
passive) metrics, which correlates the characteristics of the VNF
traffic stimuli and, possibly, the SUT monitoring. A Manager
executes the main configuration, operation, and management actions
to deliver the VNF benchmarking metrics. Hence, it exposes open
interfaces for users to interact with the whole benchmarking
framework, enabling, for instance, the retrieval of the framework
characteristics (e.g., available benchmarking components and their
Probers/Listeners), the coordination of benchmarking tests, and the
processing and retrieval of benchmarking metrics, among other
operational and management functionalities. A Manager can be
defined as a physical or virtualized network function, and it must
provide programmable interfaces for its life cycle management.
4.4. Scenarios
A scenario, also referred to as a benchmarking setup, consists of the
actual instantiation of the physical and/or virtual components of a
"VNF Benchmarking Architectural Framework" needed to enable the
execution of an automated VNF benchmarking methodology. The
following considerations hold for a scenario:
* Not all components are mandatory for a Test; the components can be
arranged in varied setups.
* Components can be aggregated in a single entity and be defined as
black or white boxes. For instance, Manager and Agents could
jointly define one hardware or software entity to perform a VNF
benchmarking Test.
* A Monitor can be defined by multiple instances of distributed
software components, each one addressing one or more VNF or
execution environment monitoring interfaces.
* Agents can be arranged in varied topology setups, including the
possibility of each of the multiple input and output ports of a VNF
being directly connected to a distinct Agent.
* All benchmarking components defined in a scenario must perform the
synchronization of clocks.
4.5. Phases of a Benchmarking Test
In general, an automated benchmarking methodology must execute Tests
repeatedly so that it captures the relevant causes of the
performance variability of a VNF. To dissect a VNF benchmarking
Test, the sections that follow categorize a set of benchmarking
phases, defining generic operations that may be automated. When
executing an automated VNF benchmarking methodology, all the aspects
influencing the performance of a VNF must be carefully analyzed and
comprehensively reported in each automated phase of a benchmarking
Test.
4.5.1. Phase I: Deployment
The placement (i.e., assignment and allocation of resources) and the
interconnection, physical and/or virtual, of network function(s) and
benchmarking components can be realized by orchestration platforms
(e.g., OpenStack, Kubernetes, Open Source MANO). When automated,
the realization of a benchmarking scenario through those means
usually relies on network service templates (e.g., TOSCA, YANG,
Heat, and Helm Charts). Such descriptors have to capture all
relevant details of the execution environment to allow the
benchmarking framework to correctly instantiate the SUT as well as
the helper functions required for a Test.
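As a non-normative sketch (Python, with hypothetical node names,
images, and orchestrator interface), a benchmarking scenario could
be captured in a declarative template and handed to an orchestration
driver; in practice, such templates would be expressed in TOSCA,
YANG, Heat, or Helm Charts as noted above.

   scenario = {
       "nodes": [
           {"id": "agent-tx", "role": "agent", "image": "prober:latest",
            "resources": {"vcpus": 2, "memory_mb": 2048}},
           {"id": "vnf-sut", "role": "sut", "image": "vnf-dut:latest",
            "resources": {"vcpus": 4, "memory_mb": 4096}},
           {"id": "agent-rx", "role": "agent", "image": "prober:latest",
            "resources": {"vcpus": 2, "memory_mb": 2048}},
       ],
       "links": [
           {"src": "agent-tx:eth1", "dst": "vnf-sut:eth1"},
           {"src": "vnf-sut:eth2", "dst": "agent-rx:eth1"},
       ],
   }

   def deploy(orchestrator, scenario):
       # Hypothetical driver: translate the template into the orchestration
       # platform's own format (e.g., a Heat stack or a Helm release) and
       # request the instantiation of nodes and links.
       return orchestrator.instantiate(scenario)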
4.5.2. Phase II: Configuration
The configuration of benchmarking components and VNFs (e.g.,
populating a routing table, loading PCAP source files into the
source of traffic stimulus) to execute the Test settings can be
realized through programming interfaces in an automated way. In the
scope of NFV, there might exist management interfaces to control a
VNF during a benchmarking Test. Likewise, infrastructure or
orchestration components can establish the proper configuration of
an execution environment to realize all the capabilities enabling
the description of the benchmarking Test. Each configuration
registry, together with its deployment timestamp and target, must be
contained in the report of a VNF benchmarking Test.
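A minimal, non-normative sketch (Python, with hypothetical
management interfaces) of applying a configuration while recording
the registry, its deployment timestamp, and its target for inclusion
in the Test report:

   import datetime

   config_log = []

   def apply_configuration(target, mgmt_iface, registry):
       # Push the configuration (e.g., routing table entries, a PCAP file
       # to replay) through the component's management interface.
       mgmt_iface.configure(registry)
       # Record registry, deployment timestamp, and target for the report.
       config_log.append({
           "target": target,
           "timestamp": datetime.datetime.now(
               datetime.timezone.utc).isoformat(),
           "registry": registry,
       })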
4.5.3. Phase III: Execution
In the execution of a benchmarking Test, the VNF configuration can be
programmed to change by itself or through a VNF management platform.
This means that during a Trial execution, particular behaviors of a
VNF can be automatically triggered, e.g., auto-scaling of its
internal components. Those must be captured in the detailed
procedures of the VNF execution and its performance report. That
is, the execution of a Trial can determine arrangements of internal
states inside a VNF, which can interfere with the observed
benchmarking metrics. For instance, in a particular benchmarking
case where the monitoring measurements of the VNF and/or execution
environment are available for extraction, comparison Tests must be
run to verify whether the monitoring of the VNF and/or execution
environment can impact the VNF performance metrics.
4.5.4. Phase IV: Result
The result of a VNF benchmarking Test might contain generic metrics
(e.g., CPU and memory consumption) and VNF-specific traffic
processing metrics (e.g., transactions or throughput), which can be
stored and processed in generic or specific ways (e.g., by statistics
or machine learning algorithms). More details about possible metrics
and the corresponding capturing methods can be found in [ETS19g]. If
automated procedures are applied to the generation of a benchmarking
Test result, those must be explained in the result itself, jointly
with their raw input measurements and processed output data. For
instance, any algorithm used in the generation of processed metrics
must be disclosed in the Test result.
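As a non-normative example (Python), raw Trial measurements can be
condensed into processed metrics while keeping the raw inputs and
disclosing the algorithm used:

   import statistics

   def summarize(raw_samples):
       # Keep the raw measurements and state which algorithm produced the
       # processed metrics, as required for the Test result.
       return {
           "raw": raw_samples,
           "algorithm": "arithmetic mean / population stdev"
                        " (Python statistics module)",
           "mean": statistics.mean(raw_samples),
           "stdev": statistics.pstdev(raw_samples),
           "min": min(raw_samples),
           "max": max(raw_samples),
       }

   # e.g., throughput samples in Mbps
   print(summarize([903.1, 910.4, 898.7, 905.0]))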
5. Methodology
The execution of an automated benchmarking methodology consists in
elaborating a VNF Benchmarking Report, i.e., its inputs and outputs.
The inputs part of a VNF-BR must be written by a VNF benchmarking
tester. When the VNF-BR, with its inputs fulfilled, is requested
from the Manager component of an implementation of the "VNF
Benchmarking Architectural Framework", the Manager must utilize the
inputs part to obtain the outputs part of the VNF-BR, addressing the
execution of the automated benchmarking methodology as defined in
Section 5.4.
The flow of information in the execution of an automated benchmarking
methodology can be represented by the YANG modules defined by this
document. The sections that follow present an overview of such
modules.
5.1. VNF Benchmarking Descriptor (VNF-BD)
VNF Benchmarking Descriptor (VNF-BD) -- an artifact that specifies
how to realize the Test(s) and Trial(s) of an automated VNF
benchmarking methodology in order to obtain a VNF Performance
Profile. The specification includes structural and functional
instructions and variable parameters at different abstraction levels,
such as the topology of the benchmarking scenario, and the execution
parameters of prober(s)/listener(s) in the required
Agent(s)/Monitor(s). A VNF-BD may be specific to a VNF or applicable
to several VNF types.
More specifically, a VNF-BD is defined by a scenario and its
proceedings. The scenario defines nodes (i.e., benchmarking
components) and the links interconnecting them, a topology that must
be instantiated in order to execute the VNF-BD proceedings. The
proceedings contain the specification of the Agent(s) and Monitor(s)
required in the scenario nodes. Each Agent/Monitor details the
Prober(s)/Listener(s) required for the execution of the Tests, and
each Prober/Listener details its execution parameters. The header
of a VNF-BD specifies the number of Tests and Trials that a Manager
must run. Each Test realizes a unique instantiation of the
scenario, while each Trial realizes a unique execution of the
proceedings in the instantiated scenario of a Test. The VNF-BD YANG
module is presented in Section 10.1.
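For illustration only (the normative structure is the YANG module of
Section 10.1), a VNF-BD instance can be pictured as the following
Python sketch, with a header stating the numbers of Tests and
Trials, a scenario of nodes and links, and proceedings listing the
Agents/Monitors with their Probers/Listeners and execution
parameters; all names and parameter values below are hypothetical.

   vnf_bd = {
       "header": {"tests": 3, "trials": 10},
       "scenario": {
           "nodes": ["agent-tx", "vnf-sut", "agent-rx", "monitor-1"],
           "links": [("agent-tx", "vnf-sut"), ("vnf-sut", "agent-rx")],
       },
       "proceedings": {
           "agents": [{
               "id": "agent-tx",
               "probers": [{"name": "packet-generator",
                            "parameters": {"rate_mbps": 1000,
                                           "duration_s": 60}}],
           }],
           "monitors": [{
               "id": "monitor-1",
               "listeners": [{"name": "host-metrics",
                              "parameters": {"interval_s": 1}}],
           }],
       },
   }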
5.2. VNF Performance Profile (VNF-PP)
VNF Performance Profile (VNF-PP) -- an output artifact of a VNF-BD
execution performed by a Manager component. It contains all the
metrics from Monitor(s) and/or Agent(s) components after realizing
the execution of the Prober(s) and/or the Listener(s) proceedings,
specified in its corresponding VNF-BD. Metrics are logically grouped
according to the execution of the Trial(s) and Test(s) defined by a
VNF-BD. A VNF-PP is specifically associated with a unique VNF-BD.
More specifically, a VNF-PP is defined by a structure that allows
benchmarking results to be presented in a logical and unified
format. A VNF-PP report is the result of a unique Test, while its
content, the so-called snapshot(s), each contain the results of the
execution of a single Trial. Each snapshot is built by a single
Agent or Monitor. A snapshot contains evaluation(s), each one being
the output of the execution of a single Prober or Listener. An
evaluation contains one or more metrics. In summary, a VNF-PP
aggregates the results from reports (i.e., the Test(s)); a report
aggregates Agent(s) and Monitor(s) results (i.e., the Trial(s)); a
snapshot aggregates Prober(s) or Listener(s) results; and an
evaluation aggregates metrics. The VNF-PP YANG module is presented
in Section 10.2.
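For illustration only (the normative structure is the YANG module of
Section 10.2), the nesting of a VNF-PP can be pictured as the
following Python sketch: reports group the results of Tests,
snapshots carry the per-Trial results of a single Agent or Monitor,
and evaluations carry the metrics of a single Prober or Listener;
the names and values are hypothetical.

   vnf_pp = {
       "reports": [{                 # one report per Test
           "test": 1,
           "snapshots": [{           # one snapshot per Agent/Monitor per Trial
               "trial": 1,
               "origin": "agent-rx",
               "evaluations": [{     # one evaluation per Prober/Listener run
                   "tool": "packet-capture",
                   "metrics": {"throughput_mbps": 905.3,
                               "frame_loss_ratio": 0.0012},
               }],
           }],
       }],
   }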
5.3. VNF Benchmarking Report (VNF-BR)
VNF Benchmarking Report (VNF-BR) -- the core artifact of an automated
VNF benchmarking methodology, consisting of three parts: a header,
inputs, and outputs. The header refers to the VNF-BR description
items (e.g., author, version, name), the description of the target
SUT (e.g., the VNF version, release, name), and the environment
settings specifying the parameters needed to instantiate the
benchmarking scenario via an orchestration platform. The inputs
contain the definitions needed to execute the automated benchmarking
methodology of the target SUT: a VNF-BD and its variables settings.
The outputs contain the results of the execution of the inputs: a
list of entries, each one containing a VNF-BD filled with one of the
combinations of the input variables settings, and the obtained
VNF-PP reported after the execution of the Test(s) and Trial(s) of
the parsed VNF-BD. The process of utilizing the VNF-BR inputs to
generate its outputs concerns the realization of an automated VNF
benchmarking methodology, explained in detail in Section 5.4.2. The
VNF-BR YANG module is presented in Section 10.3.
In detail, each one of the variables in the inputs part of a VNF-BR
is defined by: a name (the actual name of the variable); a path (the
YANG path of the variable in the input VNF-BD); a type (the type of
the values, such as string, int, float, etc.); a class (one of:
stimulus, resource, configuration); and values (a list of the
variable's actual values). The values of all the variables must be
combined all-by-all, generating a list containing the whole sample
space of variables settings that must be used to create the VNF-BD
instances. A VNF-BD instance is defined as the result of the
parsing of one of those combinations of input variables into the
VNF-BD of the VNF-BR inputs. The parsing takes place when the
variable path is utilized to set its value in the VNF-BD.
Iteratively, all the VNF-BD instances must have their Test(s) and
Trial(s) executed to generate their corresponding VNF-PP. After all
the VNF-BD instances have had their VNF-PP accomplished, the
realization of the whole automated VNF benchmarking methodology is
complete, fulfilling the outputs part of the VNF-BR as shown in
Figure 2.
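As a non-normative sketch (Python), the all-by-all combination of
the input variables and the parsing of each combination into a
VNF-BD instance can be realized as follows; the set_path() helper
and the variable paths are illustrative only.

   import copy
   import itertools

   variables = [
       {"name": "vcpus", "path": "scenario/nodes/0/resources/vcpus",
        "type": "int", "class": "resource", "values": [1, 2, 4]},
       {"name": "rate",
        "path": "proceedings/agents/0/probers/0/parameters/rate_mbps",
        "type": "int", "class": "stimulus", "values": [100, 1000]},
   ]

   def set_path(tree, path, value):
       # Walk nested dicts/lists along the slash-separated path, set the leaf.
       keys = path.split("/")
       for key in keys[:-1]:
           tree = tree[int(key)] if key.isdigit() else tree[key]
       leaf = keys[-1]
       tree[int(leaf) if leaf.isdigit() else leaf] = value

   def expand(vnf_bd_template, variables):
       # Yield (combination, VNF-BD instance) pairs covering the sample space.
       for combo in itertools.product(*(v["values"] for v in variables)):
           instance = copy.deepcopy(vnf_bd_template)
           for var, value in zip(variables, combo):
               set_path(instance, var["path"], value)
           yield dict(zip((v["name"] for v in variables), combo)), instance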
5.4. Procedures
+------------+ +-----------------+ +------------+
| | | | | |
| VNF-BR | | Execution of | | VNF-BR |
| (Inputs) +----------->+ the Automated +--------->+ (Inputs) |
| | | Benchmarking | | (Outputs) |
+------------+ | Methodology | | |
| | +------------+
+-----------------+
Figure 2: VNF benchmarking process inputs and outputs
The methodology for VNF benchmarking automation encompasses the
process defined in Figure 2, i.e., the procedures that utilize the
inputs part to obtain the outputs part of a VNF-BR. This section
details the procedures that realize such process.
5.4.1. Plan
The plan of an automated VNF benchmarking methodology consists in the
definition of the header and the inputs part of a VNF-BR, the
artifacts to be utilized by the realization of the methodology, and
the establishment of the execution environment where the methodology
takes place. The topics below contain the details of such planning.
1. The writing of a VNF-BD must be done utilizing the VNF-BD YANG
module (Section 10.1). A VNF-BD composition must determine the
scenario and the proceedings. The VNF-BD must be added to the
inputs part of an instance of the VNF-BR YANG module.
2. All the variables in the inputs part of a VNF-BR must be defined.
Each variable must contain all its fields fulfilled according to
the VNF-BR YANG module (Section 10.3).
3. All the software artifacts needed for the instantiation of the
VNF-BD scenario must be built and made available for the execution
of the Test(s) and Trial(s). The artifacts include the definition
of the software components that realize the role of the functional
components of the Benchmarking Architectural Framework, i.e., the
Manager, the Agent, and the Monitor and their respective Probers
and Listeners.
4. The header of the VNF-BR instance must be written, stating the
VNF-BR description items, the specification of the SUT settings,
and the definition of the environment parameters, feasible for the
instantiation of the VNF-BD scenario when executing the automated
VNF benchmarking methodology.
5. The execution environment needed for a VNF-BD scenario must be
prepared to be utilized by an orchestration platform to automate
the instantiation of the scenario nodes and links needed for the
execution of a Test. The orchestration platform interface
parameters must be referenced in the VNF-BR header. The
orchestration platform must have access to the software artifacts
that are referenced in the VNF-BD scenario to be able to manage
their life cycle.
6. The Manager component must be instantiated, the execution
environment must be made available, and the orchestration
platform must have access to the execution environment and to the
software artifacts that are referenced in the scenario of the VNF-
BD in the inputs part of the VNF-BR.
5.4.2. Realization
Once all the planning procedures are accomplished, the realization
of the automated benchmarking methodology must proceed as the
following topics describe.
1. The realization of the benchmarking procedures starts when the
VNF-BR composed in the planning procedures is submitted to the
Manager component. It triggers the automated execution of the
benchmarking methodology defined by the inputs part of the VNF-BR.
2. The Manager computes all the combinations of values from the lists
of variables in the inputs part of the submitted VNF-BR. Each
combination of variables is used to define a Test. The submitted
VNF-BD serves as a template for each combination of variables.
Each parsing of each combination of variables by the VNF-BD
template creates a so-called VNF-BD instance. The Manager must
iterate through all the VNF-BD instances to finish the whole set
of Tests defined by all the combinations of variables and their
respective parsed VNF-BD (a minimal sketch of this loop is given
after this list). The Manager iterates through the following
steps until all the Tests are accomplished.
3. The Manager must interface with an orchestration platform to
realize the automated instantiation of the deployment scenario
defined by a VNF-BD instance (i.e., a Test). To perform such a
step, the Manager might interface with a management function
responsible for properly parsing the deployment scenario
specifications into the orchestration platform interface format.
The environment specifications of the VNF-BR header provide the
guidelines to interface with the orchestration platform. The
orchestration platform must deploy the scenario requested by the
Manager, assuring the requirements and policies specified on it.
In addition, the orchestration platform must acknowledge the
deployed scenario to the Manager, specifying the management
interfaces of the VNF SUT and of the other components in the
running instances of the benchmarking scenario. Only when the
scenario is correctly deployed must the execution of the VNF-BD
instance Test(s) and Trial(s) occur; otherwise, the whole
execution of the VNF-BR must be aborted and an error message must
be added to the VNF-BR outputs describing the problems that
occurred in the instantiation of the VNF-BD scenario. If the
scenario is successfully deployed, the VNF-BD Test proceedings can
be executed.
4. The Manager must interface with Agent(s) and Monitor(s) via their
management interfaces to require the execution of the VNF-BD
proceedings, which consists in running the specified Probers and
Listeners using the defined parameters, and to retrieve their
output metrics captured at the end of each Trial. Thus, a Trial
comprises the execution of the proceedings of the VNF-BD instance.
The number of Trials is defined in each VNF-BD instance. After
the execution of all defined Trials, the execution of a Test ends.
5. The output measurements from each of the benchmarking Trials that
compose a Test result must be collected by the Manager, until all
the Tests are finished. Each set of measurements collected from
the Trials and Tests of each VNF-BD instance must be used by the
Manager component to elaborate a VNF-PP. The respective VNF-PP,
its associated VNF-BD instance, and its input variables compose
one of the entries of the list of outputs of the VNF-BR. After
the whole list of combinations of input variables is explored to
obtain the complete list of VNF-BD instances and elaborated
VNF-PPs, the Manager component returns the original VNF-BR
submitted to it, including the outputs part properly filled.
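A minimal, non-normative sketch (Python, with hypothetical
orchestrator and Manager driver interfaces, reusing the expand()
helper sketched in Section 5.3) of the realization loop described
above: for each VNF-BD instance a scenario is instantiated, the
Trials are run, the measurements are aggregated into a VNF-PP, and
an entry is appended to the VNF-BR outputs; the whole run is aborted
if a scenario cannot be deployed.

   def realize(vnf_br, orchestrator, manager):
       template = vnf_br["inputs"]["vnf_bd"]
       vnf_br["outputs"] = []
       for combo, instance in expand(template, vnf_br["inputs"]["variables"]):
           scenario = orchestrator.deploy(instance["scenario"])   # Test setup
           if not scenario.ok:
               vnf_br["outputs"] = {"error": scenario.message}    # abort
               return vnf_br
           trials = [manager.run_trial(instance, scenario)        # proceedings
                     for _ in range(instance["header"]["trials"])]
           vnf_pp = manager.build_vnf_pp(trials)                  # aggregate
           orchestrator.teardown(scenario)
           vnf_br["outputs"].append({"variables": combo,
                                     "vnf_bd": instance,
                                     "vnf_pp": vnf_pp})
       return vnf_br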
5.4.3. Summary
After the realization of an automated benchmarking methodology, some
automated procedures can be performed to improve the quality and the
utility of the obtained VNF-BR, as described in the following topics.
1. Archive the raw outputs contained in the VNF-BR, perform
statistical analysis on them, or train machine learning models
with the collected data.
2. Evaluate the analysis output to detect any possible cause-effect
factors and/or intrinsic correlations in the VNF-BR outputs (e.g.,
outliers).
3. Review the inputs of a VNF-BR, its VNF-BD and variables, and
modify them to realize the proper extraction of the target VNF
metrics based on the intended goal of the VNF benchmarking
methodology (e.g., throughput). Iterate over the previous steps
until a stable and representative VNF-BR is composed.
6. Particular Cases
As described in [RFC8172], VNF benchmarking might require changing
and adapting existing benchmarking methodologies. More
specifically, the following cases need to be considered.
6.1. Capacity
VNFs are usually deployed inside containers or VMs to build an
abstraction layer between physical resources and the resources
available to the VNF. According to [RFC8172], it may be more
representative to design experiments in a way that the VMs hosting
the VNFs are operating at a maximum of 50% utilization and to split
the workload among several VMs, to mitigate side effects of
overloaded VMs. Those cases are supported by the presented
automation methodologies through VNF-BDs that enable direct control
over the resource assignments and topology layouts used for a
benchmarking experiment.
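As a non-normative illustration (Python, hypothetical paths and
names), such a capacity study can be expressed as input variables of
a VNF-BR that sweep the number of VNF instances sharing the workload
while keeping the per-VM offered load near 50% of the nominal
capacity:

   capacity_variables = [
       {"name": "vnf_replicas", "path": "scenario/nodes/0/replicas",
        "type": "int", "class": "resource", "values": [1, 2, 4]},
       {"name": "offered_load",
        "path": "proceedings/agents/0/probers/0/parameters/load_ratio",
        "type": "float", "class": "stimulus", "values": [0.5]},
   ]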