pipeline ipv6 datapath #2069

Merged
merged 40 commits into master from ipv6datapathpipeline on Jul 26, 2023
Commits
cb7d932
add Linux test cases for connectivity
paulyufan2 Jun 22, 2023
60e384d
pipeline ipv6 test cases
paulyufan2 Jul 21, 2023
4344d7e
add linux ipv6 yamls
paulyufan2 Jul 21, 2023
dd85072
add deployment yamls
paulyufan2 Jul 21, 2023
341ac63
remove duplicated linux deployment files
paulyufan2 Jul 21, 2023
297b3f4
add linux datapath
paulyufan2 Jul 21, 2023
b15314a
add windows test
paulyufan2 Jul 21, 2023
e9faea0
change datapath windows file name
paulyufan2 Jul 21, 2023
7e634a0
fix datapath windows test
paulyufan2 Jul 21, 2023
d564745
fix datapath windows test
paulyufan2 Jul 21, 2023
004e2d7
scripts to cleanup ovs bridge and ovs leaked rules (#2066)
paulyufan2 Jul 24, 2023
5a40bff
fix comments
paulyufan2 Jul 24, 2023
60e90e6
fix a minor issue
paulyufan2 Jul 24, 2023
e5b1718
remove conflicts
paulyufan2 Jul 24, 2023
b154ab6
fix comment
paulyufan2 Jul 24, 2023
ed2002d
Merge branch 'master' into ipv6datapathpipeline
paulyufan2 Jul 24, 2023
2b3a7fe
Merge branch 'master' into ipv6datapathpipeline
paulyufan2 Jul 25, 2023
a13b9de
fix comments
paulyufan2 Jul 25, 2023
8622773
rerun test
paulyufan2 Jul 25, 2023
1e1ae11
rerun test
paulyufan2 Jul 25, 2023
7478d7f
fix comments
paulyufan2 Jul 25, 2023
a8642c8
change namespace back to default
paulyufan2 Jul 25, 2023
0273ccf
Merge branch 'master' into ipv6datapathpipeline
paulyufan2 Jul 25, 2023
dc40c96
add namespace fixes
paulyufan2 Jul 25, 2023
bac5b33
add pipeline
paulyufan2 Jul 25, 2023
45ea65d
add pipeline
paulyufan2 Jul 25, 2023
2c70b7f
add logs
paulyufan2 Jul 26, 2023
69b112f
fix dualstack pipeline setup
paulyufan2 Jul 26, 2023
01e025f
add AzureOverlayDualStackPreview
paulyufan2 Jul 26, 2023
4d4e0fc
delete pipeline templates
paulyufan2 Jul 26, 2023
988c530
put installdualstackoverlayp
paulyufan2 Jul 26, 2023
3b3904e
fix comments
paulyufan2 Jul 26, 2023
d65b071
fix comments
paulyufan2 Jul 26, 2023
a4c3e41
fix comments
paulyufan2 Jul 26, 2023
7c15bef
Merge branch 'master' into ipv6datapathpipeline
paulyufan2 Jul 26, 2023
50477d4
remove readme for dualstack
paulyufan2 Jul 26, 2023
d828c04
comment fix
paulyufan2 Jul 26, 2023
533e4ef
fix comments
paulyufan2 Jul 26, 2023
0c9d86f
fix logs
paulyufan2 Jul 26, 2023
345bac2
fix error
paulyufan2 Jul 26, 2023
28 changes: 28 additions & 0 deletions hack/aks/Makefile
@@ -214,6 +214,34 @@ windows-cniv1-up: rg-up overlay-net-up ## Bring up a Windows CNIv1 cluster

@$(MAKE) set-kubeconf

dualstack-overlay-up: rg-up overlay-net-up ## Brings up a dualstack Overlay cluster with Linux nodes only
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--kubernetes-version 1.26.3 \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--network-plugin azure \
--ip-families ipv4,ipv6 \
--network-plugin-mode overlay \
--aks-custom-headers AKSHTTPCustomFeatures=Microsoft.ContainerService/AzureOverlayDualStackPreview \
--subscription $(SUB) \
--no-ssh-key \
--yes
@$(MAKE) set-kubeconf

dualstack-overlay-byocni-up: rg-up overlay-net-up ## Brings up a dualstack Overlay BYO CNI cluster
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--kubernetes-version 1.26.3 \
--node-count $(NODE_COUNT) \
--node-vm-size $(VM_SIZE) \
--network-plugin none \
--network-plugin-mode overlay \
--aks-custom-headers AKSHTTPCustomFeatures=Microsoft.ContainerService/AzureOverlayDualStackPreview \
--ip-families ipv4,ipv6 \
--subscription $(SUB) \
--no-ssh-key \
--yes
@$(MAKE) set-kubeconf

linux-cniv1-up: rg-up overlay-net-up
$(AZCLI) aks create -n $(CLUSTER) -g $(GROUP) -l $(REGION) \
--node-count $(NODE_COUNT) \
22 changes: 12 additions & 10 deletions hack/aks/README.md
@@ -21,14 +21,16 @@ SWIFT Infra
net-up Create required swift vnet/subnets

AKS Clusters
byocni-up Alias to swift-byocni-up
cilium-up Alias to swift-cilium-up
up Alias to swift-up
overlay-up Brings up an Overlay AzCNI cluster
swift-byocni-up Bring up a SWIFT BYO CNI cluster
swift-cilium-up Bring up a SWIFT Cilium cluster
swift-up Bring up a SWIFT AzCNI cluster
windows-cniv1-up Bring up a Windows AzCNIv1 cluster
down Delete the cluster
vmss-restart Restart the nodes of the cluster
byocni-up Alias to swift-byocni-up
cilium-up Alias to swift-cilium-up
up Alias to swift-up
overlay-up Brings up an Overlay AzCNI cluster
swift-byocni-up Bring up a SWIFT BYO CNI cluster
swift-cilium-up Bring up a SWIFT Cilium cluster
swift-up Bring up a SWIFT AzCNI cluster
dualstack-overlay-up Brings up a dualstack overlay cluster
dualstack-overlay-byocni-up Brings up a dualstack overlay cluster without CNS and CNI installed
windows-cniv1-up Bring up a Windows AzCNIv1 cluster
down Delete the cluster
vmss-restart Restart the nodes of the cluster
```
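As a sketch, the new targets can be driven like this. The `CLUSTER`/`GROUP`/`REGION`/`SUB` values below are placeholders (assumptions, not real resources), and the invocation is only echoed rather than run against Azure:

```shell
# Placeholder values -- not real Azure resources.
CLUSTER=ds-overlay-test
GROUP=ds-overlay-rg
REGION=westus2
SUB=00000000-0000-0000-0000-000000000000

# Echo the invocation instead of executing it; drop the echo (and supply an
# authenticated az CLI) to actually create the cluster.
echo "make -C hack/aks dualstack-overlay-byocni-up CLUSTER=$CLUSTER GROUP=$GROUP REGION=$REGION SUB=$SUB"
```

The BYO CNI variant creates the cluster with `--network-plugin none`, so CNS and the CNI can then be installed by the pipeline itself.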
152 changes: 69 additions & 83 deletions test/integration/datapath/datapath_linux_test.go
@@ -18,7 +18,7 @@ import (
"github.com/stretchr/testify/require"

appsv1 "k8s.io/api/apps/v1"
apiv1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

@@ -44,7 +44,7 @@ const (

var (
podPrefix = flag.String("podName", "goldpinger", "Prefix for test pods")
podNamespace = flag.String("namespace", "linux-datapath-test", "Namespace for test pods")
podNamespace = flag.String("namespace", "default", "Namespace for test pods")
nodepoolSelector = flag.String("nodepoolSelector", "nodepool1", "Provides nodepool as a Linux Node-Selector for pods")
// TODO: add flag to support dual nic scenario
isDualStack = flag.Bool("isDualStack", false, "whether system supports dualstack scenario")
@@ -91,103 +91,85 @@ func setupLinuxEnvironment(t *testing.T) {
require.NoError(t, err, "could not get k8s node list: %v", err)
}

// Create namespace if it doesn't exist
namespaceExists, err := k8sutils.NamespaceExists(ctx, clientset, *podNamespace)
if err != nil {
require.NoError(t, err, "failed to check if namespace %s exists due to: %v", *podNamespace, err)
}

if !namespaceExists {
// Test Namespace
t.Log("Create Namespace")
err = k8sutils.MustCreateNamespace(ctx, clientset, *podNamespace)
if err != nil {
require.NoError(t, err, "failed to create pod namespace %s due to: %v", *podNamespace, err)
}

var daemonset appsv1.DaemonSet
var deployment appsv1.Deployment
t.Log("Creating Linux pods through deployment")

// run goldpinger ipv4 and ipv6 test cases separately
if *isDualStack {
deployment, err = k8sutils.MustParseDeployment(LinuxDeployIPv6)
if err != nil {
require.NoError(t, err)
}
t.Log("Creating Linux pods through deployment")

daemonset, err = k8sutils.MustParseDaemonSet(gpDaemonsetIPv6)
if err != nil {
t.Fatal(err)
}
} else {
deployment, err = k8sutils.MustParseDeployment(LinuxDeployIPV4)
if err != nil {
require.NoError(t, err)
}
// run goldpinger ipv4 and ipv6 test cases separately
var daemonset appsv1.DaemonSet
var deployment appsv1.Deployment

daemonset, err = k8sutils.MustParseDaemonSet(gpDaemonset)
if err != nil {
t.Fatal(err)
}
if *isDualStack {
deployment, err = k8sutils.MustParseDeployment(LinuxDeployIPv6)
if err != nil {
require.NoError(t, err)
}

rbacSetupFn, err := k8sutils.MustSetUpClusterRBAC(ctx, clientset, gpClusterRolePath, gpClusterRoleBindingPath, gpServiceAccountPath)
daemonset, err = k8sutils.MustParseDaemonSet(gpDaemonsetIPv6)
if err != nil {
t.Log(os.Getwd())
t.Fatal(err)
}

// Fields for overwriting the existing deployment yaml.
// Defaults from flags will not change anything.
deployment.Spec.Selector.MatchLabels[podLabelKey] = *podPrefix
deployment.Spec.Template.ObjectMeta.Labels[podLabelKey] = *podPrefix
deployment.Spec.Template.Spec.NodeSelector[nodepoolKey] = *nodepoolSelector
deployment.Name = *podPrefix
deployment.Namespace = *podNamespace

deploymentsClient := clientset.AppsV1().Deployments(*podNamespace)
err = k8sutils.MustCreateDeployment(ctx, deploymentsClient, deployment)
} else {
deployment, err = k8sutils.MustParseDeployment(LinuxDeployIPV4)
if err != nil {
require.NoError(t, err)
}

daemonsetClient := clientset.AppsV1().DaemonSets(daemonset.Namespace)
err = k8sutils.MustCreateDaemonset(ctx, daemonsetClient, daemonset)
daemonset, err = k8sutils.MustParseDaemonSet(gpDaemonset)
if err != nil {
t.Fatal(err)
}
}

// set up common RBAC: ClusterRole, ClusterRoleBinding, ServiceAccount
rbacSetupFn, err := k8sutils.MustSetUpClusterRBAC(ctx, clientset, gpClusterRolePath, gpClusterRoleBindingPath, gpServiceAccountPath)
if err != nil {
t.Log(os.Getwd())
t.Fatal(err)
}

t.Cleanup(func() {
t.Log("cleaning up resources")
rbacSetupFn()
// Fields for overwriting the existing deployment yaml.
// Defaults from flags will not change anything.
deployment.Spec.Selector.MatchLabels[podLabelKey] = *podPrefix
deployment.Spec.Template.ObjectMeta.Labels[podLabelKey] = *podPrefix
deployment.Spec.Template.Spec.NodeSelector[nodepoolKey] = *nodepoolSelector
deployment.Name = *podPrefix
deployment.Namespace = *podNamespace
daemonset.Namespace = *podNamespace

deploymentsClient := clientset.AppsV1().Deployments(*podNamespace)
err = k8sutils.MustCreateDeployment(ctx, deploymentsClient, deployment)
if err != nil {
require.NoError(t, err)
}

if err := deploymentsClient.Delete(ctx, deployment.Name, metav1.DeleteOptions{}); err != nil {
t.Log(err)
}
daemonsetClient := clientset.AppsV1().DaemonSets(daemonset.Namespace)
err = k8sutils.MustCreateDaemonset(ctx, daemonsetClient, daemonset)
if err != nil {
t.Fatal(err)
}

if err := daemonsetClient.Delete(ctx, daemonset.Name, metav1.DeleteOptions{}); err != nil {
t.Log(err)
}
})
t.Cleanup(func() {
t.Log("cleaning up resources")
rbacSetupFn()

t.Log("Waiting for pods to be running state")
err = k8sutils.WaitForPodsRunning(ctx, clientset, *podNamespace, podLabelSelector)
if err != nil {
require.NoError(t, err)
if err := deploymentsClient.Delete(ctx, deployment.Name, metav1.DeleteOptions{}); err != nil {
t.Log(err)
}

if *isDualStack {
t.Log("Successfully created customer dualstack Linux pods")
} else {
t.Log("Successfully created customer singlestack Linux pods")
if err := daemonsetClient.Delete(ctx, daemonset.Name, metav1.DeleteOptions{}); err != nil {
t.Log(err)
}
})

t.Log("Waiting for pods to be running state")
err = k8sutils.WaitForPodsRunning(ctx, clientset, *podNamespace, podLabelSelector)
if err != nil {
require.NoError(t, err)
}

if *isDualStack {
t.Log("Successfully created customer dualstack Linux pods")
} else {
// delete namespace and stop test cases
if err := k8sutils.MustDeleteNamespace(ctx, clientset, *podNamespace); err != nil {
require.NoError(t, err)
}
t.Fatal("goldpinger namespace exists and was deleted. Re-run test")
t.Log("Successfully created customer singlestack Linux pods")
}

t.Log("Checking Linux test environment")
@@ -200,8 +182,16 @@ func setupLinuxEnvironment(t *testing.T) {
t.Logf("%s", node.Name)
require.NoError(t, errors.New("Less than 2 pods on node"))
}
}

errFlag := apierrors.IsAlreadyExists(err)
if errFlag {
if err := k8sutils.MustDeleteDaemonset(ctx, daemonsetClient, daemonset); err != nil {
require.NoError(t, err)
}
t.Fatal("stale goldpinger resources existed in the default namespace and were deleted; re-run the test")
}

t.Log("Linux test environment ready")
}

@@ -286,8 +276,9 @@ func TestDatapathLinux(t *testing.T) {
}
return nil
}

if err := defaultRetrier.Do(portForwardCtx, portForwardFn); err != nil {
t.Fatalf("could not start port forward within %ds: %v", defaultTimeoutSeconds, err)
t.Fatalf("could not start port forward within %d: %v", defaultTimeoutSeconds, err)
}
defer pf.Stop()

@@ -313,9 +304,4 @@ func TestDatapathLinux(t *testing.T) {
t.Log("all pings successful!")
})
})

// delete namespace after test is done
if err := k8sutils.MustDeleteNamespace(ctx, clientset, *podNamespace); err != nil {
require.NoError(t, err)
}
}
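The datapath check this test automates (port-forward to the goldpinger service, then query its aggregate ping report) can be approximated by hand. A hedged sketch — the commands are echoed only, since running them assumes a live cluster with the goldpinger manifests above applied; the namespace, service name, and ports follow those manifests, and `/check_all` is goldpinger's aggregate-check endpoint:

```shell
# Forward the in-cluster goldpinger service port (8080, per the manifests)
# to localhost:9090, then fetch the all-pods ping report.
# Echoed only -- executing these requires kubectl access to a live cluster.
echo "kubectl -n default port-forward service/goldpinger 9090:8080 &"
echo "curl -s http://localhost:9090/check_all"
```

This mirrors the `PortForwardingOpts` used in `k8s_test.go` (LocalPort 9090, DestPort 8080, namespace `default`).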
2 changes: 1 addition & 1 deletion test/integration/k8s_test.go
@@ -188,7 +188,7 @@ func TestPodScaling(t *testing.T) {
defer cancel()

pfOpts := PortForwardingOpts{
Namespace: "linux-datapath-test",
Namespace: "default",
LabelSelector: "type=goldpinger-pod",
LocalPort: 9090,
DestPort: 8080,
@@ -9,4 +9,4 @@ roleRef:
subjects:
- kind: ServiceAccount
name: goldpinger-serviceaccount
namespace: linux-datapath-test
namespace: default
@@ -2,7 +2,7 @@ apiVersion: apps/v1
kind: DaemonSet
metadata:
name: goldpinger-host
namespace: linux-datapath-test
namespace: default
spec:
selector:
matchLabels:
2 changes: 1 addition & 1 deletion test/integration/manifests/goldpinger/daemonset.yaml
@@ -2,7 +2,7 @@ apiVersion: apps/v1
kind: DaemonSet
metadata:
name: goldpinger-host
namespace: linux-datapath-test
namespace: default
spec:
selector:
matchLabels:
2 changes: 1 addition & 1 deletion test/integration/manifests/goldpinger/deployment.yaml
@@ -2,7 +2,7 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: goldpinger-pod
namespace: linux-datapath-test
namespace: default
spec:
replicas: 1
selector:
2 changes: 1 addition & 1 deletion test/integration/manifests/goldpinger/service-account.yaml
@@ -2,4 +2,4 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: goldpinger-serviceaccount
namespace: linux-datapath-test
namespace: default
2 changes: 1 addition & 1 deletion test/integration/manifests/goldpinger/service.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
kind: Service
metadata:
name: goldpinger
namespace: linux-datapath-test
namespace: default
labels:
app: goldpinger
spec:
2 changes: 1 addition & 1 deletion test/internal/k8sutils/utils_create.go
@@ -30,7 +30,7 @@ func MustCreateOrUpdatePod(ctx context.Context, podI typedcorev1.PodInterface, p
}

func MustCreateDaemonset(ctx context.Context, daemonsets typedappsv1.DaemonSetInterface, ds appsv1.DaemonSet) error {
if err := mustDeleteDaemonset(ctx, daemonsets, ds); err != nil {
if err := MustDeleteDaemonset(ctx, daemonsets, ds); err != nil {
return err
}
log.Printf("Creating Daemonset %v", ds.Name)
2 changes: 1 addition & 1 deletion test/internal/k8sutils/utils_delete.go
@@ -22,7 +22,7 @@ func MustDeletePod(ctx context.Context, podI typedcorev1.PodInterface, pod corev
return nil
}

func mustDeleteDaemonset(ctx context.Context, daemonsets typedappsv1.DaemonSetInterface, ds appsv1.DaemonSet) error {
func MustDeleteDaemonset(ctx context.Context, daemonsets typedappsv1.DaemonSetInterface, ds appsv1.DaemonSet) error {
if err := daemonsets.Delete(ctx, ds.Name, metav1.DeleteOptions{}); err != nil {
if !apierrors.IsNotFound(err) {
return err