
AdditionalRoutes not applied to AWS route tables after cluster update and rollout #17234

Open
apolyakov-sugarcrm opened this issue Jan 24, 2025 · 0 comments
/kind bug

1. What kops version are you running? The command kops version will display this information.

1.30.1

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

1.30.6

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
Added additionalRoutes to the cluster manifest, then ran kops replace, kops update, and kops rolling-update, as sketched below.
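A minimal sketch of the presumed command sequence, assuming the manifest is saved as cluster.yaml and using the state store from the manifest's configBase:

```sh
# State store taken from the manifest's configBase
export KOPS_STATE_STORE=s3://kops-state-k8s.cluster.build

# Replace the cluster spec in the state store with the edited manifest
kops replace -f cluster.yaml --name k8s.cluster.build

# Apply the changes to AWS (expected to create the additional routes)
kops update cluster --name k8s.cluster.build --yes

# Roll the instance groups so nodes pick up the new spec
kops rolling-update cluster --name k8s.cluster.build --yes
```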

5. What happened after the commands executed?
The manifest was applied and the nodes were replaced, but the additional routes did not appear in the AWS route tables.

6. What did you expect to happen?
Expected kOps to create routes on the AWS side matching those specified in the additionalRoutes section of the manifest.
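The missing routes can be checked from the AWS side; a sketch, assuming the KubernetesCluster tag that kOps applies to the cluster's route tables:

```sh
# List destination CIDRs and peering targets in the cluster's route tables;
# the 10.80.0.0/16 -> pcx-... entries from additionalRoutes should appear here
aws ec2 describe-route-tables \
  --region us-west-2 \
  --filters "Name=tag:KubernetesCluster,Values=k8s.cluster.build" \
  --query 'RouteTables[].Routes[].[DestinationCidrBlock,VpcPeeringConnectionId]' \
  --output table
```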

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  generation: 11
  name: k8s.cluster.build
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": ["sts:AssumeRole"],
          "Resource": ["*"]
        }
      ]
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["sts:AssumeRole"],
          "Resource": ["*"]
        }
      ]
  api:
    loadBalancer:
      class: Network
      type: Public
  authentication: {}
  authorization:
    rbac: {}
  awsLoadBalancerController:
    enableWAF: true
    enableWAFv2: true
    enabled: true
  certManager:
    enabled: true
    managed: false
  channel: stable
  cloudConfig:
    awsEBSCSIDriver:
      enabled: true
  cloudProvider: aws
  configBase: s3://kops-state-k8s.cluster.build/k8s.cluster.build
  encryptionConfig: false
  etcdClusters:
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-west-2a
      name: a
      volumeSize: 30
    - encryptedVolume: true
      instanceGroup: master-us-west-2b
      name: b
      volumeSize: 30
    - encryptedVolume: true
      instanceGroup: master-us-west-2c
      name: c
      volumeSize: 30
    manager:
      env:
      - name: ETCD_MANAGER_HOURLY_BACKUPS_RETENTION
        value: 7d
      - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION
        value: 60d
    name: main
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-west-2a
      name: a
      volumeSize: 30
    - encryptedVolume: true
      instanceGroup: master-us-west-2b
      name: b
      volumeSize: 30
    - encryptedVolume: true
      instanceGroup: master-us-west-2c
      name: c
      volumeSize: 30
    manager:
      env:
      - name: ETCD_MANAGER_HOURLY_BACKUPS_RETENTION
        value: 7d
      - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION
        value: 60d
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    enableAdmissionPlugins:
    - NamespaceLifecycle
    - LimitRanger
    - ServiceAccount
    - DefaultStorageClass
    - DefaultTolerationSeconds
    - MutatingAdmissionWebhook
    - ValidatingAdmissionWebhook
    - ResourceQuota
    - PersistentVolumeLabel
    - NodeRestriction
    - Priority
    logFormat: json
    oidcClientID: <redacted>
    oidcIssuerURL: <redacted>
    oidcUsernameClaim: email
    tlsCipherSuites:
    - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
    - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  kubeControllerManager:
    logFormat: json
  kubeDNS:
    nodeLocalDNS:
      cpuRequest: 25m
      enabled: true
      memoryRequest: 5Mi
    provider: CoreDNS
  kubeProxy: {}
  kubeScheduler:
    logFormat: json
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
    logFormat: json
    podPidsLimit: 1024
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.30.6
  masterPublicName: api.k8s.cluster.build
  networkCIDR: <redacted>
  networking:
    calico: {}
  nonMasqueradeCIDR: <redacted>
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - additionalRoutes:
    - cidr: 10.80.0.0/16
      target: pcx-0e104...
    cidr: <redacted>
    name: us-west-2a
    type: Private
    zone: us-west-2a
  - additionalRoutes:
    - cidr: 10.80.0.0/16
      target: pcx-0e104...
    cidr: <redacted>
    name: us-west-2b
    type: Private
    zone: us-west-2b
  - additionalRoutes:
    - cidr: 10.80.0.0/16
      target: pcx-0e104...
    cidr: <redacted>
    name: us-west-2c
    type: Private
    zone: us-west-2c
  - cidr: <redacted>
    name: utility-us-west-2a
    type: Utility
    zone: us-west-2a
  - cidr: <redacted>
    name: utility-us-west-2b
    type: Utility
    zone: us-west-2b
  - cidr: <redacted>
    name: utility-us-west-2c
    type: Utility
    zone: us-west-2c
  topology:
    bastion:
      bastionPublicName: bastion.k8s.cluster.build
    dns:
      type: Public
```
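For reference, the per-subnet route that kOps was expected to create corresponds to an AWS CLI call like the one below; the route-table and peering-connection IDs are hypothetical placeholders (the real peering ID is truncated in the manifest above):

```sh
# Hypothetical manual equivalent of one additionalRoutes entry
aws ec2 create-route \
  --route-table-id rtb-EXAMPLE \
  --destination-cidr-block 10.80.0.0/16 \
  --vpc-peering-connection-id pcx-EXAMPLE
```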

8. Please run the commands with the most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
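For example, verbose output from the update step could be captured like this (the log file name is arbitrary):

```sh
# Capture maximally verbose kops output to attach to the report
kops update cluster --name k8s.cluster.build -v 10 2>&1 | tee kops-update.log
```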

9. Anything else we need to know?

k8s-ci-robot added the kind/bug label on Jan 24, 2025