Configure mesh expansion with the repository and onboarding plane endpoint #184
Comments
This is perfect, thank you @nacx - as we proceed we need to refactor the variables so we can customize each cluster when we define it, but that's probably a next-month-or-so resolution. I was thinking about the above further, and did align myself towards enabling
I agree. I thought several times about adding external-dns, because it would also help expose people's declared IngressGateways. I already have an external-dns setup for GCP, but I lack the AWS/Azure config expertise to add it there, so I parked that :) Happy to help, though: I can probably get started with the GCP bits, put the structure in place, etc., and then let you or someone else parameterize the bits/values for Azure/AWS.
I can take on expanding towards AWS/Azure - actually I do know how to do it on Azure but have never done it for GCP.
JFTR, GKE example bits for external-dns: https://github.com/tetratelabs/zta-demo-2022/tree/master/k8s/gke
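For orientation, here is a minimal sketch of the GKE-flavoured external-dns container args being discussed. The project, domain, and owner-id values are placeholders rather than the actual zta-demo-2022 settings, and the istio-gateway source is only needed if the declared IngressGateways should be published too:

```yaml
# Sketch only: external-dns args for GKE / Cloud DNS.
# Placeholder values (my-gcp-project, example.com, my-cluster) are assumptions.
containers:
  - name: external-dns
    image: registry.k8s.io/external-dns/external-dns:v0.14.0
    args:
      - --provider=google                 # Cloud DNS; swap for aws/azure once parameterized
      - --google-project=my-gcp-project   # placeholder project
      - --source=service
      - --source=istio-gateway            # also publish hosts declared on Istio Gateways
      - --domain-filter=example.com       # placeholder zone
      - --registry=txt
      - --txt-owner-id=my-cluster         # placeholder owner id
      - --policy=upsert-only
```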
Once #247 is in, this could be easily achieved by configuring this overlay in the CP:
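Roughly, the intent is an overlay on the generated ControlPlane that turns mesh expansion on and points onboarding at the per-cluster vmgateway FQDN. A minimal sketch of the spec fragment such an overlay would add follows; the meshExpansion/onboarding field names are assumptions for illustration, not a confirmed ControlPlane API:

```yaml
# Illustrative only; field names under meshExpansion are assumed, not confirmed.
spec:
  meshExpansion:
    onboarding:
      endpoint:
        hosts:
          - vm<cluster_id>-<tsb_fqdn>   # FQDN pattern proposed in this issue
```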
Today we generate an empty mesh expansion configuration in the CPs.
We could easily add some config bits to allow configuring which clusters should have mesh expansion enabled, and we could also generate a proper FQDN for the vmgateway (something like vm<cluster_id>-<tsb_fqdn>), configure it in the provider DNS, and generate the corresponding certificate, so that the cluster is ready to be used to onboard VMs. I have a WIP branch for this and plan to submit a PR soon.
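As a sketch of the DNS-plus-certificate half of that flow, assuming external-dns watches Services and cert-manager (not mentioned above, just one common option) issues the certificate, the per-cluster pieces could look roughly like this; the hostname, namespace, port, and issuer are placeholders:

```yaml
# Sketch: publish the vmgateway address and issue a certificate for it.
# vmcluster1-tsb.example.com, istio-system, and my-issuer are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: vmgateway
  namespace: istio-system
  annotations:
    # external-dns creates the provider DNS record for this FQDN
    external-dns.alpha.kubernetes.io/hostname: vmcluster1-tsb.example.com
spec:
  type: LoadBalancer
  selector:
    app: vmgateway
  ports:
    - name: tls
      port: 15443
      targetPort: 15443
---
# One possible way to generate the corresponding certificate: cert-manager
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: vmgateway-cert
  namespace: istio-system
spec:
  secretName: vmgateway-cert
  dnsNames:
    - vmcluster1-tsb.example.com
  issuerRef:
    name: my-issuer        # placeholder issuer
    kind: ClusterIssuer
```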