When requesting the service through the Istio gateway, we get ~149 RPS in proxy mode (activator in the path), and it drops to ~2 RPS in serve mode (activator not in the path).
I can provide more information if needed, and I am willing to help debug/fix this.
We might have missed something in the configuration, so this could well be a configuration problem on our side.
Hi @aqemia-aymeric-alixe, I assume you are hitting this app via the Tailscale ingress. Do you see the same issue if you hit the app from within the cluster (via app_name.namespace.svc)? I'm wondering whether it takes time for the ingress to catch up with the networking configuration. Does the RPS ever improve, or does it stay low?
Could you check the activator metrics to see whether requests are being queued during the proxy-to-serve transition?
What version of Knative are you running?
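For reference, one way to run such an in-cluster check is with a short-lived load-generator pod that hits the Service's cluster-local address directly, bypassing the Tailscale ingress. This is only a sketch: the pod name, the fortio image, the concurrency settings, and the app_name.namespace.svc target are placeholders, not values from this issue.

```yaml
# Hypothetical in-cluster load test; adjust the name, namespace, and target URL.
apiVersion: v1
kind: Pod
metadata:
  name: in-cluster-loadtest
spec:
  restartPolicy: Never
  containers:
    - name: fortio
      image: fortio/fortio
      args:
        - load
        - "-qps=0"    # unthrottled, go as fast as possible
        - "-c=8"      # 8 concurrent connections
        - "-t=30s"    # run for 30 seconds
        - "http://app_name.namespace.svc"   # placeholder cluster-local hostname
```

Comparing the RPS reported by this pod against the numbers seen through the Istio gateway should show whether the drop is tied to the ingress path or to the backing pods themselves.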
Expected Behavior
When a Knative ServerlessService goes from proxy mode to serve mode, taking the activator out of the request path, we expect RPS to stay the same or improve.
Actual Behavior
RPS drops from ~149 to ~2 when switching to serve mode.
Steps to Reproduce the Problem
Cluster: Kubernetes 1.30, IPv6
Istio and Knative installed from YAML manifests.
KServe Serverless installed on top, also from YAML manifests.
Changes made to those manifests:
Istio:
Service: LoadBalancer type set to Tailscale (see the sketch below).
We have not enabled the Istio service mesh, so there are no injected Istio sidecar proxies.
Knative:
config-deployment:
config-autoscaler:
config-default:
config-features:
Service:
We have set targetBurstCapacity to 0 for debugging purposes (see the sketch below).
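For illustration, the two changes called out above might look roughly like the following. This is a sketch under assumptions, not our actual manifests: the Tailscale exposure assumes the Tailscale Kubernetes operator and its loadBalancerClass, and the targetBurstCapacity setting is shown as the Knative per-revision annotation (it can also be set cluster-wide via the target-burst-capacity key in the config-autoscaler ConfigMap). Resource names and images are placeholders.

```yaml
# Sketch 1 (assumption): expose the Istio ingress gateway through the
# Tailscale Kubernetes operator by setting the load balancer class.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer
  loadBalancerClass: tailscale      # handled by the Tailscale operator
  selector:
    istio: ingressgateway
  ports:
    - name: http2
      port: 80
      targetPort: 8080
---
# Sketch 2 (assumption): disable target burst capacity for one Knative Service
# via the per-revision autoscaling annotation.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-app                 # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target-burst-capacity: "0"
    spec:
      containers:
        - image: ghcr.io/example/app:latest   # placeholder image
```

With target burst capacity at 0, the activator is kept out of the data path whenever the revision has ready endpoints, which is exactly the proxy-to-serve transition where we observe the RPS drop.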