I recently accidentally deployed the Datadog cluster agent with the metricsProvider enabled while KEDA was active in the cluster. The Datadog cluster agent creates an apiservice/v1beta1.external.metrics.k8s.io, but so does KEDA (see #470). Datadog's v1beta1.external.metrics.k8s.io eventually won out, so external metrics could no longer be retrieved from KEDA, but the ScaledObject never entered fallback.
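For anyone hitting the same conflict, you can check which backend currently owns the APIService with something like:

kubectl get apiservice v1beta1.external.metrics.k8s.io -o jsonpath='{.spec.service.namespace}/{.spec.service.name}'

If it points at the Datadog cluster agent's metrics service rather than the KEDA metrics apiserver, the HPA can no longer reach KEDA's external metrics.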
Expected Behavior
I expected the pod count to scale to the configured fallback replicas
Actual Behavior
Scaling paused at the current pod count instead of moving to the fallback replicas
Steps to Reproduce the Problem
1. Create a ScaledObject with fallback replicas configured, using the cloudwatch trigger (a minimal sketch follows these steps)
2. Delete apiservice/v1beta1.external.metrics.k8s.io
3. Observe the ScaledObject health status and the resulting HPA
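A minimal sketch of the ScaledObject for step 1; the name, target deployment, and load balancer dimension value are placeholders, while the fallback and trigger settings mirror the object shown under "Anything else?" below:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject
  namespace: app
spec:
  scaleTargetRef:
    kind: Deployment
    name: example-deployment
  minReplicaCount: 2
  maxReplicaCount: 6
  pollingInterval: 60
  fallback:
    failureThreshold: 3
    replicas: 4
  triggers:
    - type: aws-cloudwatch
      metricType: AverageValue
      metadata:
        awsRegion: us-east-1
        namespace: AWS/ApplicationELB
        metricName: ActiveConnectionCount
        dimensionName: LoadBalancer
        dimensionValue: app/example/0123456789abcdef
        metricStat: Sum
        metricStatPeriod: "60"
        metricCollectionTime: "300"
        targetMetricValue: "157"
        minMetricValue: "630"
        identityOwner: operator

Step 2 is then:

kubectl delete apiservice v1beta1.external.metrics.k8s.io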
Logs from KEDA operator
No response
KEDA Version
2.13.0
Kubernetes Version
1.29
Platform
Amazon Web Services
Scaler Details
cloudwatch
Anything else?
scaledobject:
Name: xxxxxxx
Namespace: app
Labels: scaledobject.keda.sh/name=xxxxxxx
Annotations: <none>
API Version: keda.sh/v1alpha1
Kind: ScaledObject
Metadata:
Creation Timestamp: 2023-12-20T18:11:50Z
Finalizers:
finalizer.keda.sh
Generation: 14
Resource Version: 783411116
UID: 8d594129-cb8a-481c-83f9-a374b2072db2
Spec:
Advanced:
Horizontal Pod Autoscaler Config:
Behavior:
Scale Down:
Policies:
Period Seconds: 1200
Type: Pods
Value: 1
Stabilization Window Seconds: 1200
Scaling Modifiers:
Fallback:
Failure Threshold: 3
Replicas: 4
Max Replica Count: 6
Min Replica Count: 2
Polling Interval: 60
Scale Target Ref:
Kind: Deployment
Name: xxxxxxxxxx
Triggers:
Metadata:
Aws Region: us-east-1
Dimension Name: LoadBalancer
Dimension Value: app/xxxxxxx/yyyyyyyy
Identity Owner: operator
Metric Collection Time: 300
Metric Name: ActiveConnectionCount
Metric Stat: Sum
Metric Stat Period: 60
Min Metric Value: 630
Namespace: AWS/ApplicationELB
Target Metric Value: 157
Metric Type: AverageValue
Type: aws-cloudwatch
Status:
Conditions:
Message: ScaledObject is defined correctly and is ready for scaling
Reason: ScaledObjectReady
Status: True
Type: Ready
Message: Scaling is performed because triggers are active
Reason: ScalerActive
Status: True
Type: Active
Message: No fallbacks are active on this scaled object
Reason: NoFallbackFound
Status: False
Type: Fallback
Message: pause annotation removed for ScaledObject
Reason: ScaledObjectUnpaused
Status: False
Type: Paused
External Metric Names:
s0-aws-cloudwatch
Health:
s0-aws-cloudwatch:
Number Of Failures: 0
Status: Happy
Hpa Name: keda-hpa-xxxxx-scaledobject
hpa:
Name: keda-hpa-xxxxxxx-scaledobject
Namespace: app
Labels: app.kubernetes.io/managed-by=keda-operator
app.kubernetes.io/name=keda-hpa-xxxxxxx-scaledobject
app.kubernetes.io/part-of=xxxxxxx-scaledobject
app.kubernetes.io/version=2.13.0
scaledobject.keda.sh/name=xxxxxxx-scaledobject
Annotations: <none>
CreationTimestamp: Wed, 21 Feb 2024 10:50:01 -0500
Reference: Deployment/app-deploy-xxxxxxx
Metrics: ( current / target )
"s0-aws-cloudwatch" (target average value): <unknown> / 157
Min replicas: 2
Max replicas: 6
Behavior:
Scale Up:
Stabilization Window: 0 seconds
Select Policy: Max
Policies:
- Type: Pods Value: 4 Period: 15 seconds
- Type: Percent Value: 100 Period: 15 seconds
Scale Down:
Stabilization Window: 1200 seconds
Select Policy: Max
Policies:
- Type: Pods Value: 1 Period: 1200 seconds
Deployment pods: 2 current / 2 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetExternalMetric the HPA was unable to compute the replica count: unable to get external metric app/s0-aws-cloudwatch/&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: xxxxxxx-scaledobject,},MatchExpressions:[]LabelSelectorRequirement{},}: unable to fetch metrics from external metrics API: the server could not find the requested resource (get s0-aws-cloudwatch.external.metrics.k8s.io)
ScalingLimited True TooFewReplicas the desired replica count is less than the minimum replica count
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedComputeMetricsReplicas 51m (x12 over 54m) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get s0-aws-cloudwatch external metric value: failed to get s0-aws-cloudwatch external metric: unable to get external metric app/s0-aws-cloudwatch/&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: xxxxxxx-scaledobject,},MatchExpressions:[]LabelSelectorRequirement{},}: unable to fetch metrics from external metrics API: the server could not find the requested resource (get s0-aws-cloudwatch.external.metrics.k8s.io)
Warning FailedGetExternalMetric 4m5s (x201 over 54m) horizontal-pod-autoscaler unable to get external metric app/s0-aws-cloudwatch/&LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: xxxxxxx-scaledobject,},MatchExpressions:[]LabelSelectorRequirement{},}: unable to fetch metrics from external metrics API: the server could not find the requested resource (get s0-aws-cloudwatch.external.metrics.k8s.io)
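While the cluster is in this state, the external metrics API can be queried directly to confirm who (if anyone) is answering for it; a quick check along these lines:

kubectl get apiservice v1beta1.external.metrics.k8s.io
kubectl get --raw /apis/external.metrics.k8s.io/v1beta1

If the APIService is missing (after step 2) or backed by the Datadog service, the second request fails or is answered by the wrong backend, which matches the "server could not find the requested resource" errors above, yet the ScaledObject's Fallback condition stays False with NoFallbackFound.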