OOM from MultithreadEventExecutorGroup under heavy load + Tasks are not correctly split between EventExecutors #2840
Unanswered
SimoneGiusso asked this question in Q&A
I have load tests against two Spring applications that do exactly the same thing in two different ways: one uses Spring Web (the blocking stack), the other the reactive stack (Spring WebFlux, etc.). Both use Redis: the regular API in the Spring Web app and the reactive API in the Spring WebFlux app.
Currently the tests show that the Spring Web app handles the load better than the Spring WebFlux app 🤔. It should be the opposite.
The Spring WebFlux application goes OOM under heavy load. I'm using a `DefaultEventExecutorGroup` with n_processors * 2 threads (4 in this case). This is the configuration:
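In outline (a minimal sketch, assuming the group is handed to Lettuce as its computation pool through `ClientResources`; bean and class names are illustrative, not necessarily the exact configuration under test):

```java
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import org.springframework.boot.autoconfigure.data.redis.LettuceClientConfigurationBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ReactiveRedisConfig {

    // n_processors * 2 threads (4 on this machine)
    @Bean(destroyMethod = "shutdownGracefully")
    public DefaultEventExecutorGroup lettuceEventExecutorGroup() {
        return new DefaultEventExecutorGroup(
                Runtime.getRuntime().availableProcessors() * 2);
    }

    // Use the group as Lettuce's computation pool
    @Bean(destroyMethod = "shutdown")
    public ClientResources lettuceClientResources(DefaultEventExecutorGroup group) {
        return DefaultClientResources.builder()
                .eventExecutorGroup(group)
                .build();
    }

    // Point Spring Boot's Lettuce auto-configuration at these resources
    @Bean
    public LettuceClientConfigurationBuilderCustomizer clientResourcesCustomizer(
            ClientResources clientResources) {
        return builder -> builder.clientResources(clientResources);
    }
}
```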
Looking at the heap dump, two things are evident:

- Most of the memory is retained by the `DefaultEventExecutor` instances, in particular by the `taskQueue` variable.
- `taskQueue.count` is very different across the 4 instances: 16, 17, 24'185, 64'663.

Below is a screenshot of the thread dump:
For the Spring Web application I have a similar configuration for the `redisTemplate` & `redisCacheManager`. In the `LettuceClientConfigurationBuilderCustomizer` I just set the `disconnectedBehaviour` to `REJECT_COMMANDS`.
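Roughly like this (a minimal sketch; the builder method in the Lettuce API is spelled `disconnectedBehavior`, and the class name is illustrative):

```java
import io.lettuce.core.ClientOptions;
import org.springframework.boot.autoconfigure.data.redis.LettuceClientConfigurationBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BlockingRedisConfig {

    // Reject commands instead of buffering them while the connection is down
    @Bean
    public LettuceClientConfigurationBuilderCustomizer disconnectedBehaviourCustomizer() {
        return builder -> builder.clientOptions(
                ClientOptions.builder()
                        .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
                        .build());
    }
}
```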
Why is the Spring Web application, and in particular its Redis client, working better than the WebFlux version under heavy load? I'd expect the opposite.
P.S. I specified the `EventExecutorGroup` to try to improve performance, since by default the Lettuce reactive API uses a single connection, and Netty's threading model assigns just one thread to each connection. In other words, a single I/O event-loop thread is used for all operations (e.g. encoding, decoding, etc.). However, evidently, this didn't help.