Dynamically allocate write buffer if needed. (ready for merge) #324
Conversation
I've merged this PR into my project. Let me test it in my environment. I set up 120 clients, each sending 2,500 messages, and split the same 120 clients into four groups, with every group subscribing to all of the messages. In other words, the broker will deliver 1,200,000 messages to the 120 clients (120 × 2,500 = 300,000 published, × 4 subscriber groups). Let me observe it for one day. By the way, I've changed bconn to "*bufio.Reader".
It's been 20 hours, and the new code is running well in the test environment. The 120 clients sent 19.4 billion messages to the broker, the four groups of subscribers received 95.8 billion messages from the broker, and there was no data loss. I have set my buffer size to 10,485,760. The CPU usage of the broker process stays at around 3800%, and memory usage is approximately 6.5 GB. The new code doesn't show a significant performance difference compared to the previous code that used bufio. I look forward to someone simulating a scenario with a large number of client connections.
This is super fascinating, and while I've been very busy with work, I have been following it and I'm interested to see how it goes. I don't have any environments for testing at the sort of scale you are both using, so I'm very appreciative of the feedback and statistics!
Pull Request Test Coverage Report for Build 7159499807
💛 - Coveralls
I made the following changes:
Although my environment has tens of millions of clients, it's mostly client->server messages with very few server->client messages, so I can't evaluate the true impact either.
@mochi-co, the perf test we've run shows at least a 5% improvement in publishing rate for busy clients. It is ready to review/merge.
Generally this looks good to me - easy to understand code, nice and clean, and solves a problem. Love it! Approved
@thedevop It makes sense to me. If the packet size exceeds ClientNetWriteBufferSize (default 2 KB), the data is sent directly to the connection; smaller packets share a write buffer, which reduces the frequency of write I/O. Excellent work! Approved.
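For readers skimming the thread, the dispatch described in this review might look roughly like the following sketch. This is an illustration only, not the actual mochi-mqtt code: the clientWriter type, its field names, and the flush-before-direct-write ordering are assumptions.

```go
package example

import (
	"bytes"
	"net"
)

// Default ClientNetWriteBufferSize mentioned above (2 KB).
const clientNetWriteBufferSize = 2048

// clientWriter is a hypothetical writer: small packets share a buffer,
// large packets bypass it.
type clientWriter struct {
	conn net.Conn
	buf  bytes.Buffer
}

// WritePacket buffers packets at or below the threshold and writes
// larger packets directly to the connection.
func (w *clientWriter) WritePacket(pkt []byte) error {
	if len(pkt) > clientNetWriteBufferSize {
		// Flush buffered small packets first so packet order is preserved,
		// then send the large packet straight to the connection.
		if err := w.Flush(); err != nil {
			return err
		}
		_, err := w.conn.Write(pkt)
		return err
	}
	_, err := w.buf.Write(pkt) // small packet: accumulate in the shared buffer
	return err
}

// Flush writes any accumulated small packets to the connection.
func (w *clientWriter) Flush() error {
	if w.buf.Len() == 0 {
		return nil
	}
	_, err := w.conn.Write(w.buf.Bytes())
	w.buf.Reset()
	return err
}
```

The point, as the review describes it, is that large payloads skip the copy into the intermediate buffer while small packets are still batched into fewer writes.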
@mochi-co, I don't think this will reach 6 reviewers, so when you have a chance, can you please merge? I have some follow-up changes that aren't related to this buffer but can improve publishing rate, such as reducing payload writes during encoding and using sync.Pool to reduce memory allocations/GCs during packet encoding.
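As a side note on the sync.Pool idea mentioned above, a generic sketch of pooling encode buffers (not code from this PR or the broker, just an illustration of the pattern) could look like:

```go
package example

import (
	"bytes"
	"sync"
)

// encodePool hands out reusable buffers; New is called only when the
// pool is empty, so steady-state encoding allocates far less.
var encodePool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// encodePacket is a placeholder for real packet encoding: it borrows a
// buffer, encodes into it, copies the result out, and returns the buffer
// to the pool so it can be reused instead of garbage-collected.
func encodePacket(payload []byte) []byte {
	buf := encodePool.Get().(*bytes.Buffer)
	buf.Reset()

	// Real MQTT encoding (fixed header, variable header, properties) would
	// go here; we only append the payload for illustration.
	buf.Write(payload)

	out := make([]byte, buf.Len())
	copy(out, buf.Bytes())

	encodePool.Put(buf)
	return out
}
```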
Yup, looks good to me, I just haven't had a chance - I'll merge and release now!
Released in v2.4.3, thank you @thedevop! |