History sync queries being too expensive #881
Thanks for this report. Having queries this expensive is definitely a bug. But can you please provide us with some information to narrow this down?
@oxzi PostgreSQL limits the maximum query size. The history table has around 60M rows.
Yes, it's possible. I wasn't able to narrow down the connected client yet. It might be useful to introduce some partitioning to those tables.
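As a rough illustration of what such partitioning could look like, here is a minimal sketch of PostgreSQL declarative range partitioning on event_time. The table and column names are borrowed from this thread; this is not the actual Icinga DB schema or an official migration:

-- Hypothetical sketch only; the column list is trimmed and does not match the real schema.
CREATE TABLE history_partitioned (
    id bytea NOT NULL,
    host_id bytea,
    service_id bytea,
    event_time bigint NOT NULL
    -- remaining history columns omitted
) PARTITION BY RANGE (event_time);

-- One partition per year of epoch-millisecond timestamps, e.g. 2023 and 2024:
CREATE TABLE history_2023 PARTITION OF history_partitioned
    FOR VALUES FROM (1672531200000) TO (1704067200000);
CREATE TABLE history_2024 PARTITION OF history_partitioned
    FOR VALUES FROM (1704067200000) TO (1735689600000);

Partition pruning would only help queries that actually filter on event_time, as the one in this thread does.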
Thanks for your fast reply.
That would be great, thanks.
Your history has become quite large. I don't want to put the blame on you, just to state the fact that there are potentially performance issues linked to this size. Unless you need all this data, you could enable the retention to shrink your history. Out of curiosity, and to get a better understanding of your setup and potentially similar ones: How old is the history? In which time interval were these 60M records created?
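To illustrate what shrinking the history roughly amounts to at the SQL level, here is a sketch; the 365-day cutoff is an arbitrary example, event_time is assumed to hold Unix epoch milliseconds as the timestamps in this thread suggest, and Icinga DB's built-in retention should be preferred over manual deletes, not least because it also takes care of the related history tables:

-- Illustration only: prefer the built-in retention over manual deletes like this.
-- Assumes event_time stores Unix epoch milliseconds; 365 days is an example cutoff.
DELETE FROM history
WHERE event_time < (extract(epoch FROM now() - interval '365 days') * 1000)::bigint;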
@nilmerg: In the potential case this query was sent by Icinga DB Web, might this be related to ticket number 56180?
Thanks. This is actually a good idea to look into.
Yes, I'm definitely planning to shrink the data. Just thought this might help to optimize the performance.
If I'm parsing the timestamp correctly, the oldest data would be from 2 years ago. We've tried to use Icinga for tracking pings between hosts, but it was a bad idea. The full query looks like this:
The query is probably generated by Icinga DB Web:
Thanks for the insights. Depending on the amount of pings, a time series database might be the better option, I guess. Either way, cleaning up the old data should at least result in a smaller history and potentially faster queries.
Indeed. I took a look at PostgreSQL's query plan via an EXPLAIN.
For reference, without using your custom sort key:
The cost shrank noticeably. To be sure, I compared the two queries, and the only difference was the definition at icingadb/schema/pgsql/schema.sql, line 734 in ff280ca.
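For anyone who wants to reproduce such a plan, a minimal approach is to wrap the inner subquery quoted later in this thread in EXPLAIN; this is generic PostgreSQL usage, not a statement about what Icinga DB itself runs:

-- ANALYZE executes the statement, so expect it to take as long as the real query.
EXPLAIN (ANALYZE, BUFFERS)
SELECT history.id FROM history
INNER JOIN host history_host ON history_host.id = history.host_id
INNER JOIN service_state history_service_state ON history_service_state.service_id = history.service_id
WHERE history.event_time <= 1735686000000
ORDER BY history_service_state.last_state_change DESC LIMIT 26;

Plain EXPLAIN without ANALYZE only shows estimated costs and does not execute the statement.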
@deric Could you please test the performance of the following query and share the query plan?
SELECT * FROM history
INNER JOIN host_state history_host_state ON history_host_state.host_id = history.host_id
INNER JOIN service history_service ON history_service.id = history.service_id
INNER JOIN service_state history_service_state ON history_service_state.service_id = history.service_id
LEFT JOIN comment_history history_comment ON history_comment.comment_id = history.comment_history_id
LEFT JOIN downtime_history history_downtime ON history_downtime.downtime_id = history.downtime_history_id
LEFT JOIN flapping_history history_flapping ON history_flapping.id = history.flapping_history_id
LEFT JOIN notification_history history_notification ON history_notification.id = history.notification_history_id
LEFT JOIN acknowledgement_history history_acknowledgement ON history_acknowledgement.id = history.acknowledgement_history_id
LEFT JOIN state_history history_state ON history_state.id = history.state_history_id
WHERE history.id IN (
SELECT history.id FROM history
INNER JOIN host history_host ON history_host.id = history.host_id
INNER JOIN service_state history_service_state ON history_service_state.service_id = history.service_id
WHERE (history.event_time <= 1735686000000)
ORDER BY history_service_state.last_state_change DESC LIMIT 26
);
And maybe fix the timestamp I've used 😆
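As a quick sanity check of that literal (generic PostgreSQL, nothing Icinga DB specific), the epoch-millisecond value from the WHERE clause can be converted like this:

-- Converts the epoch-millisecond literal from the WHERE clause into a readable timestamp.
SELECT to_timestamp(1735686000000 / 1000.0);  -- 2024-12-31 23:00:00+00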
@lippserd It would produce the following query plan:
I'm having issues with icingadb queries periodically overloading the PostgreSQL backend. The query looks like this:
These queries generate over 250 GB in PostgreSQL temp files because the result is probably too big to sort in memory (just 64 GB).
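If it helps to confirm where those temporary gigabytes come from, PostgreSQL tracks temp-file usage per database in the standard pg_stat_database view; this is plain PostgreSQL monitoring, not anything Icinga DB specific:

-- Cumulative temporary-file usage per database since the last statistics reset.
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
ORDER BY temp_bytes DESC;

Setting log_temp_files in postgresql.conf additionally logs temp files above a size threshold, which makes it easier to tie them to individual queries.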
I guess I have to drop the history data... but my question is: do these queries really need to be so expensive?