
Make Benchmark Timing output more sensible #69

Open
miq-bot opened this issue Feb 26, 2019 · 1 comment

miq-bot commented Feb 26, 2019

The current Benchmark timings do a good job of capturing how long various blocks took, but one thing they're missing is the ability to group, or nest, timings the same way we nest the calls to Benchmark.realtime_block in the code.

For instance, in the VMware EMS Refresh process, we have several calls to Benchmark.realtime_block. If I grouped the timing keys together to represent their call stack, it would look like:

- ems_refresh
  - collect_inventory_for_targets
    - get_ems_data
    - get_vc_data
    - get_vc_data_ems_customization_specs (optional)
    - filter_vc_data
    - get_vc_data_host_scsi
  - parse_targeted_inventory
    - parse_vc_data
  - save_inventory
    - db_save_inventory
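To make the flat behavior concrete, here is a simplified, self-contained sketch of how calls like these nest. The `realtime_block` shim below is a stand-in for ManageIQ's `Benchmark.realtime_block` (it just accumulates every timing into one flat, thread-local hash); the method names mirror the tree above, and the stubbed bodies are hypothetical:

```ruby
require "benchmark"

# Simplified stand-in for Benchmark.realtime_block: time the block and
# merge the result into a single flat, thread-local timings hash.
def realtime_block(name)
  result  = nil
  seconds = Benchmark.realtime { result = yield }
  timings = Thread.current[:timings] ||= {}
  timings[name] = timings.fetch(name, 0.0) + seconds
  [result, timings]
end

# Bodies are stubbed; names mirror the call tree above.
def collect_inventory_for_targets
  realtime_block(:get_ems_data) { :ems_data }
  realtime_block(:get_vc_data)  { :vc_data }
end

def refresh
  _, timings = realtime_block(:ems_refresh) do
    realtime_block(:collect_inventory_for_targets) { collect_inventory_for_targets }
  end
  timings
end
```

Even though the calls nest three levels deep, every key lands in the same flat hash, which is exactly the problem shown in the log output below.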

However, this is what we currently output when we log the Benchmark.realtime_block timings:

[----] I, [2016-09-25T20:29:39.651349 #30193:ea3990] INFO -- : 
MIQ(ManageIQ::Providers::Vmware::InfraManager::Refresher#refresh) 
EMS: [EMS NAME], id: [50000000000004] Refreshing targets for EMS...Complete - 
Timings {:server_dequeue=>0.0058367252349853516, :get_ems_data=>9.814793109893799, 
:get_vc_data=>89.20815706253052, :filter_vc_data=>0.020371675491333008, 
:get_vc_data_host_scsi=>16.73558282852173, 
:collect_inventory_for_targets=>115.83132314682007, :parse_vc_data=>0.729074239730835, 
:parse_targeted_inventory=>0.729581356048584, :db_save_inventory=>32.98734211921692, 
:save_inventory=>32.98742628097534, :ems_refresh=>149.55086016654968}

If I scale that back to just the hash of timings, it looks like (newlines and formatting added):

{
 :server_dequeue                => 0.0058367252349853516, 
 :get_ems_data                  => 9.814793109893799, 
 :get_vc_data                   => 89.20815706253052, 
 :filter_vc_data                => 0.020371675491333008, 
 :get_vc_data_host_scsi         => 16.73558282852173, 
 :collect_inventory_for_targets => 115.83132314682007, 
 :parse_vc_data                 => 0.729074239730835, 
 :parse_targeted_inventory      => 0.729581356048584, 
 :db_save_inventory             => 32.98734211921692, 
 :save_inventory                => 32.98742628097534, 
 :ems_refresh                   => 149.55086016654968
}

There's no organization to this data. Instead, it's just a collection of numbers without any context. If this hash represented the nested nature of these calls, it could look like (newlines and formatting added):

{
  :server_dequeue => 0.0058367252349853516, 
  :ems_refresh => 149.55086016654968,
  :ems_refresh_timings => { 
    :collect_inventory_for_targets => 115.83132314682007, 
    :collect_inventory_for_targets_timings => {
      :get_ems_data => 9.814793109893799, 
      :get_vc_data => 89.20815706253052, 
      :filter_vc_data => 0.020371675491333008, 
      :get_vc_data_host_scsi => 16.73558282852173, 
    },
    :parse_targeted_inventory => 0.729581356048584, 
    :parse_targeted_inventory_timings => {
      :parse_vc_data => 0.729074239730835,
    },
    :save_inventory => 32.98742628097534,
    :save_inventory_timings => {
      :db_save_inventory => 32.98734211921692,
    }
  }
}

This gives a much clearer view, imo, of how these numbers actually compose a larger refresh timing.
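One way the nested output above could be produced is with a nesting-aware variant of the timing helper. The sketch below is purely hypothetical, not the real ManageIQ API: the `NestedBenchmark` module name, the thread-local stack of timing hashes, and the `"<name>_timings"` key convention are all assumptions made for illustration:

```ruby
require "benchmark"

# Hypothetical nesting-aware variant of Benchmark.realtime_block.
# Each call records its own duration in the enclosing call's hash and
# collects any nested timings under a "<name>_timings" sub-hash.
module NestedBenchmark
  STACK_KEY = :nested_benchmark_current

  def self.realtime_block(name)
    parent = Thread.current[STACK_KEY] ||= {}
    child  = {}

    Thread.current[STACK_KEY] = child
    result = nil
    begin
      seconds = Benchmark.realtime { result = yield }
    ensure
      Thread.current[STACK_KEY] = parent  # restore even if the block raises
    end

    parent[name] = seconds
    parent[:"#{name}_timings"] = child unless child.empty?
    [result, parent]
  end
end
```

With this, nested calls such as `realtime_block(:ems_refresh) { realtime_block(:save_inventory) { ... } }` would yield a hash shaped like the proposal above, with `:ems_refresh_timings` containing `:save_inventory` and so on.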

I would love to see other examples of nested Benchmark.realtime_block log output that either corroborates or disproves the need for better timing grouping.


This issue was moved to this repository from ManageIQ/manageiq#11692, originally opened by @blomquisg


This issue was moved to this repository from ManageIQ/manageiq-gems-pending#420, originally opened by @miq-bot


miq-bot commented Jun 11, 2020

This issue has been automatically marked as stale because it has not been updated for at least 3 months.

If you can still reproduce this issue on the current release or on master, please reply with all of the information you have about it in order to keep the issue open.

Thank you for all your contributions! More information about the ManageIQ triage process can be found in the triage process documentation.

@miq-bot miq-bot added the stale label Jun 11, 2020
@Fryguy Fryguy added pinned and removed stale labels Jul 6, 2020
@Fryguy Fryguy added the size/s label May 3, 2021