
Implement runner#wait in a generic way #179

Closed
wants to merge 1 commit

Conversation

@kbrock kbrock (Member) commented May 1, 2024

pulled out of #178

Depends upon:

Overview

The builtin runner does not need a wait method since it does not listen for completed tasks. But other runners may need a wait.

So the solution is to make wait optional, and set up a thread for every runner that implements that interface.

Before

Run a thread to support wait for the "docker" runner, but no others.

After

Run a thread for every runner that requires wait.

Developer speak

# handle docker scheme
Thread.new do
  loop do
    Runner.for_resource("docker").wait do |event, runner_context|
      queue.push([event, runner_context])
    end
  end
end

aka:

# handle docker scheme
runner = Runner.for_resource("docker")
Thread.new do
  loop do
    runner.wait do |event, runner_context|
      queue.push([event, runner_context])
    end
  end
end

handling multiple runners:

# handle all schemes
Runner.runners.each do |runner|
  Thread.new do
    loop do
      runner.wait do |event, runner_context|
        queue.push([event, runner_context])
      end
    end
  end
end
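
Because wait is optional, each runner is guarded with respond_to?(:wait), as in the diff lines quoted in the review below. A sketch of the combined setup; everything past the guard is my assumption about how the threads get collected:

# handle all schemes, but only for runners that opt in to wait
queue = Queue.new
wait_threads =
  Runner.runners.map do |runner|
    # runners without async events don't define wait, so they get no thread
    next unless runner.respond_to?(:wait)

    Thread.new do
      loop do
        runner.wait do |event, runner_context|
          queue.push([event, runner_context])
        end
      end
    end
  end.compact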

Aside

I had wanted to add something for exe/floe's interface, since that is also hard-coded, but this addresses my primary concerns.

@kbrock kbrock added the enhancement New feature or request label May 1, 2024
@kbrock kbrock requested review from agrare and Fryguy as code owners May 1, 2024 13:57
@kbrock kbrock mentioned this pull request May 1, 2024
queue = Queue.new
wait_threads =
  Runner.runners.map do |runner|
    next unless runner.respond_to?(:wait)
@agrare agrare (Member) commented:
@kbrock if we don't have a wait thread what is going to unblock the queue.pop

            # Block until an event is raised
            event, runner_context = queue.pop
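
For context, a hypothetical sketch of the consuming side in question; only the queue.pop line is from the diff, while the surrounding loop and the dispatch step are assumptions:

loop do
  # Block until an event is raised; with no wait threads running,
  # nothing ever pushes, so this pop would block forever.
  event, runner_context = queue.pop
  # ... dispatch the event to the workflow that owns runner_context ...
end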

@kbrock kbrock commented May 1, 2024

@agrare That thread is only needed to handle the async nature of the "docker" runner. I commented out the "docker" thread and was able to run a workflow just fine:

{
  "StartAt": "a",
  "States": {
    "a": {
      "Type": "Pass",
      "Next": "b"
    },
    "b": {
      "Type": "Wait",
      "Seconds": 1,
      "Next": "c"
    },
    "c": {
      "Type": "Succeed"
    }
  }
}

ASIDE: I was also able to run an awesome:// Task without this thread.

- runner = @runners[scheme] = @runners[scheme].call if runner.is_a?(Proc)
+ runner = @runners[scheme] = @runners[scheme].call if runner.kind_of?(Proc)
@kbrock kbrock commented May 8, 2024

This is a lazy eval, but it caches the result on the first call.
I do wonder whether this is thread safe. Ruby's GVL probably takes care of it.
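
For illustration, a minimal sketch of making that resolution explicitly thread-safe; resolve_runner and RUNNERS_MUTEX are hypothetical names, not part of this PR:

class Runner
  # Hypothetical sketch: serialize proc resolution so two threads can't
  # both call the proc and race on the cached assignment.
  RUNNERS_MUTEX = Mutex.new

  def self.resolve_runner(scheme)
    RUNNERS_MUTEX.synchronize do
      runner = @runners[scheme]
      runner = @runners[scheme] = runner.call if runner.kind_of?(Proc)
      runner
    end
  end
end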

Comment on lines +31 to +36
def runners
  @runners.each_value.map do |runner|
    runner = runner.call if runner.kind_of?(Proc)
    runner
  end
end
@kbrock kbrock commented May 8, 2024

This is essentially @runners.values with the procs resolved.

I'm realizing that on startup 100% of runners are procs, but after one task is run, 0% of runners are procs.
I wonder if we can resolve them up front so we don't have to handle this case during regular runtime activity.
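
A sketch of what resolving up front could look like; resolve_runners! is a hypothetical name, and it assumes @runners maps scheme strings to runner instances or procs. Called once after registration, steady-state lookups become plain hash reads:

# Hypothetical sketch: resolve every registered proc once, up front,
# so steady-state lookups never have to check for Proc.
def self.resolve_runners!
  @runners.transform_values! do |runner|
    runner.kind_of?(Proc) ? runner.call : runner
  end
end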


@Fryguy Fryguy commented:

I feel supporting multiple schemes by a single runner is currently far-fetched.
Ignoring that use case allows us to keep to our current simple runner registration process: Floe::Runner.register_scheme.

In the future, if we come up with a case where there are multiple schemes, we can make our registration process more complicated.

@kbrock kbrock commented May 8, 2024

@Fryguy re: use case

I had implemented #178 using a synchronous/blocking implementation (first commit) and an async/event implementation (second commit).

The sync/blocking implementation works fine without this PR.
But if we have events triggered once the process is complete, then we need the wait thread to process these events.

This PR provides the thread for any runner that needs one.

Comment on lines 78 to 92
- def wait(timeout: nil, events: %i[create update delete])
-   raise NotImplementedError, "Must be implemented in a subclass"
- end
+ # Implement if the runner needs a watch for async events
+ # def wait(timeout: nil, events: %i[create update delete])
+ #   raise NotImplementedError, "Must be implemented in a subclass"
+ # end
@kbrock kbrock commented:

Some runners do not have async events. Putting an empty method into wait means the loop calls an empty method in a tight spin. So having a switch that says a thread is not needed seems the best route (i.e., not defining wait at all).
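
A sketch of how a runner opts in under that scheme; the class names are hypothetical, while the wait signature matches the one quoted above:

# Hypothetical sketch: only runners that define #wait get a wait thread.
class SyncRunner < Floe::Runner
  # no #wait defined, so respond_to?(:wait) is false and no thread is spawned
end

class AsyncRunner < Floe::Runner
  def wait(timeout: nil, events: %i[create update delete])
    # block until the underlying system reports an event,
    # then yield it to the caller as (event, runner_context)
  end
end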

@kbrock kbrock force-pushed the common_wait branch 2 times, most recently from 53db854 to 82f320d on June 12, 2024 17:49
@miq-bot miq-bot commented Jun 13, 2024

Checked commit kbrock@7e419b0 with ruby 2.7.8, rubocop 1.56.3, haml-lint 0.51.0, and yamllint
2 files checked, 0 offenses detected
Everything looks fine. 🏆

@kbrock kbrock commented Jun 17, 2024

Reducing branches: handling this with awesome_spawn instead.

@kbrock kbrock closed this Jun 17, 2024
@kbrock kbrock deleted the common_wait branch June 17, 2024 21:38