htlcswitch: use fn.GoroutineManager #9140

Open · starius wants to merge 3 commits into master from goroutines
Conversation

starius (Collaborator) commented Sep 27, 2024

Change Description

Replaced the use of s.quit and s.wg with s.gm (GoroutineManager). A WaitGroup is still needed to wait for handleLocalResponse: if it were switched to s.gm, it might skip running, which would have unclear consequences. Once handleLocalResponse is changed to run without a goroutine, we can remove the WaitGroup completely.

This fixes a race condition between s.wg.Add(1) and s.wg.Wait().
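
For illustration, here is a minimal sketch of the racy pattern (a hypothetical simplification, not the actual switch code). When the WaitGroup counter is at zero, calling wg.Add(1) concurrently with wg.Wait() is a documented misuse of sync.WaitGroup, and the race detector flags it:

package main

import (
	"sync"
	"time"
)

type server struct {
	wg   sync.WaitGroup
	quit chan struct{}
}

// getResult mirrors the shape of GetAttemptResult: it can be called
// concurrently with Stop, so this Add can race with the Wait below
// when the counter is at zero.
func (s *server) getResult() {
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		select {
		case <-time.After(time.Second): // placeholder for real work
		case <-s.quit:
		}
	}()
}

// Stop mirrors Switch.Stop: signal shutdown, then wait. If getResult
// runs at the same moment, Wait may return while a goroutine is
// still being registered.
func (s *server) Stop() {
	close(s.quit)
	s.wg.Wait()
}

func main() {
	s := &server{quit: make(chan struct{})}
	go s.getResult()
	s.Stop()
}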

Steps to Test

I added a test which used to fail under -race before this commit.

$ cd htlcswitch

$ go test -race -run TestSwitchGetAttemptResultStress

The test crashes with a data race if the changes to the switch implementation are reverted.
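
For a sense of the reproduction technique, here is a hypothetical test of the same shape against the simplified server sketched above (not the actual TestSwitchGetAttemptResultStress; it assumes the testing and sync imports):

func TestStopRace(t *testing.T) {
	for i := 0; i < 100; i++ {
		s := &server{quit: make(chan struct{})}

		var started sync.WaitGroup
		started.Add(2)

		// Run getResult and Stop concurrently; under -race the
		// detector flags the Add/Wait race on s.wg.
		go func() { defer started.Done(); s.getResult() }()
		go func() { defer started.Done(); s.Stop() }()
		started.Wait()
	}
}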

Pull Request Checklist

Testing

  • Your PR passes all CI checks.
  • Tests covering the positive and negative (error paths) are included.
  • Bug fixes contain tests triggering the bug to prevent regressions.

Code Style and Documentation

coderabbitai bot commented Sep 27, 2024

Review skipped: auto reviews are limited to specific labels (llm-review). To trigger a single review, invoke the @coderabbitai review command.

@starius starius mentioned this pull request Sep 27, 2024
@starius starius force-pushed the goroutines branch 2 times, most recently from 8810118 to 88fbc4b Compare October 3, 2024 15:27
@starius starius force-pushed the goroutines branch 2 times, most recently from 8395cca to e001027 Compare October 7, 2024 19:00
@starius starius changed the title [WIP] htlcswitch: use fn.GoroutineManager htlcswitch: use fn.GoroutineManager Oct 11, 2024
@starius starius marked this pull request as ready for review October 11, 2024 15:50
@saubyk saubyk requested review from Crypt-iQ and ellemouton October 15, 2024 16:54
@saubyk saubyk added this to the v0.19.0 milestone Oct 15, 2024
ellemouton (Collaborator) commented:

@starius - I think these unit test failures are related to this PR - maybe take a look at fixing those up first & then re-ping reviewers when ready?

Resolved review threads: htlcswitch/switch.go (1), htlcswitch/switch_test.go (6).
Crypt-iQ (Collaborator) left a comment

I couldn't reproduce the race condition with the attached test, do you have an error trace of it?

Resolved review thread on htlcswitch/switch.go.
}()
})
if err != nil {
return
Collaborator:

don't think this should return?

starius (Author):

Fixed, added a comment. Now this section looks like this:

                // When this time ticks, then it indicates that we should
                // collect all the forwarding events since the last interval,
                // and write them out to our log.
                case <-s.cfg.FwdEventTicker.Ticks():
                        // The error of Go is ignored: if it is shutting down,
                        // the loop will terminate on the next iteration, in
                        // s.gm.Done case.
                        _ = s.gm.Go(func(ctx context.Context) {
                                err := s.FlushForwardingEvents()
                                if err != nil {
                                        log.Errorf("unable to flush "+
                                                "forwarding events: %v", err)
                                }
                        })


starius commented Oct 23, 2024

@Crypt-iQ

I couldn't reproduce the race condition with the attached test, do you have an error trace of it?

I pushed branch reproduce-race to my fork.

In that branch:

htlcswitch$ go test -race -run TestSwitchGetAttemptResultStress
==================
WARNING: DATA RACE
Read at 0x00c0001d4118 by goroutine 21:
  runtime.raceread()
      <autogenerated>:1 +0x1e
  github.com/lightningnetwork/lnd/htlcswitch.(*Switch).GetAttemptResult()
      /home/user/lnd/htlcswitch/switch.go:496 +0x1c4
  github.com/lightningnetwork/lnd/htlcswitch.TestSwitchGetAttemptResultStress.func1()
      /home/user/lnd/htlcswitch/switch_test.go:3211 +0x168

Previous write at 0x00c0001d4118 by goroutine 22:
  runtime.racewrite()
      <autogenerated>:1 +0x1e
  github.com/lightningnetwork/lnd/htlcswitch.(*Switch).Stop()
      /home/user/lnd/htlcswitch/switch.go:1995 +0x1e9
  github.com/lightningnetwork/lnd/htlcswitch.TestSwitchGetAttemptResultStress.func2()
      /home/user/lnd/htlcswitch/switch_test.go:3232 +0xae

Goroutine 21 (running) created at:
  github.com/lightningnetwork/lnd/htlcswitch.TestSwitchGetAttemptResultStress()
      /home/user/lnd/htlcswitch/switch_test.go:3203 +0x356
  testing.tRunner()
      /home/user/.goroot/src/testing/testing.go:1690 +0x226
  testing.(*T).Run.gowrap1()
      /home/user/.goroot/src/testing/testing.go:1743 +0x44

Goroutine 22 (finished) created at:
  github.com/lightningnetwork/lnd/htlcswitch.TestSwitchGetAttemptResultStress()
      /home/user/lnd/htlcswitch/switch_test.go:3222 +0x45c
  testing.tRunner()
      /home/user/.goroot/src/testing/testing.go:1690 +0x226
  testing.(*T).Run.gowrap1()
      /home/user/.goroot/src/testing/testing.go:1743 +0x44
==================
--- FAIL: TestSwitchGetAttemptResultStress (0.08s)
    testing.go:1399: race detected during execution of test
FAIL
exit status 1
FAIL    github.com/lightningnetwork/lnd/htlcswitch      0.380s

@starius starius force-pushed the goroutines branch 2 times, most recently from 7cb95ef to 662c47b Compare October 24, 2024 00:30

starius commented Oct 24, 2024

@starius - I think these unit test failures are related to this PR - maybe take a look at fixing those up first & then re-ping reviewers when ready?

The test failure was caused by an extra call to s.Stop in a defer. I removed it.

ellemouton (Collaborator) left a comment

Thanks for the updates @starius!

Logic looks good, but I have some opinions about the API of the fn.Go call that I think is worth discussing before we merge. Would love to hear what @yyforyongyu & @ProofOfKeags think too.

Resolved review threads: htlcswitch/switch_test.go (1), htlcswitch/switch.go (3).
@ProofOfKeags ProofOfKeags self-requested a review October 29, 2024 15:39
ProofOfKeags (Collaborator) commented:

What's the prio on this? I want to review but I need to balance with other stuff.


saubyk commented Oct 30, 2024

What's the prio on this? I want to review but I need to balance with other stuff.

Not critical. You can focus on P0 stuff before addressing this.

yyforyongyu (Member) left a comment

Sorry a bit late in the game, but is there an issue page describing what the issue is?

I also don't understand the struct GoroutineManager - it looks like it's putting a mutex to guard the wait group operations?

My instinct is that this is solving the wrong problem - we should always know when/where we call wg.Add and wg.Wait; if not, we should refactor our code so we always know when we call wg.Add and wg.Wait. I guess other people have run into this issue before too.

Resolved review thread on htlcswitch/switch.go.
starius added a commit to starius/lnd that referenced this pull request Nov 14, 2024

starius commented Nov 26, 2024

Sorry a bit late in the game, but is there an issue page describing what the issue is?

@yyforyongyu Thank you for the suggestion!
I opened #9308 to describe the original issue.

I also don't understand the struct GoroutineManager - it looks like it's putting a mutex to guard the wait group operations?

WaitGroup cannot be directly used to track goroutines in the scenario we encounter in htlcswitch. The issue arises when we have a long-lived object (the switch) that launches goroutines during its lifecycle (via the GetAttemptResult method, which calls wg.Add(1)) and a Stop() method, which cancels running goroutines (using context cancellation) and waits for them to complete (via wg.Wait()).

In this setup, wg.Add(1) and wg.Wait() can be called in parallel when the WaitGroup counter is at 0. At this point, WaitGroup cannot determine whether it should wait or not because the outcome depends on the timing and order of these calls. Essentially, this creates a situation where switch.Stop() doesn’t know whether to wait for a goroutine launched by GetAttemptResult if it was initiated at the same time Stop() was called. This results in a race condition.

GoroutineManager resolves this issue by introducing a Go method that is synchronized with the Stop method. This ensures that either a goroutine is successfully launched or the Go method returns false. This synchronization is achieved by using a mutex alongside the WaitGroup.
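
To make the mechanism concrete, here is an illustrative sketch of that synchronization (an assumed simplification, not the actual fn implementation). Go and Stop share a mutex, so a goroutine is either registered with the WaitGroup before Stop starts waiting, or Go reports that the manager has already stopped:

package main

import (
	"context"
	"sync"
)

type manager struct {
	mu      sync.Mutex
	wg      sync.WaitGroup
	ctx     context.Context
	cancel  context.CancelFunc
	stopped bool
}

func newManager() *manager {
	ctx, cancel := context.WithCancel(context.Background())
	return &manager{ctx: ctx, cancel: cancel}
}

// Go runs f in a goroutine unless the manager has stopped. Holding
// the mutex across the stopped check and wg.Add removes the window
// in which Add could race with Wait.
func (m *manager) Go(f func(ctx context.Context)) bool {
	m.mu.Lock()
	defer m.mu.Unlock()

	if m.stopped {
		return false
	}
	m.wg.Add(1)
	go func() {
		defer m.wg.Done()
		f(m.ctx)
	}()

	return true
}

// Stop marks the manager stopped, cancels the shared context, and
// waits for every goroutine that Go managed to register.
func (m *manager) Stop() {
	m.mu.Lock()
	m.stopped = true
	m.cancel()
	m.mu.Unlock()

	m.wg.Wait()
}

func main() {
	m := newManager()
	m.Go(func(ctx context.Context) { <-ctx.Done() })
	m.Stop()
}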

My instinct is this is solving the wrong problem - we should always know when/where we call wg.Add and wg.Wait, if not, we should refactor our code so we always know when we cal wg.Add and wg.Wait. I guess other people have run into this issue before too.

I agree that, ideally, the code should be refactored into an event-loop style, centralizing all goroutine launches and state changes within a single goroutine and using channels to transmit data to and from it. This approach aligns with the patterns we follow in other packages. However, implementing such a change would require significant time and extensive modifications to the package. What are your thoughts?

ellemouton (Collaborator) left a comment

@starius - I think this still needs to be updated to point to the latest version of fn (#9270).

Also - I think you can go ahead and squash in that final commit


starius commented Nov 28, 2024

I squashed the last commit (deeacc6), rebased and used GoroutineManager from fn v2. Fortunately fn v1 and fn v2 can be used simultaneously!

ellemouton (Collaborator) left a comment

Thanks for the updates, I think things look good but I think we should change the API of the goroutine manager a bit more. See my suggestion here.

Resolved review thread on htlcswitch/switch.go.
@@ -836,7 +847,8 @@ func (s *Switch) logFwdErrs(num *int, wg *sync.WaitGroup, fwdChan chan error) {
log.Errorf("Unhandled error while reforwarding htlc "+
"settle/fail over htlcswitch: %v", err)
}
case <-s.quit:

case <-ctx.Done():
Collaborator:
Something doesn't feel right here. It feels like we are mixing the use of the caller ctx and quit channels. Here, they mean the same thing: so why can't we just listen on s.gm.Done() here (ie, s.quit)? Because the ctx being passed in here is not coming from the caller of ForwardPackets and is instead coming from the creator of the gm. I think the issue stems from the fact that we are passing a context to the constructor of the goroutine manager, which is an anti-pattern. I'm going to see if I can rework the goroutine manager a bit to work around this anti-pattern.

starius (Author):

Thanks! I replaced ctx.Done() with s.gm.Done() here and also inside a goroutine launched by GetAttemptResult.

@@ -368,8 +370,11 @@ func New(cfg Config, currentHeight uint32) (*Switch, error) {
return nil, err
}

gm := fn2.NewGoroutineManager(context.Background())
Collaborator:

it's an anti-pattern to pass a context into a constructor. I think we should try to avoid this as much as possible. I'll put up a suggested diff for the goroutine manager 👍

starius (Author):

Thanks! I updated fn dependency and used new API!

@starius starius force-pushed the goroutines branch 2 times, most recently from c51f5ab to 1a18ed4 Compare December 13, 2024 03:35
@starius starius requested a review from ellemouton December 13, 2024 04:07
ellemouton (Collaborator) left a comment

let's hold off here until #9344 and #9342 are merged as those will make things easier here

@@ -85,6 +86,9 @@ var (
// fail payments if they increase our fee exposure. This is currently
// set to 500m msats.
DefaultMaxFeeExposure = lnwire.MilliSatoshi(500_000_000)

// background is a shortcut for context.Background.
background = context.Background()
Collaborator:

I don't think we should do this. Rather use a context.TODO() where needed.

Collaborator:

if you rebase on top of #9344, then we can also add a context guard here and then we only need a single context.TODO() in Start()

starius (Author):

Fixed.
There are now 3 top-level methods left which use context.TODO():

  • Start
  • ForwardPackets
  • GetAttemptResult

They should probably get a context argument in the future, which will replace the context.TODO().

Comment on lines 29 to 30
// background is a shortcut for context.Background.
background = context.Background()
Collaborator:

we should not do this.

consider rebasing on top of #9342 which handles the bump to the correct fn version and handles updating the statemachine to thread contexts through correctly

starius (Author):

Done. This commit is not needed now.

var n *networkResult
select {
case n = <-nChan:
case <-s.quit:
case <-s.gm.Done():
ellemouton (Collaborator) commented Dec 13, 2024:

I think it is not great to refer to s.gm from inside a callback that is called from s.gm (it screams "deadlock"). Rather just use the ctx provided to the callback, which will be cancelled when the gm is shut down (ie, when gm.Done() would have returned anyway).
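
For illustration, the suggested shape looks roughly like this (a sketch reusing the illustrative manager from the earlier comment, not the exact PR code):

// waitForResult selects on the ctx handed to the callback rather
// than calling back into the manager's Done channel; ctx is
// cancelled on Stop, so the signal is the same.
func waitForResult(m *manager, nChan <-chan int) {
	_ = m.Go(func(ctx context.Context) {
		select {
		case n := <-nChan:
			_ = n // process the network result
		case <-ctx.Done():
			// The manager is stopping; this fires exactly when
			// gm.Done() would have.
		}
	})
}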

starius (Author):

Fixed

// The error of Go is ignored: if it is shutting down,
// the loop will terminate on the next iteration, in
// s.gm.Done case.
_ = s.gm.Go(background, func(ctx context.Context) {
Collaborator:

Let htlcForwarder take a context and pass one in from the goroutine that starts it.

starius (Author):

Fixed

@@ -3020,8 +3042,12 @@ func (s *Switch) handlePacketSettle(packet *htlcPacket) error {
// NOTE: `closeCircuit` modifies the state of `packet`.
if localHTLC {
// TODO(yy): remove the goroutine and send back the error here.
s.wg.Add(1)
go s.handleLocalResponse(packet)
ok := s.gm.Go(background, func(ctx context.Context) {
Collaborator:

Rather pass in a context to the calling func. Same for all the others.

starius (Author):

Fixed.

Another instance is handlePacketFail.

ellemouton (Collaborator) commented:

@starius - those 2 PRs are in now so I think we can continue here

@starius starius force-pushed the goroutines branch 2 times, most recently from 3f7a66f to 7ce33f8 Compare January 22, 2025 18:46
@starius starius requested a review from ellemouton January 22, 2025 20:03
ellemouton (Collaborator) left a comment

looks good after squash 🙏

Let's follow up soon to replace the TODOs

Replaced the use of s.quit and s.wg with s.gm (GoroutineManager).

This fixes a race condition between s.wg.Add(1) and s.wg.Wait().
Also added a test which used to fail under `-race` before this commit.

starius commented Jan 23, 2025

Squashed the commits.

@saubyk saubyk removed the request for review from Crypt-iQ January 23, 2025 15:31
yyforyongyu (Member) left a comment

I agree that, ideally, the code should be refactored into an event-loop style, centralizing all goroutine launches and state changes within a single goroutine and using channels to transmit data to and from it. This approach aligns with the patterns we follow in other packages. However, implementing such a change would require significant time and extensive modifications to the package. What are your thoughts?

Have you tried the event loop approach? At a glance I think we only need to add a new channel receiver attemptResultReq on Switch and read it in the main loop htlcForwarder? Seems doable as the diff is small: https://gist.github.com/yyforyongyu/7cf8d2e2586b2c38d197e05315b9d55d

The other approach is simply removing the wg.Add - why do we need it or am I missing anything here?

diff --git a/htlcswitch/switch.go b/htlcswitch/switch.go
index 720625f2c..5e11ce794 100644
--- a/htlcswitch/switch.go
+++ b/htlcswitch/switch.go
@@ -493,10 +493,7 @@ func (s *Switch) GetAttemptResult(attemptID uint64, paymentHash lntypes.Hash,
 	// Since the attempt was known, we can start a goroutine that can
 	// extract the result when it is available, and pass it on to the
 	// caller.
-	s.wg.Add(1)
 	go func() {
-		defer s.wg.Done()
-
 		var n *networkResult
 		select {
 		case n = <-nChan:
@@ -518,12 +515,15 @@ func (s *Switch) GetAttemptResult(attemptID uint64, paymentHash lntypes.Hash,
 		if err != nil {
 			e := fmt.Errorf("unable to extract result: %w", err)
 			log.Error(e)
-			resultChan <- &PaymentResult{
-				Error: e,
-			}
+			fn.SendOrQuit(
+				resultChan, &PaymentResult{
+					Error: e,
+				}, s.quit,
+			)
 			return
 		}
-		resultChan <- result
+
+		fn.SendOrQuit(resultChan, result, s.quit)
 	}()
 
 	return resultChan, nil

I think we are more or less on the same page, as we know it's a temporary mitigation of the issue. And I want to stress again the wrong usage of wg.Add(1), as explained in this OG comment.

Or my question is this - now that we have the new fn.GoroutineManager, how are we going to prevent future development from using it to cover up the mistake of calling wg.Add inside a goroutine?

}()
})
// The switch shutting down is signaled by closing the channel.
if !ok {
Member:

Why do we still need this check? Won't the line <-ctx.Done() be hit when it's shutting down?

starius (Author):

If GoroutineManager.Stop is called before the Go method (i.e., the switch is in the process of stopping), the Go method will return false without launching a new goroutine. In such cases, we should perform the same action as if it had stopped after launching the goroutine - specifically, closing resultChan. Failing to close the channel and simply returning it could cause the caller to get stuck indefinitely while waiting to receive from the channel.
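
A self-contained sketch of that pattern (hypothetical names, not the PR diff): the caller always gets a channel that is eventually closed, whether or not the goroutine launched.

package main

import (
	"context"
	"fmt"
)

// stubManager stands in for the goroutine manager: Go reports false
// once the manager has stopped. Purely illustrative.
type stubManager struct{ stopped bool }

func (m *stubManager) Go(f func(ctx context.Context)) bool {
	if m.stopped {
		return false
	}
	go f(context.Background())
	return true
}

// getResult returns a channel that is always eventually closed, so
// the caller's receive can never block forever.
func getResult(m *stubManager) <-chan string {
	resultChan := make(chan string, 1)
	ok := m.Go(func(ctx context.Context) {
		resultChan <- "result"
		close(resultChan)
	})
	if !ok {
		// The manager stopped before the goroutine could launch;
		// close the channel here, mirroring the shutdown path.
		close(resultChan)
	}

	return resultChan
}

func main() {
	m := &stubManager{stopped: true}
	if _, ok := <-getResult(m); !ok {
		fmt.Println("channel closed, caller unblocked")
	}
}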

ok := s.gm.Go(context.TODO(), func(ctx context.Context) {
s.logFwdErrs(ctx, &numSent, &wg, fwdChan)
})
if !ok {
Member:

Same here - why do we need this check? I think s.logFwdErrs will listen on <-s.gm.Done() and quit?

starius (Author):

This is a similar situation. We should handle it the same way if the goroutine manager is stopped before the Go method is executed.


starius commented Jan 27, 2025

@yyforyongyu

The other approach is simply removing the wg.Add - why do we need it or am I missing anything here?

If we remove wg.Add, we would end up with an untracked goroutine. The purpose of the WaitGroup here is to ensure that all started goroutines are properly accounted for and to wait for their completion in Switch.Stop(). This guarantees that all goroutines have finished before the switch is turned off.

@starius starius requested a review from yyforyongyu January 28, 2025 02:23
lightninglabs-deploy:

@ProofOfKeags: review reminder
@yyforyongyu: review reminder
@starius, remember to re-request review from reviewers when ready

Successfully merging this pull request may close these issues.

[bug]: htlcswitch may crash upon shutdown because of a race in WaitGroup