Contents:
- Why use httpx?
- Getting started
- Retry FAQ
- Timeout FAQ
- Racing FAQ (concurrent requests)
- Plugins (event handlers)
- Detailed feature FAQ
- Alternative HTTP client libraries
See Also: Usage Guide | README | Full API Documentation
Package httpx is a client-side HTTP library providing enterprise-level HTTP transaction reliability.
Package httpx is targeted primarily at Go web services and web servers.
Package httpx allows web applications written in Go to invoke their dependencies over HTTP in a reliable way, by: retrying failed HTTP requests (retry); applying flexible time policies (timeout); and sending concurrent HTTP requests to the same endpoint (racing).
If you are building a web application in Go, then yes, we think so!
Consult the Alternative HTTP client libraries section to see how httpx compares with other Go HTTP client libraries.
Package httpx works on Go 1.14 and higher.
go get -u github.com/gogama/httpx
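A minimal program is enough to verify the installation. The sketch below relies only on the Get method and the execution's Err field, both of which appear elsewhere in this document; consult the request package GoDoc for the full set of execution fields.

package main

import (
	"fmt"

	"github.com/gogama/httpx"
)

func main() {
	// A zero-value client applies the default retry and timeout policies,
	// no racing, and uses http.DefaultClient as its HTTPDoer.
	client := &httpx.Client{}

	// Get runs the whole request plan: retries and attempt timeouts are
	// handled inside this one call.
	e, err := client.Get("https://example.com")
	if err != nil {
		fmt.Println("request plan failed:", err)
		return
	}

	// e is a *request.Execution holding the final state of the plan,
	// including the fully buffered response body.
	fmt.Println("request plan finished; attempt error (nil on success):", e.Err)
}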
- What is the default retry behavior?
- How do I set the retry backoff period?
- How do I specify when to retry?
- Can I make my own custom retry policy?
- How do I turn off retry altogether?
- Can a retry policy modify the request execution?
- What goroutine executes the retry policy methods?
If you use a zero-value &httpx.Client{}, or set its RetryPolicy member to nil, then a sensible default retry policy is applied. The default policy is called retry.DefaultPolicy. Consult the GoDoc for more detailed information.
Provide an instance of the retry.Waiter interface to retry.NewPolicy (both built-in constructors are sketched below the list).

- For constant backoff, use the built-in waiter constructor retry.NewFixedWaiter.
- For exponential backoff with optional jitter, use the built-in waiter constructor retry.NewExpWaiter.
- For custom behavior, provide your own waiter implementation.
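For example (a sketch only: the constructor signatures and argument meanings shown here are assumptions, so check the retry package GoDoc before copying):

// Constant backoff: wait 500ms between attempts (assuming NewFixedWaiter
// takes a single time.Duration).
fixed := retry.NewFixedWaiter(500 * time.Millisecond)

// Exponential backoff with jitter (assuming NewExpWaiter takes a base
// wait, a cap, and a jitter seed).
exp := retry.NewExpWaiter(50*time.Millisecond, 2*time.Second, time.Now())

Either waiter can then be passed to retry.NewPolicy together with a decider, as described in the following answers.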
Provide an instance of the retry.Decider interface to retry.NewPolicy.

You can assemble a custom decider by combining built-in decider functions using the And and Or operators, for example:
myDecider := retry.Times(10).And(
retry.StatusCode(503).Or(
retry.TransientErr)
)
For custom behavior, provide your own decider implementation.
If your retry decision and backoff computation are uncoupled, write your own retry.Decider or retry.Waiter, or both, and combine them using retry.NewPolicy. If they are coupled, implement the retry.Policy interface.
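For example, the following sketch combines a built-in decider chain with a built-in waiter (the argument order of retry.NewPolicy is an assumption; verify it against the retry package GoDoc):

myPolicy := retry.NewPolicy(
	retry.Times(5).And(retry.StatusCode(503).Or(retry.TransientErr)),
	retry.NewFixedWaiter(250*time.Millisecond),
)

client := &httpx.Client{
	RetryPolicy: myPolicy,
}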
Use the built-in policy retry.Never.
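For example, a client that makes exactly one attempt per request plan:

client := &httpx.Client{
	RetryPolicy: retry.Never, // no attempt is ever retried
}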
No. The retry policy must treat the execution as immutable, with one
exception: if the retry policy needs to save state, it may use the
execution's SetValue
method.
The retry policy is always called from the goroutine which called the request execution method (Do, Get, Post, etc.) on httpx.Client. This is always true, even when the retry policy is being used alongside the racing feature.
- What is the default timeout behavior?
- How do I set a constant timeout?
- What is an adaptive timeout?
- Why would I use an adaptive timeout?
- How do I set an adaptive timeout?
- Can I make my own custom timeout policy?
- How do I turn off timeouts?
- Can a timeout policy modify the request execution?
- What goroutine executes the timeout policy methods?
If you use a zero-value &httpx.Client{}, or set its TimeoutPolicy member to nil, then a sensible default timeout policy is applied. The default policy is called timeout.DefaultPolicy. Consult the GoDoc for more detailed information.
Use the built-in timeout policy constructor timeout.Fixed.
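For example (assuming timeout.Fixed takes a single time.Duration; see the timeout package GoDoc):

client := &httpx.Client{
	TimeoutPolicy: timeout.Fixed(750 * time.Millisecond), // same timeout for every attempt
}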
Instead of being a static or constant value, an adaptive timeout changes based on whether the previous request attempt timed out.
An adaptive timeout may be helpful in smoothing over a rough patch of higher response latencies from a downstream dependency.
By setting a longer timeout value when a previous request attempt timed out, you can set tight initial timeouts while being confident you won't brown out the dependency or suffer an availability drop if the dependency goes through a slow period.
If you are interested in adaptive timeouts, you may also find the racing feature applies to your use case.
Use the built-in timeout policy constructor timeout.Adaptive.
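As a sketch, assuming timeout.Adaptive takes an initial attempt timeout followed by a longer timeout to use after a previous attempt has timed out (the exact signature is in the timeout package GoDoc):

client := &httpx.Client{
	// Tight 200ms timeout normally; allow 1s on attempts that follow a
	// timed-out attempt. Argument meanings are assumptions -- verify
	// against the GoDoc.
	TimeoutPolicy: timeout.Adaptive(200*time.Millisecond, 1*time.Second),
}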
Of course! Just implement the timeout.Policy interface.
Use the built-in timeout policy timeout.Infinite.
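For example:

client := &httpx.Client{
	TimeoutPolicy: timeout.Infinite, // httpx sets no attempt timeouts
}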
No. The timeout policy must treat the execution as immutable, with one
exception: if the timeout policy needs to save state, it may use the
execution's SetValue
method.
The timeout policy is always called from the goroutine which called the request execution method (Do, Get, Post, etc.) on httpx.Client. This is always true, even when the timeout policy is being used alongside the racing feature.
- What is racing (concurrent requests)?
- Why would I use racing?
- Does racing have any associated costs or risks?
- What is the default racing behavior?
- What is a wave?
- When an attempt finishes within a wave, what happens to the other in-flight concurrent attempts?
- How do I start parallel requests at predetermined intervals?
- Can I set a circuit breaker to disable racing?
- Can I make my own custom racing policy?
- Can a racing policy modify the request execution?
- How does retry work with the racing feature?
- How do timeouts work with the racing feature?
- How do event handlers work with the racing feature?
- What goroutine executes the racing policy methods?
- How do I determine if a request attempt was redundant?
Racing means making multiple parallel HTTP request attempts to satisfy one logical HTTP request plan, and using the result from the fastest attempt (first to complete) as the final HTTP result.
For example, you request GET /dogs/german-shepherd from pets.com. You haven't received the response after 200ms, so you send a second GET /dogs/german-shepherd without cancelling the first one. Now the first and second request attempts are "racing" each other, and the first to complete will satisfy the logical request.
The use case for racing is similar to the use case for adaptive timeouts: racing concurrent requests can help smooth over pockets of high latency from a downstream web service, enabling you to get a successful response to your customer more rapidly even when your dependency is experiencing transient slowness.
Before using racing, and when configuring your racing policy, you should consider the following factors:
- Cost. A racing policy may result in you sending extra requests to the downstream web service. If you pay another business for these requests, your bill may increase. If your downstream dependency is another service in your own organization, the increased traffic may indirectly increase your organization's costs.
- Brownout. A carelessly-designed racing policy may result in a surge in traffic to your downstream dependency at precisely the moment when the dependency is struggling to handle its existing traffic, let alone added load.
- Idempotency. If the operation you are requesting on the remote web service is not idempotent, it may not be a good idea to send multiple parallel requests to the service. For non-idempotent requests, do the analysis to determine if racing is right for you.
Fortunately, a well-designed racing policy will not materially increase cost or brownout risk (see, e.g., Can I set a circuit breaker to disable racing?), and racing can be disabled for non-idempotent requests.
Racing is off by default. If you use the zero-value httpx.Client, or any httpx.Client with a nil racing policy, all HTTP request attempts for a given request execution will be made serially.
A wave is a group of request attempts that are racing one another (overlap in time due to concurrent execution). Since racing is disabled unless an explicit racing policy is specified, by default every wave contains only one attempt.
When racing is enabled, concurrent request attempts are grouped in waves. If all request attempts within the wave finish and are retryable, the client waits for the pause duration determined by the retry policy and then begins a new wave.
When an attempt finishes within a wave, what happens to the other in-flight concurrent attempts?
As soon as one request attempt finishes, either due to successfully reading the whole response body or due to error, the wave is closed out: no new parallel attempts are added in to the wave.
What happens to the other in-flight request attempts within the wave depends on the retry policy. (See How does retry work with the racing feature?)
Use the built-in scheduler constructor racing.NewStaticScheduler.
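The sketch below assumes racing.NewStaticScheduler accepts the offsets, measured from the start of the wave, at which additional concurrent attempts should be launched; verify the exact signature in the racing package GoDoc.

// Start a second attempt 200ms into the wave and a third at 600ms,
// unless an earlier attempt has already finished.
scheduler := racing.NewStaticScheduler(
	200*time.Millisecond,
	600*time.Millisecond,
)

The scheduler must then be combined with a starter via racing.NewPolicy, as described below.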
Yes. Implement the racing.Starter interface to allow or deny starting new parallel requests.

The built-in constructor racing.NewThrottleStarter creates a starter which can throttle new racing attempts if too many racing attempts were recently started, effectively returning request plan execution to serial attempt mode until a cooling-off period has elapsed. This built-in starter may already satisfy your circuit-breaking needs.
Yes. A racing policy is composed of a concurrent attempt scheduler and a concurrent attempt starter. If your scheduling and start functions are uncoupled, write your own racing.Scheduler or racing.Starter, or both, and combine them using racing.NewPolicy. If the two functions are coupled, implement the racing.Policy interface.
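As a sketch of the wiring (the RacingPolicy field name, the argument order of racing.NewPolicy, and the NewThrottleStarter configuration are all assumptions to verify against the racing package GoDoc):

scheduler := racing.NewStaticScheduler(200 * time.Millisecond)

// NewThrottleStarter limits how often new concurrent attempts may be
// started; its configuration arguments are omitted here -- see the
// racing package GoDoc.
starter := racing.NewThrottleStarter( /* throttle configuration */ )

client := &httpx.Client{
	RacingPolicy: racing.NewPolicy(scheduler, starter),
}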
No. The racing policy must treat the execution as immutable, with one
exception: if the racing policy needs to save state, it may use the
execution's SetValue
method.
Your retry policy works with racing request attempts in the intuitively correct manner, roughly as if the racing attempts had been executed serially in the order in which they ended.
When a racing request attempt ends, either due to being finished or due to error, the retry policy's Decide method is called for a retry decision. Just as in the serial attempt case, a positive retry decision means "keep trying". Since other in-flight concurrent attempts also represent tries, these in-flight attempts are allowed to finish and are tested for retryability with Decide. If all in-flight attempts in the wave have finished with a positive retry decision, httpx.Client waits for the time indicated by the retry policy's Wait method and then starts a new wave.
Again as in the serial case, a negative retry decision means "stop trying". As soon as one attempt finishes with a negative retry decision, all other in-flight attempts in the race are cancelled with the special causal error racing.Redundant. The attempt which finished with the negative retry decision represents the final state of the HTTP request plan execution.
Your timeout policy works with racing request attempts exactly as it does in the case of serially executed request attempts. The timeout policy is called once before each request attempt, to determine the timeout applicable to that attempt. This is true whether or not the attempt is racing other concurrent attempts.
Event handlers work the same way whether racing is enabled or not.
Event handlers are always called from the goroutine which called the request execution method (Do, Get, Post, etc.) on httpx.Client.
Event handlers therefore execute serially even when request attempts are executing concurrently. Events for different attempts racing in the same wave may be interleaved, and their order is generally undefined, but the following invariants are true:
- The BeforeAttempt handler for attempt i always executes before the BeforeAttempt handler for attempt i+1.
- The relative order of events for attempt i is always the same: BeforeAttempt, BeforeReadBody (optional), AfterAttemptTimeout (optional), AfterAttempt.
The racing policy is always called from the goroutine which called the request execution method (Do, Get, Post, etc.) on httpx.Client.
If a request attempt was cancelled as redundant, the execution error passed to the AfterAttempt handler for that request attempt will be set to a *url.Error whose Err member has the value racing.Redundant.

You can use the standard errors package to unwrap or inspect it. For example, to test if the attempt was redundant, include code such as this in your AfterAttempt handler:
func myAfterAttemptHandler(_ httpx.Event, e *request.Execution) {
if errors.Is(e.Err, racing.Redundant) {
// The request attempt was cancelled as redundant.
}
}
- What are event handlers useful for?
- How do I add an event handler to an httpx.Client?
- What event handlers are available?
- Can an event handler modify the request execution?
- What are plugins?
- Are there any pre-made plugins I can leverage?
- What goroutine executes the event handler methods?
Event handlers let you mix in your own logic at designated plug points in the HTTP request plan execution logic.
Examples of handlers one might implement:
- an OAuth decorator that ensures a non-expired bearer token header on each request attempt, including retries;
- a logging component that logs request attempts, outcomes, and timings;
- a metering component that sends detailed metrics to Amazon CloudWatch, AWS X-Ray, Azure Monitor, or the like;
- a response post-processor that unmarshals the response body so heavyweight unmarshaling does not need to be done separately by the retry policy and the end consumer.
Add a handler group to your httpx.Client, if it doesn't have one already, and push your event handler into the handler group.
func main() {
	client := &httpx.Client{
		Handlers: &httpx.HandlerGroup{},
	}
	client.Handlers.PushBack(httpx.BeforeExecutionStart, httpx.HandlerFunc(myHandler))
	e, err := client.Get("https://example.com")
	...
}

func myHandler(evt httpx.Event, e *request.Execution) {
	fmt.Println("Hello from an event handler!")
}
- BeforeExecutionStart - once per execution, always
- BeforeAttempt - once per attempt, always
- BeforeReadBody - once per attempt, only if response headers were received without error
- AfterAttemptTimeout - once per attempt, only if the attempt timed out
- AfterAttempt - once per attempt, always
- AfterPlanTimeout - once per execution, only if the request plan context timed out (distinct from an individual attempt timeout)
- AfterExecutionEnd - once per execution, always
Event handlers may change the execution in the ways listed below, but must otherwise treat the execution as immutable.
- Any event handler may use SetValue to store a value into the execution.
- The BeforeExecutionStart event handler may replace the execution plan with an equivalent plan. The new plan's context must be equal to, or a child of, the old plan's context.
- The BeforeAttempt event handler may replace the current attempt's request with an equivalent request or modify the request fields (see the sketch after this list). The new request's context must be equal to, or a child of, the old request's context.
- The BeforeReadBody event handler may replace the current attempt's response with an equivalent response or modify the response fields. If the response body reader is altered, it must be replaced by a new, unclosed reader, and the event handler is responsible for ensuring the old body reader is fully read and closed to avoid leaking connections.
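As a sketch of the kind of modification the BeforeAttempt rule above permits, the handler below refreshes an authorization header on every attempt, including retries. It assumes the execution exposes the current attempt's request through a Request field (an *http.Request), and fetchToken is a hypothetical helper supplied by your own code.

func addAuthHeader(_ httpx.Event, e *request.Execution) {
	// Setting the header on every attempt ensures retried attempts never
	// go out with a stale or missing bearer token.
	e.Request.Header.Set("Authorization", "Bearer "+fetchToken())
}

Register it with client.Handlers.PushBack(httpx.BeforeAttempt, httpx.HandlerFunc(addAuthHeader)).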
A plugin is a group of one or more event handlers working together to add a feature to httpx.Client.
Yes. See the Plugins section in README.md.
Event handlers are called from the goroutine which called the request execution method (Do, Get, Post, etc.) on httpx.Client. This is always true, even when event handlers are being used alongside the racing feature.
- What is the default HTTPDoer used by an httpx.Client?
- I use http.Client as my HTTPDoer. How do I configure it?
- What client-side timeouts should I use on http.Client?
An httpx.Client with a nil-valued HTTPDoer (including the zero-value client) uses http.DefaultClient as its HTTPDoer.
Configure it as you normally would, bearing in mind that it is usually preferable not to set any timeouts on the underlying http.Client (see below).
If using the Go standard http.Client as the HTTPDoer, it is preferable not to set any client-side timeouts on your underlying http.Client unless you set the httpx.Client timeout policy to timeout.Infinite.
To be slightly more nuanced, you may leverage any timeouts on the underlying http.Client, including on its transport, provided you are aware of how they will play with your timeout and retry policies. For example, it may make sense to set the dial or TLS handshake timeouts on the transport, but if you do so, you will likely want to have them set to a lower value than the lowest timeout your httpx timeout policy can return.
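For example, a sketch of an HTTPDoer whose connection-level timeouts stay below any attempt timeout the httpx timeout policy will produce (the timeout.Fixed signature is an assumption discussed in the Timeout FAQ above):

doer := &http.Client{
	Transport: &http.Transport{
		// Keep connection setup timeouts below the smallest attempt
		// timeout the httpx timeout policy can return.
		DialContext: (&net.Dialer{
			Timeout: 2 * time.Second,
		}).DialContext,
		TLSHandshakeTimeout: 2 * time.Second,
	},
	// Deliberately no Timeout field: attempt timeouts are owned by the
	// httpx timeout policy.
}

client := &httpx.Client{
	HTTPDoer:      doer,
	TimeoutPolicy: timeout.Fixed(5 * time.Second),
}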
- What is a request plan?
- What is a request execution?
- Why don't request plans support...?
- Why does httpx consume pre-buffered request bodies?
- Why does httpx produce pre-buffered response bodies?
- Can I turn off request body pre-buffering?
- Can I turn off response body pre-buffering?
- Can I wrap httpx.Client with a Go standard http.Client?
A request.Plan is one of the two core data types in httpx (alongside request.Execution). A request plan declares a plan for executing an HTTP transaction reliably, using multiple attempts (via retry, racing, or both) if necessary.

A request plan is analogous to Go's http.Request and in fact has a very similar structure. Notable differences include that request.Plan removes server-side fields and fully buffers the request body into a []byte.
A request.Execution is the other core data type in httpx (along with request.Plan). A request execution represents the intermediate state involved in executing the request plan, and the final state after the request plan execution has ended.

During the execution of a request plan initiated by a call to one of the executing methods (Do, Get, Post, etc.) from httpx.Client, the request.Execution representing execution state is passed to all policy methods (retry, timeout, racing) and all event handler methods. This allows policies and event handlers to act according to the detailed and current execution state.
The request.Plan structure is equivalent to the Go standard library http.Request structure with the following fields removed:

- all server-only fields, because httpx is a client-only package;
- the Cancel channel, because it is deprecated;
- the GetBody function, because it is redundant: the request plan already has a fully-buffered body which it can reuse on redirects and retries; and
- the Trailer field, because trailers make less sense when the entire request body is buffered, and trailer support in servers is in any event uncommon.
Requests are buffered to allow retry and racing logic to work correctly. For the retry logic to work, requests need to be repeatable, which means httpx can't consume a one-time request body stream like the Go standard HTTP client does. (Of course httpx could consume a function that returns an io.Reader, but this would push more complexity onto the programmer when the goal is to simplify.)
- Retryable errors can happen while reading the response body. If the retry framework doesn't read the whole response body, it can't assist by retrying errors that occur during the body read. This is a major flaw in other HTTP retry frameworks written in Go, since they require the user to read the body outside the retry loop.
- The response body may contain information relevant to a retry decision. For example, Esri's ArcGIS web service always returns HTTP status 200 OK in the response headers; the response body, however, may be a JSON error object with its own status code indicating retryability. The entire response body must be read to make a correct retry decision.
To turn off request buffering, set a nil request body on the request plan, and write a handler for the httpx.BeforeAttempt event which sets the Body, GetBody, and ContentLength fields on the execution's request.
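A sketch of such a handler, assuming the execution exposes the outgoing request as a Request field (an *http.Request):

func streamBody(_ httpx.Event, e *request.Execution) {
	const payload = `{"hello":"world"}`
	e.Request.Body = io.NopCloser(strings.NewReader(payload))
	e.Request.GetBody = func() (io.ReadCloser, error) {
		// Return a fresh reader so redirects and retries can resend the body.
		return io.NopCloser(strings.NewReader(payload)), nil
	}
	e.Request.ContentLength = int64(len(payload))
}

Register it with client.Handlers.PushBack(httpx.BeforeAttempt, httpx.HandlerFunc(streamBody)).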
To turn off response buffering, write a handler for the httpx.BeforeReadBody event which replaces the response body reader on the execution's response object with a no-op reader. Your code is responsible for draining and closing the original reader to avoid leaking the connection.
You can make an httpx.Client look like a standard Go HTTP client, if absolutely necessary, but we do not recommend it and don't provide an out-of-the-box implementation. Instead, we recommend building your code around single-method interfaces like httpx.Doer.
The stock technique for converting to an http.Client is to wrap the target (httpx.Client) in an implementation of the http.RoundTripper interface and then use your wrapper RoundTripper implementation as the Transport for the http.Client. The advantage of doing this is it's relatively simple and allows you to avoid changing code that consumes *http.Client. The disadvantages are that it explicitly violates just about every promise made in the documented RoundTripper interface contract: "a single HTTP transaction", "should not attempt to interpret the request", etc.
The feature matrix below shows how httpx stacks up against other common HTTP retry libraries for Go:
| | httpx | heimdall | rehttp | httpretry | go-http-retry | go-retryablehttp |
|---|---|---|---|---|---|---|
| Basic retry | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Response buffering | ✔ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Flexible retry policies (1) | ✔ | ❌ | ✔ | ✔ | ✔ | ✔ |
| Accurate transient error classification (2) | ✔ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Simple event handlers | ✔ | ✔ | ❌ | ❌ | ❌ | ✔ |
| Comprehensive event handlers/plugins | ✔ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Flexible and adaptive timeouts | ✔ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Racing concurrent requests | ✔ | ❌ | ❌ | ❌ | ❌ | ❌ |
| License | MIT | Apache 2.0 | BSD 3-Clause | MIT | MIT | MPL-2.0 |
| Dependencies (3) | 0 | 10+ | N/A (4) | 0 | N/A (4) | 2 |
Notes:
1. Flexible retry policy means the user can configure both the retry decision and the backoff.
2. See package transient and retry.TransientErr.
3. Dependencies means non-test dependencies.
4. N/A for dependencies means the library is not a Go module.