fix: Limit 429 retries (#602)
adamspofford-dfinity authored Sep 30, 2024
1 parent 7a109b9 commit c3e07ce
Showing 2 changed files with 11 additions and 4 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -8,6 +8,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## Unreleased
 
+* Limited the number of HTTP 429 retries. Users receiving this error should configure `with_max_concurrent_requests`.
 * Added `Envelope::encode_bytes` and `Query/UpdateBuilder::into_envelope` for external signing workflows.
 * Added `AgentBuilder::with_arc_http_middleware` for `Transport`-like functionality at the level of HTTP requests.
 * Add support for dynamic routing based on boundary node discovery. This is an internal feature for now, with a feature flag `_internal_dynamic-routing`.
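The new changelog entry points users who hit 429s at `with_max_concurrent_requests` on the agent builder. A minimal sketch of wiring it in, assuming the current `AgentBuilder` API; the URL and the limit of 10 are illustrative values, not part of this commit:

```rust
use ic_agent::{Agent, AgentError};

fn build_agent() -> Result<Agent, AgentError> {
    Agent::builder()
        .with_url("https://icp0.io") // placeholder URL
        // Cap client-side in-flight requests so the replica is less likely
        // to answer with HTTP 429 in the first place.
        .with_max_concurrent_requests(10)
        .build()
}
```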
14 changes: 10 additions & 4 deletions ic-agent/src/agent/mod.rs
@@ -1926,17 +1926,23 @@ impl HttpService for Retry429Logic {
     async fn call<'a>(
         &'a self,
         req: &'a (dyn Fn() -> Result<Request, AgentError> + Send + Sync),
-        _max_retries: usize,
+        _max_tcp_retries: usize,
     ) -> Result<Response, AgentError> {
+        let mut retries = 0;
         loop {
             #[cfg(not(target_family = "wasm"))]
-            let resp = self.client.call(req, _max_retries).await?;
+            let resp = self.client.call(req, _max_tcp_retries).await?;
             // Client inconveniently does not implement Service on wasm
             #[cfg(target_family = "wasm")]
             let resp = self.client.execute(req()?).await?;
             if resp.status() == StatusCode::TOO_MANY_REQUESTS {
-                crate::util::sleep(Duration::from_millis(250)).await;
-                continue;
+                if retries == 6 {
+                    break Ok(resp);
+                } else {
+                    retries += 1;
+                    crate::util::sleep(Duration::from_millis(250)).await;
+                    continue;
+                }
             } else {
                 break Ok(resp);
             }
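With this change, a 429 is retried at most six times (seven attempts total), 250 ms apart, and then the last response is returned to the caller instead of looping forever; the rename to `_max_tcp_retries` clarifies that the parameter governs TCP-level retries, not this 429 loop. A standalone sketch of the same bounded-retry shape, generic over any request function; the names and the `tokio` sleep are illustrative stand-ins, not the crate's API:

```rust
use std::time::Duration;

// Illustrative stand-in for the agent's HTTP response type.
struct Response {
    status: u16,
}

async fn call_with_429_cap<F, Fut>(send: F) -> Response
where
    F: Fn() -> Fut,
    Fut: std::future::Future<Output = Response>,
{
    let mut retries = 0;
    loop {
        let resp = send().await;
        // Mirror the committed logic: retry a 429 at most 6 times, then hand
        // the response back to the caller rather than spinning forever.
        if resp.status == 429 && retries < 6 {
            retries += 1;
            tokio::time::sleep(Duration::from_millis(250)).await;
            continue;
        }
        break resp;
    }
}
```

Because the committed code checks `retries == 6` before incrementing, the cap works out to six 250 ms sleeps, roughly 1.5 s of added latency before the 429 finally surfaces to the caller.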
