Implement a recovery mechanism for failed http requests #66
I find the idea reasonable; however, look at #65 (comment) for some remarks. Most importantly, in the face of network congestion, a "retry" approach must be live-tested to determine that the particular HTTP errors don't result in duplicate orders. (Hence "need info".) OT: Have you encountered any other major issues? Are there no other bugs? (Hard to believe.) Does no one use it? Did everyone give up on Kraken, since it lags so much? (Passionately empathising.)
Thanks @veox, your comments in the pull req make lots of sense. I'll upload a new revision once I've tested it for a while. I started using krakenex a few days ago through https://github.com/Endogen/Telegram-Kraken-Bot and was annoyed by constant networking problems, as the original version of the bot references krakenex 1.0 (I made a pull request today to use 2.0.0 RC). I wish I had looked at your repo and noticed that you already have 2.0.0rc, but I didn't. I ended up rewriting krakenex to use Requests on my own, used it for a couple of days, and only today noticed that you had already done all the hard work ;) My observations are that 2.0.0rc is more stable, especially in a multithreaded use case (as the Telegram bot has several active threads sending requests). Adding retries also increased the success rate of placing orders and querying balances. I haven't noticed any issues so far. And yes, lots of folks who are still trying to use Kraken through the web UI are giving up now.
For ref: a query that fails with one of these HTTP errors may nevertheless have been executed on Kraken's side. The assumption that a nonce will prevent duplicate orders can therefore be tested by attempting a retry after such a failure. EDIT: To clarify: this is risky, in general don't do it! I've already tested this (see #66 (comment)), no need to step on the rake. :)
I'll try to get to a "proper" v2.0.0 release next week, with pip package and all. Hopefully, that'll get more people to use it. Also, live-testing PR #65.
Cool. The Telegram-bot author just accepted the pull request, so hopefully there will be a few more folks on 2.0.0rc.
Hello, Firstly, @veox, thank you very much for this very functional, but at the same time extremely simple to use, interface to the Kraken API. I have been using this library for about a month now... Not trading yet, but still working on putting together a platform for automated trading (maybe someday I will get that far!) Just a comment on this particular proposed change: I personally think that the retry logic should be implemented higher up, at the application layer. Something like "try, except" blocks could be used to process the error, and then probably send a request to check the current orders, to first verify the status of the order before attempting a retry. Regards,
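For illustration only, here is a minimal sketch of what that application-layer wrapping might look like. It assumes krakenex's query_private() call and key loading via load_key(), Kraken's AddOrder/OpenOrders/ClosedOrders methods with their userref parameter, and a hypothetical place_order_safely() helper; the retry count, backoff, and order parameters are arbitrary choices for the example, not anything prescribed in this thread.

```python
import time

import krakenex                      # assumes the krakenex package and its query_private()
import requests

api = krakenex.API()
api.load_key('kraken.key')           # path to a file with API key and secret

def order_exists(userref):
    """Return True if an order tagged with `userref` shows up as open or closed."""
    open_orders = api.query_private('OpenOrders', {'userref': userref})
    if open_orders.get('result', {}).get('open'):
        return True
    closed_orders = api.query_private('ClosedOrders', {'userref': userref})
    return bool(closed_orders.get('result', {}).get('closed'))

def place_order_safely(order, attempts=3):
    """Try to place an order; on a network error, check whether it actually went
    through before retrying, instead of blindly resubmitting."""
    for attempt in range(attempts):
        try:
            return api.query_private('AddOrder', order)
        except requests.exceptions.RequestException:
            time.sleep(2 ** attempt)                 # arbitrary pause before checking
            if order_exists(order['userref']):
                return None                          # the "failed" attempt was executed
    raise RuntimeError('order could not be placed after %d attempts' % attempts)

order = {'pair': 'XXBTZEUR', 'type': 'buy', 'ordertype': 'limit',
         'price': '5000', 'volume': '0.01', 'userref': 12345}
# place_order_safely(order)
```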
@philippose Thanks for your thoughts! I do tend to agree in general that it's "better" to scream loudly of a fault and do nothing when money is involved; however, the failures in question happen before any meaningful response reaches the application. Perhaps, if re-using the nonce of the failed query, a retry can be made safe, since Kraken should refuse to execute a duplicate. Otherwise, one could add a wait between the failure and the follow-up check/retry. This is error-prone, though, since the time-to-wait seems to depend on the load on Kraken's trade execution engine.
That is, if the failed query is retried with the same nonce, Kraken should recognise it as a duplicate and refuse to execute it again. Haven't tried it yet. Hopefully - next week. EDIT: This will likely only work if there haven't been any queries in the interim with a higher nonce.
TL;DR: it works. Note: I've tried the approach from my previous comment. It can be seen in this gist. In short - it works. (Tried a few times.) A query that seems to fail on the network level, when retried with the same nonce, returns:
{'error': ['EAPI:Invalid nonce']}
In other words, the nonce is indeed sufficient for Kraken to recognise it as a duplicate. (At least for now...) This approach, however, adds a lot of boilerplate to the end application. The "wrapper" must be repeated for every query type, either manually or with a decorator. This is unwieldy, and, more importantly, would likely be too complicated for a lot of people using krakenex. @gzhytar's approach seems warranted - it doesn't add too much bloat, and is a solution for a common issue, especially in light of Kraken's current (abysmal) performance. It should, however, be a deliberate opt-in setting, as mentioned in #65 (comment). Otherwise, it's only likely to further increase the load on Kraken's TEX.
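The gist itself isn't linked above, but a rough sketch of that kind of wrapper could look like the following. This is not veox's code: query_with_nonce() is a hypothetical helper that lets the caller supply the nonce (krakenex's query_private() normally generates one per call, so this would need a small hook underneath it), and the handling of the EAPI:Invalid nonce reply follows the reasoning in the comment above.

```python
import time

import requests

def retry_same_nonce(query_with_nonce, method, data, attempts=3):
    """Send a private query, retrying network failures with the *same* nonce.

    `query_with_nonce(method, data, nonce)` is a hypothetical helper that sends a
    signed private query using a caller-supplied nonce.
    """
    nonce = int(time.time() * 1000)              # one nonce, reused for every attempt
    for _ in range(attempts):
        try:
            reply = query_with_nonce(method, data, nonce)
        except requests.exceptions.RequestException:
            continue                             # network-level failure: try again
        if reply.get('error') == ['EAPI:Invalid nonce']:
            # Kraken has already seen this nonce: the earlier "failed" attempt
            # was in fact executed, so do not submit it again.
            return None
        return reply
    raise RuntimeError('query did not go through after %d attempts' % attempts)
```

A decorator variant would wrap query_with_nonce once instead of calling retry_same_nonce at every call site, which is the boilerplate trade-off mentioned above.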
Appreciate all the work on this. I am using v1 and have implemented similar logic in the API class. One thing I do not quite understand is Kraken's "nonce window" implementation: from experimentation I know that resubmitting a nonce with a nonce window > 0 can result in a duplicate submission. Here's my logic/flow for private queries:
The Kraken has been very unhappy lately... ps. (likely off-topic but...) how does v2 improve the reliability of the network connections? I don't see how it would help with the onslaught of 5XX server errors.
@pawapps The "nonce window" part would make a good separate issue. ;) As to network connection reliability: in v2, the HTTP layer sits on top of a persistent requests session, so dropped or stale connections are re-established transparently. What's left is either failing to reach the server at all, or the server answering with an error for a request that did go out. The latter is a rare case these days, which this issue is about (kind of...).
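For context, the connection reuse in question is just requests' persistent Session; a minimal sketch, using Kraken's public Time endpoint:

```python
import requests

session = requests.Session()          # one session, re-used for all queries;
                                      # keep-alive connections are re-established as needed
reply = session.post('https://api.kraken.com/0/public/Time')
print(reply.status_code, reply.json())
```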
It's been about a month of testing, and I haven't as of yet experienced a duplicate order when using the nonce-protection approach (see PR #65).
krakenex: 2.0.0rc2
What are you trying to achieve?
During peak times, a large number of HTTP requests fail without a specific response from the Kraken server. These are intermittent problems with CloudFlare/Kraken, and for some of the returned HTTP codes it should be safe to retry the operation. As each request carries a unique nonce which is checked on the server, that should prevent execution of several identical requests if they reach the server despite a failed status_code.
The max number of retries should be configurable.
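As a rough illustration only (not necessarily what PR #65 does), a transport-level retry with a configurable maximum could be set up with requests/urllib3 along these lines; the status codes, retry count, and backoff factor are placeholder choices:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retries = Retry(
    total=5,                                          # configurable max number of retries
    backoff_factor=0.5,                               # wait 0.5s, 1s, 2s, ... between attempts
    status_forcelist=(502, 503, 504, 520, 522, 524),  # CloudFlare/Kraken-style 5XX codes
    allowed_methods=None,                             # retry POST too (older urllib3: method_whitelist)
)
session = requests.Session()
session.mount('https://', HTTPAdapter(max_retries=retries))

# A retried query re-sends the same signed payload; the unique nonce is what
# should keep it from being executed twice on Kraken's side (see discussion above).
```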
EDITed by @veox:
Pull requests with the suggested change:
#65, #99, #100