Non-chunked resumable uploads #601
From yan...@google.com on November 02, 2012 10:26:55 Status: Accepted
From rmis...@google.com on December 11, 2012 05:37:08 Labels: -Milestone-Version1.13.0 Milestone-Version1.14.0
From yan...@google.com on January 19, 2013 05:55:52 Streaming is not possible on Google App Engine. Therefore, we definitely do not want this for platforms like Google App Engine. Labels: -Milestone-Version2.1.0 Milestone-Version1.16.0
From yan...@google.com on February 06, 2013 16:05:10 Labels: -Milestone-Version2.1.0 Milestone-Version1.16.0
From yan...@google.com on June 10, 2013 06:26:55 Owner: pele...@google.com
From pele...@google.com on July 28, 2013 23:09:59 Labels: -Milestone-Version1.17.0 Milestone-Version1.18.0
From yan...@google.com on September 27, 2013 05:04:11 Labels: -Milestone-Version1.18.0
From nherr...@google.com on August 09, 2012 14:24:07
External references, such as a standards document, or specification?
https://devsite.googleplex.com/storage/docs/json_api/v1/how-tos/upload#resumable

Java environments (e.g. Java 6, Android 2.3, App Engine, or All)?
Java 6, non-App Engine

Please describe the feature requested.
Requests using the resumable upload feature should be sent as one streaming HTTP request (i.e., the content should not have to be entirely resident in application memory) rather than as chunks, since chunking incurs a significant performance penalty (4 to 5x longer) compared with simply sending bytes until a connection error or an early error response occurs.
Specifically, a buffered stream should wrap (if necessary) the underlying InputStream so that it retains enough buffered data to rewind as far as required if the upload fails. The required buffer size should be calculable as some minimum internal commit chunk size plus (connection speed * maximum time before a connection failure is detected, e.g., the TCP timeout). Then, in case of TCP termination or an early HTTP error response, you are guaranteed to be able to perform the "resume an interrupted upload" steps (query for how many bytes remain to be sent) and then send a single follow-up streaming HTTP request with the remaining bytes and the appropriate Content-Range header. Because there is no per-chunk HTTP result code, the same calculation must be used to decide when to invalidate earlier parts of the stream (record current location + reset + skip + mark + skip to current location) to avoid bloating application memory.
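A minimal sketch of the two calculations the proposal relies on, plus the mark/reset wrapping of the source stream. All class and method names here are illustrative, not part of google-api-java-client; the buffer-size formula and Content-Range format follow the description above and the standard HTTP byte-range syntax.

```java
import java.io.BufferedInputStream;
import java.io.InputStream;

// Hypothetical helper class sketching the proposal; not library API.
public class ResumableStreamSketch {

    /**
     * Buffer needed to rewind after a failure: a minimum internal commit
     * chunk plus however many bytes could be "in flight" before a
     * connection error is detected (speed * detection time).
     */
    static long resumeBufferSize(long minCommitChunkBytes,
                                 long bytesPerSecond,
                                 long maxErrorDetectionSeconds) {
        return minCommitChunkBytes + bytesPerSecond * maxErrorDetectionSeconds;
    }

    /**
     * Content-Range header value for resuming at {@code firstByte} of a
     * stream whose total length is {@code totalBytes} (known in advance).
     */
    static String resumeContentRange(long firstByte, long totalBytes) {
        return "bytes " + firstByte + "-" + (totalBytes - 1) + "/" + totalBytes;
    }

    /**
     * Wrap the source so it can be reset back as far as the resume
     * buffer allows; callers re-mark after each committed region.
     */
    static InputStream rewindableStream(InputStream raw, int resumeBufferBytes) {
        BufferedInputStream in = new BufferedInputStream(raw, resumeBufferBytes);
        in.mark(resumeBufferBytes); // enables reset() to the last committed offset
        return in;
    }
}
```

For example, with a hypothetical 256 KiB commit chunk, a 1 MB/s link, and a 30-second failure-detection window, the client would need to hold roughly 30 MB of rewindable data, which bounds the memory cost of a single streaming request.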
Original issue: http://code.google.com/p/google-api-java-client/issues/detail?id=587