I want to use httplib2 to fetch large files (say, 1 GB), but there doesn't seem to be a way to read the response body iteratively - the request() method returns the whole body as an in-memory string.
I looked at the code, and the internal APIs do support iterative reading, but later content.read() is called and the whole thing is pulled into memory.
Maybe you could expose the internal API via a new method, or via a flag to request() (like "read_full_body=True")?
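Until httplib2 exposes something like this, one workaround is to drop down to the standard library's http.client for large downloads, since its response object supports reading in chunks. The sketch below assumes nothing about httplib2's internals; the local server and the 1 MiB body are stand-ins just to keep the example self-contained.

```python
# Workaround sketch: stream a large response with http.client instead of
# httplib2, reading the body in fixed-size chunks rather than all at once.
# The local server below exists only to make the example runnable.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"x" * (1 << 20)  # 1 MiB stand-in for a large file

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/large-file")
resp = conn.getresponse()

received = 0
while True:
    chunk = resp.read(64 * 1024)  # 64 KiB at a time, never the whole body
    if not chunk:
        break
    received += len(chunk)  # a real caller would write each chunk to disk

conn.close()
server.shutdown()
print(received)  # → 1048576
```

Peak memory stays at one chunk (64 KiB here) regardless of the body size, which is the behavior the requested flag would enable inside httplib2.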
Comment #1
Posted on Mar 12, 2013 by Helpful Lion+1.
I'm using httplib2 and trying to PUT large files (>5 GB), but the transfer stalls during processing. Is there a way to PUT iteratively, or to prevent the timeout?
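For the upload direction, a similar workaround (again outside httplib2's API) is to pass a file-like object as the body to http.client, which sends it in blocks instead of buffering the whole file. The server, the io.BytesIO payload, and the /upload path below are hypothetical stand-ins for illustration.

```python
# Sketch: upload a large body iteratively by handing http.client a file-like
# object; it is read and sent in blocks, not held in memory as one string.
import http.client
import io
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

PAYLOAD = b"y" * (1 << 20)  # 1 MiB stand-in for a multi-GB file
received = []

class Handler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers["Content-Length"])
        total = 0
        while total < length:  # drain the body in chunks
            total += len(self.rfile.read(min(64 * 1024, length - total)))
        received.append(total)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
body = io.BytesIO(PAYLOAD)  # in practice: open("bigfile", "rb")
conn.request("PUT", "/upload", body=body,
             headers={"Content-Length": str(len(PAYLOAD))})
resp = conn.getresponse()
conn.close()
server.shutdown()
print(resp.status)  # → 200
```

Supplying Content-Length up front lets the client stream from the file object without ever materializing the payload, which should also avoid the stalls seen when a multi-GB string is built in memory first.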
Status: New
Labels:
Type-Defect
Priority-Medium