Fixed
Status Update
Comments
ia...@gmail.com <ia...@gmail.com> #2
Perhaps one way to solve this would be to allow all of the settings you can set with com.google.appengine.api.files.GSFileOptions.GSFileOptionsBuilder on the com.google.appengine.api.blobstore.UploadOptions class. So you could specify up front what the filename should be or any metadata you need.
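For later readers: the Python API eventually grew a related but narrower knob, which may help frame what is being asked for here. A hedged sketch (the success-handler path is made up): create_upload_url can direct uploads into a GS bucket, but nothing lets you choose the object name or metadata up front, which is exactly this request.
from google.appengine.ext import blobstore
# Uploads through this URL land in the given GS bucket, but under a
# generated object name; there is no parameter for choosing the
# filename or for metadata headers.
upload_url = blobstore.create_upload_url('/upload_done', gs_bucket_name='my-bucket')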
li...@gmail.com <li...@gmail.com> #3
If you retrieve the BlobInfo for that blob key you should be able to get the filename.
ch...@gmail.com <ch...@gmail.com> #4
I can get the BlobInfo for the key, but it just gives me the user's upload filename which is not what I need. I ran a test in prod and it shows this:
BlobInfo info = new BlobInfoFactory().loadBlobInfo(blobKey);
// This just gives the user's filename like "cute-kitten.jpg":
log.info(info.getFilename());
However, I'm looking for the autogenerated filename that is created in google storage. I refreshed the cloud storage explorer and found the GS filename generated for this test was "L2FwcHMtdXBsb2FkL2Jsb2JzL2UzNUFBVE0tMEIxM1dwREFTc0t3RW9LOFE" and I can't find a way to get this.
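The same probe in Python shows the identical gap; a minimal sketch, assuming blob_key was obtained in an upload handler:
from google.appengine.ext import blobstore
info = blobstore.BlobInfo.get(blob_key)
# Only the user's original upload name is exposed, never the generated
# GS object name like "L2Fwc...":
print(info.filename)  # "cute-kitten.jpg"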
ha...@gmail.com <ha...@gmail.com> #5
This issue is not specific to Java. I'm using Python and cannot retrieve the Google storage filename after a user upload. The BlobInfo key is unrelated as far as I know (Strangely it starts with "?ISO-8859-1...").
gv...@gmail.com <gv...@gmail.com> #6
Facing the same issue. Without knowing the GS filename this entire API is useless for a GAE app. It's possible to upload data to GS but impossible to read/delete it. Is there any workaround for this issue?
bj...@gmail.com <bj...@gmail.com> #7
de...@gmail.com <de...@gmail.com> #8
I'm not sure the person posting the last comment actually read the issue. The problem is that we *only* have the blobkey. That's not enough for some use cases using GCS.
For example, it is impossible to add metadata headers to the uploaded files that are stored in GCS.
de...@gmail.com <de...@gmail.com> #9
I understand & acknowledge "it is impossible to add metadata headers to the uploaded files that are stored in GCS."
I was responding to comment #5 which was "Its possible to upload data to GS but impossible to read/delete them."
ro...@gmail.com <ro...@gmail.com> #10
Ok, that makes sense and I see where you are coming from.
A few clarifications... Just trying to help out the next guy that hits this, since I've since moved from GAE to AWS...
1. The documentation is unclear about whether calling delete on a blobkey deletes the key/pointer or the actual file in GCS. For some reason I assumed that the delete removed the blobkey, but not the actual file.
2. When our application was written to work with GCS, we had a custom uploader that just put files in GCS based on a naming convention into the filestore. So to read the file we didn't have to know a "key" because it was just a user's ID and some other derived info.
When we switched to the official uploader that always returned a blobkey, we couldn't do that anymore. Without changing our code to store the blobkey we would have no way to read it. Perhaps the poster from comment #5 means something similar? Just a guess...
ka...@gmail.com <ka...@gmail.com> #11
I meant that it's impossible to access data that resides in GS (is it there?) via the GS REST API, since we don't know the data URL. I.e., if I upload data via the GAE API I'm forced to use only this API (e.g. blobstoreService.serve() to give blob data to the user, etc.).
However, this issue also adds a lot of confusion. Files are being uploaded to GS (I can see them in GS Manager) but also appear in the GAE app blobstore viewer. Some questions arise here:
- will I be charged twice, for GAE app storage and GS storage space?
- does it differ in any way from the case when I put data into GS via curl -X PUT http://.../bucket/file? I.e., does GS differ in implementation from Blobstore, or is it just rebranded storage with a new REST API and a different data namespace?
kc...@gmail.com <kc...@gmail.com> #12
With the 1.7.5 release, is there any way to set the filename when calling createUploadUrl(java.lang.String, com.google.appengine.api.blobstore.UploadOptions)?
Is it correct, then, that to do a named file upload I have to use the GCS REST API?
bi...@gmail.com <bi...@gmail.com> #14
[Comment deleted]
jo...@gmail.com <jo...@gmail.com> #16
I'm having the same issue with the Python Blobstore API and GCS. What I found is that using blobstore.create_upload_url('/foo/bar/', gs_bucket_name='foobucket') to generate an upload URL works just like using the blobstore alone: the data is uploaded to GCS, but it also leaves an entry that you can see in the Blob Viewer on the application dashboard. So the first question is: does Google charge for both objects (blob and GCS)?
Second, blobstore.create_gs_key('/gs/bucket/filename') returns a string. I guess it's a key associated with the file existing on GCS, but it's not documented how to use it. At first I thought it was used to retrieve/delete objects in GCS, but it's not! I tried blobstore.delete('key') to delete an object in GCS and it doesn't do anything. Meanwhile, when using get_uploads('fieldname')[0] to retrieve a BlobInfo from the form, you can get a BlobKey from it, e.g.:
uploaded_file = self.get_uploads('fieldname')[0]
file_blobkey = uploaded_file.key()
Then I can use this blobkey to delete the object in both the Blob Viewer and GCS. So the second question is: what does create_gs_key() do? How are we supposed to use it?
The third question is: how do I serve files from GCS through App Engine without using App Engine's bandwidth? I know ImageService can serve images; does it work for pdf/csv/word etc. as well?
Need help here!
Thanks
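For later readers, a hedged Python sketch pulling these pieces together (the bucket and object names are made up; note that create_gs_key maps a known /gs/... path *to* an encoded key, and nothing documented goes the other way, which is this issue's complaint):
from google.appengine.ext import blobstore
# Uploads through this URL land in GCS but also show up in the Blob Viewer:
upload_url = blobstore.create_upload_url('/foo/bar/', gs_bucket_name='foobucket')
# An object you created yourself under a known name can be read via its key:
gs_key = blobstore.create_gs_key('/gs/foobucket/known-name.pdf')
data = blobstore.BlobReader(gs_key).read()
# Deleting by the BlobKey returned from get_uploads() removes the upload:
blobstore.delete(uploaded_file.key())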
jo...@gmail.com <jo...@gmail.com> #17
Is there an explanation of the rationale behind the 1000 file limit? I don't quite
understand it.
ma...@gmail.com <ma...@gmail.com> #18
This is so frickin' insanely bastard annoying. Starring.
gv...@gmail.com <gv...@gmail.com> #19
I'm attaching a file my_zipimport.py which provides a zip importer that I have tested
successfully with Rietveld and Django in the SDK as well as in production. The file
is open source, using the Apache 2.0 license.
I make no promises that the API will remain the same, but I expect that this will
show up in a future SDK. (Alas, it's too late for the next SDK release, which is
already in the middle of QA.)
Source code modifications:
import my_zipimport
my_zipimport.install()
sys.path.insert(0, 'django.zip')
CAVEAT: when using dev_appserver, somehow the default loader is tried first, and it
will find the copy of django 0.96 in the SDK even though the zip file is first on the
path. I'll have to figure out why that is; my work-around so far has been to remove,
rename or otherwise disable the <SDK>/lib/django/ directory.
To make a django.zip that fits in under 1 MB, I did the following in my Linux shell:
zip -q django.zip `find django -name .svn -prune -o -type f ! -name '*.pyc' ! -name '*.[pm]o' -print`
The find command skips .svn directories and .pyc files, but also .po and .mo files
which have something to do with internationalization and can apparently be missed.
The resulting django.zip is under 0.8 MB.
I'm sure there are other things you could remove (e.g. the Django admin app, most db
backends, etc.) but that's a separate project.
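A quick sanity check (a sketch) that the zip actually wins over the SDK's bundled copy, per the caveat above:
import sys
sys.path.insert(0, 'django.zip')
import django
# Should print a path inside django.zip; if it points into the SDK's
# lib/django directory instead, the default loader got there first.
print(django.__file__)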
da...@gmail.com <da...@gmail.com> #20
Is there any plan to remove the limit?
sl...@gmail.com <sl...@gmail.com> #21
They seem to be storing the uploaded files in BigTable, based on bits I've heard in
the developers' interviews. If so they may be running up against the 1000-record
limit on query results. If true, this would give a nice logical explanation for the
problem, but unfortunately suggests it would be difficult to raise.
The problem with zipped packages is that some packages use ``__file__`` extensively.
While Setuptools discourages this because it prevents zipping, zipped packages have
never been a requirement anywhere. So if App Engine indirectly requires large
packages to be zipped, it places a significant burden on developers of existing
packages, which should at least be acknowledged.
ma...@gmail.com <ma...@gmail.com> #22
@21
I use a ton of packages in apps that get zipped to be bundled with py2exe and I've never
seen a package misbehave because of that. So I think you overestimate how much of an
issue it is.
ch...@gmail.com <ch...@gmail.com> #23
@22 py2exe-produced packages can be temporarily unzipped, I think. appengine packages
can't.
ma...@gmail.com <ma...@gmail.com> #24
@23 yes they could but that never happens AFAICT. If you choose to zip the .dll /
.pyd files as well those would need to be unzipped, but that doesn't apply to
appengine anyway.
da...@gmail.com <da...@gmail.com> #25
Well, what if the custom python interpreter somehow implemented the zipimport module
seamlessly? That would make it much easier than having us have to zip up and package
all extra modules every release.
py...@gmail.com <py...@gmail.com> #26
Pylons support please.
gv...@gmail.com <gv...@gmail.com> #27
Here's a new version, named py_zipimport. The instructions are different:
import sys
import py_zipimport
sys.path.insert(0, 'django.zip') # or whatever
Please give it a try.
gv...@gmail.com <gv...@gmail.com> #28
PS. That still doesn't work in the SDK. Changes to the SDK are necessary to support
zipimport. Those will come in a future SDK version (the next SDK version is already
too far along in QA to include it).
sl...@gmail.com <sl...@gmail.com> #29
Would it be possible to post a patch for the SDK in the meantime?
gv...@gmail.com <gv...@gmail.com> #30
Sorry, no, I don't want to bypass our SDK QA process.
You don't need the zip import when using the SDK anyway -- it doesn't enforce the
limit on number of files.
gc...@gmail.com <gc...@gmail.com> #31
So if I understand that right I can't run the same application on the SDK and on app
engine if I want to use py_zipimport?
gv...@gmail.com <gv...@gmail.com> #32
@gcarothers: All you need to do is unzip the zipfile in your project when using the
SDK. And this is a temporary situation.
br...@gmail.com <br...@gmail.com> #33
My application needs more than 1000 files.
I tried the bulkload gag initially, but it was unusably slow. Think it would have
been over 8 hours to load my initial 72mb of data.
So... I planned to upload xml files and search them with BOSS (hopefully) instead...
Now I need to find a work around to my work around's work around.
I can understand a file size quota, but a file count quota?
gv...@gmail.com <gv...@gmail.com> #34
FWIW, zipimport support has been rolled out to production. SDK support will come
with the 1.1.3 SDK (really soon now).
hu...@gmail.com <hu...@gmail.com> #35
That is great.
I would also appreciate some instructions on this, when they are available.
gv...@gmail.com <gv...@gmail.com> #36
An article about how to use this is forthcoming. Please stand by.
ka...@gmail.com <ka...@gmail.com> #37
Does the file limit exist because of app versioning (e.g. 50 application versions
stored = 50000 files)?
If so, would it make sense to support unversioned files (icons, testfiles, docs,
junk, libraries) via the yaml-config?
gv...@gmail.com <gv...@gmail.com> #38
No, it exists because of the sheer number of apps. E.g. 1,000,000 apps ==
1,000,000,000 files.
no...@gmail.com <no...@gmail.com> #39
Would it be silly to suggest code just gets zipped automatically when placed on the appengine architecture? This
would certainly make it easier for the developers, although at a cost to Google's complexity possibly.
gv...@gmail.com <gv...@gmail.com> #40
No, the developer has to request zipping. Lots of code doesn't run correctly from
zip files, e.g. anything opening data files relative to __file__.
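A minimal example of the pattern meant here; it works from a normal package directory but breaks once the package is zipped, since open() cannot follow a path into an archive:
import os
# Inside a zip, __file__ becomes something like ".../library.zip/pkg/module.py",
# and open() on a path under the archive raises IOError.
data_path = os.path.join(os.path.dirname(__file__), 'templates', 'base.html')
html = open(data_path).read()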
no...@gmail.com <no...@gmail.com> #41
One more idea. I wonder how many of the files developers uploaded have the exact same checksum? It seems
like this could present one way to limit the total files uploaded as the upload mechanism could keep a global
checksum database and each time a file was already uploaded to the google infrastructure it could just symlink
it, instead of creating a new one.
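Purely as an illustration of the idea (this is not App Engine code; all names are invented):
import hashlib
class DedupStore(object):
    def __init__(self):
        self.by_digest = {}  # digest -> canonical stored copy
    def put(self, data):
        digest = hashlib.sha1(data).hexdigest()
        # Store the content only the first time it is seen; later uploads
        # just reference ("symlink") the existing copy.
        if digest not in self.by_digest:
            self.by_digest[digest] = data
        return digest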
[Deleted User] <[Deleted User]> #42
I believe that App Engine is already doing this -- at least within user accounts.
I created a staging app with the same code as my main app and no files were uploaded on initial update.
I don't know whether it reuses files across all applications, but I wouldn't be surprised if it did. However, even
when reused, those files count against your limit.
no...@gmail.com <no...@gmail.com> #43
Another obvious alternative to dealing with this file limit problem could be to do what Google is doing with
JQuery hosting. If they don't want to support every web framework under the sun, which is really the issue here
in some ways, they could have a common or 3rd party area.
You could probably satisfy 90% of people's file limit problems, I would guess, by just having a "semi-unofficial"
framework depot that developers could use. To prevent security issues the files uploaded could be managed by
the framework developers, or at the least, checksums could be used to compare the uploaded file against the file in
the framework developers' public repository.
sl...@gmail.com <sl...@gmail.com> #44
A shared depot like that would help a lot with the 1000-file limit. Of course there would be a problem of versioning, especially with
frameworks that are changing rapidly. But if App Engine could provide directories we
could add to our Python path to enable a certain framework version, that would solve
that.
gv...@gmail.com <gv...@gmail.com> #45
I have floated this idea internally before. It may happen, but there are other
priorities.
be...@gmail.com <be...@gmail.com> #46
@36 Guido, now that 1.1.3 is out, is there any documentation on including django.zip
and this zipserve I've been hearing about?
wh...@gmail.com <wh...@gmail.com> #47
It would be pretty awesome if GAE had a shared PyPI mirror. Then developers could just
add /pypi/somelibrary-someversion/ onto their path. Aside from likely being quite a
lot of work to set something like that up, maybe there are some reasons why this is a
silly idea?
There would be issues such as dealing with libraries that have C extensions (maybe if
PyPI required a GAE-safe-flag or something like that), and dealing with the
possibility of someone uploading something nasty onto PyPI in an attempt to inject
nasty code into apps.
de...@gmail.com <de...@gmail.com> #48
I just tried this out with CherryPy and Mako on SDK version 1.1.5 and it seems to
work just fine. You don't need Guido's py_zipimport.py file that is in comment #27.
Example code below:
import sys
sys.path.insert(0, 'cherrypy.zip')
sys.path.insert(0, 'mako.zip')
import cherrypy
from cherrypy import tools
from mako.template import Template
from mako.lookup import TemplateLookup
import wsgiref.handlers
... the rest of your code...
Worth repeating, I have not tested this thoroughly. I do note that CherryPy 3.1 does
sometimes load using __file__, mostly in the tests and tutorials, but also a few times
in the plugins file. Hopefully the tools are not affected. I'll update once a more
thorough test is done. Mako should be fine as-is.
gv...@gmail.com <gv...@gmail.com> #49
Yes, delagoya is right. Ever since 1.1.3 you don't need the py_zipimport.py any
more; this is all built in now. I'd remove it from this issue except I don't seem to
have that power.
no...@gmail.com <no...@gmail.com> #50
Google, thanks for working to fix web framework issues like this, despite some heavy whining from people
like myself. I appreciate it!
gv...@gmail.com <gv...@gmail.com> #51
Would people have a problem if I *removed* the two comments with obsolete copies of
my [py_]zipimport.py, to save future readers wasted time if they download one of
these without reading the whole thread?
bl...@gmail.com <bl...@gmail.com> #52
Some editing would probably be a good idea.
The best option might be to write an article for
http://code.google.com/appengine/articles/ about how to use zipped eggs on AppEngine,
then add links to your old comments.
gv...@gmail.com <gv...@gmail.com> #53
Unfortunately I cannot do any editing -- I can either delete comments 19 and 27
completely, or leave them unchanged.
An article is already there:
http://code.google.com/appengine/articles/django10_zipimport.html
Jo...@hotmail.com <Jo...@hotmail.com> #54
The suggestion in comment 52 sounds like the right thing to do to me.
de...@gmail.com <de...@gmail.com> #55
Well, good thing that article is so thorough, saves me the trouble of writing one up
on my new GAE focused blog (import shameless.plug ;) http://appmecha.wordpress.com )
One note, the fact that the article is Django focused may hide the fact that this is
a general solution for import of any libraries that an application may need (provided
you test that it does work with your library).
I created a recipe in the cookbook that is basically my test above and posted a link
back to the Django article.
http://appengine-cookbook.appspot.com/recipe/respect-the-file-quota-with-zipimport
ia...@gmail.com <ia...@gmail.com> #56
When using a large Javascript library all the static files also count against the
quota. For instance, Xinha (a WYSIWYG library) has 851 files in its core, plugins,
and modules (excluding docs, examples, etc). Obviously this leaves far too few files
for the application. Some of this can be saved with skip_files (though simply
getting down to 851 files requires skip_files) and combining Javascript, but there
are also valid reasons to want those files separate, and things like icons can't be
combined.
Supporting zipimport helps, but the limit remains very low for anyone using existing
libraries, and of course zipimport is not applicable to Javascript libraries.
no...@gmail.com <no...@gmail.com> #57
It seems like the "Google way", would be to use this:
http://code.google.com/apis/ajaxlibs/documentation/#googleDotLoad
But, this does create a bottleneck if someone wants to use something that isn't currently supported. It might be
nice to have a "Google Video" type system where users could upload their own libraries, and then the space
would still be conserved as something on the back end could do a checksum to detect if these files had already
been loaded.
gv...@gmail.com <gv...@gmail.com> #58
Again, how about zipserve?
Jo...@hotmail.com <Jo...@hotmail.com> #59
I'd guess what is meant by zipserve is
http://code.google.com/p/googleappengine/source/browse/trunk/google/appengine/ext/zipserve/__init__.py
That could do the trick, but I can find no reference to it in the docs, which would
likely explain why most of us don't seem to know about it.
If zipimport and zipserve are going to be the official workaround to the 1000 file
limit and that limit is not going to be removed it would be helpful if the visibility
of both zipimport and zipserve were increased. If the samples/tutorials included
their use then folks would be using them from the get go and there would be no need
to refactor later when one runs into the 1000 file limit.
gv...@gmail.com <gv...@gmail.com> #60
We will document zipserve -- it was an oversight that it wasn't documented before.
For now, the docstrings from
http://code.google.com/p/googleappengine/source/browse/trunk/google/appengine/ext/zipserve/__init__.py
should serve as plenty of documentation to get you started!
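From that docstring, the minimal wiring looks roughly like this (a hedged sketch; defer to the docstring for the authoritative usage). Requests for /images/foo.png would be served from the member foo.png of images.zip in the app directory:
from google.appengine.ext import webapp
from google.appengine.ext import zipserve
from google.appengine.ext.webapp.util import run_wsgi_app
application = webapp.WSGIApplication([
    # make_zip_handler returns a handler class bound to the given zipfile.
    ('/images/(.*)', zipserve.make_zip_handler('images.zip')),
])
def main():
    run_wsgi_app(application)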
Jo...@hotmail.com <Jo...@hotmail.com> #61
Just saw this thread on the appengine group,
http://groups.google.com/group/google-appengine/browse_thread/thread/400c37cc773b9f46
The poster indicates that using zipimport puts in a floor of about 500ms on
request processing.
gv...@gmail.com <gv...@gmail.com> #62
Odd. It should only happen on the first request (for a specific process). I can
confirm that this is not a problem for Rietveld, for example -- so there must be
something in that app that keeps triggering the unzip.
je...@gmail.com <je...@gmail.com> #63
could someone confirm whether or not this 1000 blob limit actually exists? comment
#14 claims this was only a typo in the documentation. if so, this "issue" should be
deleted, no?
jo...@google.com <jo...@google.com> #64
The limit is 1000 code files and 1000 static files. Each file may be up to 10MB.
Also, the total size of all code files must remain under 150MB (there is no analogous
limit on static files).
Also, to briefly summarize, at launch we had very tight limits and no workarounds
(i.e. the limits were 1MB per file). Then, in July-September we rolled out zipimport
and zipserve, which provide very nice workarounds by zipping small code or static
files. Then, last week, we raised the per-file size limit from 1MB to 10MB.
At this point it would be worthwhile to reflect on this issue and determine what is
the most salient pain point to attack next. Do some people still need the number of
code or static files to increase? What new limit would make your app work? Is there
something else that you would really like instead to get your app to work?
pr...@gmail.com <pr...@gmail.com> #65
Having to zip up stuff can still be problematic for libraries that do not support
it. Stuff big enough to need zipping in the first place generally uses a large
number of asset files, not strictly code files.
Either removing or upping the code file limit to around 10,000 would alleviate most
any issue is my feeling. Much like the 1MB size limit was a bit on the short side, so
is the 1,000 code file limit. It doesn't take more than a couple decently sized libraries
to break it.
le...@gmail.com <le...@gmail.com> #66
Well considering so many projects seem to have to take special code paths just for
appengine, making them also try to squeeze their frameworks down below 1000 files is
asking a lot in my opinion.
To alleviate this issue would be meeting them halfway on a lot of things, making
appengine that much less of a hassle to get into, and more worthwhile.
sl...@gmail.com <sl...@gmail.com> #67
This bug/feature affects the java implementation as well so when using smartgwt I had
to create a java version of the zipserve utility.
I'd rather prefer to have the file limit raised though (smartgwt has about 1500
static files).
ki...@gmail.com <ki...@gmail.com> #68
users/clients in these countries are not able to access your GAE services with your
own domain name.
ew...@gmail.com <ew...@gmail.com> #69
The limit of 1000 files was really a very bad idea. When does it get changed? How do
I get notified?
ar...@gmail.com <ar...@gmail.com> #70
I can see this is still an issue, as it was posted more than a year ago. This is
affecting my program too. I am using Google App Engine with Java, and it is a shame
that we can't upload past that limit. Also, the solution using ZipPackages doesn't
work for Java, or at least it didn't work when I tried it:
http://code.google.com/p/app-engine-patch/wiki/ZipPackages . Either fix this issue by
providing some workaround or just let us upload WAR files directly.
de...@gmail.com <de...@gmail.com> #71
Such a limit on the number of files is obsolete nowadays.
It creates extra complexity and reduces library code reuse.
ar...@gmail.com <ar...@gmail.com> #72
I found out the source of the problem. It is not in a lib but the number of images and
other files the application is trying to upload. If it were saved as a WAR file this
problem would have been bypassed. So, does anybody know of a workaround or fix?
ye...@gmail.com <ye...@gmail.com> #73
Hi,
This limitation seems a bit unfair to Java and especially GWT. When compiled, each
class, including top-level, nested, local and anonymous, gets its own .class file,
which ends up in the WEB-INF/classes folder. I only have 433 Java files in my source
tree, but they become 830 class files in WEB-INF/classes. This is not so much the
fault of the server-side code. It is the GWT code, which, like with many
component-oriented GUI frameworks, relies a lot on anonymous classes used as event
handlers and such.
I think WEB-INF/classes directory should count as one file. Google Eclipse plugin
could simply jar it up and send it over to app-engine as a single jar file. That
would solve the problem.
I have submitted a specific request regarding the WEB-INF/classes folder as a
separate issue. Feel free to comment and star it:
http://code.google.com/p/googleappengine/issues/detail?id=1579
Thanks,
Yegor
ne...@gmail.com <ne...@gmail.com> #74
This is indeed a major issue for GWT apps.
What is the workaround, please?
ye...@gmail.com <ye...@gmail.com> #75
Hi, nexource,
The workaround is to use a build script and command-line tools instead of the Eclipse
plugin. Instruct your build script to jar all files from WEB-INF/classes and put the
jar file into WEB-INF/lib, then delete WEB-INF/classes. Here's an Ant script snippet
for you:
<jar destfile="${dist.dir}/WEB-INF/lib/yourappname.jar">
    <fileset dir="${dist.dir}/WEB-INF/classes">
        <include name="**/*" />
    </fileset>
</jar>
<delete dir="${dist.dir}/WEB-INF/classes" />
My ${dist.dir} is a copy of the "war" directory so the script leaves the original
files untouched, otherwise you may run into complications with Eclipse. Eclipse does
not play well with file-system changes made by external tools, such as Ant.
You will find a good tutorial here:
http://code.google.com/appengine/docs/java/tools/ant.html
Hope this helps.
Yegor
ne...@gmail.com <ne...@gmail.com> #76
Hi Yegor,
Thanks for the reply!
What about when you have a lot of resources as GWT-compiled files or small images?
Cheers
Rudi
ye...@gmail.com <ye...@gmail.com> #77
nexource,
Usually GWT-compiler does not produce a lot of files (unless you are localizing for
all countries and dialects in the world). It's the Java compiler that will produce a
lot of files. The latter is solved by jarring everything in WEB-INF/classes into a
single archive. As for lots of small images, ImageBundle should help you solve it,
explanation here:
http://tinyurl.com/oqayat
Yegor
ab...@gmail.com <ab...@gmail.com> #78
Hi,
I am using tatami package in my application which provides wrappers for dojo.
The js files of dojo are created in the "projectroot/war/projectname/dojo" folder
during compilation, and they number more than 1000. While uploading with the Eclipse plugin it
says:
java.io.IOException: Error posting to URL:
http://appengine.google.com/api/appversion/addblob?path=__static__%2Fboxadder%2Fdijit%2FEditor.js&app_id=b-tracker&version=1&
400 Bad Request
Max number of files and blobs is 1000
I zipped the dojo folder, kept the zip in /projectroot/war/projectname/ (the
location of the dojo directory), and removed the dojo folder, as described at
http://code.google.com/p/app-engine-patch/wiki/ZipPackages . After this the application
gets uploaded to App Engine, as the number of files is now less than 1000, but then the
application stops working as it is unable to find the required js files.
Does the ZipPackages patch work for Java projects as well? Is there any workaround or
solution for this problem?
na...@gmail.com <na...@gmail.com> #79
For anyone using popular JS frameworks, Google does host them on their ajaxapis
server. You can use google.load if you want (apparently can do some geolocation
optimization), however you can access directly using urls described here:
http://code.google.com/apis/ajaxlibs/documentation/#AjaxLibraries
That way you don't have to host these in your app (and they come from a pretty fast
CDN too!) The only issue you might run into is that for frameworks like dojo and
others, behaviour can be an issue with cross-domain javascript.
Another thing is to configure the "static-files" directive in the appengine-web
deployment descriptors. Most people don't know, but typically files are double
counted as static and resource files (especially bad when you have a lot of
javascript/css/html resources). Typically that saves a ton when deploying your app
and running into the file limits.
Going ahead, there are various things that I think should be done. I'm not
completely against the limits (it is a shared resource architecture, so you can't
kill them for trying to limit stuff for performance reasons). However, I think that
all types of files should be handled separately and have their own sets of limits.
Since they are handled separately when serving, I think this would work out a bit
better. Also, most people exceed these limits because they are dealing with extra static
resources and framework files for the most part. Google can already deal with
javascript frameworks using the Ajax Libraries API, maybe they can come with a
solution for others that use frameworks like GWT and Django so they don't have to
host those along with their app.
iq...@gmail.com <iq...@gmail.com> #80
Please solve it.
gv...@gmail.com <gv...@gmail.com> #81
I expect this limit will increase to 3000 soon.
ma...@gmail.com <ma...@gmail.com> #82
@gvanrossum: we all hope very soon!
a....@gmail.com <a....@gmail.com> #83
...I just uploaded 1350 files... Anyone else notice this?
...Mind you, that's just what Eclipse tells me it has cloned; I set up some filtering
to tell the plugin to NOT include .client. packages, AND that
pre-upload-compile-to-jar-with-ant trick {still 1300 AFTER compiling my more static
modules}.
Perhaps someone else can try a few thousand files to see if we can finally loosen our
belts and stop worrying about how many small files and inline classes we use...
wk...@gmail.com <wk...@gmail.com> #84
Cool, this is what I got with app-engine-patch when uploading lots of files:
Max number of files and blobs is 3000.
gv...@gmail.com <gv...@gmail.com> #85
All, the combined limit on static and code files has indeed increased to 3000. There
is no plan to increase it further. The following limits are also still in place:
150 MB max combined size of code files
10 MB max individual size of any file
1000 files max per directory (not counting files in subdirectories)
In the quoted message, "blob" refers to static files; "file" refers to code files.
ki...@google.com <ki...@google.com> #86
Adding descriptive text to this in the hope that Google search picks it up: What is
the maximum number of static files that you can upload to a Google AppEngine
application? As of Jan 2010, this is 3000 -- see gvanrossum's breakdown on this
(comment 86)
ga...@gmail.com <ga...@gmail.com> #87
Taking http://code.google.com/p/googleappengine/issues/detail?id=161#c68 as the base, I have been able to put up SmartGWT on GAE at
http://mastergaurav.appspot.com
Thanks slindhom!
I have made one minor change in the servlet -- look for "If-Modified-Since" header and return a 304.
According to W3C, at http://www.w3.org/Protocols/HTTP/HTRQ_Headers.html#if-modified-since
- if the requested document has not changed since the time specified in this field the document will not be sent, but instead a Not Modified 304 reply.
The servlet does not send any content in case of a 304-response.
-Gaurav
http://www.mastergaurav.com
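The change described above is to a Java servlet; here is the same 304 logic sketched in Python/webapp for consistency with the other snippets in this thread (LAST_MODIFIED is an assumed module-level timestamp, e.g. recorded at deploy time):
import email.utils
from google.appengine.ext import webapp
class ZipStaticHandler(webapp.RequestHandler):
    def get(self, name):
        ims = self.request.headers.get('If-Modified-Since')
        parsed = email.utils.parsedate_tz(ims) if ims else None
        if parsed and email.utils.mktime_tz(parsed) >= LAST_MODIFIED:
            # Document unchanged since the client's copy: 304, no body.
            self.response.set_status(304)
            return
        # ... otherwise serve the zip member and set a Last-Modified header ...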
ga...@gmail.com <ga...@gmail.com> #88
se...@gmail.com <se...@gmail.com> #90
Please allow reading static files larger than 1 MB. Read access without write access would be enough; it would save us having to write a utility to read from the blobstore.
wl...@gmail.com <wl...@gmail.com> #91
It would have been nice if appcfg would just abort if I'm uploading more than 3000 files, instead of processing for half an hour and then giving me the over-limit error.
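In the meantime, a hedged local workaround: count the files before invoking appcfg ('myapp' and the limit value are assumptions; adjust to your project and the current quota):
import os
import sys
LIMIT = 3000  # combined static + code file limit discussed in this thread
total = sum(len(files) for _, _, files in os.walk('myapp'))
if total > LIMIT:
    sys.exit('Aborting: %d files exceeds the %d-file limit' % (total, LIMIT))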
je...@gmail.com <je...@gmail.com> #92
Was this limit lowered back to 1000? I have 1014 files and I keep getting an error stating "backend null" (nice error descriptions!)
I think I'll be ditching App Engine. It was nice & easy deploying right from Eclipse, but this file limitation just leaves me with no choice.
sc...@google.com <sc...@google.com> #93
I don't think the error message you are getting is related to the number of files you have.
bo...@gmail.com <bo...@gmail.com> #94
Suppose I am working on an application where a massive number of pictures will be coming into my app and need to be stored for further use. Does this mean that there won't be a limit on how many images can be stored in the images directory, or does it mean that once my limit of 3000 is reached, no more images can be uploaded?
And if that's the case, what's the alternative: use Google Cloud Storage?
mc...@gmail.com <mc...@gmail.com> #95
I was trying to deploy my WebApp today on Google App Engine using Eclipse.
I got an error saying:
"Max number of files and blobs is 10000.
See the deployment console for more details"
Really?
My WebApp contains: 826 Files, 106 Folders
What are you Googlers on about with your "Max number of files and blobs is 10000"? I think it is time to ditch Google App Engine :)
[Deleted User] <[Deleted User]> #96
Hi,
Did you find any solution #96?
I'm in the same situation.
ch...@gmail.com <ch...@gmail.com> #97
I was able to reduce the number of files by editing the app.yaml file to include the supported libraries rather than uploading these files such as Jinja2 and markupsafe. There is a list of supported libraries on appengine's site:
https://developers.google.com/appengine/docs/python/tools/libraries27
I plan to move static files to another server such as AWS S3. I'm still looking for more "creative" ways to reduce the number of files.
Example of the edits on app.yaml file:
libraries:
- name: PIL
version: latest
- name: webob
version: latest
- name: webapp2
version: "2.5.2"
- name: jinja2
version: latest
- name: markupsafe
version: latest
er...@gmail.com <er...@gmail.com> #98
I am using PHP on GAE and this 10000-file limit is not workable for most PHP apps. It is normal for PHP apps to have a lot of files and to exceed the 10000 limit. Not even standard software like WordPress and Drupal can be uploaded. Please raise this limit again.
wi...@gmail.com <wi...@gmail.com> #99
appcfg.py: error: Error parsing C:\newpro\app.yaml: Found more than 100 URLMap entries in application configuration
in "C:\newpro\app.yaml"
I am facing this error. Please suggest a fix.
in "C:\newpro\app.yaml"
i have faced this error. please give any suggesion
de...@tyo.com.au <de...@tyo.com.au> #100
[Comment deleted]
be...@rogmansmedia.nl <be...@rogmansmedia.nl> #101
Trying to run a PHP app on GAE: cannot deploy because I have too many files.
I removed the *Google API Client* from composer, good for *4500 files*.
So come on Google, raise that limit today.
Description
2008-04-11 22:13:46,924 ERROR __init__.py:1294 An unexpected error
occurred. Aborting.
Rolling back the update.
Error 400: --- begin server output ---
Max number of files and blobs is 1000.
It doesn't happen every time, though, only about every other time.
Currently my application just reached 1001 files. This is about a dozen
libraries, so if there really is some limit at 1000 files it really needs
to be raised.