What steps will reproduce the problem?
1. Start a master and two volume servers:
weed master -defaultReplicationType=001
weed volume -port=8081 -dir=/data/data/1 -max=100 -mserver="localhost:9333" -ip="localhost" -publicUrl="localhost:8080"
weed volume -port=8082 -dir=/data/data/2 -max=100 -mserver="localhost:9333" -ip="localhost" -publicUrl="localhost:8081"
2. Save images of different sizes, assigning file ids with curl http://<host>:<port>/dir/assign?count=3
3. After some time, the original image or a thumbnail goes missing.
Why? Is my replication strategy setting wrong? How can I solve this problem?
What version of the product are you using? On what operating system?
CentOS 6, weed-fs 0.42
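For reference, the assign-then-upload flow from step 2 can be sketched as below. The host/port values, file names, and the crude JSON parsing are illustrative assumptions, not taken from the report; weed-fs's /dir/assign?count=3 reserves one fid plus two "_N" variants for related files.

```shell
# Hedged sketch of the assign-then-upload flow (filenames are hypothetical).
# Ask the master for a file id, reserving 3 slots (original + two thumbnails).
ASSIGN=$(curl -s "http://localhost:9333/dir/assign?count=3")
# Crude extraction of the "fid" field from the JSON reply
# (assumes no embedded quotes; a real client should use a JSON parser).
FID=$(echo "$ASSIGN" | sed 's/.*"fid":"\([^"]*\)".*/\1/')
# Upload the original to the assigned fid, thumbnails to fid_1 and fid_2.
curl -F file=@original.jpg "http://localhost:8080/${FID}"
curl -F file=@thumb1.jpg   "http://localhost:8080/${FID}_1"
curl -F file=@thumb2.jpg   "http://localhost:8080/${FID}_2"
```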
Comment #1
Posted on Sep 26, 2013 by Grumpy Cat
What's the URL you use to upload the images? And after roughly how many files do you see this problem?
Comment #2
Posted on Sep 27, 2013 by Grumpy Dog
I upload the original image to a fid URL like http://localhost:8080/1,xxxxxx and the thumbnails to http://localhost:8080/1,xxxxx_(count). At first I can see all of these images, but after some time one or more of them is lost.
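The naming scheme described above can be checked with a sketch like the following, which probes each variant and reports its HTTP status. The fid value and host are placeholders, not the reporter's actual values.

```shell
# Hedged sketch: verify the original and each thumbnail variant still exist.
# "3,01637037d6" is a placeholder fid; substitute a real assigned fid.
FID="3,01637037d6"
for suffix in "" "_1" "_2"; do
  # -o /dev/null discards the body; -w prints only the status code.
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:8080/${FID}${suffix}")
  echo "${FID}${suffix}: HTTP ${code}"   # a 404 here would indicate a lost file
done
```

Running this periodically could help pin down when "some time" is, i.e. when the files actually disappear.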
Comment #3
Posted on Sep 27, 2013 by Grumpy Cat
Please be more specific so that I can reproduce it.
How much time is "some time"? Do you just leave the system idle, or do you keep uploading files to it? How many files are you dealing with? Are you sure you are not overwriting them?
If there are not too many files, you can attach the volume files to this bug.
Comment #4
Posted on Sep 27, 2013 by Grumpy Dog
Are you Chinese? Is there another way to contact you?
Comment #5
Posted on Sep 27, 2013 by Grumpy Cat
We are in different time zones. You can write in Chinese if you want; I can read it.
Comment #6
Posted on Oct 16, 2013 by Grumpy Cat
Please provide more details.
Comment #7
Posted on Oct 17, 2013 by Grumpy Dog
At present the problem still exists. I can't say exactly how much time "some time" is, but what is certain is that data is being lost more and more frequently. I will keep watching this problem and give you useful information as soon as possible.
Comment #8
Posted on Oct 31, 2013 by Grumpy Cat
Very likely this is related to issue 52, where a file may go missing when concurrency is high at specific moments.
Status: Duplicate
Labels:
Type-Defect
Priority-Medium