What steps will reproduce the problem?
1. Run the attached file minimal.py on a system with PyV8.
2. See the output - in my case there's ~300 MB of additional memory used after every test run.
{{{
# Excerpt from the attached minimal.py (imports added here for context;
# log_memory_usage and TestPython are defined elsewhere in that file).
import gc
import unittest

from PyV8 import JSContext, JSEngine


class TestMemoryWithJSContext(unittest.TestCase):
    def test_python_memory_management(self):
        def inner():
            with JSContext() as ctx:
                log_memory_usage("before empty evals")
                for index1 in range(1000):
                    for index2 in range(10000):
                        ctx.eval("")
                log_memory_usage("after empty evals")
                JSEngine.collect()

        log_memory_usage("before JSContext memory tests")
        inner()
        JSEngine.collect()
        gc.collect()
        JSEngine.collect()
        gc.collect()
        log_memory_usage("after JSContext memory tests and gc")
        print "Py gc.garbage:", gc.garbage


class CardEngineTestSuite(unittest.TestSuite):
    def __init__(self):
        super(CardEngineTestSuite, self).__init__()
        self.addTests(unittest.TestLoader().loadTestsFromTestCase(TestPython))
        self.addTests(unittest.TestLoader().loadTestsFromTestCase(TestMemoryWithJSContext))
}}}
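The helper log_memory_usage is defined in the attached minimal.py and is not shown in the excerpt above. A rough sketch of what such a helper can look like on Linux (reading VmRSS from /proc - an assumption on my part, the attachment's implementation may differ):

{{{
# Hypothetical sketch of the log_memory_usage helper used above; the actual
# implementation lives in the attached minimal.py and may differ.
import logging
import os

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")


def log_memory_usage(label):
    # Read the current resident set size from /proc (Linux only).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                rss_kb = int(line.split()[1])  # value is reported in kB
                break
    logging.info("%s process %d now uses %.1f MB resident",
                 label, os.getpid(), rss_kb / 1024.0)
}}}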
What is the expected output?
I'd like to see a way to actually collect garbage in PyV8. Nothing has worked for me so far. As you can see, of the ~320 MB of garbage that is generated (noteworthy: only by empty evals, ctx.eval("")), only ~20 MB gets collected, even after two consecutive calls of JSEngine.collect and gc.collect, both while ctx is still reachable and after it is no longer reachable. Note that the attached file also includes a test of the Python garbage collector, which appears to work correctly when not interacting with PyV8.
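For reference, the PyV8-free part of that comparison can be as simple as the following sketch (my approximation, not the attachment's actual TestPython case): allocate a large amount of plain Python garbage, collect, and observe that resident memory returns close to the baseline.

{{{
# Sketch of a PyV8-free garbage test, approximating the TestPython case that
# the suite references; the attached minimal.py may differ in detail.
import gc
import unittest


class TestPython(unittest.TestCase):
    def test_pure_python_memory_management(self):
        def inner():
            # Allocate on the order of 100 MB of short-lived Python objects.
            junk = [["x" * 1000] * 10 for _ in range(100000)]
            return len(junk)

        inner()
        gc.collect()
        # With no extension objects involved, resident memory drops back close
        # to the baseline after collection (modulo allocator caching).
}}}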
What do you see instead?

{{{
>> python minimal.py
...
python minimal.py
...
.2014-03-28 21:41:34,198 before JSContext memory tests process 110 now uses 14.1 MB resident
2014-03-28 21:41:34,199 before empty evals process 110 now uses 14.4 MB resident
2014-03-28 21:41:55,513 after empty evals process 110 now uses 348.8 MB resident
2014-03-28 21:41:56,926 after JSContext memory tests and gc process 110 now uses 322.3 MB resident
Py gc.garbage: []
.
Ran 2 tests in 26.838s

OK
...
.2014-03-28 21:42:01,103 before JSContext memory tests process 110 now uses 322.5 MB resident
2014-03-28 21:42:01,104 before empty evals process 110 now uses 322.5 MB resident
2014-03-28 21:42:25,714 after empty evals process 110 now uses 636.5 MB resident
2014-03-28 21:42:28,459 after JSContext memory tests and gc process 110 now uses 629.3 MB resident
Py gc.garbage: []
.
Ran 2 tests in 31.532s

OK
}}}
What version of the product are you using? On what operating system?

PyV8 revision 557 built using setup.py, with V8 revision 19632; Ubuntu 12.04, running inside a Docker container.
- minimal.py 2.39KB
Comment #1
Posted on Mar 29, 2014 by Happy Bear

Update: I remember seeing warnings about dtrace being missing when I built V8. Could that make the difference? I've found a commit by the maintainer of a pyv8 binary distribution that seems to link the two issues together: https://github.com/taguchimail/pyv8-linux-x64/commit/aaae4b4c2cac88bc78a87e711c818bed7a6bccd6.
Comment #2
Posted on Mar 29, 2014 by Happy Bear

I've now tested with this binary distribution - same effect. My workaround for now is to monitor the PyV8 process and restart it at the OS level when its memory consumption grows too high.
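A minimal sketch of that kind of OS-level watchdog (my illustration only: "worker.py" is a placeholder for the real entry point, and the /proc-based RSS check assumes Linux):

{{{
# Hypothetical supervisor: restart the PyV8 worker when its resident memory
# exceeds a threshold. "worker.py" stands in for the actual application.
import subprocess
import time

RSS_LIMIT_MB = 500
CHECK_INTERVAL = 10  # seconds


def resident_mb(pid):
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # kB -> MB
    return 0.0


while True:
    worker = subprocess.Popen(["python", "worker.py"])
    while worker.poll() is None:
        time.sleep(CHECK_INTERVAL)
        if resident_mb(worker.pid) > RSS_LIMIT_MB:
            worker.terminate()  # the outer loop restarts it
            worker.wait()
            break
}}}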
Comment #3
Posted on Apr 2, 2014 by Happy Bear

Since taguchimail's binary repo comes with a 'stable' branch (PyV8 429 / V8 r10452), I had to try that out as well. It is an improvement in that the empty evals no longer show a leak - the whole application still leaks, however, so I'm now trying to isolate new test cases. In other words, the behavior reported above must have been introduced somewhere between revisions 429 and 557.
Comment #5
Posted on May 4, 2014 by Happy Bear

Comment deleted
Comment #7
Posted on Jun 10, 2014 by Happy Bear

Comment deleted
Comment #8
Posted on Jun 10, 2014 by Happy Bear

Update: Wrapping the JSContext inside the wrapper posted below (pyv8_wrapper.py) and using PyV8 r428 / V8 r10452 solves the memory leak issues for me. I have to say, though, that garbage collection is relatively slow this way, so I have to run it in a background process.
- pyv8_wrapper.py 3.59KB
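The attached pyv8_wrapper.py is not reproduced here. Purely as an illustration of the approach described in comment #8 - a fresh JSContext per evaluation plus periodic explicit collection - a sketch could look like the following; the class name, the collect_every parameter, and the overall structure are my assumptions, not the attachment's actual code.

{{{
# Illustrative only - not the attached pyv8_wrapper.py. The idea: use a fresh
# JSContext per evaluation and trigger V8/CPython collection periodically.
import gc

from PyV8 import JSContext, JSEngine


class JSContextWrapper(object):
    def __init__(self, collect_every=100):
        self.collect_every = collect_every
        self._evals = 0

    def eval(self, source):
        # A fresh context per evaluation, so nothing accumulates in a single
        # long-lived context.
        with JSContext() as ctx:
            result = ctx.eval(source)
        self._evals += 1
        if self._evals % self.collect_every == 0:
            # Explicit collection is relatively slow, which is why the author
            # offloads this step to a background process in his setup.
            JSEngine.collect()
            gc.collect()
        return result
}}}

Usage would then be along the lines of wrapper = JSContextWrapper(); wrapper.eval("1 + 1").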
Status: New
Labels:
Type-Defect
Priority-Medium