It's well past time that I admitted something to myself: I am no longer actively maintaining any of my personal open-source projects.
As I was staring at my inbox this morning, noticing that it was full of GitHub issue reports, thinking "I should really make time to respond to those", and then feeling ashamed that some are now several months old, I came to a surprising realisation – it's not that I can't make time to maintain those projects these days, it's that I no longer want to. I'm not "busy with family stuff" like I've been in the habit of telling myself, and I won't "get to that sometime soon". I'm getting my software fix on the job and spending my personal time on other things, and I'm surprised to find myself OK with that.
While it was a lot of fun to see a web-based python interpreter beat my system python on a single carefully-tuned benchmark, that result obviously didn't say much about the usefulness of PyPy.js for any real-world applications. I'm keen to find out whether the web can support dynamic language interpreters for general-purpose use in a way that's truly competitive with a native environment.
Inspired by the PyPy speed center and the fine Mozilla tradition of publicly visualising performance metrics, I've been working on a benchmark suite and metrics-tracking site for PyPy.js. The initial version is finally live:
TL;DR: not really, not yet – but we're tracking slowly towards that goal.
Alternate title: reduce your compressed file size with this one weird trick!
The obvious approach is to reach for a higher-performance compression algorithm such as bzip2 or LZMA. But these algorithms tend to decompress slowly and are not generally supported in today's web browsers. For shipping compressed content on the web today, gzip is the only game in town.
So can we do better while staying within the confines of gzip?
OK OK, I couldn't resist that title but it probably goes a bit far. Let me try for a little more nuance:
PyPy.js: Now faster than CPython, on a single carefully-tuned benchmark, after JIT warmup.
It has been the better part of a year since I first started hacking on PyPy.js, an experiment in bringing a fast and compliant python interpreter to the web. I've been pretty quiet during that time but have certainly been keeping busy. Some of the big changes since my previous update include:
- An asmjs-to-python converter, so that PyPy's comprehensive JIT test suite can be run against the asmjs backend.
- Some new optimizations in the emscripten compiler, which greatly reduce compiled code size.
- A basic interactive console, so you can try PyPy.js straight from your browser.
- And even the discovery of an apparent bug in an LLVM optimization pass.
- PyPy's powerful just-in-time compiler, which can optimize the hot loops of your program into efficient native code.
By translating the PyPy interpreter into asm.js code, and by having its JIT backend emit specialized asm.js code at runtime, it should theoretically be possible to have an in-browser Python implementation whose hot loops perform within a factor of two of native code.
I'm excited to report a small but important milestone on the road to making this a reality.
It's certainly not a full Python interpreter, and it comes with many caveats and question-marks and todos, but I have been able to produce a simple demo interpreter, with JIT, that approaches that theoretical factor-of-two performance relative to native code under some circumstances. There's a long way to go, but this seems like a very promising start.
TL;DR? Feel free to jump straight to the important graph.