Greatest hits of PLDI 2018

I had a great time at PLDI 2018 last week. Here is my take on a few of the papers that stood out for me.

  • John Vilk presented a tool called BLeak for finding memory leaks in web applications. One might think that leak detection is unimportant in a garbage-collected setting, but Vilk explained that it is not: a data structure can grow out of control yet never be collected, because there is still a reference to it somewhere. Vilk’s tool is built on the observation that if a browsing session repeatedly returns to the same visual state, and if every time the browser returns to that state its total memory usage has increased, then there is probably a memory leak (a sketch of this heuristic appears after the list). Vilk used his tool in this way to identify several memory leaks in popular websites, including Airbnb. A limitation of the tool at the moment is that the user must supply a browsing session that repeatedly returns to the same visual state — it would be nice if the tool could discover such sessions by itself, for then it could truly be set loose on the world wide web!
  • Sam Baxter presented a tool called Stopify that automatically instruments JavaScript code with periodic “yield” instructions that return control to the browser. The downside is a reduction in performance; the upside is an improvement in responsiveness — no longer will a long-running JavaScript function hang your browser. Baxter proposed that his tool could be usefully employed by the various web services that allow programmers to write and test JavaScript programs online (or indeed programs in any language that compiles to JavaScript), to ensure that executing badly-written code does not end with the browser tab having to be killed. In the Q&A session, Baxter confirmed that there is indeed an interesting trade-off between inserting enough “yield” instructions to ensure responsiveness, and inserting so many that performance is ruined (a hand-written illustration of this trade-off appears after the list).
  • Ivan Radiček and Ke Wang both presented papers that tackle the problem of automatically generating feedback for students who have made an incorrect attempt at a programming exercise. This is an increasingly important problem now that online courses see a single instructor trying to teach thousands of students to program. Both papers hinge on the insight that in a large class, many of the submitted solutions will be correct. So, given an incorrect solution, the task is to find the syntactically closest correct solution and generate a “diff” that explains to the student how their attempt could be fixed. The task is complicated by the fact that each solution might use different names for the same variable, so any comparison between programs must be done up to alpha-equivalence (a toy version of this matching step appears after the list). These rather atypical papers generated quite a few questions from the audience, including whether the approach could be extended to give feedback not just about correctness, but also about good programming style. I wonder whether their technique could be adapted to become another useful tool in the teacher’s toolbelt: a plagiarism detector.
  • Stephen Dolan et al.’s paper tackled the question of what guarantees programming languages should make about programs that contain a data race, i.e. two accesses to the same memory location, at least one of them a write, without proper synchronisation in between (a concrete example of such a race appears after the list). Current languages such as C/C++ make no guarantees for racy programs. This is good for compiler-writers, because they only have to make their optimisations work correctly on race-free programs, but bad for programmers wanting to reason about their code: any data race — even in some library code written long ago — prevents reasoning about any other part of the program, which Dolan et al. argue is unsatisfactory. Their new semantics ensures that if some part of the program is free from races, then at least that part can be reasoned about. This is a pretty radical departure from how current languages work — perhaps too radical for the proposal to be adopted any time soon — but it seems eminently sensible, even if it comes at the cost of making the compiler-writers’ lives a tiny bit harder.
  • Thomas Wahl presented a paper with Peizun Liu about the static analysis of concurrent programs. Their paper introduced me to an interesting result established in 2000 by Ramalingam: that static analysis of a language featuring both recursive procedures and inter-thread synchronisation is undecidable — even in the absence of any program variables. Ramalingam showed this by reducing Emil Post’s “correspondence” problem, a classic undecidable problem, to the problem of deciding whether a given program point is reachable (a tiny instance of the correspondence problem is worked through after the list). Liu and Wahl’s paper proceeds to develop a static analysis for such programs all the same, rightly undeterred by the fact that their analysis can never handle all such programs.
  • Matthew Milano presented a language, called MixT, for programming distributed databases. The key idea is to allow each variable to be assigned a different consistency mode — whether that is a strong mode like “serializable” (in which database updates are immediately propagated to all replicas), a weak mode like “eventually consistent” (in which updates will eventually reach all replicas but possibly after a noticeable delay), or somewhere in between like “causally consistent”. Traditionally, consistency modes are associated with the whole database, and this makes it rather fiddly to program a system where some data needs strong consistency and other data only needs weak consistency. Milano et al.’s proposal reminded me a lot of the C++ memory model, in which each atomic operation can be associated with a different consistency mode (anywhere between “sequentially consistent” and “relaxed”). I wondered whether their work could be adapted to the C++ context, where it might ultimately lead to a more usable memory model for C++.
  • Magnus Själander presented SWOOP, an approach that combines compiler optimisation and architectural support to hide the latency of cache misses. The traditional way to hide memory latency is out-of-order execution, but this complicates processor design and hence makes processors more power-hungry. SWOOP’s approach, which applies to code in a loop, works as follows. Execution proceeds as normal until a cache miss occurs. Via architectural support, this triggers the program to jump into an alternative execution mode, in which the first parts of the subsequent loop iterations begin executing. A compilation step has already identified which part of each loop iteration is independent of previous iterations, so executing ahead like this does not actually involve speculation — that code was going to execute anyway (a structural sketch of this loop-splitting appears after the list). Finally, the access to main memory completes, and the remaining parts of each partially-executed loop iteration are executed.
  • David Koeplinger presented Spatial, a new language (and associated compiler) for designing FPGA-based accelerators. The motivation is that hardware languages like Verilog are too low-level for productive coding, while high-level languages like C or OpenCL do not offer the right abstractions (and hence require clumsy pragmas all over the place). Spatial is essentially a high-level language, in that it supports structured programming (nested loops, etc.) and does not expose such low-level details as clock ticks. However, unlike C/OpenCL it gives programmers explicit control over where each variable should live (register, on-chip RAM, off-chip RAM, etc.). Moreover, it has built-in syntax for design space exploration, which means that the compiler will automatically explore a range of values for certain constants in the program, to discover which combination leads to the best-performing hardware (the idea is sketched after the list). Spatial is clearly a promising language, as evidenced by its impressive speedups over tools like SDAccel. Personally, I would be interested to understand whether the various features of the Spatial language can be teased apart, and their impact on performance measured separately.
  • Kostas Ferles presented a tool called Expresso, which takes multi-threaded code that uses implicit signalling between threads and turns it into equivalent code that uses explicit signalling. To expand: implicit signalling is where code is written using constructs like critical regions, which simply mark “this piece of code must be executed in isolation”; the runtime is responsible for waking up other threads after a critical region has finished. With explicit signalling, the programmer writes the wake-up code themselves. Explicit signalling can be more performant, because the programmer can invoke domain-specific knowledge to decide whether it is better to wake up just one specific thread or all of them, and can time the wake-up at just the right moment. However, it is error-prone. Expresso thus aims to offer the best of both worlds: the programming ease of implicit signalling and the performance of explicit signalling (the contrast between the two styles is sketched after the list). I wonder whether Expresso could be adapted from a tool that generates new explicit-signalling code into one that verifies the correctness of existing explicit-signalling code.
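
A few of these ideas are concrete enough to sketch in code, so here are some heavily simplified illustrations, all in TypeScript. First, the growth heuristic behind BLeak-style leak detection. This is not BLeak’s actual implementation; the runRound and measureHeapBytes hooks are hypothetical placeholders for driving the browser once around the looping session and sampling its memory usage.

```ts
// Sketch of a BLeak-style growth heuristic (illustrative, not BLeak's code).
// `runRound` and `measureHeapBytes` are assumed, user-supplied hooks.
async function detectLeak(
  runRound: () => Promise<void>,
  measureHeapBytes: () => Promise<number>,
  rounds = 8,
): Promise<boolean> {
  const samples: number[] = [];
  for (let i = 0; i < rounds; i++) {
    await runRound();                        // return to the same visual state
    samples.push(await measureHeapBytes());  // sample memory at that state
  }
  // If memory has grown on every visit to the same state, suspect a leak.
  return samples.every((s, i) => i === 0 || s > samples[i - 1]);
}

// Toy usage with stub hooks: pretend each round leaks about 1 MB.
let fakeHeap = 50_000_000;
detectLeak(
  async () => { fakeHeap += 1_000_000; },
  async () => fakeHeap,
).then((leaky) => console.log(leaky ? "probable leak" : "looks stable"));
```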
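On Stopify: the real tool performs the transformation automatically (and far more generally, by compiling to continuations), but its effect can be pictured as the hand-written rewrite below. The YIELD_EVERY constant is invented; choosing its value embodies exactly the trade-off Baxter described.

```ts
// Before: a tight loop like this blocks the browser/event loop until done.
function sumBlocking(n: number): number {
  let acc = 0;
  for (let i = 0; i < n; i++) acc += i;
  return acc;
}

// After: the same loop with periodic yield points. Every YIELD_EVERY
// iterations (an arbitrary constant), control returns to the event loop,
// keeping the page responsive at some cost in throughput.
const YIELD_EVERY = 100_000;

async function sumResponsive(n: number): Promise<number> {
  let acc = 0;
  for (let i = 0; i < n; i++) {
    acc += i;
    if (i % YIELD_EVERY === 0) {
      await new Promise<void>((resolve) => setTimeout(resolve, 0)); // yield
    }
  }
  return acc;
}

sumResponsive(1_000_000).then((s) => console.log("sum:", s));
```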
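On the feedback-generation papers: a toy version of the matching step. The real systems compare richer program representations, but the idea survives miniaturisation: alpha-normalise identifier names so that alpha-equivalent programs become textually identical, then pick the correct solution at the smallest token edit distance. The example programs at the bottom are invented.

```ts
const KEYWORDS = new Set(["function", "return", "if", "else", "for", "let", "const", "while"]);

// Alpha-normalise: rename identifiers to v0, v1, ... in order of first use,
// so solutions differing only in variable names compare equal.
function normalize(src: string): string[] {
  const names = new Map<string, string>();
  return (src.match(/[A-Za-z_]\w*|\S/g) ?? []).map((tok) => {
    if (!/^[A-Za-z_]/.test(tok) || KEYWORDS.has(tok)) return tok;
    if (!names.has(tok)) names.set(tok, `v${names.size}`);
    return names.get(tok)!;
  });
}

// Standard Levenshtein distance over token sequences.
function editDistance(a: string[], b: string[]): number {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      d[i][j] = Math.min(
        d[i - 1][j] + 1,
        d[i][j - 1] + 1,
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1),
      );
  return d[a.length][b.length];
}

// The "diff" shown to the student would be computed against this solution.
function closestCorrect(wrong: string, correct: string[]): string {
  const w = normalize(wrong);
  return correct.reduce((best, c) =>
    editDistance(normalize(c), w) < editDistance(normalize(best), w) ? c : best,
  );
}

// Invented example: the student used different names and forgot the "+ 1".
const student = "function f(n) { return n * n; }";
const corpus = [
  "function sq(x) { return x * x + 1; }",
  "function max(a, b) { return a > b ? a : b; }",
];
console.log("closest correct solution:", closestCorrect(student, corpus));
```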
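On data races: to make the definition concrete, here is a classic racy program for Node.js (compile the TypeScript to JavaScript before running, since the worker re-loads the same file). Only the counter is racy; the ready/go flags use Atomics, i.e. proper synchronisation. This illustrates what a data race is, not the semantics Dolan et al. propose.

```ts
// Two threads increment the same shared counter with plain, unsynchronised
// read-modify-write operations, so increments can be lost.
import { Worker, isMainThread, workerData } from "worker_threads";

const N = 5_000_000;

if (isMainThread) {
  const shared = new SharedArrayBuffer(12);
  const arr = new Int32Array(shared); // [0]=counter, [1]=ready, [2]=go
  const worker = new Worker(__filename, { workerData: shared });
  worker.on("exit", () =>
    // Would print 2 * N without the race; typically prints less.
    console.log(`final count: ${arr[0]} (no-race result: ${2 * N})`),
  );
  while (Atomics.load(arr, 1) === 0) {} // wait until the worker is running
  Atomics.store(arr, 2, 1);             // start both loops together
  for (let i = 0; i < N; i++) arr[0] = arr[0] + 1; // racy increment
} else {
  const arr = new Int32Array(workerData as SharedArrayBuffer);
  Atomics.store(arr, 1, 1);             // signal: worker is ready
  while (Atomics.load(arr, 2) === 0) {} // wait for the go signal
  for (let i = 0; i < N; i++) arr[0] = arr[0] + 1; // racy increment
}
```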
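On Post’s correspondence problem, for readers who have not met it: an instance is a finite set of tiles, each carrying a top string and a bottom string, and it asks whether some nonempty sequence of tiles (repetition allowed) makes the top and bottom concatenations spell the same string. No algorithm decides this for all instances, but a depth-bounded search can still find solutions; the instance below is a standard solvable example.

```ts
type Tile = [top: string, bottom: string];

// Depth-bounded search for a matching tile sequence, pruning any partial
// sequence where neither concatenation is a prefix of the other.
function solvePCP(tiles: Tile[], maxLen: number): number[] | null {
  const search = (seq: number[], top: string, bottom: string): number[] | null => {
    if (seq.length > 0 && top === bottom) return seq; // exact match found
    if (seq.length >= maxLen) return null;            // give up at the bound
    for (let i = 0; i < tiles.length; i++) {
      const t = top + tiles[i][0];
      const b = bottom + tiles[i][1];
      if (t.startsWith(b) || b.startsWith(t)) {
        const found = search([...seq, i], t, b);
        if (found) return found;
      }
    }
    return null;
  };
  return search([], "", "");
}

// Prints [2, 1, 2, 0]: tops "bba"+"ab"+"bba"+"a" and bottoms
// "bb"+"aa"+"bb"+"baa" both spell "bbaabbbaa".
const tiles: Tile[] = [["a", "baa"], ["ab", "aa"], ["bba", "bb"]];
console.log(solvePCP(tiles, 6));
```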
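On SWOOP: the hardware side cannot be reproduced in software, but the compiler-side restructuring can be pictured as splitting each iteration into an “access” part (loads independent of earlier iterations) and an “execute” part, so that several access parts can be in flight while a miss is outstanding. A purely structural sketch, with no caches modelled:

```ts
const index = [3, 1, 4, 1, 5];
const data = [10, 20, 30, 40, 50, 60];

// Original fused loop: each iteration loads, then immediately computes.
let sumFused = 0;
for (let i = 0; i < index.length; i++) {
  const x = data[index[i]]; // "access" part: the load that may miss
  sumFused += x * x;        // "execute" part: depends on the load
}

// SWOOP-style split: run the independent access parts ahead (so their
// misses could overlap), then run the execute parts once data has arrived.
const loaded: number[] = [];
for (let i = 0; i < index.length; i++) {
  loaded[i] = data[index[i]]; // access phase: iterations are independent
}
let sumSplit = 0;
for (let i = 0; i < index.length; i++) {
  sumSplit += loaded[i] * loaded[i]; // execute phase
}
console.log(sumFused === sumSplit); // true: the split preserves behaviour
```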
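On Spatial’s design space exploration, in software terms: mark a constant as tunable, sweep candidate values, and keep the best performer. In Spatial the candidates are hardware parameters and the cost comes from synthesis or simulation; the cost model below is invented purely for illustration.

```ts
// Generic sketch of design-space exploration over one tunable constant.
function explore(candidates: number[], cost: (v: number) => number): number {
  let best = candidates[0];
  for (const v of candidates) {
    if (cost(v) < cost(best)) best = v;
  }
  return best;
}

// Invented cost model: small tiles waste parallelism, large tiles overflow
// on-chip memory, so the sweet spot is somewhere in the middle.
const tileCost = (t: number) => 1000 / t + (t > 64 ? (t - 64) * 10 : 0);

const bestTile = explore([8, 16, 32, 64, 128, 256], tileCost);
console.log("best tile size:", bestTile); // 64 under this toy model
```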
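Finally, on Expresso: the contrast between the two signalling styles, using async tasks in place of threads. Expresso itself works on monitor-based multi-threaded code; every name below is invented for illustration.

```ts
// Implicit signalling: a task states *what* it waits for; the runtime
// re-checks every waiter after any state change (like a broadcast).
class ImplicitMonitor {
  private waiters: Array<{ pred: () => boolean; wake: () => void }> = [];

  waitUntil(pred: () => boolean): Promise<void> {
    if (pred()) return Promise.resolve();
    return new Promise<void>((resolve) =>
      this.waiters.push({ pred, wake: () => resolve() }),
    );
  }

  // Must be called after every state change: wake all satisfied waiters.
  stateChanged(): void {
    const ready = this.waiters.filter((w) => w.pred());
    this.waiters = this.waiters.filter((w) => !w.pred());
    ready.forEach((w) => w.wake());
  }
}

// Explicit signalling: the programmer decides whom to wake, and when.
class Condition {
  private waiters: Array<() => void> = [];
  wait(): Promise<void> {
    return new Promise<void>((resolve) => this.waiters.push(() => resolve()));
  }
  signal(): void {
    this.waiters.shift()?.(); // wake exactly one waiter
  }
  signalAll(): void {
    const ws = this.waiters;
    this.waiters = [];
    ws.forEach((w) => w());
  }
}

// With the implicit style, a producer just mutates state and calls
// stateChanged(); with the explicit style it must know to call, say,
// notEmpty.signal() at exactly the right moment: faster, but easy to get
// wrong. Expresso synthesises those explicit calls automatically.
```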