High-level synthesis, but correct

This post is about a paper by Yann Herklotz, James Pollard, Nadesh Ramanathan, and myself that will be presented shortly at OOPSLA 2021. High-level synthesis (HLS) is an increasingly popular way to design hardware. It appeals to software developers because it allows them to exploit the performance and energy-efficiency of custom hardware without having to learn…

Understanding the memory semantics of multi-threaded CPU/FPGA programs

If you've ever attended a seminar about weak memory models, chances are good that you've been shown a small concurrent program and asked to ponder what is allowed to happen if its threads are executed on two or three different cores of a multicore CPU. For instance, you might be shown this program: // Thread…
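
The excerpt above cuts off mid-program, so here, purely as an illustrative sketch and not necessarily the example from the original post, is the kind of program such a seminar would use: a classic "store buffering" litmus test written with C11 relaxed atomics.

// Illustrative sketch only: a "store buffering" litmus test using C11
// relaxed atomics; not necessarily the program from the original post.
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

atomic_int x, y;   // shared variables, both initially 0
int r0, r1;        // the values each thread reads

int thread0(void *arg) {
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_relaxed);   // write x ...
    r0 = atomic_load_explicit(&y, memory_order_relaxed);  // ... then read y
    return 0;
}

int thread1(void *arg) {
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_relaxed);   // write y ...
    r1 = atomic_load_explicit(&x, memory_order_relaxed);  // ... then read x
    return 0;
}

int main(void) {
    thrd_t t0, t1;
    thrd_create(&t0, thread0, NULL);
    thrd_create(&t1, thread1, NULL);
    thrd_join(t0, NULL);
    thrd_join(t1, NULL);
    // The puzzle: r0 == 0 && r1 == 0 is allowed with relaxed atomics,
    // even though no interleaving of the two threads can produce it.
    printf("r0=%d r1=%d\n", r0, r1);
    return 0;
}

On most multicore hardware that surprising outcome really can be observed, because each core may buffer its own store locally while its subsequent load goes ahead.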

Introducing C4: the C Compiler Concurrency Checker

This post is about a paper by Matt Windsor, Ally Donaldson, and myself that will be presented shortly at the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA), in the tool demonstrations track. Compilers are a central pillar of our computing infrastructure, so it's really important that they work correctly. There's been a…

Fuzzing High-Level Synthesis Tools

High-level synthesis – the automatic compilation of a software program into a custom hardware design – is an increasingly important technology. It's attractive because it lets software engineers harness the computational power and energy-efficiency of custom hardware devices such as FPGAs. It's also attractive to hardware designers because it allows them to enter their designs…
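
For readers new to HLS, here is a hypothetical example of the kind of input an HLS tool takes (my own illustration, not one of the paper's generated test cases): an ordinary C function that the tool compiles into a circuit rather than into machine code.

// Hypothetical HLS input (my illustration, not a test case from the paper):
// an ordinary C function that an HLS tool compiles into a hardware circuit.
#include <stdint.h>

int32_t dot_product(const int32_t a[8], const int32_t b[8]) {
    int32_t acc = 0;
    // An HLS tool may unroll this loop into eight parallel multipliers
    // feeding an adder tree, rather than executing one iteration at a time.
    for (int i = 0; i < 8; i++)
        acc += a[i] * b[i];
    return acc;
}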

Diagrams for Composing Compilers

T-diagrams (or tombstone diagrams) are widely used by teachers to explain how compilers and interpreters can be composed together to build and execute software. In this blog post, Paul Brunet and I revisit these diagrams, and show how they can be redesigned for better readability. We demonstrate how they can be applied to explain compiler…

Greatest hits of PLDI 2018

I had a great time at PLDI 2018 last week. Here is my take on a few of the papers that stood out for me. John Vilk presented a tool called BLeak for finding memory leaks in web browsers. One might think that leak detection is not important in a garbage-collected setting, but Vilk explained…

Concurrency-aware scheduling for high-level synthesis

What follows is a summary of the main contributions of a paper by Nadesh Ramanathan, George Constantinides, and myself that will be presented at the FCCM 2018 conference. If you want to compute something, you have two broad options: do it in software, or do it in hardware. A custom piece of hardware can give you…

What do you get if you cross Weak Memory with Transactional Memory?

What follows is a summary of the main contributions of a paper I wrote with Nathan Chong and Tyler Sorensen for the PLDI 2018 conference. This project studies two features of a modern computer, one called out-of-order execution and one called transactional memory. Out-of-order execution is where a computer chooses, for performance reasons, to perform its instructions in an order…
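
To give a concrete feel for the second feature (a hypothetical sketch, not code from the paper): with GCC's transactional-memory extension, a block of memory accesses can be marked so that it appears to execute as a single indivisible step.

// Hypothetical sketch of a transaction, using GCC's transactional-memory
// extension (build with: gcc -fgnu-tm); not code from the paper.
#include <stdio.h>

long accounts[2] = { 100, 0 };

void transfer(int from, int to, long amount) {
    // Either both updates become visible to other threads, or neither does;
    // no thread can observe the money "in flight".
    __transaction_atomic {
        accounts[from] -= amount;
        accounts[to]   += amount;
    }
}

int main(void) {
    transfer(0, 1, 25);
    printf("balances: %ld %ld\n", accounts[0], accounts[1]);
    return 0;
}

The paper studies how guarantees like this interact with the instruction reorderings that out-of-order execution permits.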

Ribbon Diagrams for Weak Memory

In their POPL'17 paper, Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathy Gray, Ali Sezgin, Mark Batty, and Peter Sewell describe a semantics of weakly-consistent memory that copes (for the first time) with mixed-size memory accesses. In this blog post, I describe how their semantics can be explained rather nicely with some graphical…
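
To illustrate what "mixed-size" means (my own sketch, not an example from the paper): the same memory location can be written as one 32-bit word by one thread and read back as two 16-bit halves by another, and the semantics has to say which combinations of halves the reader may observe.

// Illustrative sketch of mixed-size accesses (not taken from the paper):
// one routine writes a 32-bit word, another reads its two 16-bit halves.
#include <stdint.h>
#include <stdio.h>
#include <string.h>

uint32_t word;   // shared location

void writer(void) {
    word = 0x11112222u;                        // one 32-bit store
}

void reader(void) {
    uint16_t h0, h1;                           // the halves at byte offsets 0 and 2
    memcpy(&h0, (char *)&word,     sizeof h0); // two 16-bit loads
    memcpy(&h1, (char *)&word + 2, sizeof h1);
    // If writer() ran concurrently on another core, a mixed-size semantics
    // must say which (h0, h1) pairs this reader is allowed to observe.
    printf("h0=0x%04x h1=0x%04x\n", (unsigned)h0, (unsigned)h1);
}

int main(void) {
    writer();
    reader();
    return 0;
}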

Translating lock-free, relaxed concurrency from software into hardware

What follows is a summary of the main contributions of a paper I wrote with Nadesh Ramanathan, Shane Fleming, and George Constantinides for the FPGA 2017 conference. Languages like C and C++ allow programmers to write concurrent programs. These are programs whose instructions are partitioned into multiple threads, all of which can be run at the same time.…
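
As an illustrative sketch of the kind of program in question (my own example, not one from the paper): a lock-free message-passing idiom written with C11 atomics, whose release/acquire ordering a translation into hardware has to respect.

// Illustrative sketch (not from the paper): a lock-free message-passing
// idiom using C11 atomics.
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

int data;            // the payload, written before the flag is raised
atomic_int ready;    // the flag, initially 0

int producer(void *arg) {
    (void)arg;
    data = 42;                                               // write the payload ...
    atomic_store_explicit(&ready, 1, memory_order_release);  // ... then publish it
    return 0;
}

int consumer(void *arg) {
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                                    // spin until published
    // The release/acquire pairing guarantees the payload is visible here.
    printf("received %d\n", data);
    return 0;
}

int main(void) {
    thrd_t p, c;
    thrd_create(&p, producer, NULL);
    thrd_create(&c, consumer, NULL);
    thrd_join(p, NULL);
    thrd_join(c, NULL);
    return 0;
}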