FCCM 2022 in New York City has just drawn to a close, so I thought I’d put down some thoughts about my experience while it’s fresh in my mind. As my first in-person conference since covid, my expectations were sky high, and I’m very pleased to report that they were well and truly met – I’ve had a fantastic time. It was really wonderful to re-connect with colleagues and collaborators from across the world, and to make new connections too. I really enjoyed the highly interactive “FLASHLIGHT” workshop on formal methods and high-level synthesis that I co-organised as a satellite event. The main conference had an interesting and varied programme, and it was so nice to be able to engage in direct conversation with the authors, both in the Q&As after the talks and in the coffee breaks afterwards. Here is a handful of papers that have stuck in my mind:
- Aman Arora presented a new design of FPGA in which block RAMs are augmented with their own built-in processing elements. The processing elements are not particularly exciting in themselves – for instance, they can only manage to process two or three bits at a time – but they have fantastically low latency because the data doesn’t need to travel across the FPGA fabric, from the RAM to the logic blocks and back again. An interesting challenge for the hardware compilers of the near future will be to work out which parts of the computation should be mapped to these low-latency, low-throughput “CoMeFa” RAMs, and which parts should remain in the traditional logic blocks.
- Ecenur Üstün explained that there are several ways to decompose high-bitwidth integer multiplications into multiple lower-bitwidth operations, and presented her tool for exploring that design space using a technique called “equality saturation”.
- Lana Josipović explained how to add resource sharing to her dynamically-scheduled HLS tool, in order to make the circuits it produces consume less area on the FPGA fabric. This is easy in the static-scheduling setting because you can see at compile-time that a given pair of operations are never executed simultaneously and hence can be time-multiplexed onto the same functional unit; in the dynamic-scheduling setting, this information doesn’t become apparent until run-time, which is too late!
- Funnily enough, my student Michalis Pardalos’s presentation was also about adding resource sharing to an HLS tool; on the one hand, his setting is easier than Lana’s because he assumes static scheduling, but on the other hand, it is more challenging because he is working on a computer-checked mathematical proof that the resource sharing is carried out correctly.
- Another of my students, Jianyi Cheng, showed how to extend HLS with a technique called C-slow pipelining, which can be thought of as a hardware pipeline with a cycle in it. An audience member expressed appreciation for Jianyi’s carefully crafted PowerPoint animations of these cyclic pipelines in action, which was heartening to see.
- Nicholas Beckwith explained that HLS tools tend to generate designs with quite “spread out” memory access patterns; this is fine for ordinary RAM, but leads to strikingly poor performance when using persistent memory like Intel’s Optane DC, which accesses memory at the granularity of a “block” at a time. Nicholas showed how performance can be regained by re-mapping memory locations so as to minimise the number of blocks that need accessing.
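To give a flavour of the design space Ecenur’s tool explores, here are two classic ways to decompose a 32-bit multiply into 16-bit pieces, sketched in Python. These particular identities (schoolbook and Karatsuba) are my illustration of the kind of rewrite involved, not necessarily the exact rules her tool applies:

```python
MASK16 = (1 << 16) - 1

def mul32_schoolbook(a, b):
    # Schoolbook decomposition: one 32x32 multiply as four 16x16 multiplies.
    # Writing a = a_hi*2^16 + a_lo and b = b_hi*2^16 + b_lo:
    a_lo, a_hi = a & MASK16, a >> 16
    b_lo, b_hi = b & MASK16, b >> 16
    ll = a_lo * b_lo
    lh = a_lo * b_hi
    hl = a_hi * b_lo
    hh = a_hi * b_hi
    return ll + ((lh + hl) << 16) + (hh << 32)

def mul32_karatsuba(a, b):
    # Karatsuba decomposition: three multiplies instead of four, at the
    # cost of slightly wider (17-bit) operands in the middle term.
    a_lo, a_hi = a & MASK16, a >> 16
    b_lo, b_hi = b & MASK16, b >> 16
    ll = a_lo * b_lo
    hh = a_hi * b_hi
    mid = (a_lo + a_hi) * (b_lo + b_hi) - ll - hh  # equals lh + hl
    return ll + (mid << 16) + (hh << 32)
```

Both compute the same product, but they map to FPGA DSP blocks with different area and delay trade-offs – which is exactly why a design-space exploration technique like equality saturation is attractive here.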
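The compile-time reasoning that makes resource sharing easy under static scheduling can be caricatured in a few lines of Python. This is my own toy model of the binding step, not Lana’s or Michalis’s algorithm: an operation joins an existing functional unit only if its statically-known execution cycle never clashes with any operation already bound there.

```python
def bind(ops):
    """Greedily bind scheduled operations to functional units.

    ops: list of (name, cycle) pairs from a static schedule, where
    `cycle` is the clock cycle in which the operation executes.
    Returns a list of units, each a dict mapping cycle -> op name.
    """
    units = []
    for name, cycle in ops:
        for unit in units:
            if cycle not in unit:   # no clash: time-multiplex onto this unit
                unit[cycle] = name
                break
        else:                       # every existing unit is busy that cycle
            units.append({cycle: name})
    return units
```

Here, two multiplies scheduled in different cycles share one multiplier, while a third multiply that clashes forces a second unit. Under dynamic scheduling, the `cycle` values simply aren’t known at compile time, which is the difficulty Lana’s paper addresses.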
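To illustrate Jianyi’s cyclic pipelines: the idea of C-slowing is that a feedback pipeline with C registers in the loop can carry C independent computations, interleaved one per clock cycle. A toy software simulation (my own sketch, not his tool’s implementation):

```python
def c_slow_run(step, inits, iters):
    """Simulate a C-slowed feedback pipeline.

    step:  the combinational function computed each trip round the loop
    inits: initial state of each of the C interleaved computations
    iters: how many times each computation goes round the cycle

    The C computations share the one functional unit in round-robin,
    occupying it on alternating clock cycles.
    """
    states = list(inits)
    C = len(states)
    for cycle in range(iters * C):
        slot = cycle % C            # which computation owns the unit now
        states[slot] = step(states[slot])
    return states
```

For example, with `step = lambda x: x + 1` and two interleaved computations, six clock cycles advance each of the two states by three iterations – the latency of any one computation is unchanged, but the pipeline’s throughput is doubled.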
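Nicholas’s re-mapping idea can also be sketched in Python: count how many persistent-memory blocks an access trace touches, then renumber locations by access frequency so that hot data packs into as few blocks as possible. The 256-byte block size and the greedy hotness heuristic below are my assumptions for illustration, not the paper’s actual algorithm:

```python
from collections import Counter

BLOCK = 256  # assumed persistent-memory access granularity, in bytes

def blocks_touched(addrs, block=BLOCK):
    # Number of distinct blocks the trace touches.
    return len({a // block for a in addrs})

def remap_by_hotness(addrs, block=BLOCK):
    # Renumber addresses densely, most-frequently-accessed first, so
    # hot locations land in the same few blocks.
    hot = [a for a, _ in Counter(addrs).most_common()]
    new_addr = {a: i for i, a in enumerate(hot)}
    return [new_addr[a] for a in addrs]
```

A “spread out” trace such as `[0, 1000, 2000, 3000]` touches four blocks before remapping but only one afterwards – the kind of reduction that matters when every block access to Optane is expensive.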
FCCM 2022 was the first “hybrid” conference I’ve attended. Roughly half of its 275 participants attended in person and half joined online. As for the talks themselves: about 15 were given in person and 10 were delivered live over Zoom. (One of the online talks was actually meant to be in-person, but the speaker self-quarantined upon arrival in NYC after coming down with flu-like symptoms.) There were just a handful of questions from remote participants throughout the whole conference, each of which was read out by the session chair.
It was notable that our venue (Cornell Tech) was extremely well equipped for a hybrid conference, with top-of-the-range sound systems, oodles of microphones of all shapes and sizes, lecterns with built-in Zoom support, and high-resolution cameras pointing at the speaker and at the audience. The system was fantastically complicated and required seemingly constant attention from the small army of hardworking volunteers. I suspect that a smaller-scale conference at a less modern venue would struggle.
The hybridisation was clearly an enormous effort for the organisers, but I was struck by how little “overhead” it imposed on the in-person participants. We had to remember to speak into the microphone during the Q&As so that the remote participants could hear us, but other than that, there was barely any indication that this was a hybrid conference.