There has been a lot of excitement around the first picture of a black hole (and rightfully so!), and I’ve been trying to nail down the specifics of the compute infrastructure that was used (e.g., was it HPC? What machines? How much memory?). There was a mention of the amount of data involved (in the terabytes, I believe), but I couldn’t find any spec for how or where it was processed. I can imagine that, given all the GUI and image processing needed, maybe they did it on local machines and waited it out. However, I can also imagine there was a lot of computation needed to generate the image, which would do well on a supercomputer.
Here is a link to one of the codes used, eht-imaging on GitHub (https://github.com/achael/eht-imaging), which also includes all kinds of references; maybe something is hidden in there.
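To give a feel for what that code does, here is a minimal sketch of the kind of imaging run the eht-imaging (ehtim) library supports, loosely modeled on the example scripts in that repository. The file name, grid size, field of view, and regularizer weights are placeholders I made up, and the exact function names and arguments should be checked against the repo before relying on them:

```python
# Sketch of a regularized maximum-likelihood imaging run with ehtim
# (https://github.com/achael/eht-imaging). File name and parameters below
# are illustrative placeholders, not the EHT collaboration's actual settings.
import ehtim as eh

# Load calibrated visibility data (UVFITS); 'my_obs.uvfits' is hypothetical
obs = eh.obsdata.load_uvfits('my_obs.uvfits')

# Build a broad Gaussian prior/initial image on a small pixel grid
npix = 64
fov = 200 * eh.RADPERUAS            # 200 microarcsecond field of view
zbl = obs.total_flux()              # total flux estimate from the data
prior = eh.image.make_square(obs, npix, fov)
prior = prior.add_gauss(zbl, (fov / 2, fov / 2, 0, 0, 0))

# Fit visibility amplitudes and closure phases with simple regularization
imgr = eh.imager.Imager(obs, prior, prior_im=prior, flux=zbl,
                        data_term={'amp': 100, 'cphase': 100},
                        reg_term={'tv': 1.0, 'simple': 1.0},
                        maxit=200)
imgr.make_image_I()                 # solve for the Stokes I image
out = imgr.out_last()
out.display()
out.save_fits('reconstruction.fits')
```

A run at these sizes fits comfortably on a laptop, which is part of why the final imaging step itself doesn’t obviously call for a supercomputer; the heavy compute seems to be in the correlation and the simulations discussed below.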
As far as I understand it, they needed to run a lot of simulations to train their algorithms. According to the first article above, some of this preprocessing was done on GPU resources at the University of Arizona.
Thanks @rberger! I got a helpful response on Reddit too (the original post I linked) and I’ll summarize here:
Details of the correlators are in Paper II: 1,000 cores with 25 Gbps connections (some rough math on what that bandwidth means for the data volume is below).
Supercomputers are mentioned in Paper III, including in the following direct quote:
“the simulations were performed in part on the SuperMUC cluster at the LRZ in Garching, on the LOEWE cluster in CSC in Frankfurt, and on the HazelHen cluster at the HLRS in Stuttgart.”
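To put the 25 Gbps figure in perspective, here is a quick back-of-envelope calculation. The total data volume is an assumed, illustrative number (plug in your own), not an official figure from the EHT papers; the point is just why the raw recordings were flown around on hard drives rather than sent over a network:

```python
# Back-of-envelope: how long would it take to move the raw recordings
# over a single 25 Gbps correlator link? The data volume is an assumed,
# illustrative value, not a figure taken from the EHT papers.
data_volume_pb = 5.0                      # assumed total recording, in petabytes
link_gbps = 25.0                          # per-link bandwidth quoted above

data_bits = data_volume_pb * 1e15 * 8     # petabytes -> bits
seconds = data_bits / (link_gbps * 1e9)   # transfer time at line rate
days = seconds / 86400

print(f"{data_volume_pb} PB over one {link_gbps} Gbps link: "
      f"~{days:.0f} days of continuous transfer")
# With the assumed 5 PB this works out to roughly 18-19 days on one link,
# and the South Pole station has no such link at all, which is why the
# drives were physically shipped to the correlators.
```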
Wow, 1,000 pounds… of data. I need a few minutes to really take that in.
Hi Vanessa,
To elaborate on Richard’s point, one of our clusters at the University of Arizona was obtained through an NSF MRI grant and was used in part for simulating black holes, particularly Sagittarius A*, the one at the center of our own Milky Way galaxy. The simulated images are remarkably similar to the published image we saw.
I spoke recently to Junhan Kim, who spent his last three Arizona winters in the balmy Antarctic summer (it was still warmer in Arizona) running the telescope and collecting data.
Chris