AMD's latest FPGA promises super low latency AI for Flash Boy traders

Letting more advanced ML loose on the stock market? What could possibly go wrong?

AMD has refreshed its Alveo field-programmable gate arrays (FPGAs), promising a sevenfold reduction in operating latency and the ability to run more complex machine-learning algorithms on the customisable silicon.

FPGAs are often used by high-frequency traders, an industry in which a delay of a few fractions of a second can be the difference between profit and loss on algorithm-arranged trades. Because the chips can be reprogrammed with faster or more refined trading algorithms, faster and more flexible FPGAs have obvious appeal.

AMD's Alveo UL3524 FPGAs claim reduced latency and support for AI inferencing via the FINN framework to accelerate high-frequency trading

AMD's Alveo UL3524 is the company's latest FPGA developed for this market. The card is based on Xilinx's 16nm Virtex UltraScale+ FPGA and features 64 transceivers, 780,000 lookup tables, and 1,680 digital signal processing (DSP) slices on which customers can deploy their algorithms. It's these transceivers, which AMD says were "purpose built" for low-latency trading, that are responsible for cutting latency to under 3ns, a sevenfold improvement over the previous generation.

At the end of the day, though, these FPGAs are just a vessel for the proprietary software that actually triggers trades when certain market conditions are met.

As with previous-gen FPGAs, AMD provides software support by way of its Vivado Design Suite, which includes various reference designs and benchmarks to help customers develop new applications for the platform. But for those looking to employ AI/ML in their trading algorithms to eke out an advantage, the card also supports the open-source FINN development framework.

The FINN project explores deep neural network inference on FPGAs. According to the project's site, the framework has proven effective at classifying images at sub-microsecond latencies.
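
For the curious, the FINN flow typically starts with a few-bit quantized network defined in Brevitas, AMD's quantization-aware training library, which is then exported and compiled into an FPGA dataflow accelerator. The snippet below is a minimal sketch of that first step, not anything shipped with the UL3524: the layer sizes and two-bit widths are illustrative, and the exact export call varies between Brevitas releases.

```python
# Minimal sketch: a low-precision MLP of the sort FINN compiles into an
# FPGA dataflow accelerator. Assumes PyTorch and Brevitas are installed;
# bit widths and layer sizes here are illustrative, not a tuned design.
import torch
import torch.nn as nn
from brevitas.nn import QuantIdentity, QuantLinear, QuantReLU


class TinyQuantMLP(nn.Module):
    def __init__(self, in_features=64, hidden=32, classes=4, bits=2):
        super().__init__()
        self.net = nn.Sequential(
            QuantIdentity(bit_width=bits),              # quantize the inputs
            QuantLinear(in_features, hidden, bias=True,
                        weight_bit_width=bits),         # 2-bit weights
            QuantReLU(bit_width=bits),
            QuantLinear(hidden, classes, bias=True,
                        weight_bit_width=bits),
        )

    def forward(self, x):
        return self.net(x)


model = TinyQuantMLP()

# After quantization-aware training, the model is exported to FINN's ONNX
# dialect and handed to the FINN compiler, which generates the FPGA design.
# The export API differs between Brevitas versions, so check the FINN docs
# for the release you're using.
dummy = torch.randn(1, 64)
print(model(dummy).shape)  # torch.Size([1, 4])
```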

AMD’s new toys for traders are fast and powerful. However, using machine learning to guide stock purchases isn't always a sure bet. In a paper published early in 2022, a group of researchers at three universities and IBM demonstrated how share-trading bots could be manipulated with something as simple as a single re-tweet.

At launch, AMD's UL3524 is available from a number of OEMs specializing in infrastructure for the financial sector, including Alpha Data, Exegy, and Hypertech. ®
