The dividing lines between system buses, system intraconnects, and system interconnects are getting more blurry all the time. And that is, oddly enough, going to turn out to be a good thing in the long run.

There are a number of competing and complementary standards that span this middle ground between the processor and adjacent systems, many of which run atop the PCI-Express bus transport but which do more interesting things with it than just hanging storage or networking off the bus – such as doing some form of memory sharing across devices, usually through some sort of coherency mechanism. Others are coming up with their own electrical or optical signaling.

These include the Compute Express Link (CXL) from Intel, the Coherent Accelerator Interface (CAPI) from IBM, the Cache Coherence Interconnect for Accelerators (CCIX) from Xilinx, and the Infinity Fabric from AMD. Other interconnects try to get around some of the limitations of bandwidth or latency inherent in the PCI-Express bus, such as the NVLink interconnect from Nvidia and the OpenCAPI interconnect from IBM. OpenCAPI, which is supported on Big Blue’s Power9 processors, relies on special SERDES communication units on the chip that run at 25 Gb/sec and that can support a variant of the CAPI protocol or the NVLink protocol to attach Power9s to Nvidia Tesla GPU accelerators that also support NVLink – and do so in a coherent fashion across these different devices. The Gen-Z interconnect from Hewlett Packard Enterprise links out from PCI-Express on servers to silicon photonics bridges and switches that hold out the promise of a memory-centric – rather than compute-centric – architecture for systems. It can be used to hook anything from DRAM to flash to accelerators in meshes with any manner of CPU.

At this point, all of these interconnects but Nvidia’s NVLink and AMD’s Infinity Fabric have an independent consortium driving their specifications, and more than a few hyperscalers and vendors participate in multiple consortia to keep a hand in all of the different games. At some point, these may resolve into a smaller set of transports and protocols that achieve the collective goals of these interconnects. But it sure doesn’t look like it, not with Steve Fields, chief engineer of Power Systems at IBM who also spearheads OpenCAPI, and Gaurav Singh, corporate vice president at Xilinx who spearheads CCIX, plus Dong Wei, standards architect at ARM Holdings, and Nathan Kalyanasundharam, senior fellow at AMD, being four of the five members of the board of the new CXL Consortium, which was launched this week.

Alibaba, Cisco Systems, Dell EMC, Facebook, Google, Hewlett Packard Enterprise, Huawei Technology, and Microsoft all jumped on the CXL bandwagon early, and together, these companies represent a big portion of the systems ecosystem when gauged by capacity sold or bought. Significantly, Nvidia has also joined up even though it does not have a seat on the CXL board.

The only problem that we see initially with CXL, which was shown off in detail at the recent Hot Interconnects conference, is that it is tied to the PCI-Express 5.0 protocol, which is not yet available. PCI-Express 4.0, which came out in 2017, is still only available with two processors – IBM’s Power9 and AMD’s “Rome” Epyc 7002 – and while we are all excited that the PCI-Express 5.0 spec is coming out sometime this year and PCI-Express 6.0 is expected to be ratified in 2021, it has taken far too long to get these faster buses into new chips. It is a pity that the I/O is not all in a central hub in a chiplet architecture that could swap the I/O out without messing up the cores. Oh wait, it already is with AMD’s Rome and will very likely be with IBM’s Power10, which definitely supports PCI-Express 5.0 controllers and will almost certainly have a chiplet architecture. (Intel itself doesn’t expect to get products out the door supporting PCI-Express 5.0 until 2021.) System builders and system buyers want to be able to have fast links and coherence between CPUs and various kinds of accelerators and storage class memories – and they want it yesterday, which is how we ended up in this alphabet soup in the first place.
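To put some numbers behind why each PCI-Express generation matters so much to these interconnects, here is a rough back-of-the-envelope sketch (ours, not from any spec text) of usable per-lane and x16 bandwidth per generation. The transfer rates and 128b/130b line coding are the published figures; for PCI-Express 6.0 we simplify the PAM4/FLIT scheme to an efficiency of 1.0, ignoring its FEC and CRC overhead, so treat that row as an upper bound.

```python
# Approximate unidirectional PCI-Express bandwidth by generation.
# gen -> (transfer rate in GT/s per lane, encoding efficiency)
PCIE_GENS = {
    "3.0": (8, 128 / 130),   # 128b/130b encoding
    "4.0": (16, 128 / 130),
    "5.0": (32, 128 / 130),
    "6.0": (64, 1.0),        # PAM4 + FLIT; real efficiency is slightly lower
}

def lane_gbytes_per_sec(gen: str) -> float:
    """Usable unidirectional bandwidth of one lane, in GB/s."""
    rate_gt, eff = PCIE_GENS[gen]
    return rate_gt * eff / 8  # 8 bits per byte

for gen in PCIE_GENS:
    per_lane = lane_gbytes_per_sec(gen)
    print(f"PCIe {gen}: ~{per_lane:.2f} GB/s per lane, ~{per_lane * 16:.1f} GB/s for x16")
```

Each generation doubles the signaling rate, so a PCI-Express 5.0 x16 slot delivers roughly 63 GB/s each way versus about 31.5 GB/s for 4.0 – which is why CXL waited for 5.0 rather than launching on the installed base.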