High Performance Computing 25 SP Heterogeneous Computing
Heterogeneous Computing is on the way!
GPU Computing Ecosystem
CUDA: NVIDIA's Architecture for GPU computing.
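
In the CUDA model, the host CPU allocates device memory, copies data across the bus, and launches kernels that run on the GPU. A minimal sketch of the idea (vector addition; the array size and launch configuration are arbitrary illustrative choices):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread adds one element of the two input vectors.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side buffers.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device-side buffers; the copies travel over the CPU-GPU bus (e.g. PCIe).
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```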

Internal Buses
HyperTransport:
Primarily a low-latency, direct chip-to-chip interconnect; it also supports mapping onto board-to-board interconnects such as PCIe.
PCI Express
A switched, point-to-point connection.
NVLink
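
NVLink provides a direct, high-bandwidth GPU-to-GPU (and, on some platforms, CPU-to-GPU) link. Whether two GPUs can access each other's memory directly over NVLink or PCIe peer-to-peer can be queried from the CUDA runtime; a minimal sketch:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    // For every ordered pair of GPUs, ask the runtime whether device i can
    // directly read/write device j's memory (peer-to-peer over NVLink or PCIe).
    for (int i = 0; i < count; ++i) {
        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            printf("GPU %d -> GPU %d: peer access %s\n",
                   i, j, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```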

OpenCAPI
In the professional world, heterogeneous computing was mostly limited to HPC; in the consumer world it was a "nice to have".
OpenCAPI has since been absorbed into CXL.
CPU-GPU Arrangement

First Stage: Intel Northbridge

Second Stage: Symmetric Multiprocessors:

Third Stage: Nonuniform Memory Access
The memory controller is now integrated directly into the CPU.

In this arrangement, a system with multiple CPUs is called a NUMA system:

And so there can be multiple GPUs:
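
With several GPUs in one system, the CUDA runtime enumerates them as separate devices, and cudaSetDevice chooses which one subsequent allocations and launches target. A minimal sketch that launches a trivial kernel on every visible GPU (assuming device-side printf is available):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A trivial kernel that reports which GPU it ran on.
__global__ void hello(int dev) {
    printf("Hello from GPU %d (block %d, thread %d)\n",
           dev, blockIdx.x, threadIdx.x);
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    // cudaSetDevice selects the target GPU for subsequent calls and launches.
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        hello<<<1, 1>>>(dev);
    }

    // Wait for every GPU to finish before the process exits.
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
    }
    return 0;
}
```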

Fourth Stage: PCIe Integrated into the CPU

There are also CPUs with integrated graphics, which integrate a GPU directly into the CPU package.

And the integrated GPU can work with discrete GPUs:
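
From software, an integrated GPU and a discrete GPU appear as separate CUDA devices; the `integrated` field of `cudaDeviceProp` indicates whether a device shares physical memory with the host CPU. A minimal sketch that lists each GPU and its type:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // prop.integrated is nonzero for a GPU integrated with the host
        // (sharing system memory), zero for a discrete board.
        printf("GPU %d: %s (%s)\n", dev, prop.name,
               prop.integrated ? "integrated" : "discrete");
    }
    return 0;
}
```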

Final Stage: Multi-GPU Board
