Heterogeneous System Architecture

Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allows the integration of central processing units and graphics processors on the same bus, with shared memory and tasks.[1] HSA is being developed by the HSA Foundation, which includes (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and to make these various devices more compatible from a programmer's perspective,[2]: 3 [3] relieving the programmer of the task of planning the movement of data between devices' disjoint memories (as must currently be done with OpenCL or CUDA).[4]

CUDA and OpenCL, as well as most other sufficiently advanced programming frameworks, can use HSA to increase their execution performance.[5] Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, other mobile devices, and video game consoles.[6] HSA allows programs to use the graphics processor for floating-point calculations without separate memory or scheduling.[7]

Rationale

The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called FSA (Fusion System Architecture), the idea was extended to encompass processing units other than GPUs, such as other manufacturers' DSPs.

Modern GPUs are very well suited to single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT) workloads, while modern CPUs remain optimized for branch-heavy, serial code.

Overview

Sharing system memory directly between multiple system actors, an approach originally introduced in embedded systems such as the Cell Broadband Engine, makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units – central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuit (ASIC). The system architecture allows any accelerator, for instance a graphics processor, to operate at the same processing level as the system's CPU.

Among its main features, HSA defines a unified virtual address space for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share page tables so that devices can exchange data by sharing pointers. This is to be supported by custom memory management units.[2]: 6–7  To render interoperability possible and also to ease various aspects of programming, HSA is intended to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages.
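
In practice this means a buffer obtained from an ordinary host allocator can be handed to an accelerator by passing its address, rather than by copying it into a separate device allocation first. The following is a minimal C sketch against the HSA runtime API (hsa.h); the kernel dispatch that would consume the pointer is elided, and making the registration call is a conservative assumption for non-coherent systems:

```c
#include <stdlib.h>
#include <hsa.h>

int main(void) {
    hsa_init();

    /* An ordinary host allocation... */
    size_t n = 1 << 20;
    float *data = malloc(n * sizeof *data);

    /* On some systems the buffer must first be made visible to other
     * agents; hsa_memory_register is part of the HSA runtime API. */
    hsa_memory_register(data, n * sizeof *data);

    /* Under HSA's unified virtual address space, this same pointer
     * value is what a GPU kernel would receive in its kernarg block.
     * Contrast with OpenCL/CUDA, where the data would be copied into
     * a separate device buffer before the kernel could touch it. */
    /* ... enqueue a dispatch whose kernarg contains `data` ... */

    hsa_memory_deregister(data, n * sizeof *data);
    free(data);
    hsa_shut_down();
    return 0;
}
```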

So far, the HSA specifications cover:

HSA Intermediate Layer

HSAIL (Heterogeneous System Architecture Intermediate Language), a virtual instruction set for parallel programs

HSA memory model

  • compatible with the C++11, OpenCL, Java and .NET memory models
  • relaxed consistency (illustrated in the sketch after this list)
  • designed to support both managed languages (e.g. Java) and unmanaged languages (e.g. C)
  • designed to make it much easier to develop third-party compilers for a wide range of heterogeneous products programmed in Fortran, C++, C++ AMP, Java, et al.
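
Because the model is expressly compatible with C++11/C11 atomics, its acquire and release operations behave like those languages' memory_order_acquire and memory_order_release. Here is a minimal, self-contained C11 sketch of the release/acquire pairing that a relaxed-consistency model relies on (standard headers only; no HSA API involved):

```c
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

int payload;           /* plain data, ordered only via the flag */
atomic_int flag;       /* release/acquire synchronization point */

int producer(void *arg) {
    (void)arg;
    payload = 42;                                     /* plain write */
    atomic_store_explicit(&flag, 1, memory_order_release);
    return 0;
}

int consumer(void *arg) {
    (void)arg;
    while (atomic_load_explicit(&flag, memory_order_acquire) == 0)
        ;                                             /* spin */
    /* The acquire load synchronizes with the release store, so the
     * payload write is guaranteed visible here. With fully relaxed
     * ordering instead, this read could legally see stale data. */
    printf("%d\n", payload);
    return 0;
}

int main(void) {
    thrd_t p, c;
    thrd_create(&c, consumer, NULL);
    thrd_create(&p, producer, NULL);
    thrd_join(p, NULL);
    thrd_join(c, NULL);
    return 0;
}
```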

HSA dispatcher and run-time

  • designed to enable heterogeneous task queueing: a work queue per core, distribution of work into queues, load balancing by work stealing (see the dispatch sketch after this list)
  • any core can schedule work for any other, including itself
  • significant reduction in the overhead of scheduling work for a core
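
In the HSA runtime this queueing is exposed as user-mode AQL (Architected Queuing Language) queues: a producer claims a slot by advancing a write index, fills in a dispatch packet, and rings a doorbell signal, all without a system call on the submission path. Below is a minimal C sketch against the HSA runtime API (hsa.h); the agent and the finalized kernel handle are assumed to have been obtained elsewhere, and error checking is omitted:

```c
#include <stdint.h>
#include <string.h>
#include <hsa.h>

/* Sketch: enqueue one kernel dispatch on a user-mode AQL queue. */
void dispatch(hsa_agent_t agent, uint64_t kernel_object, void *kernarg) {
    hsa_queue_t *queue;
    hsa_queue_create(agent, 4096, HSA_QUEUE_TYPE_MULTI,
                     NULL, NULL, UINT32_MAX, UINT32_MAX, &queue);

    hsa_signal_t done;
    hsa_signal_create(1, 0, NULL, &done);

    /* Claim a packet slot by advancing the write index; any thread,
     * or in principle any agent, may enqueue this way. */
    uint64_t index = hsa_queue_add_write_index_relaxed(queue, 1);
    hsa_kernel_dispatch_packet_t *pkt =
        (hsa_kernel_dispatch_packet_t *)queue->base_address
        + (index & (queue->size - 1));

    memset(pkt, 0, sizeof *pkt);
    pkt->setup = 1 << HSA_KERNEL_DISPATCH_PACKET_SETUP_DIMENSIONS; /* 1-D */
    pkt->workgroup_size_x = 256;
    pkt->workgroup_size_y = pkt->workgroup_size_z = 1;
    pkt->grid_size_x = 1024 * 256;
    pkt->grid_size_y = pkt->grid_size_z = 1;
    pkt->kernel_object = kernel_object;   /* finalized kernel (not shown) */
    pkt->kernarg_address = kernarg;
    pkt->completion_signal = done;

    /* Publish the header last, then ring the doorbell with the index.
     * (A production producer would store the header atomically and
     * check that the queue is not full.) */
    pkt->header = (HSA_PACKET_TYPE_KERNEL_DISPATCH << HSA_PACKET_HEADER_TYPE)
        | (HSA_FENCE_SCOPE_SYSTEM << HSA_PACKET_HEADER_ACQUIRE_FENCE_SCOPE)
        | (HSA_FENCE_SCOPE_SYSTEM << HSA_PACKET_HEADER_RELEASE_FENCE_SCOPE);
    hsa_signal_store_release(queue->doorbell_signal,
                             (hsa_signal_value_t)index);

    /* Block until the completion signal drops below 1. */
    hsa_signal_wait_acquire(done, HSA_SIGNAL_CONDITION_LT, 1,
                            UINT64_MAX, HSA_WAIT_STATE_BLOCKED);

    hsa_signal_destroy(done);
    hsa_queue_destroy(queue);
}
```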

Mobile devices are one of the HSA's application areas, in which it yields improved power efficiency.[6]

Block diagrams

The illustrations below compare CPU-GPU coordination under HSA versus under traditional architectures.

Software support

AMD GPUs contain certain additional functional units intended to be used as part of HSA. On Linux, the kernel driver amdkfd provides the required support.[9][10]

Some of the HSA-specific features implemented in the hardware need to be supported by the operating system kernel and specific device drivers. For example, support for AMD Radeon and AMD FirePro graphics cards, and for APUs based on Graphics Core Next (GCN), was merged into version 3.19 of the Linux kernel mainline, released on 8 February 2015.[10] Programs do not interact directly with amdkfd, but queue their jobs using the HSA runtime.[11] This first implementation focuses on "Kaveri" and "Berlin" APUs and works alongside the existing Radeon kernel graphics driver.
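
For instance, an application discovers compute devices ("agents") through the runtime, which communicates with amdkfd underneath. A minimal C sketch using the HSA runtime API (hsa.h):

```c
#include <stdio.h>
#include <hsa.h>

/* Callback for hsa_iterate_agents: remember the first GPU agent found. */
static hsa_status_t find_gpu(hsa_agent_t agent, void *data) {
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
    if (type == HSA_DEVICE_TYPE_GPU) {
        *(hsa_agent_t *)data = agent;
        return HSA_STATUS_INFO_BREAK;   /* stop iterating */
    }
    return HSA_STATUS_SUCCESS;
}

int main(void) {
    /* hsa_init fails if the kernel/driver support is absent. */
    if (hsa_init() != HSA_STATUS_SUCCESS) {
        fprintf(stderr, "no usable HSA runtime\n");
        return 1;
    }

    hsa_agent_t gpu = {0};
    hsa_iterate_agents(find_gpu, &gpu);

    char name[64] = "none";
    if (gpu.handle)
        hsa_agent_get_info(gpu, HSA_AGENT_INFO_NAME, name);
    printf("GPU agent: %s\n", name);

    hsa_shut_down();
    return 0;
}
```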

Additionally, amdkfd supports heterogeneous queuing (HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from the programmer's perspective. Support for heterogeneous memory management (HMM), suited only for graphics hardware featuring version 2 of AMD's IOMMU, was accepted into version 4.14 of the Linux kernel mainline.[12]

Integrated support for HSA platforms has been announced for the "Sumatra" release of OpenJDK, due in 2015.[13]

AMD APP SDK is AMD's proprietary software development kit targeting parallel computing, available for Microsoft Windows and Linux. Bolt is a C++ template library optimized for heterogeneous computing.[14]

GPUOpen comprises several other software tools related to HSA; CodeXL version 2.0, for example, includes an HSA profiler.[15]

Hardware support

AMD

As of February 2015, only AMD's "Kaveri" A-series APUs (cf. "Kaveri" desktop processors and "Kaveri" mobile processors) and Sony's PlayStation 4 allowed the integrated GPU to access memory via version 2 of AMD's IOMMU. Earlier APUs (Trinity and Richland) included the version 2 IOMMU functionality, but only for use by an external GPU connected via PCI Express.

Post-2015 Carrizo and Bristol Ridge APUs also include the version 2 IOMMU functionality for the integrated GPU.

The following table shows features of AMD's processors with 3D graphics, including APUs (see also: List of AMD processors with 3D graphics).

[Table: features of AMD processors with 3D graphics, from "Llano" (August 2011) through "Phoenix" and "Dragon Range" (2023), covering codename and market segment, release date, CPU microarchitecture (K10 through Zen 4), x86-64 microarchitecture level, socket, PCI Express version, fabrication process, die area, TDP, clock speeds, core counts and cache hierarchy, supported DRAM, IOMMU version (v1 or v2), GPU microarchitecture (TeraScale 2 through RDNA 3), video decode/encode blocks (UVD/VCN and VCE), display support, and Linux kernel DRM driver (radeon or amdgpu).]

ARM

ARM's Bifrost microarchitecture, as implemented in the Mali-G71,[30] is fully compliant with the HSA 1.1 hardware specifications. As of June 2016, ARM has not announced software support that would use this hardware feature.

See also

References

External links