Lines Matching +full:2 +full:gb
163 It allows for a 2-level hierarchy.
198 The MI200 accelerators are data center GPUs. They have 2 data fabrics,
201 HBM2e (2GB) channel (equivalent to 8 X 2GB ranks). This creates a total
204 While the UMC interfaces with a 16GB (8-high X 2GB DRAM) HBM stack, each UMC
205 channel interfaces with 2GB of DRAM (represented as a rank).
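Taken together, the figures in this listing pin down the capacity at every level
of the hierarchy. A minimal arithmetic sketch, assuming only the 2 GB-per-channel,
16 GB-per-UMC, 64 GB-per-node and 2-nodes-per-card figures quoted here (the
per-card total is implied rather than stated):

    # Capacity arithmetic implied by the figures in this listing (values in GB).
    GB_PER_CHANNEL = 2        # each UMC channel fronts 2 GB of DRAM (one rank)
    CHANNELS_PER_UMC = 8      # 16 GB stack / 2 GB per channel
    UMCS_PER_NODE = 4         # 64 GB per node / 16 GB per UMC
    NODES_PER_CARD = 2        # each MI200 card exposes 2 nodes/mcs

    gb_per_umc = GB_PER_CHANNEL * CHANNELS_PER_UMC   # 16 GB HBM stack per UMC
    gb_per_node = gb_per_umc * UMCS_PER_NODE         # 64 GB per GPU node (one mc)
    gb_per_card = gb_per_node * NODES_PER_CARD       # 128 GB per card (implied)

    print(gb_per_umc, gb_per_node, gb_per_card)      # 16 64 128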
241 mc6 |- GPU card[2] => node 0(mc5), node 1(mc6)
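The grouping above (mc1/mc2 -> card[0], mc3/mc4 -> card[1], mc5/mc6 -> card[2], ...)
is regular enough to express directly. A hypothetical helper, valid only for the
example system shown here (one CPU mc followed by two mcs per GPU card); the
function name is illustrative and not part of any EDAC interface:

    # Hypothetical mapping for the example layout above: mc0 is the CPU node,
    # then each GPU card contributes two consecutive mcs (node 0 and node 1).
    def gpu_card_and_node(mc_index: int) -> tuple[int, int]:
        if mc_index == 0:
            raise ValueError("mc0 is the CPU memory controller, not a GPU node")
        return (mc_index - 1) // 2, (mc_index - 1) % 2

    print(gpu_card_and_node(5))   # (2, 0) -> GPU card[2], node 0
    print(gpu_card_and_node(6))   # (2, 1) -> GPU card[2], node 1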
255 GPU card 1 # Each MI200 GPU has 2 nodes/mcs
259 │ │ ├── channel 1 # size of each channel is 2 GB, so each UMC has 16 GB
260 │ │ ├── channel 2
279 ├── mc 2 # GPU node 1 == mc2
280 │ ├── .. # each GPU node has 64 GB in total
282 GPU card 2
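The tree above can be checked on a live system by reading the per-mc size
attribute. A minimal sketch, assuming the standard EDAC sysfs layout with a
size_mb file under each mcN directory; it is illustrative rather than part of
the documentation itself:

    # Walk /sys/devices/system/edac/mc and report each mc's size and csrow
    # count, mirroring the tree above (csrows map to UMCs on MI200 GPU nodes).
    from pathlib import Path

    EDAC_MC = Path("/sys/devices/system/edac/mc")

    for mc in sorted(EDAC_MC.glob("mc[0-9]*"), key=lambda p: int(p.name[2:])):
        size_mb = int((mc / "size_mb").read_text())   # total MB behind this mc
        csrows = len(list(mc.glob("csrow[0-9]*")))    # UMCs on a GPU node
        print(f"{mc.name}: {size_mb // 1024} GB across {csrows} csrows")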