The Vega Architecture: AMD’s Brightest Day

From an architectural standpoint, AMD’s engineers consider Vega to be their most sweeping design change in five years. And looking over everything that has been added, it’s easy to see why: in terms of core graphics/compute features, Vega introduces more than any iteration of GCN before it.

Speaking of GCN, before getting too deep here, it’s interesting to note that, at least publicly, AMD is shying away from the Graphics Core Next name. GCN doesn’t appear anywhere in AMD’s whitepaper, while in programmers’ documents such as the shader ISA, the name is still present. But at least for the purposes of public discussion, rather than using the term GCN 5, AMD is consistently calling it the Vega architecture. Make no mistake, though: this is still very much GCN, so AMD’s basic GPU execution model remains in place.

So what does Vega bring to the table? Back in January we got what has turned out to be a fairly extensive high-level overview of Vega’s main architectural improvements. In a nutshell, Vega is:

  • Higher clocks
  • Double rate FP16 math (Rapid Packed Math)
  • HBM2
  • New memory page management for the high-bandwidth cache controller
  • Tiled rasterization (Draw Stream Binning Rasterizer)
  • Increased ROP efficiency via L2 cache
  • Improved geometry engine
  • Primitive shading for even faster triangle culling
  • Direct3D feature level 12_1 graphics features
  • Improved display controllers

The interesting thing is that even with this significant number of changes, the Vega ISA is not a complete departure from the GCN4 ISA. AMD has added a number of new instructions – mostly for FP16 operations – along with some additional instructions that the company expects to improve performance for video processing and certain 8-bit integer operations, but nothing that radically sets Vega apart from earlier ISAs. So in terms of compute, Vega is still very comparable to Polaris and Fiji in how data moves through the GPU.
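
To make the packed-math idea concrete: two FP16 values share one 32-bit register lane, and a single packed instruction operates on both halves at once. Below is a rough NumPy model of that data layout (the packing and `packed_add` helpers are illustrative sketches, not AMD’s actual instructions):

```python
import numpy as np

# Two FP16 values occupy one 32-bit word; a packed FP16 instruction
# performs the same operation on both 16-bit lanes simultaneously.
a = np.array([1.5, -2.25], dtype=np.float16)
b = np.array([0.5, 4.0], dtype=np.float16)

packed_a = a.view(np.uint32)[0]  # both lanes of 'a' in one 32-bit word
packed_b = b.view(np.uint32)[0]

def packed_add(x, y):
    """Model of a lane-wise packed FP16 add on two 32-bit words."""
    lanes_x = np.array([x], dtype=np.uint32).view(np.float16)
    lanes_y = np.array([y], dtype=np.uint32).view(np.float16)
    return (lanes_x + lanes_y).view(np.uint32)[0]

result = np.array([packed_add(packed_a, packed_b)], dtype=np.uint32).view(np.float16)
print(result)  # [2. 1.75] -- two FP16 results from one modeled instruction
```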

Consequently, the burning question many will ask is whether the effective compute IPC is significantly higher than Fiji’s, and the answer is no. AMD has actually taken significant pains to keep the throughput latency of a CU at 4 cycles (4 stages deep); however, strictly speaking, existing code isn’t going to run any faster on Vega than on earlier architectures. In order to wring the most out of Vega’s new CUs, you need to take advantage of the new compute features. Note that this doesn’t mean compilers can’t take advantage of them on their own, but especially since the datatype matters, it’s important that code be designed for lower-precision datatypes to begin with.
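
A quick NumPy illustration (a generic sketch of half-precision limits, not Vega-specific code) of why FP32 code can’t simply be demoted to FP16 without design work:

```python
import numpy as np

# FP16 has a 10-bit mantissa and a maximum finite value of 65504,
# so not every FP32 workload can simply be dropped to half precision.
print(np.finfo(np.float16).max)            # 65504.0
print(np.float16(1024) + np.float16(0.5))  # 1024.0 -- at this magnitude FP16's
                                           # step size is 1.0, so the 0.5 is lost
```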

Comments

  • HollyDOL - Tuesday, August 15, 2017 - link

    Thank you, I already did. Not everywhere is cheap electricity.
  • Gigaplex - Tuesday, August 15, 2017 - link

    A little over $30 per year extra. I tend to upgrade on a 3 year cadence. That's around $100 extra I can use to bump up to the Nvidia card.
  • Outlander_04 - Tuesday, August 15, 2017 - link

    The highest cost for electricity I can see in the US is 26 cents per kilowatt-hour.
    The difference in gaming power consumption is 0.078 kW, meaning it would take about 12.8 hours of gaming to burn one extra kWh.
    Two hours of full-load gaming every day adds up to 730 hours a year, which means 57 kWh extra, for a total cost of $14.82 per year.
    In states with electricity costing 10 cents per kWh, the difference is about $5.70 a year.

    You might have to save a bit longer than you expect.
  • Yojimbo - Wednesday, August 16, 2017 - link

    Why did you assume he was interested in the 1070/Vega 56? Comparing the 1080 FE with the Vega 64 air cooled, the difference is 0.150 kilowatts. At your same assumption of 2 hours a day and 26 cents a kilowatt-hour, it comes to $28.50 a year, right in line with his estimate. It's not a stretch to think he would game more than 730 hours a year, either.
  • Outlander_04 - Thursday, August 17, 2017 - link

    The BF1 power consumption difference between Vega 64 and the 1080 FE is 0.08 kW.
    Not sure where you get your numbers from, but it is not this review.
    The numbers are essentially the same as I suggested above: 0.078 vs. 0.080.

    Less than $6 a year in states with lower utility costs, and as much as $15 a year in Hawaii.
    Yes, you could game more than 14 hours a week. It's also not a stretch to think you might game a lot less. What was your point?
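
The arithmetic in this exchange is easy to check; a minimal sketch, plugging in the deltas and rates the commenters quote above:

```python
# Annual cost of a GPU's extra power draw, using the deltas and
# electricity rates quoted in the comments above.
def annual_cost(delta_kw, hours_per_day, rate_per_kwh):
    kwh_per_year = delta_kw * hours_per_day * 365
    return kwh_per_year * rate_per_kwh

print(round(annual_cost(0.078, 2, 0.26), 2))  # 14.8  -- 26-cent states
print(round(annual_cost(0.078, 2, 0.10), 2))  # 5.69  -- 10-cent states
print(round(annual_cost(0.150, 2, 0.26), 2))  # 28.47 -- the 0.150 kW delta
```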
  • HollyDOL - Friday, August 18, 2017 - link

    I don't know where you look, but the 1080 FE system draws 310 W while the Vega 64 system draws 459 W, which is 149 W extra for no gain whatsoever.
  • Outlander_04 - Friday, August 18, 2017 - link

    379 vs. 459 watts for the 1080 FE vs. Vega 64.
    The delta is 0.08 kW.
    Those figures are right here in this review, on the gaming power consumption chart.
  • HollyDOL - Saturday, August 19, 2017 - link

    Lol man, you need to reread that chart. 379 W is the 1080 Ti FE, not the 1080 FE.
  • FourEyedGeek - Tuesday, August 22, 2017 - link

    What if you live in a hot part of the world? Extra heat means extra throttling; during the summer I reduce my OCs because of this. Slap on the air conditioning and it'll run a bit more to compensate, costing even more.

    I'd look at undervolting a Vega 56, if possible.
  • ET - Tuesday, August 15, 2017 - link

    So a Vega 72 is yet to come? Page 2 says that there are 6 CU arrays of 3 CUs each. That's 18 CUs, with only 16 enabled in Vega 64.
