CPU Benchmark Performance: Simulation And Rendering

Simulation and science benchmarks have a lot of overlap in our suite, but for this distinction we separate them into two segments based mostly on the utility of the resulting data. The benchmarks that fall under Science have a distinct real-world use for the data they output; the tests in our Simulation section act more like synthetics, although at some level they are still trying to simulate a given environment.

We are using DDR5 memory at the following settings:

  • DDR5-4800(B) CL40

Simulation

(3-1) DigiCortex 1.35 (32k Neuron, 1.8B Synapse)

(3-2a) Dwarf Fortress 0.44.12 World Gen 65x65, 250 Yr

(3-2b) Dwarf Fortress 0.44.12 World Gen 129x129, 550 Yr

(3-2c) Dwarf Fortress 0.44.12 World Gen 257x257, 550 Yr

(3-3) Dolphin 5.0 Render Test

(3-4a) Factorio v1.1.26 Test, 10K Trains

(3-4b) Factorio v1.1.26 Test, 10K Belts

(3-4c) Factorio v1.1.26 Test, 20K Hybrid

When it comes to simulation, the combination of high core frequencies and better IPC gives Intel's 12th Gen Core series the advantage in most situations.
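Tests like Dwarf Fortress world generation and Factorio's update loop are famously limited by a dominant serial thread, which is why per-core speed decides the results here rather than core count. As a rough illustration of that intuition, here is a minimal Amdahl's law sketch; the parallel fraction used is hypothetical, not measured from these workloads:

```python
# Amdahl's law: the speedup from n cores when only a fraction p
# of the work can run in parallel. The p value below is hypothetical.
def amdahl_speedup(p: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

# For a mostly serial workload (p = 0.2), 16 cores yield only
# ~1.23x, so core frequency and IPC dominate the result.
print(f"{amdahl_speedup(0.2, 16):.2f}x")
```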

Rendering

(4-1) Blender 2.83 Custom Render Test

(4-2) Corona 1.3 Benchmark

(4-3a) Crysis CPU Render at 320x200 Low

(4-3b) Crysis CPU Render at 1080p Low

(4-3c) Crysis CPU Render at 1080p Medium

(4-4) POV-Ray 3.7.1

(4-5) V-Ray Renderer

(4-6a) CineBench R20 Single Thread

(4-6b) CineBench R20 Multi-Thread

(4-7a) CineBench R23 Single Thread

(4-7b) CineBench R23 Multi-Thread

Looking at performance in the rendering section of our test suite, both the Core i7 and Core i5 performed creditably. The biggest factor to consider here is that rendering scales readily across cores: a higher core and thread count combined with strong per-core IPC equals more rendering power.
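As a first-order illustration (not how our results are computed), throughput in a well-threaded renderer can be approximated as threads × frequency × IPC. The sketch below encodes that naive model; every figure in it is hypothetical:

```python
# Naive first-order model of multi-threaded rendering throughput:
# work per second ~ threads * frequency * relative IPC.
# All numbers are hypothetical; real scaling is sub-linear due to
# memory bandwidth limits, boost behavior, and hybrid-core scheduling.
def relative_throughput(threads: int, freq_ghz: float, rel_ipc: float) -> float:
    return threads * freq_ghz * rel_ipc

cpu_a = relative_throughput(threads=20, freq_ghz=4.5, rel_ipc=1.10)
cpu_b = relative_throughput(threads=16, freq_ghz=4.4, rel_ipc=1.00)
print(f"CPU A vs CPU B: {cpu_a / cpu_b:.2f}x")
```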

Comments

  • mode_13h - Tuesday, April 5, 2022 - link

    > Intel would collapse under the weight of its own cost structure built around those fabs

    It wouldn't have to be overnight, and obviously they'd have to rationalize some aspects of the business. However, it seems like the right thing to do, especially if there are activities they couldn't undertake without manufacturing in-house. That just screams either "inefficiency" or, more likely, "unfair advantage".

    The one thing I don't accept is that "it has to be this way, because it always was". That's almost never a good reason not to change something.
  • Mike Bruzzone - Tuesday, April 5, 2022 - link

    "That just screams either "inefficiency" or, more likely, "unfair advantage".

    I agree, both, there are many inefficiencies in enterprise and industry relations, and governance and oversight.

    "Always was", the inefficiency is being addressed for a very long time! It's just the way things have worked out over 24 years to resolve Intel inefficiencies that are not effective under democratic capitalism caught in associate network conundrums. mb
    .

  • Spunjji - Friday, April 1, 2022 - link

    "Because it quite literally cannot be both."

    It literally can given that Intel only started adding more than 4 cores after Ryzen launched and then, subsequently, had to blow their power requirements out just to keep up... and you're reaching all the way back to 'dozer - a CPU designed long after AMD had relinquished leadership - to try to bat back the valid accusation that Intel have always abused their leadership position to rinse consumers.
  • ballsystemlord - Saturday, April 2, 2022 - link

    And CUDA isn't vendor lock-in?
    GPU compute is a great idea -- and that's not just my opinion. AMD failed to deliver in a big way when it came to getting CPU/GPU shared compute capabilities off the ground. They're still working on it (CDNA...). But at this point it's unlikely to be available to us -- which is what I dislike.
  • mode_13h - Sunday, April 3, 2022 - link

    > AMD failed to deliver in a big way when it came to getting
    > CPU/GPU shared compute capabilities off the ground.

    Yeah, HSA really fizzled, and even the original APU & Fusion concept as some kind of synergistic processing unit went sideways.

    Then AMD got distracted by AI and became consumed by chasing Nvidia in that market and in HPC. The consumer platform has largely been neglected by them since.
  • Khanan - Friday, April 8, 2022 - link

    Nonsense. Fusion culminated into APUs and is one of the biggest successes of AMD ever, please talk and comment less, you’re a huge wannabe.
  • mode_13h - Monday, April 11, 2022 - link

    > Fusion culminated into APUs and is one of the biggest successes of AMD ever,

    What I mean is that "Fusion" turned out to be a marketing thing. The idea of using iGPUs as a compute accelerator didn't really go anywhere.

    AMD jumped from backing OpenCL to HSA, thinking that would spur industry adoption, but it fizzled even worse than OpenCL (which has continued plodding along, in spite of loss of interest/support).

    Microsoft is even discontinuing C++ AMP.

    > please talk and comment less, you’re a huge wannabe.

    Please troll less. News comments were fine without you. I have yet to see you add anything of value. Mostly, you just seem to antagonize people, which is the very definition of trolling.
  • mode_13h - Monday, April 11, 2022 - link

    > OpenCL (which has continued plodding along, in spite of loss of interest/support).

    I meant AMD's loss of interest/support. Heck, even Nvidia has gotten on board with 3.0!
  • Kangal - Tuesday, March 29, 2022 - link

    It's hard not to agree.
    These Intel 12th-gen chips are a 2022 product and should be compared with a 2022 alternative. Besides, they're somewhat of a paper launch anyway. AMD has a lot of headroom to boost single-core turbo, add more cores, increase thermal headroom, add faster memory... all without a major overhaul of the Zen3 architecture. They have somewhat rested on their laurels with Zen3, but I suspect Zen4 is going to be a very distinct uplift. The way the companies stack up is:

    2017 Zen1 vs Intel 7th-gen
    2018 Zen+ vs Intel 8th-gen
    2019 Zen2 vs Intel 9th-gen
    2020 Zen3 vs Intel 10th-gen
    2021 Zen3. vs Intel 11th-gen
    2022 Zen4 vs Intel 12th-gen

    PS: both AMD Zen2 and Intel 10th-gen are significantly slower in single-core and multi-thread performance, and use much more energy, than Apple's M1 chips. Things look a bit more even with Zen3 and Intel 11th-gen. But the Apple M2 chips will likely "humiliate" the likes of Intel 12th-gen and AMD Zen4. Then again, this is comparing Apples to Windows, so it's a moot point.
  • theMillen - Wednesday, March 30, 2022 - link

    Except, 12th-gen launched in 2021. And 13th-gen will launch in 2022... soooo
