Intel Core i3-12300 Performance: DDR5 vs DDR4

Intel's 12th generation processors, from the top of the stack (including the flagship Core i9-12900K) down to more affordable entry-level offerings such as the Core i3-12300, allow users to build a new system with the latest technologies available. One of the main elements that makes Intel's Alder Lake platform flexible for users building a new system is its support for both DDR5 and DDR4 memory. It's no secret that DDR5 memory costs (far) more than its already established DDR4 counterpart. Part of that is an early adopter's fee: having the latest and greatest technology comes at a price premium.

The reason we opted to test the difference in performance between DDR5 and DDR4 memory with the Core i3-12300 comes down to price. While users will most likely look to use DDR5 with performance SKUs such as the Core i9-12900K, Core i7-12700K, and Core i5-12600K, users building a new system around the Core i3-12300 are more likely to go down a more affordable route. This includes using DDR4 memory, which is inherently cheaper than DDR5, and opting for a cheaper motherboard such as an H670, B660, or H610 option. Such systems do give up some performance versus what the i3-12300 can do at its peak, but in return they can bring costs down significantly.

Traditionally we test our memory settings at JEDEC specifications. JEDEC is the standards body that determines the requirements for each memory standard. In the case of Intel's Alder Lake, the Core i3 supports both DDR5 and DDR4 memory. Below are the memory settings we used for our DDR5 versus DDR4 testing:

  • DDR4-3200 CL22
  • DDR5-4800(B) CL40

CPU Performance: DDR5 versus DDR4

[Benchmark charts: AppTimer GIMP 2.10.18; 3D Particle Movement v2.1 (non-AVX and Peak AVX); NAMD ApoA1 Simulation; Blender 2.83 Custom Render Test; Corona 1.3 Benchmark; POV-Ray 3.7.1; CineBench R20 and R23 (single- and multi-thread); Handbrake 1.3.2 1080p30 H264 to 480p Discord, 720p YouTube, and 4K60 HEVC; WinRAR 5.90 (3477 files, 1.96 GB); Geekbench 5 (single- and multi-thread), each DDR5 vs DDR4.]

In our computational benchmarks, there wasn't much difference between DDR5-4800 CL40 and DDR4-3200 CL22 when using the Core i3-12300. The biggest difference came in our WinRAR benchmark, which is heavily reliant on memory performance; DDR5 performed around 21% better than DDR4 in this scenario.

Gaming Performance: DDR5 versus DDR4

[Benchmark charts: Civilization VI (1080p Max; 4K Min), Borderlands 3 (1080p Max; 4K VLow), and Far Cry 5 (1080p Ultra; 4K Low), each with average FPS and 95th percentile results, DDR5 vs DDR4.]

On the whole, DDR5 does perform better in our gaming tests, but not by enough to make it a 'must have' compared to DDR4 memory. The gains are marginal for the most part, with DDR5 offering around 3-7 more frames per second than DDR4, depending on the title's game engine optimization.

Comments

  • CiccioB - Friday, March 4, 2022 - link

    You may be surprised by how many applications still use a single thread, or, even if multi-threaded, are bottlenecked on one thread.

    All office suites, for example, use just a main thread. Use a slow 32-thread-capable CPU and you'll see how slow Word or PowerPoint can become. Excel is somewhat more threaded, but certainly not to the level of using 32 cores, even for complex tables.
    Compilers are not multi-threaded. They just spawn many instances to compile more files in parallel, and if you have many cores it just ends up being I/O limited. At the end of the compiling process, however, you'll have the linker, which is a single-threaded task. Run it on a slow 64-core CPU and you'll wait much longer for the final binary than on a fast Celeron CPU.

    All graphics retouching applications are single-threaded. What is multi-threaded is just some of the effects you can apply. But the interface and the general data management run on a single thread. That's why Photoshop's layer management can be so slow even on a Threadripper.

    Printing apps and format converters are single-threaded. CADs are as well.
    And browsers too, though they mask it as much as possible. To my surprise, I found that JavaScript runs on a single thread for all opened windows: if I encounter problems on a heavy JavaScript page, other pages are slowed down as well, despite having spare cores.

    In the end, there are many, many tasks that cannot be parallelized. Single-core performance can help much more than having a myriad of slower cores.
    Yet there are some (and only some) applications that take advantage of a swarm of small cores, like 3D renderers, video converters and... well, that's it. Unless you count scientific simulations, but I doubt those are interesting for a consumer-oriented market.
    BTW, video conversion can be done easily and more efficiently using HW converters like those present in GPUs, so you are left with 3D renderers as the only workloads able to saturate however many cores you have.
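
    For contrast, here is a minimal C++ sketch (hypothetical names, standard library only) of the offload pattern that keeps a UI responsive, which is exactly what most of the interfaces described above don't do:

        #include <future>
        #include <string>

        // Launch the heavy work (say, an image filter) on a worker thread,
        // so the main/UI thread keeps pumping events. Names are hypothetical.
        std::future<std::string> start_heavy_job(const std::string& input) {
            return std::async(std::launch::async, [input] {
                std::string result = input;  // placeholder for the real work
                // ... expensive processing happens here, off the UI thread ...
                return result;
            });
        }

        // The UI thread can poll future.wait_for(std::chrono::seconds(0))
        // between events, and only block on get() once the result is ready.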
  • mode_13h - Saturday, March 5, 2022 - link

    > Compilers are not multi-threaded.

    There's been some work in this area, but it's generally a lower priority due to the file-level concurrency you noted.

    > if you have many cores it just ends up being I/O limited.

    I've not seen this, but I also don't have anything like a 64-core CPU. Even on a 2x 4-core 3.4 GHz Westmere server with a 4-disk RAID-5, I could do a 16-way build and all the cores would stay pegged. You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes.

    > At the end of the compiling process, however,
    > you'll have the linker, which is a single-threaded task.

    There's a new, multi-threaded linker on the block. It's called "mold", which I guess is a play on Google's "gold" linker. For those who don't know, the traditional executable name for a UNIX linker is ld.

    > In the end, there are many, many tasks that cannot be parallelized.

    There are more that could. They just aren't because... reasons. There are still software & hardware improvements that could enable a lot more multi-threading. CPUs are now starting to get so many cores that I think we'll probably see this becoming an area of increasing focus.
  • CiccioB - Saturday, March 5, 2022 - link

    You may be aware that there are lots of compilation toolchains that are not "Google-based" and are not based on experimental code either.

    "You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes."
    Try compiling something that is not "Hello world" and you'll see that there's no way to keep the files in RAM unless you have put your entire project on a RAM disk.

    "There are more that could. They just aren't because... reasons."
    Yes, the fact that making them multi-threaded costs a lot of work for a marginal benefit.
    Most algorithms ARE NOT PARALLELIZABLE; they run as a contiguous stream of code where each piece of data is the result of the previous instruction.

    Parallelizable algorithms are a minority, and most of them require a lot of work to perform better than a single-threaded version.
    You can easily see this in the fact that multi-core CPUs have existed in the consumer market for more than 15 years, and still only a small number of applications, mostly renderers and video transcoders, really take advantage of many cores. The rest do not, and mostly prefer single-threaded performance (either through improved IPC or faster clocks).
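
    To make the distinction concrete, a minimal C++ sketch (hypothetical functions; C++17 parallel algorithms assumed) of a loop that cannot be split across cores versus one that can:

        #include <algorithm>
        #include <execution>
        #include <numeric>
        #include <vector>

        // Inherently serial: every iteration consumes the result of the
        // previous one, so extra cores cannot help.
        double serial_recurrence(const std::vector<double>& x) {
            double state = 0.0;
            for (double v : x)
                state = 0.5 * state + v;  // depends on the value just computed
            return state;
        }

        // Trivially parallel: elements are independent, so the runtime may
        // spread the work across however many cores are available.
        double parallel_sum_of_squares(std::vector<double>& x) {
            std::transform(std::execution::par, x.begin(), x.end(), x.begin(),
                           [](double v) { return v * v; });
            return std::reduce(std::execution::par, x.begin(), x.end(), 0.0);
        }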
  • mode_13h - Tuesday, March 8, 2022 - link

    > Try compiling something that is not "Hello world" and you'll see

    My current project is about 2 million lines of code. When I build on a 6-core workstation with a SATA SSD, the entire build is CPU-bound. When I build on an 8-core server with a HDD RAID, the build is probably > 90% CPU-bound.

    As for the toolchain, we're using vanilla gcc and ld. Oh, and ccache, if you know what that is. It *should* make the build even more I/O-bound, but I've not seen evidence of that.

    I get that nobody likes to be contradicted, but you could try fact-checking yourself instead of adopting a patronizing attitude. I've been doing commercial software development for multiple decades. About 15 years ago, I even experimented with distributed compilation and still found it to be mostly compute-bound.

    > You can easily see this in the fact that multi-core CPUs have existed in the consumer
    > market for more than 15 years, and still only a small number of applications, mostly
    > renderers and video transcoders, really take advantage of many cores.

    Years ago, I saw an article on this site analyzing web browser performance and revealing they're quite heavily multi-threaded. I'd include a link, but the subject isn't addressed in their 2020 browser benchmark article and I'm not having great luck with the search engine.

    Anyway, what I think you're missing is that phones have so many cores. That's a bigger motivation for multi-threading, because it's easier to increase efficient performance by adding cores than any other way.

    Oh, and don't forget games. Most games are pretty well-threaded.
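
    As a rough illustration of how they do it, a minimal fork/join sketch in C++ (hypothetical function and data; real engines use persistent job systems rather than spawning threads every frame):

        #include <algorithm>
        #include <thread>
        #include <vector>

        // Fan per-entity updates out across all hardware threads, then join
        // before the frame is presented. Entities are independent here.
        void update_entities(std::vector<float>& positions, float dt) {
            unsigned n = std::max(1u, std::thread::hardware_concurrency());
            std::size_t chunk = (positions.size() + n - 1) / n;
            std::vector<std::thread> workers;
            for (unsigned t = 0; t < n; ++t) {
                workers.emplace_back([&positions, dt, t, chunk] {
                    std::size_t begin = t * chunk;
                    std::size_t end = std::min(begin + chunk, positions.size());
                    for (std::size_t i = begin; i < end; ++i)
                        positions[i] += dt;  // each entity updated independently
                });
            }
            for (auto& w : workers)
                w.join();
        }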
  • GeoffreyA - Tuesday, March 8, 2022 - link

    "analyzing web browser performance and revealing they're quite heavily multi-threaded"

    I think it was round about the IE9 era, which is 2011, that Internet Explorer, at least, started to exploit multi-threading. I still remember what a leap it was upgrading from IE8, and that was on a mere Core 2 Duo laptop.
  • GeoffreyA - Tuesday, March 8, 2022 - link

    As for compilers being heavy on the CPU: amateur commentary on my part, but I've noticed the newer ones seem to be doing a whole lot more, obviously in line with the growing language specification, and take a surprising amount of time to compile. Till recently, I was actually still using VC++ 6.0 from 1998 (yes, I know, I'm crazy), and it used to slice through my small project in no time. Going to VS2019, I was stunned by how much longer it took for the exact same thing. Thankfully, turning on MT compilation, which I believe just duplicates compiler instances, made it cut through the project like butter again.
  • mode_13h - Wednesday, March 9, 2022 - link

    Well, presumably you compiled using newer versions of the standard library and other runtimes, which use newer and more sophisticated language features.

    Also, the optimizers are now much more sophisticated. And compilers can do much more static analysis, to possibly find bugs in your code. All of that involves much more work!
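
    A contrived example of the kind of thing newer compilers' static analysis will flag (GCC and MSVC both warn on this at their usual warning levels, if I recall correctly) where older ones stayed silent:

        // 'total' is never initialized; modern compilers warn that it
        // may be read before being written.
        int sum_to(int n) {
            int total;
            for (int i = 0; i < n; ++i)
                total += i;  // read of an uninitialized value on first pass
            return total;
        }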
  • GeoffreyA - Wednesday, March 9, 2022 - link

    On migration, it stepped up the project to C++14 as the language standard. And over the years, MSVC has added a great deal, particularly features that have to do with security. Optimisation, too, seems much more advanced. As a crude indicator, the compiler backend, C2.DLL, weighs in at 720 KB in VC6. In VS2022, round about 6.4-7.8 MB.
  • mode_13h - Thursday, March 10, 2022 - link

    So, I trust you've found cppreference.com? Great site, though it has occasional holes and the very rare error.

    Also worth a look is the CppCoreGuidelines on isocpp's GitHub. I agree with quite a lot of it. Even when I don't, I find it's usually worth understanding their perspective.

    Finally, here you'll find some fantastic C++ infographics:

    https://hackingcpp.com/cpp/cheat_sheets.html

    Lastly, did you hear that Google has opened up GSoC to non-students? If you fancy working on an open source project, getting mentored, and getting paid for it, have a look!

    China's Institute of Software Chinese Academy of Sciences also ran one, last year. Presumably, they'll do it again, this coming summer. It's open to all nationalities, though the 2021 iteration was limited to university students. Maybe they'll follow Google and open it up to non-students, as well.

    https://summer.iscas.ac.cn/#/org/projectlist?lang=...
  • GeoffreyA - Thursday, March 10, 2022 - link

    I doubt I'll participate in any of those programmes (the lazy bone in me talking), but many, many thanks for pointing them out, as well as the references! Also, last year you directed me to Visual Studio Community Edition, and it turned out to be fantastic, with no real limitations. I am grateful. It's been a big step forward.

    That cppreference is excellent. I looked at it when I was trying to find a lock to replace a Win32 CRITICAL_SECTION in a singleton, and the one I found, I think it was std::mutex, just dropped in and worked. But I left the old version in, because there's other Win32 code in that module, and using std::mutex would have meant no more compiling on the older VS, which, surprisingly, still works with the project.
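
    If it helps anyone else, a minimal sketch of that kind of drop-in (hypothetical class name; my actual code isn't posted here):

        #include <mutex>

        // std::mutex guarding lazy initialization, standing in for the old
        // Win32 CRITICAL_SECTION. Class and members are hypothetical.
        class Settings {
        public:
            static Settings& instance() {
                static std::mutex m;
                std::lock_guard<std::mutex> lock(m);  // RAII: unlocks on return
                static Settings* inst = nullptr;
                if (!inst)
                    inst = new Settings();
                return *inst;
            }
        private:
            Settings() = default;
        };

    Worth noting: since C++11, a plain function-local static instance is itself initialized thread-safely, so the explicit mutex is only needed if there's more state to guard.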

    Again, much obliged for the leads and references.
