Intel Core i3-12300 Performance: DDR5 vs DDR4

Intel’s 12th generation processors, from the flagship Core i9-12900K at the top of the stack down to more affordable entry-level offerings such as the Core i3-12300, allow users to build a new system with the latest technologies available. One of the main elements that makes Intel’s Alder Lake platform flexible for system builders is its support for both DDR5 and DDR4 memory. It’s no secret that DDR5 memory costs (far) more than its already established DDR4 counterpart; part of that is an early adopter’s fee, as having the latest and greatest technology comes at a price premium.

The reason we opted to test the difference in performance between DDR5 and DDR4 memory with the Core i3-12300 comes down to price point. While users will most likely pair DDR5 with performance SKUs such as the Core i9-12900K, Core i7-12700K, and Core i5-12600K, users building a new system around the Core i3-12300 are more likely to go down a more affordable route. That means using DDR4 memory, which is inherently cheaper than DDR5, and opting for a cheaper motherboard such as an H670, B660, or H610 option. Such systems give up some performance versus what the i3-12300 can do at its peak, but in return they can bring costs down significantly.

Traditionally we test our memory at JEDEC specifications; JEDEC is the standards body that sets the requirements for each memory standard. In the case of Intel's Alder Lake, the Core i3 supports both DDR5 and DDR4 memory. Below are the memory settings we used for our DDR5 versus DDR4 testing (a quick latency comparison follows the list):

  • DDR4-3200 CL22
  • DDR5-4800(B) CL40
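
On paper those two settings are closer than the raw CAS numbers suggest: first-word latency is the CAS count divided by the memory clock (half the transfer rate), which puts DDR4-3200 CL22 at about 13.8 ns versus roughly 16.7 ns for DDR5-4800 CL40, while DDR5 holds a big peak bandwidth advantage. A minimal sketch of the arithmetic in C (illustrative only, not part of our test suite):

    #include <stdio.h>

    /* First-word latency in ns: CL cycles at the memory clock, which is
       half the transfer rate, so ns = CL * 2000 / (MT/s). */
    static double cas_latency_ns(int cl, int mts) { return cl * 2000.0 / mts; }

    /* Peak bandwidth in GB/s for one 64-bit (8-byte) wide channel. */
    static double peak_bw_gbs(int mts) { return mts * 8.0 / 1000.0; }

    int main(void) {
        printf("DDR4-3200 CL22: %.2f ns, %.1f GB/s\n",
               cas_latency_ns(22, 3200), peak_bw_gbs(3200));
        printf("DDR5-4800 CL40: %.2f ns, %.1f GB/s\n",
               cas_latency_ns(40, 4800), peak_bw_gbs(4800));
        return 0;
    }

In short, DDR4-3200 CL22 still edges out DDR5-4800 CL40 on first-word latency, while DDR5 wins comfortably on raw bandwidth, which is why the gap between the two varies with how memory-bound each workload is.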

CPU Performance: DDR5 versus DDR4

[Benchmark charts, DDR5-4800 CL40 vs DDR4-3200 CL22 on the Core i3-12300:]

  • AppTimer: GIMP 2.10.18
  • 3D Particle Movement v2.1 (non-AVX)
  • 3D Particle Movement v2.1 (Peak AVX)
  • NAMD ApoA1 Simulation
  • Blender 2.83 Custom Render Test
  • Corona 1.3 Benchmark
  • POV-Ray 3.7.1
  • CineBench R20 Single Thread
  • CineBench R20 Multi-Thread
  • CineBench R23 Single Thread
  • CineBench R23 Multi-Thread
  • Handbrake 1.3.2, 1080p30 H264 to 480p Discord
  • Handbrake 1.3.2, 1080p30 H264 to 720p YouTube
  • Handbrake 1.3.2, 1080p30 H264 to 4K60 HEVC
  • WinRAR 5.90 Test, 3477 files, 1.96 GB
  • Geekbench 5 Single Thread
  • Geekbench 5 Multi-Thread

In our computational benchmarks, there wasn't much difference between DDR5-4800 CL40 and DDR4-3200 CL22 when using the Core i3-12300. The biggest gap came in our WinRAR benchmark, which leans heavily on memory performance; DDR5 performed around 21% better than DDR4 in this scenario.
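
A swing of that size tracks with how bandwidth-hungry compression can be. As a rough way to see the memory dependence for yourself, a throughput probe along these lines (a hedged sketch, not our actual benchmark harness) copies a buffer far larger than the caches so that DRAM speed dominates the timing:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Rough memory-throughput probe: time repeated copies of a buffer
       much larger than any CPU cache, so DRAM speed dominates. */
    int main(void) {
        size_t n = 256UL * 1024 * 1024;            /* 256 MiB per buffer */
        char *src = malloc(n), *dst = malloc(n);
        if (!src || !dst) return 1;
        memset(src, 1, n);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 8; i++)
            memcpy(dst, src, n);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        /* Each pass reads n bytes and writes n bytes. */
        printf("~%.1f GB/s effective\n", 8 * 2.0 * (double)n / s / 1e9);
        free(src);
        free(dst);
        return 0;
    }

Numbers from a toy like this won't match WinRAR, but the reported figure should move visibly between a DDR4-3200 and a DDR5-4800 configuration.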

Gaming Performance: DDR5 versus DDR4

[Benchmark charts, DDR5-4800 CL40 vs DDR4-3200 CL22 on the Core i3-12300:]

  • Civilization VI - 1080p Max - Average FPS
  • Civilization VI - 1080p Max - 95th Percentile
  • Civilization VI - 4K Min - Average FPS
  • Civilization VI - 4K Min - 95th Percentile
  • Borderlands 3 - 1080p Max - Average FPS
  • Borderlands 3 - 1080p Max - 95th Percentile
  • Borderlands 3 - 4K VLow - Average FPS
  • Borderlands 3 - 4K VLow - 95th Percentile
  • Far Cry 5 - 1080p Ultra - Average FPS
  • Far Cry 5 - 1080p Ultra - 95th Percentile
  • Far Cry 5 - 4K Low - Average FPS
  • Far Cry 5 - 4K Low - 95th Percentile

On the whole, DDR5 does perform better in our gaming tests, but not by enough to make it a 'must have' compared to DDR4 memory. The gains are marginal for the most part, with DDR5 offering around 3-7 more frames per second over DDR4, depending on how well each title's game engine is optimized.

Comments

  • Alistair - Friday, March 4, 2022 - link

    actually a quad core is great for 360 Hz gaming too; the problem is the locked clock speed

    if Intel would release an unlocked quad core that could run at 5 GHz+, it would be a dream chip

    that's why they don't release it: they want gamers to buy useless 16-core CPUs for gaming, as game FPS comes from cache and clock speed, not core count
  • mode_13h - Saturday, March 5, 2022 - link

    I definitely agree that you shouldn't have to buy more cores just to get higher peak clock speeds.

    With Intel's Xeon CPUs, it was typically the case that models with fewer cores had higher base and peak clock speeds. I think that started to change when AMD set up their product stack so that each step up enabled more cores and/or higher clock speeds. As Intel moved to 6- and 8-core mainstream CPUs, they did the same thing.

    Where AMD sort of bucked the trend was with the 3300X. That little screamer was an absolute performance bargain. I almost bought one a couple of times: first when it launched, and then again when it came back in stock in late 2020 or early 2021, but I passed because it was selling above list price.

    Anyway, I wish AMD would do something like that with a Zen3 or Zen3+, though it's looking unlikely.
  • Mike Bruzzone - Friday, March 4, 2022 - link

    Hi Werewebb,

    Agreed, modern quads work great for Office essentials and home essentials including facility management and security.

    I'm speaking English; are we communicating? I think so. Dialect differences between practice areas can, however, call for interpretation to carry meaning across into another practice area, so that cross-practice, cross-function comprehension achieves a dialogue. I think so.

    "An Alder Lake quad-core is equal or better than a Ryzen 5 2600. All benchmarks also show substantial improvements in 1% lows. What matters most is overall performance, not simply the number of cores."

    Encoding, transcoding, and compiling favor octa-core-centric advantages.

    AMD went after the Ivy Bridge EE quad with the 3300X and won, and there are plenty of reasonably priced E5-1600 v2 quads and a bunch of v2 hexa-cores, plus Haswell EE in all core counts just entered the used market; plenty of good choices, especially if you have a board that can be upgraded.

    Channel data from this last week:

    Core Haswell desktop returns to secondary market + 46.6%, and mobile + 7.5% in prior eight weeks and the replacement trend is from Haswell forward in time.

    Ivy Bridge EE + 161.11% octa/hexa return to used market prior eight weeks presents a telling indicator.

    Haswell Extremes all SKUs + 180% in the prior eight weeks is a strong desktop upgrade indicator.

    i7 Refresh + 14%, 4790 comes back to secondary + 17.8% and 4790K + 14% that is 10% of 90_

    i5 Refresh 4590 comes back to secondary + 403% and 4690 sells down < 69% at 19% of 4590
    i3 Refresh + 81% and 4150 comes back + 161%

    Pentium Refresh + 5.5%
    Celeron Refresh + 14.5%

    i7 Original 4770 + 131% and K + 217% that is 24.1% of 70_

    i5 Original + 169% and 4570 + 98%, 4570S + 952%, 4570T + 49.7% and 4570T is 26.3% of 4570_ all variants

    i3 Original + 4% and 4130T + 13.7%

    Pentium Original + 18.8% and G3220 comes back to secondary + 21.8% followed by 3420 + 19.2%

    More in the comment line (several comments, actually); keep scrolling down until you find last week's Intel channel data and sales trend:

    https://seekingalpha.com/instablog/5030701-mike-br...

    mb

    The vast majority of people are just fine running a modern quad-core.
  • nandnandnand - Thursday, March 3, 2022 - link

    The explanation from here should be mentioned for AppTimer: GIMP, since the results are so weird:

    https://www.anandtech.com/show/16214/amd-zen-3-ryz...

    Maybe the test should be dropped entirely.
  • Slash3 - Thursday, March 3, 2022 - link

    It is, in fact, a deeply stupid test with no value.
  • mode_13h - Thursday, March 3, 2022 - link

    "As it turns out, GIMP does optimizations for every CPU thread in the system, which requires that higher thread-count processors take a lot longer to run."

    Holy cow. I don't believe that. There's something else going on there, like maybe code using a stupid spinlock or something... which could actually be the case if some plugins or the core app used libgomp.

    At the time that article was written, the only Big.Little CPUs were in phones (okay, let's forget Lakemont - nobody was running GIMP on a Lakemont). There was absolutely no reason for it to do per-thread optimizations!
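
    That kind of spin-wait is easy to illustrate: if idle worker threads busy-wait instead of sleeping, each one pins a full core, and the waste grows with thread count. A purely hypothetical sketch in C (not GIMP's actual code, and the thread count is made up):

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>
        #include <unistd.h>

        static atomic_int done = 0;

        static void *spin_worker(void *arg) {
            (void)arg;
            while (!atomic_load(&done))
                ;  /* busy-wait: burns a full core while "idle" */
            return NULL;
        }

        int main(void) {
            enum { N = 8 };
            pthread_t t[N];
            for (int i = 0; i < N; i++)
                pthread_create(&t[i], NULL, spin_worker, NULL);
            sleep(1);               /* stand-in for 1 s of real work elsewhere */
            atomic_store(&done, 1); /* release the spinners */
            for (int i = 0; i < N; i++)
                pthread_join(t[i], NULL);
            puts("8 spinning threads just wasted ~8 core-seconds");
            return 0;
        }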
  • lmcd - Friday, March 4, 2022 - link

    No one ran anything on a Lakemont, as no one ran a Lakemont.
  • mode_13h - Saturday, March 5, 2022 - link

    Right. I was just noting that for completeness.
  • nandnandnand - Sunday, March 6, 2022 - link

    Lakefield, you mean! Although Intel does appear to have had a Lakemont, Google "Intel Lakemont" to find another deceased product.

    I have used GIMP on an RPi4 (rough, but usable), so I can imagine Lakefield would be better. Lakefield was too expensive for its relatively bad performance (apparently it couldn't run all 5 cores at once). Intel gets another swing at it with the Pentium 8500 and other Alder Lake chips.
  • mode_13h - Tuesday, March 8, 2022 - link

    Thanks for the correction.

    Yeah, I get the feeling Lakefield was testing out a few too many new technologies to be executed well. At least it served as a test vehicle for Big+Little and their die-stacking tech.
