CPU Benchmark Performance: Power, Office, and Science

Our previous set of ‘office’ benchmarks has often been a mix of science and synthetics, so this time we wanted to keep our office section purely about real-world performance.

For the Core i3-12300, we are running DDR5 memory at the following settings:

  • DDR5-4800(B) CL40

Power

(0-0) Peak Power

As expected from a 4C/8T processor, the Core i3-12300 has a lower power draw than the 6C/12T and 8C/16T models.

Office

(1-1) Agisoft Photoscan 1.3, Complex Test

(1-2) AppTimer: GIMP 2.10.18

Compared to previous generations of Intel's architecture, Alder Lake (here the Core i3-12300) is ahead of everything else in variable and lightly-threaded workloads.

Science

(2-1) 3D Particle Movement v2.1 (non-AVX)

(2-2) 3D Particle Movement v2.1 (Peak AVX)

(2-3) yCruncher 0.78.9506 ST (250m Pi)

(2-4) yCruncher 0.78.9506 MT (2.5b Pi)

(2-4b) yCruncher 0.78.9506 MT (250m Pi)

(2-5) NAMD ApoA1 Simulation

(2-6) AI Benchmark 0.1.2 Total

(2-6a) AI Benchmark 0.1.2 Inference

(2-6b) AI Benchmark 0.1.2 Training

In any scenario involving AVX-based workloads, or in multi-core and multi-threaded applications, the Core i3-12300 lags behind the chips with higher core and thread counts.


140 Comments


  • CiccioB - Friday, March 4, 2022 - link

    You may be surprised by how many applications still use a single thread, or, even when multi-threaded, are bottlenecked by a single thread.

    All office suites, for example, do their work on a single main thread. Use a slow 32-thread CPU and you'll see how slow Word or PowerPoint can become. Excel is somewhat more threaded, but certainly not to the point of using 32 cores, even for complex tables.
    Compilers are not multi-threaded. They just spawn many instances to compile multiple files in parallel, and if you have many cores the build just ends up I/O-limited. At the end of the compilation, however, comes the linker, which is a single-threaded task. Run it on a slow 64-core CPU and you'll wait much longer for the final binary than on a fast Celeron.

    All graphics-retouching applications are single-threaded. Only some of the effects you can apply are multi-threaded; the interface and the general data management run on a single thread. That's why Photoshop's layer management can be so slow even on a Threadripper.

    Printing apps and format converters are single-threaded. So are CADs.
    Browsers too, though they mask it as much as possible. To my surprise, I found that JavaScript runs on a single thread shared across all open windows: if I hit a problem on a heavy JavaScript page, other pages slow down as well, despite spare cores.

    In the end, there are many, many tasks that cannot be parallelized. Single-core performance can help much more than a myriad of slower cores.
    Yet there are some (and only some) applications that take advantage of a swarm of small cores: 3D renderers, video converters and... well, that's it. Unless you count scientific simulations, but I doubt those are interesting for a consumer-oriented market.
    BTW, video conversion can be done more easily and efficiently with the hardware encoders found in GPUs, so you're left with 3D renderers as the only workload able to saturate however many cores you have.
  • mode_13h - Saturday, March 5, 2022 - link

    > Compilers are not multi-threaded.

    There's been some work in this area, but it's generally a lower priority due to the file-level concurrency you noted.

    > if you have many cores the build just ends up I/O-limited.

    I've not seen this, but I also don't have anything like a 64-core CPU. Even on a 2x 4-core 3.4 GHz Westmere server with a 4-disk RAID-5, I could do a 16-way build and all the cores would stay pegged. You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes.

    > At the end of the compilation, however,
    > comes the linker, which is a single-threaded task.

    There's a new, multi-threaded linker on the block. It's called "mold", which I guess is a play on Google's "gold" linker. For those who don't know, the traditional executable name for a UNIX linker is ld.

    > In the end, there are many, many tasks that cannot be parallelized.

    There are more that could. They just aren't because... reasons. There are still software & hardware improvements that could enable a lot more multi-threading. CPUs are now starting to get so many cores that I think we'll probably see this becoming an area of increasing focus.
  • CiccioB - Saturday, March 5, 2022 - link

    You may be aware that there are lots of toolchain components that are not Google-based, and are not built on experimental code either.

    "You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes."
    Try compiling something that is not "Hello world" and you'll see that there's no way to keep the files in RAM, unless you've put your entire project on a RAM disk.

    "There are more that could. They just aren't because... reasons."
    Yes: the fact that making them multi-threaded costs a lot of work for a marginal benefit.
    Most algorithms ARE NOT PARALLELIZABLE; they run as a contiguous stream of code where each datum is the result of the previous instruction.

    Parallelizable algorithms are a minority, and most of them require a great deal of work to beat a single-threaded version.
    You can see this in the fact that multi-core CPUs have existed in the consumer market for more than 15 years, and still only a small number of applications, mostly renderers and video transcoders, really take advantage of many cores. The others do not, and mostly benefit from single-threaded performance (whether through improved IPC or faster clocks).
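The distinction under debate, a loop-carried dependency versus a splittable reduction, can be sketched in C++ (hypothetical code, not from either commenter):

```cpp
#include <numeric>
#include <thread>
#include <vector>

// Loop-carried dependency: every iteration consumes the previous
// result, so the iterations cannot run in parallel.
long sequential_chain(long seed, int steps) {
    long x = seed;
    for (int i = 0; i < steps; ++i)
        x = x * 6364136223846793005L + 1442695040888963407L;  // LCG step
    return x;
}

// A reduction has no such dependency: split the range, sum one half
// on a second thread, and combine the partial results at the end.
long parallel_sum(const std::vector<long>& v) {
    auto mid = v.begin() + v.size() / 2;
    long lo = 0, hi = 0;
    std::thread t([&] { lo = std::accumulate(v.begin(), mid, 0L); });
    hi = std::accumulate(mid, v.end(), 0L);  // runs concurrently with t
    t.join();
    return lo + hi;
}
```

The first function is inherently serial; the second splits cleanly because addition is associative, which is exactly why renderers and transcoders scale across cores while most application logic does not.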
  • mode_13h - Tuesday, March 8, 2022 - link

    > Try compiling something that is not "Hello world" and you'll see

    My current project is about 2 million lines of code. When I build on a 6-core workstation with a SATA SSD, the entire build is CPU-bound. When I build on an 8-core server with an HDD RAID, the build is probably > 90% CPU-bound.

    As for the toolchain, we're using vanilla gcc and ld. Oh and ccache, if you know what that is. It *should* make the build even more I/O bound, but I've not seen evidence of that.

    I get that nobody likes to be contradicted, but you could try fact-checking yourself instead of adopting a patronizing attitude. I've been doing commercial software development for multiple decades. About 15 years ago, I even experimented with distributed compilation and found it still to be mostly compute-bound.

    > You can see this in the fact that multi-core CPUs have existed in the consumer
    > market for more than 15 years, and still only a small number of applications,
    > mostly renderers and video transcoders, really take advantage of many cores.

    Years ago, I saw an article on this site analyzing web browser performance and revealing they're quite heavily multi-threaded. I'd include a link, but the subject isn't addressed in their 2020 browser benchmark article and I'm not having great luck with the search engine.

    Anyway, what I think you're missing is that phones have so many cores. That's a bigger motivation for multi-threading, because it's easier to increase efficient performance by adding cores than any other way.

    Oh, and don't forget games. Most games are pretty well-threaded.
  • GeoffreyA - Tuesday, March 8, 2022 - link

    "analyzing web browser performance and revealing they're quite heavily multi-threaded"

    I think it was round about the IE9 era, which is 2011, that Internet Explorer, at least, started to exploit multi-threading. I still remember what a leap it was upgrading from IE8, and that was on a mere Core 2 Duo laptop.
  • GeoffreyA - Tuesday, March 8, 2022 - link

    As for compilers being heavy on CPU, amateur commentary on my part, but I've noticed the newer ones seem to be doing a whole lot more---obviously in line with the growing language specification---and take a surprising amount of time to compile. Till recently, I was actually still using VC++ 6.0 from 1998 (yes, I know, I'm crazy), and it used to slice through my small project in no time. Going to VS2019, I was stunned how much longer it took for the exact same thing. Thankfully, turning on MT compilation, which I believe just duplicates compiler instances, caused it to cut through the project like butter again.
  • mode_13h - Wednesday, March 9, 2022 - link

    Well, presumably you compiled using newer versions of the standard library and other runtimes, which use newer and more sophisticated language features.

    Also, the optimizers are now much more sophisticated. And compilers can do much more static analysis, to possibly find bugs in your code. All of that involves much more work!
  • GeoffreyA - Wednesday, March 9, 2022 - link

    On migration, it stepped up the project to C++14 as the language standard. And over the years, MSVC has added a great deal, particularly features that have to do with security. Optimisation, too, seems much more advanced. As a crude indicator, the compiler backend, C2.DLL, weighs in at 720 KB in VC6. In VS2022, round about 6.4-7.8 MB.
  • mode_13h - Thursday, March 10, 2022 - link

    So, I trust you've found cppreference.com? Great site, though it has occasional holes and the very rare error.

    Also worth a look is the CppCoreGuidelines on isocpp's github. I agree with quite a lot of it. Even when I don't, I find it's usually worth understanding their perspective.

    Finally, here you'll find some fantastic C++ infographics:

    https://hackingcpp.com/cpp/cheat_sheets.html

    Lastly, did you hear that Google has opened up GSoC to non-students? If you fancy working on an open source project, getting mentored, and getting paid for it, have a look!

    The Institute of Software, Chinese Academy of Sciences, also ran one last year. Presumably they'll do it again this coming summer. It's open to all nationalities, though the 2021 iteration was limited to university students. Maybe they'll follow Google and open it up to non-students as well.

    https://summer.iscas.ac.cn/#/org/projectlist?lang=...
  • GeoffreyA - Thursday, March 10, 2022 - link

    I doubt I'll participate in any of those programmes (the lazy bone in me talking), but many, many thanks for pointing them out, as well as the references! Also, last year you directed me to Visual Studio Community Edition, and it turned out to be fantastic, with no real limitations. I am grateful; it's been a big step forward.

    That cppreference is excellent: I looked at it when I was trying to find a lock to replace a Win32 CRITICAL_SECTION in a singleton, and the one I found, std::mutex I think, just dropped in and worked. But I left the old version in, because there's other Win32 code in that module, and using std::mutex would have meant no more compiling on the older VS, which, surprisingly, still works on the project.

    Again, much obliged for the leads and references.
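For readers curious what that swap looks like, here is a hypothetical sketch (the Logger class is invented for illustration, not GeoffreyA's actual code): a lazily-initialized singleton where std::mutex stands in for the Win32 CRITICAL_SECTION.

```cpp
#include <mutex>

// Hypothetical singleton; the name and layout are illustrative only.
class Logger {
public:
    static Logger& instance() {
        // std::lock_guard locks in its constructor and unlocks in its
        // destructor, replacing the EnterCriticalSection /
        // LeaveCriticalSection pair around the lazy initialization.
        std::lock_guard<std::mutex> lock(mutex_);
        if (!instance_)
            instance_ = new Logger();
        return *instance_;
    }
private:
    Logger() = default;
    static std::mutex mutex_;
    static Logger* instance_;
};

std::mutex Logger::mutex_;
Logger* Logger::instance_ = nullptr;
```

Note that since C++11, a function-local `static Logger inst;` (the Meyers singleton) is initialized thread-safely by the language itself, so the explicit lock is mainly needed when supporting pre-C++11 compilers such as VC6.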
