CPU Tests: Office

Our previous set of ‘office’ benchmarks has often been a mix of science and synthetics, so this time we wanted to keep our office section focused purely on real-world performance.

Agisoft Photoscan 1.3.3: link

Photoscan stays in our benchmark suite from the previous benchmark scripts, but is updated to the 1.3.3 Pro version. As this benchmark has evolved, it now contains many segments of a variable-threaded workload, allowing features such as Speed Shift and XFR on the latest processors to come into play.

The concept of Photoscan is to translate many 2D images into a 3D model - so the more detailed the images, and the more you have, the better the final 3D model in both spatial accuracy and texturing accuracy. The algorithm has four stages, with some parts of the stages being single-threaded and others multi-threaded, along with some cache/memory dependency in there as well. For the more variably threaded parts of the workload, features such as Speed Shift and XFR are able to take advantage of CPU stalls or downtime, giving sizeable speedups on newer microarchitectures.

For the update to version 1.3.3, the Agisoft software now supports command line operation. Agisoft provided us with a set of new images for this version of the test, and a Python script to run it. We’ve modified the script slightly by changing some quality settings for the sake of the benchmark suite length, as well as adjusting how the final timing data is recorded. The Python script dumps the results file in the format of our choosing. For our test we obtain the time for each stage of the benchmark, as well as the overall time.
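
For those curious what such a script looks like, below is a minimal stage-timing sketch assuming the 1.3-era PhotoScan Python module. The image list, stage arguments, and output format here are our own illustrative assumptions - this is not Agisoft's supplied script.

```python
# Minimal sketch of a stage-timed Photoscan run, assuming the 1.3-era
# 'PhotoScan' Python module. The image list, stage arguments, and output
# format are illustrative, not Agisoft's supplied script.
import time
import PhotoScan

doc = PhotoScan.Document()
chunk = doc.addChunk()
chunk.addPhotos(["img_001.jpg", "img_002.jpg"])  # hypothetical image set

stages = [
    ("Align",   lambda: (chunk.matchPhotos(), chunk.alignCameras())),
    ("Dense",   lambda: chunk.buildDenseCloud()),
    ("Model",   lambda: chunk.buildModel()),
    ("Texture", lambda: (chunk.buildUV(), chunk.buildTexture())),
]

timings = []
overall = time.time()
for name, run in stages:
    start = time.time()
    run()
    timings.append((name, time.time() - start))
timings.append(("Total", time.time() - overall))

# Dump per-stage and overall times in whatever format the parser expects.
with open("photoscan_results.txt", "w") as out:
    for name, seconds in timings:
        out.write("%s: %.1f s\n" % (name, seconds))
```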

The final result is a table that looks like this:

(1-1) Agisoft Photoscan 1.3, Complex Test

The new v1.3.3 version of the software is faster than the v1.0.0 version we were previously using on the old set of benchmark images; however, the newer set of benchmark images is more detailed (and greater in number), giving a longer benchmark overall. This is usually observed in the multi-threaded stages for the 3D mesh calculation.

Technically Agisoft has renamed Photoscan to Metashape, and is currently on version 1.6.2. We reached out to Agisoft to get an updated script for the latest edition; however, we never heard back from our contacts. Because the scripting interface has changed, we’ve stuck with 1.3.3.

Application Opening: GIMP 2.10.18

First up is a test using a monstrous multi-layered .xcf file we once received ahead of attending an event. While the file is only a single ‘image’, it has so many high-quality layers embedded that it was taking north of 15 seconds to open and to gain control on the mid-range notebook I was using at the time.

For this test, we’ve upgraded from GIMP 2.10.4 to 2.10.18, but we have also changed the test a bit. Normally, the first time a user loads GIMP from a fresh install, the system has to configure a few dozen files that then remain optimized on subsequent openings. For our test we delete those configured optimized files in order to force a ‘fresh load’ each time the software is run, as the sketch below illustrates.
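
As a rough sketch of that reset step - the configuration path below is the Windows default for GIMP 2.10, and is an assumption on our part:

```python
# Sketch: force a 'fresh load' by removing GIMP's per-user configuration
# directory, so the generated/optimized files are rebuilt on the next launch.
# The path is the Windows default for GIMP 2.10 (assumed).
import os
import shutil

def reset_gimp_config():
    gimp_config = os.path.join(os.environ["APPDATA"], "GIMP", "2.10")
    shutil.rmtree(gimp_config, ignore_errors=True)
```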

We measure the time taken from calling the software to open until the software hands itself back over to the OS for user control. The test is repeated until ten minutes have passed or 15 loops have completed, whichever comes first, with the first three results discarded.
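
We use AppTimer for the measurement itself; a rough Python equivalent of the loop logic, using the Win32 WaitForInputIdle call to approximate ‘ready for user control’, might look like the following. The GIMP install path and the private Popen handle access are assumptions for illustration.

```python
# Sketch of the load-time loop: repeat until ten minutes have elapsed or
# 15 loops are done, whichever comes first, then discard the first three
# results. Windows-only; WaitForInputIdle approximates 'ready for input'.
import ctypes
import os
import shutil
import subprocess
import time

GIMP_EXE = r"C:\Program Files\GIMP 2\bin\gimp-2.10.exe"  # assumed install path
GIMP_CONFIG = os.path.join(os.environ["APPDATA"], "GIMP", "2.10")

def time_one_launch():
    # Force a fresh load by removing the generated config files (as above).
    shutil.rmtree(GIMP_CONFIG, ignore_errors=True)
    start = time.time()
    proc = subprocess.Popen([GIMP_EXE])
    # proc._handle is a private CPython attribute holding the Win32 process
    # handle; a production harness would obtain the handle properly.
    ctypes.windll.user32.WaitForInputIdle(int(proc._handle), 120000)
    elapsed = time.time() - start
    proc.kill()
    return elapsed

times = []
deadline = time.time() + 600  # ten minutes
while time.time() < deadline and len(times) < 15:
    times.append(time_one_launch())

valid = times[3:]  # discard the first three results
print("Average load time: %.2f s" % (sum(valid) / len(valid)))
```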

The final result is a table that looks like this:

(1-2) AppTimer: GIMP 2.10.18

Because GIMP optimizes these files as it starts up, the amount of work required increases dramatically as we increase the core count.

Ultimately we chose GIMP because it takes a long time to load, is free, and fits very nicely with our testing system. There is software out there that can take longer to start up; however, I found that most of it required licences, wouldn’t allow installation across multiple systems, or spent most of its delay contacting home servers. For this test GIMP is the ultimate portable solution (though if people have suggestions, I would like to hear them).

Comments

  • ballsystemlord - Tuesday, July 21, 2020 - link

    @Ian, I love your 30,000 datapoints per article. Thanks for benching all these things.
    The AMD Phenom II 1090T (the original consumer 6 core!!!) is the CPU I'd like to see in the new suite.
  • Samus - Tuesday, July 21, 2020 - link

Can you build an automated (filtering/categorizing) submission form for donations? I have many Xeons, especially the v3s you have a shortage of, that I would be willing to donate for the cause.
  • ballsystemlord - Tuesday, July 21, 2020 - link

    @ian @Samus Use email to contact each other.
  • Dragonsteel - Tuesday, July 21, 2020 - link

    I'd like to see comparisons for the mainstream $300 to $400 CPUs, starting with the i7 series.

    I'd really like to see the i7-2600K on those benchmarks, both stock and OC performance. I do run this CPU, but am looking at upgrading soon to a comparable model. It just hasn't made sense until now, with the new platform updates and more powerful GPUs.
  • Slaps - Tuesday, July 21, 2020 - link

    Would it be possible to add Counter-Strike Global Offensive? You can use the in-game console to load a demo (replay) of a professional match and let it run to get very real and consistent results.
  • ET - Tuesday, July 21, 2020 - link

    What an amazing project. Great and detailed article, too. I'm looking forward to seeing the results. I appreciate Bench, and often when I see someone on Reddit ask about an upgrade from, say, a Phenom II 1055T to an FX 6120, I go to Bench to make a comparison (though of course I often can't find the exact models).

    Hopefully the UI for Bench will be improved. Search and auto-completion, comparing more than 2 CPUs, these are things I'd expect.
  • DanNeely - Tuesday, July 21, 2020 - link

    y-cruncher sprint graphs are missing.
  • 137ben - Tuesday, July 21, 2020 - link

    This is an ambitious project, and it is the reason I enjoy coming to Anandtech.
  • ozzuneoj86 - Tuesday, July 21, 2020 - link

    This is amazing work! Thank you for doing this!

    One suggestion though... and I've mentioned this in past comments... please please please rename the lowest of the four quality settings for gaming benchmarks. The "IGP" setting is unnecessarily confusing to those looking at CPU benchmarks being run on a top-of-the-line GPU. No IGP is involved. Just call it "VERY LOW" or something.
  • Meteor2 - Monday, August 3, 2020 - link

    Yes x1000!

    (What does IGP even stand for in this context?!)
