Response Time

Our current virtualization stress tests focus on measuring the maximum throughput of a given server CPU or server platform. To that end we try to maximize CPU load with a high number of concurrent (simulated) users. In other words, the concurrency is quite a bit higher than what the (virtual or physical) machine can cope with, in order to reach 95-100% CPU load. As a result, the response times are inflated well above what would be acceptable in the real world. Still, it is interesting to get an idea of the response times of our own server versus the cloud server.

Our vApus Mark II test starts off with 400 concurrent users and ends with 800. Even 400 concurrent users cause a very high CPU load on most machines (80-90%), but that should still give us a "worst case" response time scenario. In the next graph we list the response times at this lowest concurrency. Terremark's data center is in Amsterdam, and we had an 11 to 20 ms round-trip delay from our lab, so to be fair to Terremark you should deduct 11 to 20 ms from the Terremark numbers.
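To make the correction concrete, here is a minimal sketch of how the network round trip can be deducted from a measured response time. The 11-20 ms RTT range comes from the article; the 250 ms measured value below is purely illustrative, not an actual benchmark result.

```python
# Hypothetical sketch: deduct the lab-to-datacenter round-trip time (RTT)
# from a measured response time to estimate server-side latency.

def server_side_latency(measured_ms, rtt_min_ms=11, rtt_max_ms=20):
    """Return a (low, high) estimate of server-side response time in ms,
    after deducting the 11-20 ms round trip measured from our lab."""
    return (measured_ms - rtt_max_ms, measured_ms - rtt_min_ms)

# Example with an illustrative 250 ms measured response time:
low, high = server_side_latency(250)
print(f"Estimated server-side latency: {low}-{high} ms")  # 230-239 ms
```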

[Graph: vApus Mark II response times at 400 concurrent users]

Both the "in house" server with a 10GHz resource pool and the 10GHz Terremark "cloud server" are hit very hard by 400 concurrent MS SQL Server connections, but the 10GHz resource pool we get from the Terremark cluster is clearly less powerful: it needs up to 85% more time to respond. There are two reasons for this.

First, while the limit of the resource pool is 10GHz, only 5GHz is reserved. So depending on how heavily the parent resource pool is loaded by other VMs, we probably get somewhere between 5 and 10GHz of CPU power. Second, there is some extra overhead from the firewalls, routers, and load balancers between us and the VM. Terremark's infrastructure is, for safety and security reasons, quite a bit more complex than our testing environment, which could add a few tens of milliseconds as well.
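The reservation/limit behavior described above can be sketched as a simple model. The 5GHz reservation and 10GHz limit are the article's figures; the clamping logic is an assumption based on how vSphere-style resource pools generally behave, not Terremark's actual scheduler.

```python
# Hypothetical model of a vSphere-style resource pool: the pool never
# delivers less than its reservation nor more than its limit; in between,
# it gets whatever share the (possibly contended) parent pool can spare.

def effective_cpu_ghz(parent_share_ghz, reservation_ghz=5.0, limit_ghz=10.0):
    """CPU power the pool actually delivers, clamped between the
    guaranteed reservation and the configured limit."""
    return max(reservation_ghz, min(parent_share_ghz, limit_ghz))

# Lightly loaded parent pool: we get the full 10GHz limit.
print(effective_cpu_ghz(12.0))  # 10.0
# Heavily contended parent pool: we fall back to the 5GHz reservation.
print(effective_cpu_ghz(3.0))   # 5.0
```

This is why the measured performance sits somewhere between a 5GHz and a 10GHz machine depending on what the other tenants are doing.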

We also noticed that some parts of the Terremark cluster were using slightly older drivers (vmxnet2 instead of vmxnet3, for example), which might cost a few CPU cycles. But we are nitpicking, and Terremark told us it is just a matter of time before the drivers are updated to the latest version.

There is good news too. If you are willing to pay the premium for "bursting", the Terremark cluster scales very well and can deliver response times similar to what an "in house" server delivers. That is the whole point of cloud computing: pay for and use peak capacity only when you actually need it.

Comments
  • benwilber - Friday, June 3, 2011 - link

    this is a joke, right?

    there is not one bit of useful information in this article. if i wanted to read a Terremark brochure, i'd call our sales rep.

    speaking as an avid reader for more than 12 years, it's my opinion that all these braindead IT virtualization articles are poorly conceived and not worthy of anandtech.
  • krazyderek - Friday, June 3, 2011 - link

    submit a better one then
  • DigitalFreak - Friday, June 3, 2011 - link

    I guess it's a good thing then that your opinion doesn't matter.
  • HMTK - Monday, June 6, 2011 - link

    Yeah, I also prefer yet another vidcard benchmark fest.

    Not.
  • Shadowmaster625 - Friday, June 3, 2011 - link

    Still waiting for that $100 tablet that can give me a remote desktop so responsive you can't even tell it is a remote desktop. I want it to be able to stream video at 480p. With good compression, this only requires a 1 Mbps connection. I don't think this is too much to ask for $100. I don't care that much about HD. Streaming a desktop at 30 fps should only require a small fraction of my bandwidth.
  • tech6 - Friday, June 3, 2011 - link

    As you mentioned, Terremark cloud benchmarks vary greatly depending on the underlying hardware. We did some tests on their Miami cloud last year and found the old AMD infrastructure to be a disappointing performer. The software is very clever but, like all clouds, some benchmarking and asking the right questions are essential before making a choice.
  • duploxxx - Sunday, June 5, 2011 - link

    As usual, this is very debatable information you provide. How did you bench, and on what storage platform? What is your comparison, a 2008 vs. 2010 setup? What kind of application did you bench? SPECint? :) As AnandTech has shown in the past, application performance can be influenced by the type of CPU (look at the web results within the vApp: it clearly favors a faster cache architecture, and to a certain extent that influences the final vApp result too much). You need to look at the whole environment and the applications running in it, and that requires decent tools to benchmark your total platform. (We have more code written by our devs to automatically test every functional and performance aspect than in the applications themselves.) Everything in a virtual layer can influence the final performance.

    Our company has, from 2005 until now, always verified the Intel and AMD platforms for virtualization on every 2- and 4-socket machine. We currently have approximately 3000 AMD servers online, all on VMware private clusters spanning many generations, and they are doing more than fine. The only timeframe in which Intel was faster and the better choice was the few months just after the launch of the Nehalem Xeon. Of course, one also needs to look at the use case; for example, the latest Xeon EX is very interesting for huge numbers of small VMs, but requires far more infrastructure to handle, for example, load balancing and the failure of a server. (Not to mention license costs from some third-party vendors like Oracle.....)
  • lenghui - Friday, June 3, 2011 - link

    A very well thought-out comparison between the in-house and IaaS environments. Even those who have in-house resources would need to spend a lot of research time to reach a conclusion. In that sense, your review is invaluable, saving your readers hundreds of hours of otherwise guesswork. You could probably include a price analysis, as other readers have suggested.

    Thanks, Johan, for the great article.
  • brian2p98 - Friday, June 3, 2011 - link

    This is, imo, the biggest unknown with cloud computing--and the most critical. Poor performance here could mean degradation on the scale of several orders of magnitude. Website hosting, otoh, is rather straightforward. Who cares if 5GHz of cloud CPU power is equivalent to only 1GHz of local power, so long as buying 25GHz still makes economic sense?
  • duploxxx - Sunday, June 5, 2011 - link

    depends on how well your app scales with CPU cores.....

    if it doesn't, and you need more VMs to handle the same load, you also need other systems to spread the load between the apps.
