Last year we ran a little series called Ask the Experts where you all wrote in your virtualization-related questions and we got them answered by experts at Intel and VMware, as well as our own head of IT/Datacenter, Johan de Gelas.

Given the growing importance of IT/datacenter technology, we wanted to run another round, this time handled exclusively by Johan. The topics are a little broader this time. If you have any virtualization or cloud computing questions that you'd like to see Johan answer directly, just leave them in a comment here. We'll pick a few and answer them next week in a follow-up post.

So have at it! Make the questions good - Johan is always up for a challenge :)

Comments

  • Guspaz - Thursday, March 17, 2011 - link

    If you're developing a purely hosted solution, the use of MySQL and other GPL'd software does not require you to GPL your solution. The GPL only covers distribution, and there's no distribution in a hosted scenario.

    In fact, you can even distribute GPL'd software with your proprietary software without GPLing the entire thing as long as you're not directly linking. A client/server relationship (distributing and using MySQL, but connecting through a socket) would not be a problem. Similarly, a lot of libraries are licensed under the LGPL, which *does* allow linking LGPL'd software into non-LGPL software without LGPLing the whole thing.
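
    For illustration, a minimal Python sketch of that client/server arrangement: a (hypothetically proprietary) application talking to a GPL'd MySQL server over a network socket through a client library, rather than linking MySQL's code into the program. The host, credentials, and the choice of PyMySQL are placeholders, not a recommendation:

    ```python
    import pymysql  # client library; the GPL'd MySQL server runs as a separate process

    # Placeholder connection details for a MySQL server reached over the network.
    conn = pymysql.connect(
        host="db.internal.example.com",
        user="app",
        password="secret",
        database="appdb",
    )

    try:
        with conn.cursor() as cur:
            # The only thing crossing the process boundary is the wire protocol.
            cur.execute("SELECT id, name FROM customers WHERE active = %s", (1,))
            for row in cur.fetchall():
                print(row)
    finally:
        conn.close()
    ```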
  • Jeff7181 - Thursday, March 17, 2011 - link

    I think one of the main benefits of virtualization for a small company is disaster recovery. You can take snapshots of a machine's disks and store them off the box/network, or even offsite, so that in the event of a major virus outbreak or some other system failure, restoring your equipment to a working state is as simple as copying the virtual disk images back to the host. Hell, you could even completely rebuild the host, and as long as you have the virtual disk images you can be right back to where you were in a matter of minutes (a rough sketch of that copy-the-images-off-the-box idea is at the end of this comment).

    You'll spend a bit more on the hardware, but the ease of recovering from something catastrophic will make you happy you spent it.

    Currently, the VM hosts I work with are around $20,000. They're HP DL380s with two six-core Xeons, 48 GB of RAM, six NICs, two HBAs, and I think six hard drives in RAID 1+0, though not much local storage is used since all the virtual disks are on the SAN. However, you could easily use six disks in RAID 1+0 to house your virtual disks locally. You could even step down to a DL360 and accomplish the same thing.
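
    As promised above, a minimal Python sketch of the "copy the virtual disk images off the box" approach. The datastore and backup paths, the .qcow2 extension, and the checksum step are my own assumptions for illustration, not any particular product's workflow:

    ```python
    import hashlib
    import shutil
    from pathlib import Path

    # Hypothetical locations: a local VM datastore and an off-box backup mount.
    DATASTORE = Path("/var/lib/vms")          # where the virtual disk images live
    BACKUP = Path("/mnt/offsite-backup")      # NFS/CIFS mount pointing off-site

    def sha256(path: Path) -> str:
        """Checksum a file in chunks so large disk images don't exhaust RAM."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    def backup_images() -> None:
        BACKUP.mkdir(parents=True, exist_ok=True)
        for image in DATASTORE.glob("*.qcow2"):
            dest = BACKUP / image.name
            shutil.copy2(image, dest)                    # copy image plus metadata
            assert sha256(image) == sha256(dest), image  # verify the copy is intact

    if __name__ == "__main__":
        backup_images()
    ```

    Restoring is the same copy in reverse; the point is simply that the whole machine's state lives in a handful of files.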
  • hoverman - Thursday, March 17, 2011 - link

    Lunan - I started and owned an IT consulting company in the Raleigh, NC area until it was bought out by a slightly larger company in 2008, which I now work for. Most of our clients are in the 10-to-300-user range. I currently maintain about 50 clients in the area. We virtualize all of our customers moving forward, even if they only have one physical server. We made it a company policy about two years ago.

    Please feel free to contact me outside this forum, and I will help you in any way I can. My company's name is Netsmart INC. My contact info is on our website; just look at my handle and match it up with my contact info on the site. (I don't want to post any email addresses, etc. here.)

    Ever since I learned about virtualization, I have embraced it 100% and it has paid big dividends for our staff and our customers.
  • handle.goes.here - Thursday, March 17, 2011 - link

    There are plenty of articles comparing servers (both discrete and blades), but what I haven't seen (or maybe I've just missed them) are good reviews testing the various interconnect fabrics. For example, I can build a cluster with HP DL380s and Arista 10GbE switches, or I can build a cluster with an HP c7000 blade chassis and its integrated switch. (Likewise, LSI's SAS switch vs. the SAS switch available for the blade chassis.)

    What are the performance tradeoffs between a best-of-breed build and an integrated solution?
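
    For context, this is the kind of measurement such a review would boil down to: a minimal Python sketch of a node-to-node probe over whatever fabric connects two hosts. The peer hostname, port, and payload sizes are placeholders:

    ```python
    import socket
    import time

    SERVER_HOST = "node-b.example.com"   # hypothetical peer on the fabric under test
    PORT = 5001
    MSG = b"x" * 64                      # small payload for the latency test

    def echo_server() -> None:
        """Run this on the peer node: echo everything back to the sender."""
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(65536):
                    conn.sendall(data)

    def probe(rounds: int = 1000, bulk_mb: int = 4) -> None:
        """Run this on the other node to measure round trips and streaming rate."""
        with socket.create_connection((SERVER_HOST, PORT)) as s:
            # Round-trip latency: send a tiny message and wait for the echo.
            start = time.perf_counter()
            for _ in range(rounds):
                s.sendall(MSG)
                s.recv(len(MSG))         # partial reads ignored for brevity
            rtt = (time.perf_counter() - start) / rounds
            print(f"avg round trip: {rtt * 1e6:.1f} us")

            # Throughput: stream chunks and read the echo back as we go,
            # so neither side's TCP buffer fills up and stalls the test.
            chunk = b"x" * 65536
            total = bulk_mb * 1024 * 1024
            sent = received = 0
            start = time.perf_counter()
            while sent < total:
                s.sendall(chunk)
                sent += len(chunk)
                received += len(s.recv(65536))
            while received < total:
                received += len(s.recv(65536))
            secs = time.perf_counter() - start
            print(f"throughput: {(sent + received) / secs / 1e6:.1f} MB/s (both directions)")
    ```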
  • VMguy - Thursday, March 17, 2011 - link

    Does VMware plan to implement functionality similar to the recently introduced RemoteFX in its virtualization platform? With package-integrated GPUs, having tens of VMs with real GPU capacity doesn't seem too far-fetched.

    Does VMware have a focus on any particular storage technology? It seems, from a functionality standpoint, that NFS is king. Going forward, would we be best served purchasing NFS-capable storage devices over block-level? Will block-level storage always be the performance king?

    Thanks
  • bmullan - Thursday, March 17, 2011 - link

    I don't know about VMware, but Amazon's AWS EC2 cloud already offers large GPU-based VM clusters.

    Per AWS URL: http://aws.amazon.com/ec2/#instance

    Cluster GPU Instances

    Instances of this family provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefitting from highly parallelized processing, including HPC, rendering and media processing applications. While Cluster Compute Instances provide the ability to create clusters of instances connected by a low latency, high throughput network, Cluster GPU Instances provide an additional option for applications that can benefit from the efficiency gains of the parallel computing power of GPUs over what can be achieved with traditional processors. Learn more about use of this instance type for HPC applications.

    Cluster GPU Quadruple Extra Large: 22 GB memory, 33.5 EC2 Compute Units, 2 x NVIDIA Tesla “Fermi” M2050 GPUs, 1690 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet

    EC2 Compute Unit (ECU) – One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
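
    For what it's worth, a minimal sketch of requesting a pair of these instances with the boto3 SDK (which postdates this comment). The AMI ID and region are placeholders; cg1.4xlarge is, as far as I know, the API name for the Cluster GPU Quadruple Extra Large type described above:

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Cluster placement group so the instances share the low-latency 10 GbE fabric.
    ec2.create_placement_group(GroupName="gpu-cluster", Strategy="cluster")

    resp = ec2.run_instances(
        ImageId="ami-00000000",       # placeholder: an AMI with the NVIDIA driver installed
        InstanceType="cg1.4xlarge",   # 2 x Tesla M2050, 22 GB RAM, 10 Gigabit Ethernet
        MinCount=2,
        MaxCount=2,
        Placement={"GroupName": "gpu-cluster"},
    )
    print([i["InstanceId"] for i in resp["Instances"]])
    ```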
  • ffakr - Friday, March 18, 2011 - link

    I'm not sure I'm qualified to say whether VMware has a "focus on any particular storage technology," but there are definitely advantages and disadvantages to the various technologies.
    In our case, we looked at 4 Gb Fibre Channel versus GigE iSCSI because we had both infrastructures in place. We opted for GigE because it allowed us to set up the hosts in a clustered, hot-failover configuration.

    In our setup we've got two dual-socket, quad-core Xeon servers as the hosts, and our storage resides on an EqualLogic iSCSI SAN. We boot our VMs off the SAN, and VMware allows us to easily move running VMs from one head node to another, and to fail over if one goes down.
    With two GigE switches and port aggregation you can get quite a bit of bandwidth and still retain failover in the network fabric.

    The problem with FC is that it's a point-to-point connection. The server wants to 'own' the storage, and it just isn't well suited to a clustered front end. We could boot our VMs off an FC box, but the problem arises when we try to give two different boxes access to the same storage pool.

    We're currently running something like 15 servers on our system, with one 5000-series SATA EqualLogic box on the back end, and we're not seeing any bottlenecks (a back-of-envelope check is at the end of this comment). On the CPU side, we've got loads of spare cycles. We run file servers, mail, web... typical university departmental loads (from chatty to OMG spam flood).

    As for NFS, I'd certainly prefer an iSCSI SAN solution if you can afford it. Block level all the way.
    We're very happy with EqualLogic, and we get a good discount in Edu. In fact, I'm lining up money to buy another device.
    Promise just came out with a proper SAN, but it's still pricey. The advantage there is that it appears to be a lot cheaper to expand; their storage boxes tend to be very inexpensive, and they've proven well made for us. I've got a number of them with zero issues over several years. Whatever you do, check VMware's compatibility list for storage.
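
    The back-of-envelope check mentioned above, as a tiny Python sketch. The per-VM demand and link-efficiency figures are assumptions for illustration, not measurements from our setup:

    ```python
    # Why ~15 light VMs don't saturate two aggregated GigE links.
    GIGE_LINKS = 2                   # two GigE uplinks, port-aggregated
    LINK_MB_S = 1000 / 8 * 0.9       # ~112 MB/s usable per link (assume ~90% efficiency)
    VMS = 15                         # servers mentioned above
    AVG_VM_MB_S = 5                  # assumed steady-state I/O per file/mail/web VM

    aggregate = GIGE_LINKS * LINK_MB_S
    demand = VMS * AVG_VM_MB_S
    print(f"fabric: ~{aggregate:.0f} MB/s, steady demand: ~{demand:.0f} MB/s "
          f"({demand / aggregate:.0%} utilized)")
    ```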
  • marc1000 - Thursday, March 17, 2011 - link

    In the news story about Intel bringing Atom to servers (published here a few days ago), a certain link appeared in the comments: http://www.eetimes.com/electronics-news/4213963/Ca...

    Using the power numbers in that link, I did some math showing that a 2U server with 480 ARM cores will consume roughly the same amount of power as a 2U server with four quad- (or six-) core Xeons.

    So when you put LOTS of small CPUs in the same space, they end up consuming about the same power as normal CPUs (a rough version of the arithmetic is sketched below). What is the advantage of using this hardware for cloud computing, then?
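
    The structure of that comparison, as a Python sketch. The wattage figures here are illustrative placeholders, not the numbers from the linked EE Times article:

    ```python
    ARM_CORES = 480
    WATTS_PER_ARM_CORE = 1.0        # assumed: low-power ARM core plus its share of uncore

    XEON_SOCKETS = 4
    XEON_CORES_PER_SOCKET = 6
    WATTS_PER_XEON_SOCKET = 115.0   # assumed: mid-range six-core Xeon TDP

    arm_chassis = ARM_CORES * WATTS_PER_ARM_CORE
    xeon_chassis = XEON_SOCKETS * WATTS_PER_XEON_SOCKET

    print(f"2U ARM box:  ~{arm_chassis:.0f} W for {ARM_CORES} cores")
    print(f"2U Xeon box: ~{xeon_chassis:.0f} W for {XEON_SOCKETS * XEON_CORES_PER_SOCKET} cores")
    # With numbers in this ballpark the two chassis land in the same power envelope;
    # the real question is whether 480 slow cores do more useful work per watt.
    ```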
  • bmullan - Thursday, March 17, 2011 - link

    You asked what the advantages are?

    Cost of power is a huge one. HVAC is the largest operating expense of most datacenters; it's not just the power to run the servers, but also the AC to remove the heat they generate.

    That said, it's not advantageous for every use case. But remember that not every application or process requires a 3 GHz CPU core. There are many applications where an ARM core is more than powerful enough. On most home computers people only use 10% of their CPU as it is, and they already think they have fast computers. In today's world we are more I/O bound than CPU bound.

    Run some Linux OS on the ARMs, load balance them, and you can do incredible computing (a toy sketch of the load-balancing idea is at the end of this comment).

    Read this past AnandTech article:

    http://www.anandtech.com/show/3768/seamicro-announ...
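
    The load-balancing idea above, reduced to a toy Python sketch. Node names, port, and the endpoint are made up; a real deployment would front the farm with something like HAProxy or LVS:

    ```python
    import itertools
    import urllib.request

    # A small farm of low-power nodes behind a trivial round-robin dispatcher.
    ARM_NODES = ["arm-node-01", "arm-node-02", "arm-node-03", "arm-node-04"]
    next_node = itertools.cycle(ARM_NODES).__next__

    def fetch(path: str) -> bytes:
        """Send each request to the next node in the rotation."""
        node = next_node()
        with urllib.request.urlopen(f"http://{node}:8080{path}", timeout=5) as resp:
            return resp.read()

    if __name__ == "__main__":
        for _ in range(8):
            fetch("/status")   # placeholder endpoint; cycles 01, 02, 03, 04, 01, ...
    ```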
  • bmullan - Thursday, March 17, 2011 - link

    I should have mentioned that I also think ZT Systems' 1RU 16-core ARM product sounds great.

    Because it uses less than 80 W of power, it doesn't even require a fan for cooling, so it's totally silent.

    http://www.thinq.co.uk/2010/11/30/zt-systems-launc...

    I'm not sure their product is available yet, but they announced it a couple of months ago.
