Troubleshooting virtualization complexity: Four experts weigh in

Four experts outline the major virtualization challenges, the pros and cons of various virtualization platforms and features and more in this panel discussion.

Jo Maitland: Hello, everybody and welcome to the first session of our Advanced Enterprise Virtualization virtual trade show. We're starting the day with a panel discussion that includes some of the top virtualization experts in the industry.

I'd like to go ahead and introduce them. We have Chris Wolf with us, who is a senior analyst with the Burton Group. Over a couple sessions throughout the day, Chris is going to talk about how to analyze your virtual infrastructure to tune and manage it on a large scale. Chris has logged over 14 years in the IT trenches and focused on enterprise virtualization since 2000. He's also authored Virtualization from the Desktop to the Enterprise, which is the first book published on this topic. Hi, Chris.

Chris Wolf: Hi, Jo.

Maitland: Next up is David Davis who is director of infrastructure with Train Signal. David created the "Train Signal VMware Server" video training course and five other training courses. He has a number of certifications including CCIE, MCSE, CISSP and VMware Certified Professional and is author of hundreds of articles on this topic. He served as an IT manager for many years and his personal websites are happyrouter.com and vmwarevideos.com. Hi, David.

David Davis: Good morning, Jo. Thanks for having me.

Maitland: You're welcome. Next is Nelson Ruest. Nelson is a senior enterprise IT architect with Resolutions Enterprises. Nelson's here to discuss Windows Server Hyper-V and how it measures up to the competition. He has over 20 years of experience in migration planning and network, PC and infrastructure design. He's an MCSE, Microsoft Certified Trainer and Microsoft MVP in failover clustering. Nelson's also co-authored several books, including the Complete Reference for Windows Server 2008, The Definitive Guide to Vista Migration, Deploying Messaging Solutions with Microsoft Exchange Server 2007, and Configuring Windows Server 2008 Active Directory. And finally on the panel…Oh, hi, Nelson.

Nelson Ruest: Hi.

Maitland: And finally, on the panel we have Rick Vanover with us. Rick is systems admin with Belron and is here to discuss pain points in virtualized network environments and the best practices for configuring and testing co-administered networks. Rick has been working in the information technology world for over 10 years and with virtualization technologies for at least seven years. He's also published many articles on this topic. Hey, Rick.

Rick Vanover: Hey Jo, how you doing?

Maitland: Good, thanks. How are you?

Vanover: Doing well, thanks.

Maitland: So, guys, I was just going to plow into some questions here. First of all, to you, Chris, I'm interested to find out: What are some of the roadblocks that you see, that exist, when running production applications in virtual machines?

Wolf: That's a good question, Jo, and it's really starting to become more and more of an issue as technology moves along. Performance has historically been a major barrier, especially with tier-one and tier-two applications in VMs [virtual machines]. And oftentimes, that's directly related to either memory or CPU scheduling overhead. Some hypervisors will not support more than a 1:1 virtual-CPU-to-physical-CPU-core oversubscription ratio, which can limit scalability and also consolidation densities.
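To make the oversubscription point concrete, here is a minimal sketch of how a vCPU:pCPU-core cap bounds consolidation density. The host size, VM shape and ratios are illustrative assumptions for this discussion, not figures for any particular hypervisor.

```python
# Illustrative sketch: how a vCPU:pCPU-core oversubscription cap limits
# how many VMs fit on one host. All numbers here are assumptions.

def max_vm_density(physical_cores: int, vcpus_per_vm: int, ratio: float) -> int:
    """Return how many VMs fit under a given vCPU:pCPU-core ratio."""
    schedulable_vcpus = physical_cores * ratio
    return int(schedulable_vcpus // vcpus_per_vm)

# A hypothetical two-socket quad-core host (8 cores) running 2-vCPU VMs:
host_cores = 8
strict = max_vm_density(host_cores, vcpus_per_vm=2, ratio=1.0)   # 1:1 cap
relaxed = max_vm_density(host_cores, vcpus_per_vm=2, ratio=4.0)  # 4:1 allowed

print(strict, relaxed)  # a 1:1 cap allows 4 such VMs; 4:1 allows 16
```

The same host supports four times the density once the scheduler permits a 4:1 ratio, which is why the cap Chris describes directly constrains consolidation.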

Outside of performance -- and I'm going to get to some of the memory issues a little bit later in our talk, because those can get pretty substantial -- troubleshooting complexity is one more thing that's becoming more and more of a challenge for organizations. And that's because virtualization today is starting to introduce multiple layers of abstraction. That includes not just a server virtualization hypervisor; you're seeing single-root I/O virtualization starting to take shape, and you have multiroot I/O virtualization on the horizon.

There are a number of organizations that are using storage virtualization today as well. So when you have all of these layers of abstraction, actually finding the data path -- the physical data path -- when you're troubleshooting an application can be quite challenging. And just one last one to wrap up here: vendor support is another. You almost laugh at the idea that there are vendors that do not support server virtualization today, but they're out there. Oracle is probably the largest offender today; they have some large enterprises that they do officially support, negotiated on a case-by-case basis. But in terms of the broad IT community, Oracle still does not officially support most of the largest virtualization platforms, including VMware's ESX Server.

Maitland: Wow, they need to catch up, huh?

Ruest: Well, they do support their own virtualization platform and they're very happy to promote that.

Wolf: And that is very Oracle of them.

Ruest: That's true.

[laughter]

Maitland: Is it your sense, Chris, that there are enough tools now that can see through this abstraction layer, particularly when you're running production apps on VMs that can spot where there are stresses in the environment?

Wolf: Uh, no. When you ask the vendors, they're going to tell you that they can do the whole kitchen sink. But there are vendors that are very good at isolating and analyzing SAN issues, such as Akorri, for example. There are vendors that are doing a pretty good job with application issues -- a startup, eG Innovations, with their VM Monitor product, as well as a number of application-specific troubleshooting [offerings]. There are companies like Netuitive that do very good work with the LAN [local area network]. But if I have an application problem and I need to know the entire data path and all of the dependencies to resolve it, I don't want three or four tools to do it for me. I want one console.

So either I need a product that can do all that --which is extremely difficult to do -- or I need a top-level product that can tie all of these in, that can give me that level of visibility. So, to me, it's becoming more of a problem on the horizon. I don't think you've heard about it as much so far simply because there are still a small percentage of organizations that are running a number of enterprise-type workloads in VMs. By that, I mean tier-one or tier-two applications.

Maitland: David, a couple of questions for you here. So we know everyone is excited about ESX 4 on the roadmap for release this year. And, you know, there are a lot of things we can't talk about. But are there any exciting features that you see coming that we can discuss to whet people's appetites?

Davis: Yeah, Jo. We can talk about some of the features, of course, that VMware demonstrated at VMworld 2008. They demonstrated vLockstep, their new fault tolerance solution, which is going to offer a big improvement over the current VMware HA [High Availability]. You know, currently with VMware HA, if an ESX host goes down, all the guests in the high-availability cluster have to be rebooted. And of course, that's not the case with their new fault-tolerant solution. So I think that's pretty exciting.

They have a new data recovery solution, which is essentially a backup and recovery application that's built right into the VMware VI client. I found that to be really interesting. They have a lot of new storage features, and they even have a search functionality built right into the VI client for large virtual infrastructures, so that you can sort of Google through your virtual infrastructure. So, a lot of new and exciting features. I'm just waiting for the date when they announce the public beta so we can really start talking about everything they're going to offer.

Maitland: What about different, new things with View 3 [VMware View 3] that make it worth checking out? Anything in particular?

Davis: View 3, you know, it has a lot of new features. I think they're really starting to bring their whole VDI [virtual desktop infrastructure] solution together. They licensed thin-printing technology from ThinPrint, so now they have a universal print driver that's going to provide printing across all the VDI clients.

They have an application called View Composer, which allows you to create virtual machines from a single master instance; the changes for each machine are stored separately, so you don't need a full image for every virtual machine. They have a new experimental offline desktop feature, which I think is really cool, for us to be able to take our virtual desktops offline and take them with us.

Ruest: They had a lot of work to do to catch up to XenDesktop, because they didn't have any tools like virtual machine provisioning, all virtual machines running from one single differential image and such features. So it's really good to see that there's competition in the field promoting new features, such as those available in View 3.

Maitland: There's a question here from the audience guys, from … Steve Drew of TradeGen. He says, "What are the key questions I should ask of hosting companies that offer dedicated virtual servers?" Anyone want to volunteer?

Vanover: This is Rick. I can answer that. One important thing to think about is, you know, if you look at the roadmap of Windows Server 2008, the R2 edition that's coming out will only be available in x64. So, in my opinion, I would not want to do anything that would lock me into the pre-R2 state. So that makes that decision pretty clear.

Ruest: And besides -- this is Nelson -- there's Microsoft, who recommends virtualizing Windows Server 2008 x64 over x86, simply because, first of all, there's no upgrade path from x86 to x64. And second, the x64 edition, because it's the edition that includes Hyper-V, is the edition that has been optimized to run in a virtual environment. So as long as you have an updated version of Windows Server 2008 x64, with the update for Hyper-V, then you know your workload is going to operate better. Whether it's on Hyper-V or on any other virtualization platform, definitely x64 is the way to go.

Maitland: So while we're on Microsoft, Nelson, do they really stand a chance with Hyper-V? You know, there's a 10-year lag here, and Microsoft, you know, Microsoft is pretty famous these days for pushing its way into new spaces, but not necessarily, you know, [for] getting there. They made a big effort to get into the data protection market; they've tried hard to get into the archiving space, what's your sense here?

Ruest: Well, everybody sees virtualization as this huge competitive market, and really, it's a commodity; it's a tool that everybody needs to use on an ongoing basis going forward, because there are simply no better reasons to keep investing in purely physical infrastructures. Everybody needs to move forward with a virtual infrastructure. So there's no doubt that virtualization is an absolute in our future.

And for Microsoft to provide a hypervisor integrated into the Windows product, it certainly responds to needs that their current customers have. After all, we have to remember that the reason VMware created the virtualization market is Windows. Most people are virtualizing Windows, well over any other platform, today. The fact that Windows is the most virtualized operating system makes it an obligation for Microsoft to offer their own hypervisor. Hyper-V is a very stable hypervisor. It runs very well; it doesn't have the same feature set as the other hypervisors on the market. But keep in mind that XenServer doesn't have the feature set of VMware either. So there's lots of room for opportunity here.

Maitland: Is there anything about Hyper-V right now that you think makes it sort of technically… I mean, does it have any technical advantages right now over the existing [offerings]?

Ruest: Well, it's a true 64-bit hypervisor; VMware's still operating with a 32-bit hypervisor -- though their virtual memory manager is a 64-bit engine, so that mitigates the issue of having a 32-bit hypervisor. But Hyper-V is based on x64 code, it's a next-generation hypervisor, it's actually on par -- or pretty close to being on par -- with XenServer; it's a very stable platform.

For people that are used to working with Windows, it's an excellent opportunity, because it's a familiar platform and it's easy to implement. It's a little harder to implement on Server Core, but lots of people are providing information about the Server Core implementation, so the instructions are fairly straightforward to follow. There are some issues: It doesn't support live migration today. But Chris mentioned a little while ago the vendors that provide support for virtualization, and actually, one of the things that's really cool about the release of Hyper-V is that Microsoft's Windows Server teams and Windows workload teams have begun to publish a whole series of support policies for virtualizing their workloads.

One of the most famous is probably the Exchange support policy. The Exchange team does not want you to use technologies like live migration with their product, because they claim that if you use live migration, the migration occurs at the hypervisor level as opposed to occurring at the application level. And when it occurs at the hypervisor level, the hypervisor is not aware of the actual state of the application during the migration. So it's possible with Exchange to lose some email messages when you perform a live migration. Instead, the Exchange team prefers for organizations to actually create an Exchange cluster in the virtual layer, so Exchange manages its own migration from one host server to another. And given that Exchange is managing its own migration at the virtual layer, it doesn't matter that Hyper-V does not have a live migration feature. In fact, you could even run Exchange on the free Hyper-V Server that Microsoft offers, which doesn't include the capability for even doing failover clustering of your hosts, because really all you need is a standalone hypervisor.

So I think it's really important for organizations that want to migrate workloads to virtual infrastructures to look at the support policy for that particular workload, to understand exactly how they need to build their host servers to support it. And in this case, when you're working with Windows technologies that already have their own fault-tolerant technologies built right into them, then you rely on those fault-tolerant technologies so you don't have to worry about that. …

Maitland: Right, I'm with you. I guess though the thing is who necessarily wants to have a different standard for every application? You know, if I have half a dozen different apps and each one has a different fault tolerance. …

Ruest: Well, it's interesting, because I think Microsoft wants to, because if you look at the Exchange policy versus the SQL Server policy, they're completely different.

Maitland: Right.

Ruest: And SQL Server, for example, does not even talk about fault tolerance in the virtual layer; they talk about fault tolerance at the hypervisor level, whereas Exchange has a completely different policy. So I think it's really important for organizations to look at the workloads they want to run in the virtual layer and make sure they use the proper support policy for each. Otherwise, they're not going to get support from the team, and then they'll face a whole series of other issues. I'm sure Rick would agree with that.

Maitland: Right. Has anyone seen Hyper-V running in production environments yet? I'm curious to know what the workload was and how large the environment was.

Ruest: We've seen Hyper-V run in quite a few production environments. They range from very small organizations to fairly large organizations. It's a stable -- very, very stable -- hypervisor. There are some workarounds that people need to use. If you're relying on the migration feature that's built into Hyper-V, you have to use what's called Quick Migration, which means when you move a virtual workload from one host server to another -- when you do a manual move -- it will pause or save the state of the workload and reopen the state on another machine, so there is an interruption in service.

But when you're talking about a failure in one of your host servers, no matter which technology you're talking about -- except of course the ESX 4 fault tolerance David was mentioning -- there is always a stop in service when there is a hardware failure at the hypervisor layer and your workloads have to be moved to another machine. So until we get technologies like ESX 4 -- which will require more hardware, because you're going to have to run images of the same applications on various servers so that if there is a failure on one, it automatically moves to the other running instance -- we will all be faced with the same issues.

Maitland: Right. We have a question here, sort of related to a question you've already answered, I think, from Tyler Woods at Clarkson Consulting. He asks, "What about SAP live migration in Hyper-V?"

Ruest: Um, I don't have any experience with SAP live migration in Hyper-V. Chris? David?

Wolf: Just to jump in here, the live migration issue certainly is platform-specific. There are some scalability issues with it. But we have a couple of clients that are doing some early virtualization of SAP workloads. It's nothing in terms of enormous load, but they are heading down that path. We have several that are using Exchange as well. And those are in VMware environments.

Live migration is important for a few reasons here. Vendors will position it as a checkbox, and it is far from a checkbox. The efficiency of the memory replication -- the way the actual workload is quiesced and restarted, and the way application state is maintained -- can vary by platform. And it's important when you're looking at the platforms to do a thorough load test and conduct live migrations during the load test. The other thing to look out for when you're evaluating the platforms is that, as I mentioned before, CPU scheduling can vary quite a bit. So when you're doing a validation, don't take the vendor recommendation and run one application and one VM on the host; you really need to scale it out to at least eight VMs. And that would be on at least a two-way quad-core server, to give you a semi-realistic look at how the VM will perform in a typical enterprise deployment scenario.

To me, any vendor references that just show one VM on one host are … I just simply dismiss them, because they misrepresent the CPU scheduling overhead, and the memory management and I/O management overhead, that you get as soon as you have multiple VMs on a host.

Ruest: That's a good point, Chris. You mentioned a little while ago the concept of resource overcommitment. The whole concept of working with a dynamic data center is having multiple virtual machines running on top of a host: when those workloads are running at low resource utilization, they're fine together, but when one of the workloads requires more resources, the whole point of live migration is being able to take that workload and move it over to another host server, so that host can provide more resources to that particular workload. That really only works when your hypervisor supports this concept of resource overcommitment. Right now the only hypervisor that does that is VMware's. XenServer and Hyper-V dedicate resources to the virtual machines. So if you need 4 GB of RAM for a virtual workload on top of Hyper-V, well, you have to allocate that 4 GB of RAM anyway. So the point of moving it over to another machine in a dynamic movement is moot.
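The static-allocation versus overcommitment contrast Nelson draws can be sketched numerically. The host size, VM allocation, and active-memory fraction below are hypothetical numbers chosen only to illustrate the trade-off.

```python
# Sketch of static memory allocation (every VM's full allocation is
# reserved up front) versus overcommitment (only the actively used
# working set must fit). All figures are illustrative assumptions.

def vms_per_host_static(host_ram_gb: int, vm_ram_gb: int) -> int:
    # Static allocation: each VM's full RAM grant is carved out of the host.
    return host_ram_gb // vm_ram_gb

def vms_per_host_overcommit(host_ram_gb: int, vm_ram_gb: int,
                            avg_active_fraction: float) -> int:
    # Overcommit: size to the assumed average active working set instead.
    return int(host_ram_gb // (vm_ram_gb * avg_active_fraction))

print(vms_per_host_static(32, 4))           # 8 VMs on a 32 GB host at 4 GB each
print(vms_per_host_overcommit(32, 4, 0.5))  # 16 if VMs actively touch ~half their RAM
```

The flip side, which the panel implies, is that an overcommitted host has no headroom if every VM's demand spikes at once, which is exactly when you'd want to live-migrate some of them away.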

Maitland: So Rick, do you want to chime in here?

Vanover: Well, yeah, this is a good thread. One question that I get a lot is, how does an organization -- especially an organization that writes its own software at the tier-one or tier-two level -- embrace new versions of virtualization software specifically, this version of the hypervisor or this version of the management software, through the QA and test process? Because a lot of organizations are generally on board with virtualization, but as you go up the ladder to the more important applications, the scrutiny gets very fine.

One of the things I've had experience with -- kind of to both Chris and Nelson's comments -- is doing a test of the hypervisor, of the performance of the application, and of the management software. In the case of VMware, that means making sure DRS [Distributed Resource Scheduler] doesn't choke the application, but you have to really represent a quality workload. And beyond that, you also have to represent the right equipment. Specifically, if you take a less-capable system for the test and don't have that good of a workload on it -- or if you do have a good workload on it, but the system isn't fully representative of a very high-powered four-way or quad-core server with a good amount of RAM -- that's not the same type of test. And likewise, you want to make sure that under those loads, in the fully representative environment, the performance criteria are met. And also, you know, this is one of those things that a lot of people overlook: the storage system.

You know, if you're testing a virtualization system's performance and you're only using local disk, that's not as representative as if you were using network-attached storage, or a SAN, or an iSCSI environment for housing the virtual machines, where they're also contending with other workloads. So anything you can do, from the network to the storage to the version of the hypervisor, to be fully representative of the end state is good. Simply saying that it works like a virtual machine is really an unqualified statement, kind of to what Chris had said. And in the case of internal, or homegrown, applications, the sell process -- as far as proving that virtualizing these apps is the way to go -- may be a little bit more work than simply looking for a vendor's checkbox, like what was said earlier. So anything you can do to fully represent the environment can push the solution in the right direction.

Ruest: That's a really good point, actually. We've had some customers virtualize internal workloads without telling the application owner that the workload was virtualized, running it in a virtual instance over a period of time and then going back and saying, "Well, do you have any comments about the performance of your workload, and any issues that have come up?" And then, after that, they tell them that the workload has been virtualized for quite a while, and it's quite interesting to see what the reaction is.

Maitland: You know guys, this does actually bring up an issue, I was at a VMware user show recently, and so many of the questions from the audience were around how to handle the politics once you start virtualizing different applications. What advice do you guys have about that? Especially when people don't really know that it's happened and something goes wrong.

Ruest: Well, one issue that we face a lot is the demand for virtual machines. As soon as people get into the idea that you can generate a virtual machine in less than 20 minutes, the demand for machines increases exponentially. And so, we've always told our customers that it's very important for them to continue having an official administrative process for the authorization of machines. VMware has a great product called Lifecycle Manager, and that's a great tool, because it goes through an authorization process before a virtual machine is allowed to be created. And that's something that people have to watch for. They have to watch for VM sprawl, because VMs are just so easy to create.

Maitland: So you still … the advice is really, you still need all the sort of usual policies and checks and balances in place, [as] you would with a physical infrastructure.

Ruest: Absolutely.

Maitland: Um, switching gears a little bit, I'm kind of curious; I know that server virtualization throws a lot of spanners in the works on the storage side. Chris, any thoughts on new technologies and trends in 2009 for data protection and recovery?

Wolf: Well, there's quite a bit happening in the data protection space from vendors. There are some products that are maturing; really, one of the big things is that server-less backups are starting to become feasible. So far a lot of organizations have done things like array-level snapshots, but have had to live with application-inconsistent, or crash-consistent, snapshots as a result -- which means these applications may be recoverable. It's basically the equivalent of recovering from a power outage, but there are some issues there. And again, not everybody wants to bet their data protection architecture on what is not necessarily guaranteed. So there is quite a bit of work happening in regards to platforms supporting VSS [Volume Shadow Copy Service], which is a very good example, so that I can quiesce a Windows application inside a virtual machine prior to doing an array-level snapshot.

VMware Consolidated Backup [VCB] is starting to scale quite a bit better. The first iteration could do about four concurrent jobs per physical host, and now it's up to about eight. Some backup vendors are even doing things like adding serialization of backup jobs to prevent too many VCB jobs, for example, from running on a physical host.
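The serialization idea Chris mentions -- capping how many backup jobs hit one physical host at a time -- can be sketched with a counting semaphore. The limit of 8 echoes the VCB figure quoted above; the job function is a hypothetical stand-in, not any vendor's actual scheduler.

```python
# Minimal sketch of backup-job serialization: no more than 8 jobs may
# run concurrently against one physical host. Illustrative only.
import threading

MAX_CONCURRENT_JOBS = 8
slots = threading.BoundedSemaphore(MAX_CONCURRENT_JOBS)
state_lock = threading.Lock()
running = 0          # jobs currently in flight
peak = 0             # highest concurrency observed
completed = []

def run_backup_job(vm_name: str) -> None:
    global running, peak
    with slots:                      # blocks while 8 jobs are in flight
        with state_lock:
            running += 1
            peak = max(peak, running)
        completed.append(vm_name)    # stand-in for the actual backup work
        with state_lock:
            running -= 1

threads = [threading.Thread(target=run_backup_job, args=(f"vm{i}",))
           for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(completed), peak <= MAX_CONCURRENT_JOBS)  # all 20 ran; never more than 8 at once
```

Every job eventually runs, but the semaphore guarantees the host never sees more than eight in flight, which is the property the backup vendors are adding.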

The other thing that's happening in the data protection space is you're seeing vendors start to get pretty clever, with not just traditional backups and agents but also array-level integration. If you saw the CommVault Simpana 8 release announced -- I think it was earlier this week, or last week perhaps -- they can now manage VMware and Hyper-V environments, but they can also manage your array-level snapshots along with your backups and other images, all under a single interface. In addition to that, they're doing things like giving organizations the ability to do incremental virtual disk backups and recover individual files. That's nothing new; there are a lot of vendors that have been doing that, and Vizioncore and NetApp have been doing it for several years. What is interesting is that you're seeing vendors leverage VSS database backups and give you the ability to do offline recovery of individual database objects.

So, for example, I can do a VSS backup of an Exchange database and actually, from that database backup within a virtual disk image, recover an individual user mail message. Both CommVault and NetBackup 6.5 offer those types of features today, and I expect more backup vendors to head down that path.

Maitland: One question I have about backup -- and I hear this a lot from users, actually -- is that VMware is obviously now branching out into other areas of the IT infrastructure, and they just recently announced an SMB [small and medium-sized business] backup tool. At what point do you decide, 'OK, this tool is kind of good enough, and I'm going to stick with VMware here,' versus continuing to use your existing backup software? It sounds like that's going to get more complicated.

Wolf: For my money, I think you need to use your existing tools -- again, most of my clients are large enterprises; I think it's probably a different answer in the SMB space. But if I'm a help desk administrator, and I'm responsible for recovering user files, I don't want to have to know whether the system's a VM or a physical server and then have to figure out which tool I should use to recover the file. That just adds a lot of unnecessary complexity. I would much prefer to have a single tool that I can use to manage both my virtual and physical environments. And again, the virtualization can be transparent to the users.

There are some organizations that have very substantial virtualized deployments. Some are anywhere between 60% and 80% virtualized, but the vast majority of enterprises are more in the 25% range or so, or maybe approaching 30% or 40%. That still means that the majority of the systems they are protecting are physical assets. And again, to me, I'd much rather have the tool that can do both and make that transparent to, say, my help desk operators that are having to use the tool. It just makes for easier operations.

Maitland: While we're on backup, there's a gentleman, Tyler Woods from Clarkson Consulting, who asks, "Any experience with Symantec Backup Exec and Hyper-V?"

Wolf: I don't have direct experience; I'll leave that for the other panelists. I know it's supported; in terms of a full evaluation, I'm not quite sure. What I can tell you is the most advanced Hyper-V backup product I've seen to date is EMC NetWorker, which is the first to support transportable VSS snapshots. It's basically a Microsoft-side equivalent to what VMware has with Consolidated Backup. I haven't seen either Backup Exec or NetBackup offer those yet, but I know those products have another release coming up, and that type of feature might be in there at that point.

Ruest: Right now the current version does not have that kind of feature, but when you deploy the Backup Exec agent to each one of the virtual machines, as well as to the Hyper-V host, it does work very well and provides you with file-level backups of each one of your applications. So there are absolutely no issues working with the latest version of Symantec Backup Exec with Hyper-V.

Maitland: Rick, this might be one for you, from Kent Windsor of Shoppers Drug Mart. He asks, "Does anyone have any experience with managing VMware ESX hosts over WAN links with VirtualCenter -- and, specifically, small WAN links?"

Vanover: Yeah, I do have experience with that. His question also asked about VMware SRM [Site Recovery Manager], which I have not used, but the short answer is, it's a little rough. You know, 768 Kbps is roughly what I had, with a very small number of hosts -- three at one time as a maximum. There were a lot of cases where VirtualCenter showed the host as disconnected, but I knew that it was up; it's just that the WAN link is not as reliable as a local link.

I was talking to a VMware field engineer, and he said you can do it; I think he mentioned that it's either not recommended or not supported, I don't remember which. But it's a little rough, so my recommendation is to keep it as a short-term thing. As far as SRM, I think that's something where you really need a reliable connection in the case of managed failover. So I don't think I would really scale that upward.

Maitland: Rick, I think there's a question here for you. Or rather, sorry, for David. I'm curious how you would prepare for growth when starting out with a new virtual infrastructure?

Davis: Good question. One of the things I would recommend, of course, is to get to know your applications. I know I've said that a lot in past presentations, but I think it's really important for system admins -- and especially admins that are about to virtualize applications -- to understand the applications in the sense of whether they are disk-intensive, CPU-intensive, RAM-intensive or network-intensive, and to allocate the resources for that workload appropriately. There are a lot of great performance monitoring tools out there, such as Veeam's monitoring suite, that will allow you to manage your virtual infrastructure, monitor that performance and, I think, give you some extra insights into the applications. And as they grow over time, you can use that information to better tweak and allocate those resources. And that's just one example of the many applications.

Ruest: I was going to say, we have a new book coming out in February that's called Virtualization: A Beginner's Guide. It's a really great place for people to start from when they want to move to a new virtual infrastructure. But in that, actually, Chris was one of our technical reviewers, and in that, we point out that it's very important for organizations to perform a proper assessment and to make sure that that assessment is current when they actually move forward with virtualization.

Assessing the actual resource utilization of your infrastructure is probably the most important aspect of a first step towards moving forward with virtualization. Because if you prepare your hardware infrastructure for virtualization and you don't prepare it properly because you did not assess the requirements -- especially the peak requirements for your applications -- then you're going to run into no end of problems. So it's definitely one of the very first things that you need to do, is look at this assessment. And there are lots of great tools on the marketplace that support those assessments.
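Nelson's point about assessing peak (not average) requirements before buying hardware can be made concrete with a toy sizing calculation. Everything here -- workload names, peak figures, host specs, the 20% headroom factor -- is a hypothetical assumption for illustration:

```python
import math

# Hypothetical sketch: size virtualization hosts from *peak* requirements,
# as the assessment step recommends. All figures are invented examples.
workloads = [
    {"name": "mail", "peak_cpu_ghz": 3.2, "peak_ram_gb": 8},
    {"name": "db",   "peak_cpu_ghz": 6.0, "peak_ram_gb": 16},
    {"name": "web",  "peak_cpu_ghz": 1.5, "peak_ram_gb": 4},
]

HOST_CPU_GHZ = 2 * 4 * 2.5  # assumed host: two quad-core 2.5 GHz sockets
HOST_RAM_GB = 32
HEADROOM = 0.8              # keep 20% free for spikes and failover

def hosts_needed(workloads):
    """Hosts required so that summed peak demand fits within headroom."""
    cpu = sum(w["peak_cpu_ghz"] for w in workloads)
    ram = sum(w["peak_ram_gb"] for w in workloads)
    return max(math.ceil(cpu / (HOST_CPU_GHZ * HEADROOM)),
               math.ceil(ram / (HOST_RAM_GB * HEADROOM)))

print(hosts_needed(workloads))  # -> 2 (RAM, not CPU, is the constraint here)
```

Note how sizing on averages instead of peaks would have suggested a single host here; it is exactly the peak-RAM figure that forces a second one, which is the trap Nelson warns about.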

Vanover: One more point on that, Nelson: even if you don't run into those problems, you'll end up wasting a lot of money if you overprovision.

Ruest: Good point, very good point! Yeah, we've been touring the U.S. for the past two years giving presentations on virtualization, and everywhere we ran into lots of SMBs that are interested in moving forward with virtualization. Every time, we talked to hardware vendors and asked them if they had a good small- or medium-business infrastructure that they would propose. Because when you want to virtualize, you need to look at shared storage and at proper host server configurations, and no vendor has a specific server-in-a-box concept -- maybe a couple of host servers with appropriate resources inside them and shared storage integrated together -- in a form factor, and at a cost, that is suitable for a small business. We did find some really great servers from HP -- the ML115 G5 -- which have only a single processor, but it's a quad-core processor. They're under $500 to acquire, so bringing that together with a NAS at about $1,000, for under $3,000 you can get a very good two-node cluster that runs VMware and provides a really good starting point.

Maitland: There's a question from the audience: "Can I use Xen and Linux instead of implementing VMware?"

Ruest: I can take that. The Xen infrastructure is a very solid infrastructure. I mean, Citrix spent millions of dollars acquiring XenSource because it's one of the most solid platforms for virtualization. You can easily implement a host-server infrastructure based on a Xen implementation and Linux. Xen is a little bit behind in terms of some of the features that VMware has -- there's no memory-overcommitment capability, there's no memory-pooling capability -- but they're working on that. They also don't have a technology such as VMsafe that allows you to load an antivirus on the host server and scan virtual machines, but they're working on that too. So Xen is definitely a great starter infrastructure if that's the way you want to go.

[Wolf and Vanover talk over each other.]

Vanover: Yeah, go ahead Chris.

Wolf: I was just going to jump in real quick. Another good point about XenServer today is that Windows VMs on Citrix XenServer are fully compatible with Hyper-V. When you install the VM tools, or the paravirtualized drivers, the bits for both platforms actually get installed simultaneously. That means that if you ever wanted to go from XenServer to Hyper-V, it's literally just copying a VM and setting up a new storage repository, and away you go -- there's absolutely no work that has to be done.

The other vendor that's interesting in the Xen space that's been kind of under the radar, actually, is Novell. Novell has some pretty advanced virtualization management capabilities, and they have some interesting integrations with things like N_Port ID Virtualization, they have some good partnerships with vendors like SAP. And I think, of course, you can't really overlook a company like Virtual Iron either, which feature for feature is still the closest vendor to VMware today. They run the open source Xen hypervisor, support a number of Windows platforms as well and are very comparable in cost to some solutions out there.

Vanover: The one follow-up point I wanted to make to Nelson's comments is that memory overcommit for Xen is available, I believe, in XenSource, the free open source version, which is typically where community development leads up to the polished product. So I think that's pretty much a confirmed roadmap item.

Ruest: Oh, there's no doubt about it. I spoke with Simon Crosby of Citrix [Systems], and there's no doubt about it. So all of those features are on the radar for this year. So we're looking forward to some great changes in XenServer.

Maitland: Guys, we have a ton more questions here, but unfortunately we have run out of time. Thank you all very much, this is really, really useful. For all the questions we were unable to get to, we will definitely answer those by email, everybody, in the next 24 hours or so. Also, if you go over to our editorial booth, you can chat with the team there; there are experts in the booth who can point you to places to get answers as well. Thank you very much, guys.

All four: Thank you.
This was first published in September 2009
