BOSTON -- Red Hat’s Enterprise Virtualization management server (RHEV-M) version 3.0 is paving the way for an all-Linux environment. RHEV-M is now set to move off its Microsoft Windows underpinnings.
RHEV-M manages Red Hat’s version of the open source Kernel-based Virtual Machine (KVM) hypervisor and is based on Microsoft .NET. The software will be ported to a Java code base in the next version, which will be out in beta later this year, according to previews at this year’s Red Hat Summit. The Microsoft SQL Server database on RHEV-M’s back end will be replaced by PostgreSQL.
Users at the recent Red Hat Summit conference said that this change can’t come soon enough. Travis Tiedemann, systems engineer at Union Pacific Railroad, said he’s sticking to Xen virtualization until RHEV runs on Linux. “We’re waiting to bring in the RHEV product,” he said. “When it’s fully Linux, we’ll start looking at KVM.”
When asked what he was most looking forward to in RHEV 3.0, Ryan Murray, infrastructure architect at Ganart Technologies Inc., said “Linux. In my infrastructure, there’s one Windows box, and it’s the RHEV management node.”
There are, however, components of the RHEV architecture that will remain Windows-based until RHEV 3.1 or later, said Itamar Heim, the director of software engineering at Red Hat, in a presentation. For example, the administrator portal component of RHEV-M will remain based on Windows Presentation Foundation (WPF). ActiveX controls will remain, at least temporarily, for Windows clients. A Python-based command line interface (as opposed to PowerShell) is also still in upstream development.
RHEV 3.0’s RESTful API
RHEV 3.0 will also support a new RESTful API against which users can write custom scripts to add advanced features to RHEV. The features come from libvirt, the command-line KVM management utility, which the RHEV API can call to execute functions such as CPU pinning, single-root I/O virtualization (SR-IOV), direct LUN access from the virtual host, putting switch ports into promiscuous mode for monitoring, watchdog monitoring, and integration with Cisco’s VN-Link. According to Andrew Cathrow, senior project manager for Red Hat, these features will eventually be rolled natively into the graphical user interface (GUI), but not in version 3.0.
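Red Hat had not published the final 3.0 API schema at the time of the summit, so the resource path and XML element names below are illustrative assumptions rather than the documented interface. As a minimal sketch, a custom script might build a request body like this to ask the management server to pin a guest’s vCPU to specific host CPUs:

```python
import xml.etree.ElementTree as ET

def build_cpu_pin_request(vm_id, vcpu, host_cpus):
    """Build an illustrative XML body pinning one vCPU to a host CPU set.

    The element names, attributes, and the /api/vms resource path are
    assumptions for demonstration; the shipping RHEV schema may differ.
    """
    vm = ET.Element("vm")
    cpu = ET.SubElement(vm, "cpu")
    tune = ET.SubElement(cpu, "cpu_tune")
    pin = ET.SubElement(tune, "vcpu_pin")
    pin.set("vcpu", str(vcpu))          # guest vCPU number
    pin.set("cpu_set", host_cpus)       # host CPU range, e.g. "0-3"
    url = "/api/vms/%s" % vm_id         # hypothetical per-VM resource
    return url, ET.tostring(vm).decode()

url, body = build_cpu_pin_request("1234", 0, "0-3")
print(url)
print(body)
```

A script like this would send the generated body as an HTTP PUT to the management server, which in turn would drive libvirt on the host; the point of the RESTful API is that the transport and payload are plain HTTP and XML rather than a vendor toolkit.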
A large RHEV early adopter, Qualcomm, appears to have been behind some development of the RESTful API hooks. During a presentation at the conference, Qualcomm engineers said Red Hat consultants helped create a custom RESTful API that enables the creation of a self-service portal dubbed AutoLinux.
AutoLinux began as a kind of skunkworks project within Qualcomm that “exceeded our expectations for its popularity,” according to Qualcomm engineer Michael Waltz. AutoLinux ran on legacy hardware with a 3,500% storage overcommit, glued together with internally developed scripts, and it was becoming overloaded. That’s when Red Hat stepped in, according to Waltz, and “changed our scripts around” to support a RESTful API.
“Who wants to see AutoLinux open-sourced?” Zak Berrie, a Red Hat senior solution architect, asked attendees at the session, to a strong show of hands. The Qualcomm reps at the session said this idea has been under discussion internally, but for now, other users will need to write their own scripts against the RESTful API to create something like it in RHEV.
The storage situation
Another addition to RHEV 3.0 is support for storage directly attached to a server; previously, RHEV required shared storage. This can cut costs in small environments or create an affordable setup for RHEV proof-of-concept testing. At a deep-dive session on RHEV 3.0, users asked whether the feature will support active/active high-availability clustering between hosts with direct-attached storage, but Red Hat officials said that feature won’t be available in version 3.0.
When it comes to shared storage integration, some users say RHEV also has to catch up with competitors. RHEV 3.0 won’t support attaching a guest to multiple storage volumes through the GUI, for example, which means that some users are still waiting on Red Hat to support the automated tiering features in their storage arrays.
“Our OS lives on one disk, database files on faster storage, and other files on cheaper storage,” said Joseph Hoot, lead programming analyst for Buffalo State College. The college has automated tiering set up on both its Hewlett-Packard Co. XP24000 Fibre Channel and Dell Inc. EqualLogic iSCSI arrays, but it can’t use that setup conveniently in the RHEV environment. Attaching multiple volumes to a guest is technically feasible through scripting, but “We’re looking to be able to present multiple block devices to a guest through the GUI,” he said.
Finally, the RHEV GUI will get an overhaul in version 3.0, including new topology “tree” views of the underlying infrastructure. Qualcomm’s engineers said that these topology views are most important among the interface changes.
“There can be confusion when you see data centers, clusters and host groups [in the GUI] and you don’t know where to click to add storage [to a virtual machine],” said Bob Gerling, IT staff engineer for Qualcomm’s Unix infrastructure.
Beth Pariseau is a senior news writer for SearchServerVirtualization.com. Write to her at firstname.lastname@example.org.