When it comes to hybrid clouds, there are many factors to consider before making any serious decisions.
Determining which cloud server is best for your hybrid cloud is just one decision you'll need to make. Hybrid clouds should be based on commercial off-the-shelf gear, specifically using an x64 architecture. ARM64 is also a possible contender, but it is new and still needs some proving in the hypervisor environment. As for other approaches, UNIX systems complicate life, while the idea of a mainframe-based hybrid cloud is pretty much an oxymoron.
Having said all of this, there is clearly a large spectrum of alternatives to select from. In many ways, the choice of commercial off-the-shelf server is more likely to be driven by networking and storage decisions, which have substantial impact on how the hybrid will behave.
It's no exaggeration to say that the hybrid cloud hinges on data movement and data management. This is a direct consequence of the U.S. telecommunication industry's decision to delay the rollout of fiber solutions. In general, the U.S. has very poor wide area network (WAN) infrastructure, which creates a chokepoint between the in-house private cloud and the public segment.
The inability to move data between public and private segments at anything approaching local area network (LAN) speeds shapes the hybrid cloud into a topology best envisioned as an hourglass, with the tight neck being the WAN bottleneck. With that in mind, let's try to make some sense of how to construct a useful hybrid system.
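To see why the WAN neck dominates, consider a rough back-of-the-envelope calculation. The link speeds and the 70% efficiency factor below are illustrative assumptions, not measurements:

```python
def transfer_hours(dataset_gb, link_gbps, efficiency=0.7):
    """Rough time in hours to move a dataset over a network link.

    efficiency is an assumed fraction of line rate actually usable
    after protocol overhead.
    """
    gigabits = dataset_gb * 8
    seconds = gigabits / (link_gbps * efficiency)
    return seconds / 3600

# Assumed links: a 10 Gb/s LAN vs. a 100 Mb/s WAN uplink.
lan_hours = transfer_hours(1024, 10.0)  # ~1 TB inside the data center
wan_hours = transfer_hours(1024, 0.1)   # the same data across the WAN
print(round(lan_hours, 2), round(wan_hours, 1))
```

With these assumed figures, the same terabyte that moves in minutes on the LAN takes more than a day across the WAN, which is exactly the hourglass neck described above.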
There are two alternatives for drives: put drives in each server, or use media-less servers. The argument for local drives is that network storage is slow, so a local instance store should speed things up considerably. Note that we are talking about solid-state drives here: for a typical server with 128 instances, an HDD would deliver only about 150 IOPS, roughly one per instance, which obviously isn't enough. Big data instances might need a PCIe SSD, but most other use cases can live with a SATA SSD.
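The per-instance arithmetic is easy to check. The SSD figures below are assumed ballpark numbers for illustration, not vendor specs:

```python
instances = 128
hdd_iops = 150           # typical spinning disk, per the text
sata_ssd_iops = 75_000   # assumed ballpark for a SATA SSD
pcie_ssd_iops = 500_000  # assumed ballpark for a PCIe/NVMe SSD

# IOPS available to each instance from a single local drive
for name, iops in [("HDD", hdd_iops), ("SATA SSD", sata_ssd_iops),
                   ("PCIe SSD", pcie_ssd_iops)]:
    print(f"{name}: {iops / instances:.1f} IOPS per instance")
```

Even with conservative assumed numbers, a single SATA SSD gives each instance hundreds of IOPS where an HDD gives roughly one.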
One problem with local instance stores is that only a single copy of the data exists. This can be resolved with a virtual SAN (vSAN) approach, where a copy is automatically made across the network. This can slow local writes considerably, though it does allow data sharing across the pool. However, there is a major issue with vSAN in the hybrid cloud: if those write copies are placed in the public cloud, write latency can go up tremendously. Likewise, poor data distribution can mean a lot of slow public cloud accesses.
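A sketch of why replica placement matters so much for a synchronously replicated write. The latency figures are assumptions chosen for illustration:

```python
def sync_write_us(local_write_us, replica_rtt_us):
    """Effective latency of a synchronously replicated write.

    The write cannot complete until the remote copy acknowledges,
    so the replica's round-trip time adds directly to every write.
    """
    return local_write_us + replica_rtt_us

# Assumed: 100 us local SSD write; 0.2 ms LAN RTT vs. 40 ms WAN RTT.
lan_write = sync_write_us(100, 200)     # replica on the local pool
wan_write = sync_write_us(100, 40_000)  # replica in the public cloud
print(lan_write, wan_write)
```

Under these assumptions, pushing the replica across the WAN makes every write two orders of magnitude slower, which is the vSAN hazard described above.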
The alternative to vSAN in the hybrid cloud is networked storage. This won't be the same as that old, slow HDD SAN. All-flash arrays deliver millions of IOPS, and while they are relatively small, they are plenty big enough to hold your "active" data, with the rest on inexpensive bulk storage.
Both the vSAN approach and the networked storage approach need network bandwidth. 10 GbE is the minimum speed needed today, and each server should separate storage traffic from standard traffic, implying two 10 GbE links. Note that 25 GbE links are expected to begin supplanting 10 GbE in 2016. For the ultimate network experience, consider RDMA over Ethernet to the storage pool.
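As a sanity check on the two-link recommendation, here is roughly how many small I/Os a single 10 GbE link can carry. The 70% usable-throughput factor is an assumption:

```python
def link_iops(link_gbps, io_size_kb, efficiency=0.7):
    """Approximate IOPS a network link can carry at a given I/O size."""
    usable_bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return usable_bytes_per_sec / (io_size_kb * 1024)

# A dedicated 10 GbE link carrying 4 KB I/Os
print(int(link_iops(10, 4)))
```

Roughly 200,000 4 KB IOPS saturates the link on its own; sharing it with application traffic would cut deeply into that, which is why a separate storage link per server makes sense.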
Now, let's look at servers, where there are two schools of thought. One is to use powerful, full-featured x64 server engines and virtualize them with containers or a hypervisor. The other is to use a cluster of low-power microservers with orchestration software and possibly a hypervisor.
The traditional x64 approach offers a wide variety of choices, depending on use case. Today's typical cloud isn't homogeneous, but there are a couple of broad categories. General-purpose computing seems to find its sweet spot with dual-CPU 1U servers. If no disks are needed, these can even be twin servers, with two units in 1U. The twin approach allows power supplies to be shared, which gives much better power efficiency.
For in-memory databases and big data analytics, the memory capacity of 1U offerings isn't adequate. A better option is a 2U quad-CPU server with 512 GB or more of memory. Since these servers typically need local instance stores, the 2U size allows multiple SSDs to be added. GPU-based parallel computing is becoming a popular alternative in the analytics and HPC space; again, memory size and disk capacity may determine whether a 1U or 2U offering fits.
The microserver alternative heads in the opposite direction from full-featured x64: small. Low-powered, inexpensive boxes with 40 or more processors are available. The approach is more like "hosting in a box" than what most of us recognize as the cloud. It fits use cases such as web serving and media delivery, but complicates life for bigger instances.
There are two things to bear in mind when buying a cloud-in-a-box appliance. Clouds evolve over time, so avoiding vendor lock-in is important. vSANs, for example, tend to lock you in, since in most cases they are proprietary to the hardware vendor. Some software offerings carry a hidden lock by requiring all components to be pre-certified against a somewhat restricted list. Being locked in will cost more over time, as cheap commercial off-the-shelf server units combined with commodity-priced storage drives become more available.
The second consideration when buying a cloud server is system sourcing. The mega-cloud service providers buy direct from original design manufacturers (ODMs) and avoid the middleman. This may be an option for corporate America, too, since standardization in the commercial off-the-shelf server world is superb. For those tempted to consider microservers, calculate the cost per instance of the alternative approaches before committing. It's going to be a close race.
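A minimal way to frame that cost-per-instance comparison. The prices and instance counts below are purely illustrative placeholders, not quotes:

```python
def cost_per_instance(server_cost_usd, instances_per_server):
    """Capital cost of a box divided across the instances it can host."""
    return server_cost_usd / instances_per_server

# Illustrative assumed figures -- substitute real vendor quotes.
x64_box = cost_per_instance(9_600, 128)   # dual-CPU 1U, virtualized
micro_box = cost_per_instance(3_200, 40)  # 40-node microserver chassis
print(x64_box, micro_box)
```

With these made-up numbers the race is indeed close; the point is simply to run the arithmetic with your own quotes and instance densities before committing either way.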