Drop everything and jump on the SMB 3 bandwagon

Windows Server 2012 is picking up momentum, spurred on by SMB 3 improvements. But, could a simple network share change the VM storage game?

I'm reminded every morning when I get up that I'm getting slower -- creaking and popping -- as I make my way to the coffee machine. If only I aged as gracefully as Windows Server. You've seen the latest marketing -- Windows Server 2012 is faster, more reliable and more resilient. Before you disregard it as normal hype, make the effort to take a look, because it's true.

In a recent article I discussed how you can accelerate live migrations in Hyper-V with Server Message Block (SMB) 3.0. But, there's more to SMB 3 than improved live migrations. The world of storage and access is changing, and Microsoft has hit it out of the park with SMB 3.

To be technically accurate, Windows Server 2012 ships with SMB 3.0, and the new Windows Server 2012 R2 release includes SMB 3.02. What does all this mean? Imagine using a network share to connect to your data, whether it's a SQL database or (for us virtualization folks) a virtual hard disk. Instead of using iSCSI or fiber to connect to a SAN, imagine connecting your virtual machines to a storage device of your choice over a network share. You still get the performance, reliability and failover that expensive SANs provide, but over an easy-to-configure network share.
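As a rough sketch of what that looks like in practice (the server name FS1, share name VMS1 and account names here are all hypothetical placeholders), standing up an SMB share for Hyper-V and pointing a new VM at it takes only a couple of PowerShell commands on Windows Server 2012:

```powershell
# On the file server: share a local folder for the Hyper-V hosts.
# Grant full access to the host computer accounts (NTFS permissions must match).
New-SmbShare -Name VMS1 -Path D:\VMS1 -FullAccess "CONTOSO\HV1$", "CONTOSO\Domain Admins"

# On the Hyper-V host: create a VM whose virtual disk lives on the share --
# a plain UNC path, no SAN, no iSCSI initiator, no HBA.
New-VM -Name LowTierVM -MemoryStartupBytes 1GB -Path \\FS1\VMS1 `
       -NewVHDPath \\FS1\VMS1\LowTierVM\LowTierVM.vhdx -NewVHDSizeBytes 60GB
```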

Wouldn't it be cool to be able to put that mission-critical VM storage on the expensive high-speed storage and the less critical stuff on lower-cost storage without the configuration mess and hardware requirements? Did I mention this is a simple network share? No fiber, no iSCSI, just a simple "NET USE" command away. I'm sure you're assuming the performance wouldn't be enough, that you can't get the IOPS, but you would be wrong.
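And when I say a "NET USE" command away, I mean exactly that -- the same command you've used since the LAN Manager days reaches the VM storage (server and share names hypothetical):

```powershell
net use Z: \\FS1\VMS1   # map the share; no iSCSI initiator, no fiber zoning
dir Z:\                 # the virtual disks are just files on a share
```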

This isn't just a cute Microsoft feature; SMB 3 deserves your attention. Other vendors are beginning to adopt it. NetApp, EMC, Samba and even Apple are looking to implement it. Here's why:

  • SMB 3 is specifically designed for server applications (including VMs) to store their data on file shares. VMs can keep their virtual drives on lower-cost storage and still get the best performance that storage can deliver. In simple terms, you can put VMs running low-use applications in a cluster connecting to shared low-cost (albeit slower) storage over network shares, without the expense of a SAN and without the complications of fiber or iSCSI.
  • With RDMA-capable NICs, SMB 3 uses remote direct memory access (SMB Direct) for direct memory-to-memory transfers that consume very little CPU time. This means you can host more VMs and access more storage points without losing performance.
  • SMB 3 gains from Multichannel -- the use of multiple network interface cards (NICs) at once. If you have multiple NICs available, you benefit from their aggregate bandwidth.
  • Multiple NIC support also lets SMB fail over seamlessly between NICs when a link or other network component fails. In other words, you get the throughput of multiple NICs and the resiliency of instant failover.
  • SMB 3 also offers encryption of data in transit, so you don't need to worry about a hacker with a protocol analyzer sniffing your confidential data.
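You don't have to take these benefits on faith, either. SMB 3 ships with PowerShell cmdlets that report Multichannel and RDMA status per connection, and encryption can be switched on per share (the share name below is hypothetical):

```powershell
# On the Hyper-V host (the SMB client): list active multichannel
# connections and whether each client NIC is RSS- or RDMA-capable.
Get-SmbMultichannelConnection
Get-SmbClientNetworkInterface

# On the file server: require encryption for a single share.
Set-SmbShare -Name VMS1 -EncryptData $true
```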

There is more to SMB 3, but the key takeaway for server virtualization admins is that it lets us put virtual disks on storage that better matches performance demand. I don't mind configuring iSCSI or Fibre Channel to a SAN, but if I can slap the lower-tier virtual disks on a plain file server and have the cluster connect to them over a network share, that is so much better.
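Better still, re-tiering an existing VM this way doesn't require downtime: storage migration in Windows Server 2012 can relocate a running VM's disks to a UNC path. A minimal sketch, with hypothetical VM and share names:

```powershell
# Shift a running VM's virtual disks and configuration files
# from local or SAN storage to a lower-tier SMB 3 share.
Move-VMStorage -VMName LowTierVM -DestinationStoragePath \\FS1\VMS1\LowTierVM
```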

So, SMB 3 is something you should consider, investigate and try out. You might find that your virtual disks (and other large data) can be accessed with a method you never thought of -- a simple network share. These improvements can help your virtualization deployment continue to move forward and mature gracefully.

Meanwhile, I will still crawl out of bed, creak and pop, to get my coffee.

This was last published in November 2013


Join the conversation

6 comments


What do you think of SMB 3?
Ease of deployment, resilience and security... it's hard to beat the top three items all admins and engineers look for in any environment.
Multichannel especially is a disruptive technology. Instead of jumping to FC for higher throughput, we now consider SMB Multichannel. No HW change required, no extra cost or know-how...
Looking fwd to putting SMB 3.0 through its paces.
Although I have had some difficulties getting it right (and I hope I have) the fact that you can deploy a shared SMB 3.0 server as a dedicated VM share changes the scope of things. In SMB environments the SAN is dead (?). You manage Windows servers and shares ('SAN') from the same environment with the same tools.
While I commend Microsoft for releasing features in Hyper-V and SMB 3.0 similar to what VMware has had in ESX Server for the past 10 years by using NFS to host virtual machines, Microsoft marketing sometimes does a disservice by overselling a feature, so that the average IT systems administrator and their management don't have the time to research or understand that this version of SMB 3.0 is not going to perform as fast as Fibre Channel on an HP 3PAR or Hitachi SAN. The IOPS just can't be in the same ballpark: FC is at 16Gb, and 32Gb will be released in 2015.

Jose Barreto's throughput numbers on his MSFT blog seem to be based only on dividing by 8 (gigabits vs. gigabytes). Do you have any real-world test results, such as from IOmeter?

A high-end Fibre Channel SAN is going to have controllers with much higher IOPS of throughput than a SAS/SATA/SCSI card, will it not? Doesn't TCP/IP have overhead that Fibre Channel does not?
http://www.demartek.com/Demartek_Interface_Comparison.html

Brocade, Emulex and QLogic, the Fibre Channel vendors, are releasing 32Gb in 2015, and you can bond/trunk 8Gb, 16Gb and 32Gb Fibre Channel into two bonded channels to increase throughput.
http://fibrechannel.org/library/2013/12/fibre-channel-industry-association-advances-32-gigabit-per-second-fibre-roadmap-with-completion-of-s.html

Jose F. Medeiros
408-256-0649
The Unemployed IT Guy!
https://www.linkedin.com/in/josemedeiros
