Is the world finally ready for HP Virtual Connect?

HP Virtual Connect is not a new technology. A quick Google search for its first release turns up a version 1.3 release note from October 2008, which means the platform has been shipping since at least early 2008. If you apply Moore's Law, doubling every two years, to something released in early 2008, it should have advanced roughly 4x by now. In reality, the technology was built to grow, expand, and change as connectivity changed. The problem with Virtual Connect previously was that a typical user had a very hard time understanding what HP was trying to accomplish.

Traditional blade servers were all too often complicated and limited by the servers and external switching surrounding them. The list below covers just a few of the problems with traditional blade infrastructures.

  • Too many switches
  • Too many cables with pass-through modules
  • Switching limited to within a chassis
  • Burned-in MAC addresses and WWNs
  • Licenses tied to serial numbers
  • No resiliency without doubling the hardware, and often the cost

As shown in the graphic to the right, Virtual Connect helps to eliminate many of these problems.

  • Virtual Connect maintains pools of MAC addresses, WWNs, and server serial numbers, allowing server portability for upgrades or failures
  • The blade admin can manage the server environment without needing the SAN or network admin to make changes when a server fails
  • Virtual Connect was designed in part by the Tandem server group, builders of high-availability, zero-downtime systems – Virtual Connect was built for high availability
  • Active loop prevention protects users from their own configuration mistakes
  • Near-zero-downtime firmware updates
  • Standards-based – allows full interoperability with HP, Cisco, Brocade, and other switches
  • One management console per VC domain of up to 4 enclosures, or, with the Enterprise edition, one console for a whole datacenter with hundreds of enclosures
  • Less power, fewer switches, fewer cables, less cooling

The first reaction most people have to this entire solution seems to be "Isn't that the same as the Cisco UCS solution?" It really is amazing what a little time and some marketing can do. The two solutions are very similar, and that is, in my opinion, a really good thing: competition breeds innovation. The biggest difference is the mindset each company brought to attacking the problems in the datacenter. Cisco comes from a networking background, and UCS was built around the Nexus platform with servers added. That in no way means the developers and architects at Cisco were not thinking about the servers all along, but they are, after all, a networking company. HP looked at it from the need to make the life of a server admin easier.

A clear example of this is wire-once connection management. The ability to profile a system and all its components so that you never have to recable a datacenter is a huge benefit for admins. Most admins have been told by their managers more than once to clean up the cabling in the whole datacenter; if they never need to go into the datacenter, there is no reason for it to get messy. If a server fails, there are no new cables: you just swap the blade and apply the old profile. If you want to upgrade a server, you just swap the blade and apply the old profile. At this point the Cisco guys are about to comment that they do the same thing. I know they do, but it is still a great feature to highlight.
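To make the wire-once idea concrete, here is a toy Python sketch of the concept. This is not HP's API or the Virtual Connect Manager interface (real profiles are managed through VC Manager, not code), and the MAC/WWN values are invented; it only models the key point that the identity lives in the profile and follows the workload, not the blade:

```python
# Toy model of a Virtual Connect server profile (illustrative only).
from dataclasses import dataclass

@dataclass
class ServerProfile:
    """The identity (MAC/WWN) belongs to the profile, not the hardware."""
    name: str
    mac: str
    wwn: str

class Enclosure:
    def __init__(self):
        # bay number -> profile currently applied in that bay
        self.bays = {}

    def apply(self, bay, profile):
        self.bays[bay] = profile

    def replace_blade(self, bay):
        # Physically swap the blade in the bay; the profile (and its
        # MAC/WWN) is simply re-applied, so the SAN zoning and network
        # switch configuration never have to change.
        return self.bays[bay]

enc = Enclosure()
web = ServerProfile("web01", mac="00-17-A4-77-00-01",
                    wwn="50:06:0B:00:00:C2:62:00")
enc.apply(bay=3, profile=web)

# The blade in bay 3 fails; the replacement inherits the same identities.
survivor = enc.replace_blade(bay=3)
print(survivor.mac)  # the MAC the SAN and network admins already know
```

The design point the sketch captures: because the upstream switches and zoning only ever see the pooled identities, a hardware swap is invisible to the network and SAN teams.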

The Virtual Connect platform was built not just for servers but also as a way to minimize upfront costs while still future-proofing the c7000 chassis for its users. HP has found a way to make Virtual Connect a single management console for your new or existing Fibre Channel and Ethernet networking. Since the c7000 has been available for years and is actively used by customers worldwide, Virtual Connect had to be a phased project. Below are a few scenarios and the products you would need to bring an existing c7000 into a Virtual Connect domain.

  • You have existing Ethernet switching but want to get rid of all the extra cables of pass-through
    • Flex-10 modules allow wire-once connectivity and eliminate the added cabling of pass-through modules
  • You have an existing Fibre Channel infrastructure and want to manage the ports without extra interfaces
    • VC-FC modules are built on traditional Fibre Channel standards and can be connected directly to a Cisco or Brocade Fibre Channel switch
  • An existing 10 Gbps infrastructure is in place for the rest of your servers, and converged networking is the baseline for your infrastructure
    • Flex-10 modules in conjunction with Gen 7 HP blades allow a single 10 Gbps uplink to carry both storage and Ethernet
  • The existing c7000 chassis is full of servers, often older systems that already have Fibre Channel or Ethernet connections, but you want to move forward as you grow
    • FlexFabric modules are designed as replacements for the existing interconnects, allowing existing connections to join a VC domain with only interconnect changes
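The converged-networking scenario above rests on Flex-10's ability to carve one 10 Gbps server port into up to four FlexNICs whose allocations must fit within the port's capacity. Here is a toy Python sketch of that constraint; it is illustrative only (the real carving is configured in Virtual Connect Manager, not code), and the example traffic classes are assumptions:

```python
# Toy sketch of Flex-10 bandwidth carving (illustrative only).
PORT_CAPACITY_GB = 10   # one Flex-10 server port
MAX_FLEXNICS = 4        # Flex-10 presents up to four FlexNICs per port

def carve(allocations_gb):
    """Validate a proposed split of one 10 Gb port into FlexNICs."""
    if len(allocations_gb) > MAX_FLEXNICS:
        raise ValueError(f"at most {MAX_FLEXNICS} FlexNICs per port")
    if sum(allocations_gb) > PORT_CAPACITY_GB:
        raise ValueError("allocations exceed the 10 Gb port capacity")
    return allocations_gb

# e.g. management, VMotion, VM traffic, and storage sharing one uplink
nics = carve([0.5, 2.0, 4.5, 3.0])
print(sum(nics))  # 10.0 -- the whole port is used, nothing stranded
```

The point of the carve is waste elimination: instead of dedicating whole physical NICs to low-bandwidth roles like management, each role gets only the slice it needs.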

You all know that I like to add a few notes on things I think are missing from just about every solution I look at. This one is no exception. The difference for me with Virtual Connect is that it was built for a blade solution, and I don't see any major holes in it. I would, however, like to see it manage the rest of the physical server infrastructure as well. If you have HP DL series servers, it would be great to be able to move between a rack server and a blade; not every environment has just blade servers.
HP has often been the innovator with the worst timing. Much like a fine whiskey, Virtual Connect has been able to grow and mature into the platform we see today. Sometimes you just have to add something to the mix to make it that much better, and the release of the UCS platform has done just that. To get more info on Virtual Connect, visit HP's website, but make sure to vote below first.
This is post number three for Thomas Jones' Blogger Reality Show, sponsored by HP and Ivy Worldwide. Your votes and comments are a significant portion of the contest and help each of us become better bloggers and, hopefully, bring better content to the public. Please vote and comment on my blog, and take a look at all the other bloggers listed in the top right under Blogger Reality Show.

3 Replies to “Is the world finally ready for HP Virtual Connect?”

  1. Excellent post! I too am very happy with Virtual Connect, and find it to be a great way to provide connectivity to my blade infrastructure, especially my vSphere hosts. I’m especially excited for the changes coming to Flex-10 that you mentioned above, and the iSCSI enhancements coming out in future server revisions.

    However, I have found a number of shortcomings that, while not deterring me from using Virtual Connect (in any way), keep me from using it at what HP considers to be its full potential.

    Case in point, a number of months back there was a firmware “feature” that utilized a public block of unallocated IPs. When this IP block was allocated to an ISP in Asia/Pacific, my Virtual Connect management interfaces started attempting to connect to those IPs on port 22 (ssh) every few seconds. While this didn’t impact production, it did concern the security team. HP’s fix was to remove DNS from the Flex-10 switches. Unfortunately, doing this resulted in a reboot of the entire virtual connect fabric – all switches, simultaneously, across all blade enclosures…FC was not impacted though…thankfully! While HP did acknowledge this issue, it wasn’t until days after the advisory was posted.

    Don’t get me wrong – Virtual Connect is an excellent technology – however until it is implemented in a fashion where I cannot cause the above to happen…I won’t be utilizing a single fabric across all enclosures. Multiple VC domains for me! And from talking to other peers it’s the same story…

    What keeps me on Virtual Connect? Besides the usual blade-based advantages (wire-once, simplicity in lights-out environments, etc.), Flex-10 is a killer technology. Being able to control the amount of bandwidth I allocate to each Flex-NIC, which then corresponds to my service console / management network, VMotion network, FT, Virtual Machine, etc. eliminates the potential waste we typically see in blade environments. And, after designing around the potential for outages similar to the one noted above, the seamless firmware upgrades (which can be fully scripted and automated) are wonderful.

    Thanks for an excellent overview!

    DavidADav

  2. One thing I forgot to note – it did take me a while to wrap my head around the fact that the Flex-10 switches essentially act like hosts when connected to their up level switch, i.e. no risk of spanning tree loops, etc. Thoroughly reading the documentation was a huge help in this area, and in the end, I left with the understanding that the Flex-10 switches were like a virtual switch, presenting the servers just like the vSphere host presents a VM.

    Hopefully my understanding isn’t TOO far off…
