The Meteoric Rise and Slow Fall of a Community

Over the better part of the last ten years, I have been involved in and benefited from one of the largest business communities I have ever seen.  Revolving primarily around the VMware ecosystem, this community has given me access to some of the smartest minds in systems administration and business leadership, and I have made numerous friends.  But alas, I think it is all about to fall.

To even start to explain where I am coming from, let's first take a step back and cover some history.  I started working with VMware products somewhere around 2004 or 2005 with GSX Server.  As time progressed I worked more and more with virtualization and was even offered a job after an early VMware User Group (VMUG) meeting.  That turned into a stepping stone that escalated my career path.  I moved from a standard admin into management, then shifted over to being a sales engineer.  During my first role as a sales engineer for a regional reseller, I began to participate heavily in the Community Roundtable Podcast, a very early weekly podcast hosted by John Troyer.  Combined with the VMware community forums, I had access to tons of resources and people.  Things were progressing great; I moved into more VMware-centric roles and eventually over to the vendor side.  This move was all because of relationships I had built within this VMware community.  The community had grown and so had my involvement: five years ago I started hosting a daily podcast at VMworld with some of these community members, a show I still do today.

It was during one of these podcasts this year in San Francisco that a few other guys confirmed feelings I have been having for the past year: the VMware community has gone too commercial and lost the camaraderie and independence that made it so special.  Vendors used to spend small amounts of money to help get people together, with the knowledge that their name would be shared and these influential bloggers and social media activists would in turn support them (if their product was worthwhile).  Events were created by individuals, and anyone could show up and be welcomed regardless of whether they worked for a vendor.  If you were active in a group, it did not matter whether you had filled out a form on a website to give the sponsoring company a fresh lead; you were welcomed into events with open arms.

Times have changed, and while I still have many great friends and contacts, I don't think the community as I knew it exists anymore.  We have groups of people recognized for their participation in the community, vExperts, yet many even within that group have become too entitled; it is no longer a group that provides feedback but rather one that just wants free swag.  Events that were once run solely by the community and open to anyone are now run by corporate marketing teams and are no different from a customer appreciation party.  Very few new blogs or technical talents are being publicly lauded for their work, and those that remain are often so entrenched in their circle of friends that it is nearly impossible for a regular customer to make enough of a connection to feel like they are becoming part of something.

There are still a few events that are driven by the community and managed so that any sponsor money and invitations go back into the event, but the feel and the vibe are slowly dying.  This VMworld I was lucky to be involved with VMUnderground, which, while much bigger than the founders or any of us involved ever expected, still focuses on making sure people meet each other and see sponsors' logos, and is hopefully mutually beneficial.  Community packages were built for sponsors around participation in the vBrownBags (a series of short, hopefully FUD-free talks), Spousetivities (the definition of work-life balance), and of course VMUnderground and vRockStar, and the sponsors seemed to feel they were getting good value by participating.  I also got to see a much smaller event, The Gathering, with only about 35 people, be very well attended and produce great value for all involved.  It allowed community members and sponsors to intertwine, and even let community members who had not yet met finally do so.

I am not going to list any of the events that I heard were not as accommodating or as focused on the attendees; I only hope they have already received that feedback.  What I will say is that I hope, as we move toward VMworld EMEA in a short five weeks, everyone involved in the community finds a way to meet someone new, find a vendor they think is amazing and support them, and welcome the new talent that has allowed our industry to grow.  The community is not about a number of leads but rather a network of people helping people.  Let's not let that fail!

vExpert Daily Lineup Set for VMworld San Francisco 2015

For what I believe is the 5th year in a row (might be the 6th, I have lost count), I am excited to be hosting a daily talk show and podcast with some of the most influential and intelligent people in virtualization.  The VMware vExpert program recognizes the individual contributions of geeks, nerds, and the like in support of the virtualization community.  After starting in a small corner of the community area with the help of John Troyer (@jtroyer), the show became the anchor program of vBrownBag Live the following year, celebrating other community members and supporters.

This year is no exception.  We have locked in the lineup for San Francisco and will have some great guests.  This is a light-hearted and casual review of the day's activities and announcements that follows the keynote.  If you want an honest opinion with less FUD, and sometimes even a little disgust at the announcements, make sure to watch the live stream at 10:30 PDT each day.

Without further ado, here are the guests for each day:

Monday August 31

Tim Antonowicz @timantz
Gabriel Chapman @Bacon_Is_King
James Bowling @vsential
Shane Williford @coolsport00

Tuesday September 1

Marc Crawford @uber_tech_geek
Emad Younis @Emad_Younis
Jonathan Frappier @jfrappier
Kyle Ruddy @ruddyvcp

Wednesday September 2

Tim Antonowicz @timantz
James Bowling @vsential
Tony Foster @wonder_nerd
Shane Williford @coolsport00

Thursday September 3

Emad Younis @Emad_Younis
Kyle Ruddy @ruddyvcp
Jacob Pennington @JakeAPennington
James Bowling @vsential


If you want to join the party for Barcelona, make sure to fill in your info on the Barcelona spreadsheet (vExperts only please…)

How to emulate 10 Gbps NIC in a VMware Fusion VM

I am working on deploying some new VMs to demo some of the latest Nexenta products, but I instantly found one issue.  The deployment requires 10 Gbps networking, and since it is all internal to my MacBook I assumed this would be easy.  Unfortunately, VMware Fusion 7 does not have a graphical way (that I can find) to change the NIC type.  It turns out this is very easy, and I thought I would share the process, so here goes:

  1. Build the VM with whatever number of NICs you need.
  2. Power off the VM
  3. Quit VMware Fusion
  4. Go to the location of the virtual machines
  5. Right-click on the VM you want to edit
  6. Select “Show Package Contents”
  7. Right-click on the .vmx file and open with TextEdit
    • Make sure you open Preferences and uncheck “Smart Quotes”; if you don't, TextEdit will insert curly quotes that break the VMX file
  8. By default all the NICs will show as e1000; you need to change them to vmxnet3.  Find this line: ethernet0.virtualDev = “e1000” and change the e1000 to vmxnet3
  9. Save the file
  10. Open Fusion and Start the VM

That's it!
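If you have several VMs to convert, the steps above can also be scripted.  Here is a minimal sketch using sed; the sample .vmx content below is a stand-in for demonstration, so point VMX at your real file inside the VM bundle instead:

```shell
# Stand-in .vmx fragment for demonstration only; in practice set VMX to the
# real file inside the VM bundle (the one you found via "Show Package Contents").
VMX="$(mktemp)"
printf 'ethernet0.virtualDev = "e1000"\nethernet1.virtualDev = "e1000"\n' > "$VMX"

# Swap every e1000 NIC for vmxnet3, keeping a .bak copy of the original.
# (The -i.bak form works with both the BSD sed shipped on OS X and GNU sed.)
sed -i.bak 's/"e1000"/"vmxnet3"/g' "$VMX"

cat "$VMX"
```

Using sed also sidesteps the TextEdit smart-quotes problem entirely, since no editor ever touches the file.  As with the manual edit, quit Fusion and power off the VM first.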

OpenStack Summit Vancouver is Coming!

This May, the OpenStack Summit is headed to Vancouver, B.C.  If you haven't already been looking at the info, it is a great city with lots of great sessions, not to mention the people attending that you can learn so much from.  If you need it and are too lazy to Google it, here's the main link: https://www.openstack.org/summit/vancouver-2015/


What matters right now, though, is that you can vote for the sessions you are interested in.  Much like so many other big conferences, this gives the attendees a chance to pick what they really want to see.  Voting has opened here.

I have submitted two sessions to speak this year and would appreciate your vote; it always makes it easier to attend these events when you have a chance to pass along things you have learned along the way.  Please click the links and go vote for me!

Product Review: Sony VCT-R100 Tripod

I was looking for a tripod for my Samsung NX2000 camera and had a few criteria.  The first was travel friendliness: I don't need a big professional tripod if it is too bulky to carry anywhere; I needed something that condenses down to a reasonable size.  A travel bag would be helpful as well but was not a requirement.  Check! And check!


The second thing was making sure it was stable from barely open all the way through full extension.  Check!

The last part I needed was to make sure it had all the pan/tilt features and screwed easily onto my camera.  Again: Check!


Overall I am very happy with this so far.

Product Review: Bose QC20i Noise Canceling Earbuds

Traveling around the world, every road warrior I know, myself included, has a few things that always end up in their bag.  Many will say it is a laptop or tablet, maybe a passport, but one of the most important things to me on long flights is my noise canceling headphones.  I have tried quite a few different pairs over the years, from just about every vendor, trying to avoid what I thought was a $300 tax from Bose.  There were pairs from $50 to $200, but none of them really blocked out the noise or remained comfortable for long flights.  I finally ended up with the Bose QC15.  I have been using them for almost three years on flights and for conference calls, and I have been very happy with them.  In those three years I have had to replace the connection cable once (I lost it on a plane) and I just replaced the earpads; overall, small pieces.  You are probably asking why I am reviewing something else, then, if I was happy.  Well, I was, with one exception: you can't lay your head to the side and sleep with a large bulky basket around your head, much less lay flat.  So in come the QuietComfort 20i earbuds.

To give you an idea of the size, the picture to the left shows the QC15 case beside the new QC20i travel case.  The earbuds' case is about 4″ by 2″, or just about the size of an iPhone 5, not even the iPhone 6 Plus!

Now it is time to dig into the actual headphones.  As you can tell in the pic, they look like a normal pair of earbuds with the exception of the box attached to the wires.  This is the rechargeable battery and control module.  It uses a micro USB cable to charge and, according to Bose, will keep a charge for 16 hours on about 2 hours of charge time.  This will be a big test, since with the QC15 I always carry extra batteries as well.  The other part you see is the inline microphone, just like the factory iPhone headphones, but with one exception: there is also a button that temporarily turns off the noise canceling.  (There is a version without the “i” that comes with a straight-through cable as well.)  This was lacking on the QC15, and until I learned about it I didn't think about the fact that I almost always slide one ear cup off to hear the flight attendants.

Now to the question of comfort.  The QC20 comes with three different sizes of eartips.  The medium tips were installed and fit my ears fine.

Sound quality seems on par with, if not better than, the QC15.  You get minimal white noise when turning on the noise canceling.  One major upgrade from the QC15 is that if the battery dies you can still listen to music!

Overall, I am looking forward to a much smaller package in my carry-on and the ability to sleep through nice long flights.

Is traveling for work worth it?

As I embark on the third week of a business trip away from home and family, with what can feel like endless conferences, speaking engagements, and training sessions, I thought I would take a moment and reflect on life as a road warrior.  I never thought I would really use that term, but I think anyone who has known me for the past few years would say I am one.  The first thing to point out is that I have probably the most understanding and supportive wife on the planet.

NexentaStor 4.0 is released.. time for new storage at home and work!

After a wait of a few years, Nexenta has released the 4.0 version of their software-defined storage solution.  This long-awaited release was tested more than any other release I have seen from Nexenta.  As most know, I have worked for Nexenta for the last couple of years and got to see testing from the QA team through sales engineering, support, and some of the largest service providers.  I am looking forward to seeing customers running on the latest version.  A Community Edition is out as well!

Some of the new features of NexentaStor 4.0 are:

Simpler

  • Scaling to petabyte system capacities
  • Seamless handling of intermittently faulty devices
  • Guided wizards for system setup and configuration

Faster

  • Over 50% reduction in HA failover times
  • 4X faster performance
  • 512GB dynamic read cache per head
  • Native SMB 2.1 support

Stronger

  • Redesigned Auto-Sync replication
  • Illumos kernel integration
  • Deeper VMware, Windows, and OpenStack integration

Nexenta has been working hard to improve many areas of NexentaStor, including security, manageability, availability, reliability, TCO, and scalability.  NexentaStor 4.0 embodies all that hard work and shows particular improvement in operational performance and ease of administration.

Go to the site to get both the Enterprise and Community versions:

http://nexenta.com/products/nexentastor


The IT Skill Set SinkHole

I have been socializing this concern for quite some time now, and recently, as I started doing more interviewing for hiring, I have found that it is quite valid.  What is this concern, you ask?  The lack of mid-range IT talent.  The introduction of virtualization to the datacenter, and then to the small server room, has fundamentally changed the growth path for IT professionals.  Historically we had levels of IT staff with a growth path that looked like the one below.

Career Progression

We still have the help desk tech who can help out the end user, although even that role is being phased out with the use of virtual desktops; that person now needs to know the back-end architecture as well.  When a new graduate starts in IT, they need to walk in and learn the basics of an enterprise IT system, placing them firmly in the systems admin role.  They manage what already exists, but they do not know the inner workings or how to design the system.  They are the virtualization user.  With the explosion of cloud, even this position is getting lost to self-service portals like CloudStack and vCloud Director.  These systems are built by the IT professionals who grew up through the chain above, many of whom are now at the senior level.  These are also the same IT pros who are moving into management and away from hands-on everyday work.  Where does that leave us?

From systems administrator to systems engineer, there is a dire lack of positions that give administrators the chance to learn, grow, and hone their craft to the level of cross-functional knowledge.  Who will build the next large datacenter for your enterprise?  The public cloud says you should just put your data in the cloud and not worry about building your own.  Seems simple enough, but the service providers have the same issue your company does: a lack of trained mid- to high-level talent.

How do we fix this?  Start with the technology we choose for the datacenter.  Do your end users need to provision their own VMs?  Or could you have a trained mid-level staffer who knows the compute and storage requirements and then uses the portal to deploy the right solution?  Does your end user know the IOPS needed for that VM they just built from the portal?  Probably not, and when it is slow they will call and complain.  The systems in the datacenter need to be easy enough to manage to keep the staff efficient, but not so easy that the work is mind-numbing.

Build your IT solutions like a Lego kit: each building block is crucially important, be it the storage and the way disks are provisioned, or the compute and the number of CPU cycles.  If you make sure your staff knows how each piece fits with the next, you can grow those single pieces into a beautiful work of art, hopefully stopping the sinkhole that is the lack of knowledge and experience in the middle layers of IT professionals.


Are your SSDs the weakest link or is it your file system?

The latest storage trends rely on flash or solid state storage, which seems a great way to speed up applications, speed up virtual machines, and overall make your systems run faster; but what happens when that same fast SSD fails?  Jon Toigo does a good job explaining SSD failures here – http://goo.gl/zDXd2T.  The failure rate of SSDs, because of the cell technology involved, is a huge concern.  Many companies have looked for ways to solve this: adding wear leveling or cell care, or even over-provisioning unadvertised capacity so that new writes always have fresh cells to land in while old blocks are erased in the background.  All of this depends entirely on the drive manufacturer to save your disk.

Now the important question: did you get a choice, when you bought your enterprise storage, as to which manufacturer's SSDs were your “fast” drives?  Unlikely, and without that choice you won't know whether your drives will be the fast rabbit that never slows down and wins the race, or the one that stops by the side of the road and is easily overtaken by the tortoise.

This is a situation that a ZFS-based file system like Nexenta's can not only help solve, but can also make visible: you know exactly what you have and how to manage the life span and speed of your enterprise-class storage.  Nexenta is based on the ZFS file system and uses commodity drives, so the first problem, not knowing what drive you have, is instantly solved; you can use best-of-breed SSD or flash and replace drives as newer technology arrives.

The real secret sauce comes into play when you combine best-in-class SSD protection with a file system built to optimize the use of the SSD, opting to use DRAM as much as possible and isolating the reads and writes needed for normal usage.  ZFS uses the hybrid storage pool for all data storage, and it inherently separates the read cache from the write cache, each on its own SSD, so each drive can be selected specifically for its use case.

SSD wear is most commonly a problem for write operations; in ZFS, the ZIL, or ZFS Intent Log, handles these.  For ZIL drives it is recommended to use SLC (Single-Level Cell) drives or a RAM-backed SSD like ZeusRAM, because SLC drives have a much lower wear rate.  For an analysis of different types of SSD, look here – http://goo.gl/vE87s.  Only synchronous writes are written to the ZIL, and only after they are first written to the ARC (Adaptive Replacement Cache), the server's DRAM.  Once data blocks are written to the ZIL, a response is sent to the client, and the data is asynchronously written to spinning disk.

The writes from the client are not the only SSD writes.  In a tiered storage approach, blocks of data must be written to the read cache before they can be read from it.  This is also the case with ZFS hybrid storage pools; the differentiator is how often blocks are written to the L2ARC (Level 2 Adaptive Replacement Cache).  The L2ARC is normally placed on MLC or eMLC SSDs and is the second place the system looks for commonly used data blocks, after the ARC in DRAM.  Other file systems use a similar approach, but with a least recently used (LRU) algorithm, which does not account for blocks that are used frequently yet get pushed out when a large sequential read, from a backup for instance, shifts everything in the cache.  The algorithm behind the ARC and L2ARC maintains data blocks based on both most recently used and most frequently used data.  Specifics are found here – http://goo.gl/tIlZSv.
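As a concrete sketch of how the ZIL and L2ARC map onto physical devices, here is how one might attach them to an existing pool with the standard zpool command.  The pool name `tank` and the device names are hypothetical placeholders; this must run as root on a ZFS system, and your device paths will differ:

```shell
# Hypothetical pool and device names; adjust for your own system.
# Attach a low-wear SLC SSD (or RAM-backed SSD) as the ZIL / log device:
zpool add tank log c1t5d0

# Attach an MLC/eMLC SSD as the L2ARC read cache:
zpool add tank cache c1t6d0

# Tuning later is just as simple; a cache device can be removed
# if it is not earning its keep:
zpool remove tank c1t6d0

# Verify the resulting pool layout:
zpool status tank
```

Because log and cache devices are separate vdev classes, each tier can be sized and replaced independently as SSD technology improves, without touching the data vdevs.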

The way data moves to and from SSD with the ZIL and L2ARC matters not just for SSD wear but also for power consumption, which is paramount in the datacenter of the future.  This approach allows systems to be built with an all-SSD footprint and minimal power, or with slower drives for bulk capacity, while maintaining high performance.

In many ways the tortoise and hare analogy plays well here.  Leveraging the power of SSD, and the proper SSDs at that, gives you the sheer speed and lean nature of the hare while employing the economy and efficiency of the tortoise.  This, in a way, is the nature of ZFS: power and economy wrapped up in one neat package.  The real magic comes from the ability to tune the system up or down until it performs just the way you'd like, simply by adding or removing SSDs in either the ZIL or L2ARC roles.

There’s nothing inherently wrong with being a tortoise, but, to be a hare, well designed, and performing at peak efficiency, but also enduring for the entire race, really seems like the best way to go, doesn’t it?