As I embark on the third week of a business trip away from home and family, filled with what can feel like endless conferences, speaking engagements, and training sessions, I thought I would take a moment to reflect on life as a road warrior. I never thought I would really use that term, but I think anyone who has known me for the past few years would say I am one. The first thing to point out is that I probably have the most understanding and supportive wife on the planet. Continue reading “Is traveling for work worth it?”
After a wait of a few years, Nexenta has released the 4.0 version of its software-defined storage solution. This long-awaited release was tested more than any other release I have seen from Nexenta. As most know, I have worked for Nexenta for the last couple of years and got to see testing all the way from the QA team through sales engineering, support, and some of the largest service providers. I am looking forward to seeing customers running on the latest version. A community edition is out as well!
Some of the new features of NexentaStor 4.0 are:
- Scaling to petabyte system capacities
- Seamless handling of intermittently faulty devices
- Guided wizards for system setup and configuration
- Over 50% reduction in HA failover times
- 4X faster performance
- 512GB dynamic read cache per head
- Native SMB 2.1 support
- Redesigned Auto-Sync replication
- Illumos kernel integration
- Deeper VMware, Windows, and OpenStack integration
Nexenta has been working hard to improve many areas of NexentaStor, including security, manageability, availability, reliability, TCO, and scalability. NexentaStor 4.0 embodies all that hard work and shows particular improvement in operational performance and ease of administration.
Go see the site to get both the Enterprise and Community versions.
I have been socializing this concern for quite some time now, and recently, as I started doing more interviewing for hiring, I have found that it is quite valid. What is this concern, you ask? The lack of mid-range IT talent. The introduction of virtualization to the datacenter, and then to the small server room, has fundamentally changed the growth path for IT professionals. Historically, we had levels of IT staff with a growth path like the one below.
We still have the help desk tech who supports the end user, although that role is being phased out by virtual desktops; that person now needs to know the back-end architecture as well. When new graduates start in IT, they walk in and learn the basics of an enterprise IT system, placing them firmly in the systems administrator role. They manage what already exists, but they do not know the inner workings or how to design the system. They are virtualization users. With the explosion of cloud, even this position is getting lost to self-service portals like CloudStack and vCloud Director. These systems are built by the group of IT professionals who grew up through the chain above, many of whom are now at the senior level. These are also the same IT pros who are moving into management and away from hands-on everyday work. Where does that leave us?
Between systems administrator and systems engineer there is a dire lack of positions that give administrators the chance to learn, grow, and hone their craft to the level of cross-functional knowledge. Who will build the next large datacenter for your enterprise? The public cloud says you should just put your data in the cloud and not worry about building your own. That seems simple enough, but the service providers have the same problem your company does: a lack of trained mid- to high-level talent.
How do we fix this? Start with the technology we choose for the datacenter. Do your end users really need to provision their own VMs? Or could a trained mid-level staffer who knows the compute and storage requirements use the portal to deploy the right solution? Does your end user know the storage IOPS needed for the VM they just built from the portal? Probably not, and when it is slow they will call and complain. The systems in the datacenter need to be easy enough to manage to keep the staff efficient, but not so easy that the work is mind-numbing.
Build your IT solutions like a Lego kit: each building block is crucially important, be it the storage and the way disks are provisioned, or the compute and the number of CPU cycles. If you make sure your staff knows how each piece fits with the next, you can grow those single pieces into a beautiful work of art, hopefully stopping the sinkhole that is the lack of knowledge and experience in the middle layers of IT professionals.
The latest storage trends rely on flash or solid-state storage, which seems a great way to speed up applications and virtual machines and, overall, to make your system run faster; but what happens when that same fast SSD fails? Jon Toigo does a good job explaining SSD failures here – http://goo.gl/zDXd2T. The failure rate of SSDs, because of the cell technology involved, is a huge concern. Many companies have looked for ways to solve this: adding wear leveling, cell care, or even extra capacity that is not advertised, just to have spare cells to take new writes while old blocks are erased in the background. All of this depends entirely on the drive manufacturer to save your disk.
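As a rough illustration of why that hidden extra capacity matters, here is a toy Python sketch (my own simplified model, not any vendor's actual firmware): each cell endures a fixed number of program/erase cycles, and spreading writes over a hidden spare area delays the first cell failure.

```python
def writes_until_first_failure(visible_cells, spare_cells, endurance):
    """Toy model: firmware steers every write to the least-worn
    cell in the whole pool (advertised cells + hidden spare area).
    Returns how many writes land before any cell reaches its
    program/erase endurance limit."""
    wear = [0] * (visible_cells + spare_cells)
    writes = 0
    while max(wear) < endurance:
        wear[wear.index(min(wear))] += 1  # ideal wear leveling
        writes += 1
    return writes

# Same advertised capacity; the second drive hides 20% extra cells.
no_spare = writes_until_first_failure(100, 0, endurance=100)
with_spare = writes_until_first_failure(100, 20, endurance=100)
# The over-provisioned drive absorbs ~20% more writes before
# its first cell wears out.
print(no_spare, with_spare)
```

Real firmware is far more involved (garbage collection, write amplification, bad-block remapping), but the proportionality is the point: more spare cells, more total writes before wear-out.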
Now the important question: did you get a choice, when you bought your enterprise storage, of which manufacturer's SSDs would be your “fast” drives? Unlikely, and without that choice you cannot know whether your drives will be the fast hare that never slows down and wins the race, or the one that stops on the side of the road and is easily overtaken by the tortoise.
This is a situation that a ZFS-based file system like Nexenta's can not only help solve, but one where it can tell you exactly what you have and how to manage the lifespan and speed of your enterprise-class storage. NexentaStor is built on the ZFS file system and uses commodity drives, so the first problem, not knowing what drive you have, is instantly solved: you can use best-of-breed SSD or flash, and replace it as newer technology arrives.
The real secret sauce comes into play when you combine best-in-class SSD protection with a file system built to optimize the use of the SSD, opting to use DRAM as much as possible and to isolate the reads and writes needed for normal usage. ZFS uses the hybrid storage pool for all data storage, and it inherently separates the read and write caches, each on its own SSD, which should be selected specifically for that use case.

SSD wear is most commonly associated with write operations; in ZFS, these are handled by the ZIL, or ZFS Intent Log. For ZIL devices it is recommended to use SLC (Single Level Cell) drives or RAM-based SSDs like ZeusRAM, since SLC drives have a much lower wear rate. For an analysis of different types of SSD, look here – http://goo.gl/vE87s. Only synchronous writes are written to the ZIL, and only after they are first written to the ARC (Adaptive Replacement Cache), i.e., the server's DRAM. Once data blocks are written to the ZIL, a response is sent to the client, and the data is asynchronously written to spinning disk.

Writes from clients are not the only SSD writes, though. In any tiered storage design, blocks of data must be written to the read cache before they can be read from it. This is true of ZFS hybrid storage pools as well; the differentiator is how often blocks are written to the L2ARC (Level 2 Adaptive Replacement Cache). The L2ARC is normally placed on MLC or eMLC SSDs and is the second place the system looks for commonly used data blocks, after the ARC in DRAM. Other file systems take a similar approach, but they typically use a least recently used (LRU) algorithm. LRU does not account for blocks that are used frequently: a single large sequential read, from a backup for instance, can shift them all out of the cache. The algorithm used for the ARC and L2ARC accounts for these blocks, retaining data based on both the most recently and the most frequently used data.
Specifics are found here – http://goo.gl/tIlZSv.
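To see why frequency awareness matters, here is a minimal Python sketch (the cache classes and workload are my own toy illustration, not ZFS internals) comparing a pure LRU cache with a simple frequency-aware cache when a hot working set is periodically interrupted by a large sequential scan, such as a backup:

```python
from collections import Counter, OrderedDict

class LRUCache:
    """Evicts the least *recently* used block."""
    def __init__(self, size):
        self.size, self.data = size, OrderedDict()
    def access(self, block):
        hit = block in self.data
        if hit:
            self.data.move_to_end(block)
        else:
            self.data[block] = True
            if len(self.data) > self.size:
                self.data.popitem(last=False)  # drop oldest
        return hit

class FrequencyCache:
    """Toy stand-in for ARC's frequency awareness: evicts the
    least *frequently* used block instead of the least recent."""
    def __init__(self, size):
        self.size, self.data, self.freq = size, set(), Counter()
    def access(self, block):
        self.freq[block] += 1
        hit = block in self.data
        if not hit:
            self.data.add(block)
            if len(self.data) > self.size:
                victim = min(self.data, key=lambda b: self.freq[b])
                self.data.remove(victim)
        return hit

def run(cache, accesses):
    return sum(cache.access(block) for block in accesses)

# Hot set of 8 blocks, hit repeatedly, interrupted five times by a
# 100-block sequential scan (the "backup" that flushes an LRU cache).
hot, scan = list(range(8)), list(range(100, 200))
workload = (hot * 5 + scan) * 5 + hot * 5

lru_hits = run(LRUCache(16), workload)
freq_hits = run(FrequencyCache(16), workload)
print(lru_hits, freq_hits)
```

Each scan completely evicts the hot set from the LRU cache, forcing it to re-warm, while the frequency-aware cache keeps the hot blocks resident and scores noticeably more hits on the same workload.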
The way data is moved to and from SSD with the ZIL and L2ARC matters not just for SSD wear, but also for power consumption, which is paramount in the datacenter of the future. This approach allows systems to be built with an all-SSD footprint and minimal power, or with slower drives for large capacity, while maintaining a high level of performance.
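A back-of-envelope calculation shows the shape of that power argument. The wattages below are rough assumptions of my own, not vendor measurements; the point is the comparison between getting performance from many fast spindles versus from a small SSD cache in front of fewer, slower capacity disks.

```python
# Rough per-drive power assumptions (watts), for illustration only.
DRIVE_WATTS = {"15k_hdd": 10.0, "7200_hdd": 8.0, "ssd": 3.0}

def pool_watts(drives):
    """drives: mapping of drive type -> count; returns total draw in watts."""
    return sum(DRIVE_WATTS[kind] * count for kind, count in drives.items())

# Performance from spindle count vs. performance from an SSD cache
# in front of fewer, slower capacity disks.
all_fast_spindles = pool_watts({"15k_hdd": 48})
hybrid_pool = pool_watts({"ssd": 4, "7200_hdd": 24})
print(f"{all_fast_spindles:.0f} W vs {hybrid_pool:.0f} W")
```

Even with generous assumptions for the spinning disks, the hybrid layout draws a fraction of the power of a spindle-count design, before you even consider cooling.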
In many ways, the tortoise-and-hare analogy plays well in what we've been discussing. Leveraging the power of SSDs, and the proper ones at that, lets you harness the sheer speed and lean nature of the hare while employing the economy and efficiency of the tortoise. This, in a way, is the nature of ZFS: power and economy wrapped up in one neat package. The real magic comes from the ability to tune the system up or down until it performs just the way you'd like, simply by adding or removing SSDs in the ZIL or L2ARC roles.
There’s nothing inherently wrong with being a tortoise, but being a hare that is well designed, performs at peak efficiency, and also endures for the entire race really seems like the best way to go, doesn’t it?
My regular readers know I decided a while ago to expand my blog beyond the purely technical to other things, including my motorcycle. Since I have been spending a lot of time traveling for work, I decided it might be nice to post short, sweet reviews of the books I have been reading as well. Here's the first; as I finish more books I will put new posts up.
Dan Brown’s latest book, Inferno, follows a familiar path for Dan Brown fans: a respected scientist gets caught up in an unsuspected adventure. This time we travel to Italy and follow the theme of Dante’s Inferno. The hardback copy runs 480+ pages, so this is not a quick read at all. In his normal form, the first ~100 pages are rather slow, but it then picks up. This is not going to be the book you take for the quick weekend beach getaway, unless you plan to get a sunburn while reading it. I consider myself a reasonably fast reader and it still took me quite a few hours. He does a good job of developing the characters in the beginning, so you start to get a good feel for each one, or at least you think you do. There are some great twists in the characters around page 300, and again around page 400. Overall a pretty good read and, in my opinion, better written than both Digital Fortress and The Lost Symbol, though still not quite as good as his go-to, The Da Vinci Code.
I started the Team Beachbody Combat program this past week, thanks to Chris and Julie Colotti and their group, Virtual Fitness. Beachbody uses its own program scheduler on the SuperGym site. This is fine, but I did not want to have to go to that site every day to look up which workout I was supposed to be doing. Currently there is no way to sync the calendar from SuperGym to your iPhone via Google Calendar or even Outlook. That was not going to work for me, so I did some research and came up with a way to get the schedule onto my iPhone. Hopefully it will help more of you too. Here are the steps:
Every once in a while you come across a small item that seems potentially frivolous but at the same time very handy. The Seagate GoFlex Satellite Mobile Wireless Storage is in fact one of those things.
It is only 500 GB, which is pretty small for a portable hard drive nowadays, especially when it is so easy to pick up a 128 GB USB flash drive, but this little guy has lots of cool features. The one I wanted most was the internal wifi hotspot. You simply fill up the drive via drag and drop, just like any other USB hard drive; then, when you unplug the fully charged drive (yes, it has a 5-hour battery life too), it can start its own wifi hotspot. You can only connect to the device itself, but once you do, you can see the pictures, files, and movies you loaded, and stream directly from it. My plan is to leave it turned on in my bag in the overhead bin on the airplane and stream to my iPad, which has a lot less space. If 500 GB is not enough for you, Seagate has just released a 1 TB version, which is $189 on Amazon now. It also allows streaming to three devices, where the 500 GB model allows only one, but I did not need that and 500 GB was more than enough for me. Continue reading “Wireless Streaming to my iPad and Computer, yes please!”
I was fortunate to spend the last couple of days at the Charlotte VMware User Summit. For those of you not familiar, the User Summits, or Conferences, are larger regional, all-day conferences for the VMware User Group. I happen to be one of the leaders of the Washington DC group and have been heavily involved in the Potomac Regional VMware User Conference coming up next week. If you are in the area and have not registered, you should, using this link: http://www.vmug.com/p/cm/ld/fid=1742. These summits are a great way to interact, both professionally and often socially, with some of the brightest and most innovative minds in the virtualization community. Continue reading “Charlotte VMware User Summit and a little restaurant review”
EMC today announced their latest entry into the Software Defined Storage (SDS) market, ViPR. They’ve coined it the “World’s first Software Defined Storage Platform” (http://www.emc.com/about/news/press/2013/20130506-03.htm). I have to say, I am a little put off by this initial push and the need to be first when they are clearly not. I could list a few that claimed to be an SDS platform first, DataCore and Nexenta among them, and looking at some of the capabilities, I think IBM beat them out with the SVC Director.
Performance and scale testing of virtual desktop infrastructures has always been a challenge. The known standard has been the testing suite from Login VSI, and today they released their 4.0 product. I have used Login VSI quite a bit in the past and was given the opportunity to try a pre-release of the product, and I have to say that, as always, it has not disappointed. I thought about writing a nice long description, but they provided a nice bullet list that I did not have to write, so here's the plagiarism part of my post.
- Improved ease of installation
The test image footprint of Login VSI has been reduced by almost 90%. This makes the tool not only easier to install, but also easier to integrate and deploy. Centralization of management, updates and logging makes the use of Login VSI more efficient than ever. Direct Desktop Launch (DDL) mode enables large-scale testing with minimal infrastructure.
- Improved ease of test creation
The new intuitive and workflow oriented user interface of Login VSI 4.0 offers step-by-step test creation and wizard based test configuration for all important brokers and languages. The new workload editor introduces a new meta language which makes the customization of workloads very transparent and efficient. The new benchmarking mode enforces strict testing standards, providing industry standard results that are objective, comparable and repeatable.
- Improved test realism
The duration of the standard workload loops has been increased from 14 to 48 minutes. The way segments and applications start has also been improved to better reflect real-world user behaviour. The datasets used in the workloads now offer 1000 different documents per type, more and larger websites, and a video library in every format, all to ensure real-world variety in data usage. The execution of the workloads is improved through the introduction of phasing, allowing for real-world production user scenarios.
- Improved test insight
The new dashboard offers real-time test feedback, including progress, launched and active sessions, elapsed time and time left of the test in progress. The industry standard index VSImax has been further refined, enriching scalability results (max number of users), with objective baseline performance results (independent of tested load). Automated reporting with out of the box report ready graphs for all used settings, response times, and other data enhances the level, and choice, of information generated by the Login VSI analyzer.
Here's a nice gallery of screenshots for you as well.