Archive for the ‘Nexenta’ Category

NexentaStor 4.0 is released… time for new storage at home and work!

After waiting a few years, Nexenta has released the 4.0 version of their software-defined storage solution.  This long-awaited release was tested more thoroughly than any other release I have seen from Nexenta.  As most of you know, I have worked for Nexenta for the last couple of years and got to watch the testing move from the QA team, through sales engineering and support, and on to some of the largest service providers.  I am looking forward to seeing customers running on the latest version.  A Community Edition is out as well!

Some of the new features of NexentaStor 4.0 are:

Simpler

  • Scaling to petabyte system capacities
  • Seamless handling of intermittently faulty devices
  • Guided wizards for system setup and configuration

Faster

  • Over 50% reduction in HA failover times
  • 4X faster performance
  • 512GB dynamic read cache per head
  • Native SMB 2.1 support

Stronger

  • Redesigned Auto-Sync replication
  • Illumos kernel integration
  • Deeper VMware, Windows, and OpenStack integration

Nexenta has been working hard to improve many areas of NexentaStor, including security, manageability, availability, reliability, TCO, and scalability. NexentaStor 4.0 embodies all that hard work and shows particular improvement in operational performance and ease of administration.

Go see the site to get both the Enterprise and Community editions:

http://nexenta.com/products/nexentastor

 

Are your SSDs the weakest link or is it your file system?

The latest storage trends rely on flash or solid state storage, which seems like a great way to speed up applications and virtual machines and make your systems run faster overall; but what happens when that same fast SSD fails? Jon Toigo does a good job explaining SSD failures here – http://goo.gl/zDXd2T. The failure rate of SSDs, because of the cell technology involved, is a huge concern. Many companies have looked for ways to solve this: adding wear leveling or cell care, or even including capacity that is not advertised just to have fresh cells for new writes while the old blocks are erased in the background. In other words, you are completely dependent on the drive manufacturer to save your disk.

Now the important question: did you get a choice, when you bought your enterprise storage, as to which manufacturer's SSDs were your "fast" drives? Unlikely, and without that choice you don't know whether your drives will be the fast hare that never slows down and wins the race, or the one that stops by the side of the road and is easily overtaken by the tortoise.

This is a situation that a ZFS-based file system like Nexenta can help not only solve, but also make transparent: you know exactly what you have and how to manage the lifespan and speed of your enterprise-class storage. Nexenta is based on the ZFS file system and uses commodity drives, so the first problem, not knowing which drive you have, is instantly solved; you can use best-of-breed SSD or flash and replace it as newer technology arrives.

The real secret sauce comes into play when you combine best-in-class SSD protection with a file system built to optimize the usage of the SSD, opting to use DRAM as much as possible and to isolate the reads and writes needed for normal usage. ZFS uses the hybrid storage pool for all data storage and inherently separates the read and write caches, each on its own SSD, so each device can be selected specifically for its use case.

SSD wear is most commonly associated with write operations; in ZFS these are handled by the ZIL, or ZFS Intent Log. For ZIL devices it is recommended to use SLC (Single-Level Cell) drives or a RAM-backed SSD such as the ZeusRAM, since SLC drives have a much lower wear rate. For an analysis of the different types of SSD, look here – http://goo.gl/vE87s. Only synchronous writes are written to the ZIL, and only after they are first written to the ARC (Adaptive Replacement Cache) in the server's DRAM. Once data blocks are written to the ZIL, a response is sent to the client, and the data is asynchronously written to the spinning disks.

Writes from the client are not the only SSD writes, though. In any tiered storage design, blocks of data must be written to the read cache before they can be read from it. This is also the case with ZFS and hybrid storage pools; the differentiator is how often blocks are written to the L2ARC (Level 2 Adaptive Replacement Cache). The L2ARC normally sits on MLC or eMLC SSDs and is the second place the system looks for commonly used data blocks, after the ARC in DRAM. Other file systems take a similar approach but use a least recently used (LRU) algorithm. LRU does not account for blocks that are used frequently: one large sequential read, from a backup for instance, and those hot blocks are pushed out. The algorithm behind the ARC and L2ARC keeps blocks based on both how recently and how frequently they are used. Specifics are found here – http://goo.gl/tIlZSv.
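To make the difference concrete, here is a toy Python sketch. It is not ZFS's actual ARC implementation, just an illustration of the idea: with a plain LRU cache, one large sequential scan pushes out the hot blocks, while a policy that also tracks access frequency keeps them.

    from collections import Counter, OrderedDict

    CACHE_SIZE = 4

    def lru_sim(accesses):
        """Plain LRU: always evict the block touched least recently."""
        cache = OrderedDict()
        for block in accesses:
            if block in cache:
                cache.move_to_end(block)
            else:
                if len(cache) >= CACHE_SIZE:
                    cache.popitem(last=False)   # evict the least recently used block
                cache[block] = True
        return set(cache)

    def freq_aware_sim(accesses):
        """Toy frequency-aware policy (very loosely in the spirit of ARC, which
        balances recency and frequency): evict the block with the fewest accesses."""
        cache, hits = set(), Counter()
        for block in accesses:
            hits[block] += 1
            if block not in cache:
                if len(cache) >= CACHE_SIZE:
                    cache.remove(min(cache, key=lambda b: hits[b]))
                cache.add(block)
        return cache

    # Hot blocks A and B are read over and over, then a backup scans S1..S9 once.
    workload = ["A", "B"] * 10 + ["S%d" % i for i in range(1, 10)]
    print("LRU keeps:       ", lru_sim(workload))         # only the tail of the scan survives
    print("Freq-aware keeps:", freq_aware_sim(workload))  # A and B survive the scan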

The way that data is moved to and from SSD with the ZIL and L2ARC matters not just for SSD wear but also for power consumption, which is paramount in the datacenter of the future.  This approach allows systems to be built with an all-SSD footprint and minimal power, or with slower drives for the large-capacity tier, while maintaining a high level of performance.

In many ways, the tortoise and hare analogy plays well in what we've been discussing. Leveraging the power of SSD, and the proper SSDs at that, lets you combine the sheer speed and lean nature of the hare with the economy and efficiency of the tortoise. This, in a way, is the nature of ZFS: power and economy wrapped up in one neat package. The real magic comes from the ability to tune the system up or down until it performs just the way you'd like. This is easily done simply by adding or removing SSDs in either the ZIL or the L2ARC role, as sketched below.
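As a rough illustration, the sketch below shells out to the standard zpool commands to grow or shrink those tiers. The pool name (tank) and the device names are placeholders I made up for the example, so substitute your own and treat this as a sketch rather than a procedure.

    # Rough sketch: resize the ZIL (log) and L2ARC (cache) tiers of a ZFS pool
    # by calling the standard zpool commands. "tank" and the cXtYdZ device
    # names are placeholders for illustration only.
    import subprocess

    def zpool(*args):
        subprocess.run(["zpool", *args], check=True)

    zpool("add", "tank", "log", "mirror", "c1t4d0", "c1t5d0")  # mirrored SLC/ZeusRAM devices as the ZIL
    zpool("add", "tank", "cache", "c1t6d0", "c1t7d0")          # MLC SSDs as L2ARC
    zpool("remove", "tank", "c1t7d0")                          # log and cache devices can be removed again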

There’s nothing inherently wrong with being a tortoise, but being a hare that is well designed, performs at peak efficiency, and endures for the entire race really seems like the best way to go, doesn’t it?

EMC tells their SDS story, but is it really theirs alone?

EMC today announced their latest entry into the Software Defined Storage (SDS) market, ViPR. They’ve coined it the “World’s first Software Defined Storage Platform” (http://www.emc.com/about/news/press/2013/20130506-03.htm). I have to say, I am a little put off by this initial push and the need to be first when they are clearly not. I could list a few that claimed to be an SDS platform first, DataCore and Nexenta among them, and when looking at some of the capabilities, I think IBM beat them out with the SVC Director.

Read more

My Old School Nintendo Setup to demo VDI at VMware PEX

Those of you who made it out to VMware Partner Exchange probably got to see the demo in the Nexenta booth.

A lot of the virtualization social media geeks got to swing by, Chris Wahl from WahlNetworks included.  This was not an overly complex demo, but I wanted something fun to show off VDI sessions.  Using the real-time performance metrics shown in NexentaVSA for View, we can actually see the systems running.

The install was rather easy, with one caveat.  VMware View does not recognize the Classic USB NES Controller for PC that we picked up from Amazon.

The great part is that on the Lenovo ThinkPad I was using as a client, a quick registry change makes View recognize it.  The process is detailed here; very similar to restricting access, it lets you add unknown USB devices so they can be shared into your View session.
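For reference, here is a rough sketch of that kind of change using Python's winreg module. The registry path, the IncludeVidPid value name, and the example vendor/product IDs are assumptions based on View's USB filtering policy settings, so verify them against the VMware documentation for your client version before trying it.

    # Hypothetical sketch: add an unrecognized USB controller to the View client's
    # include list. The key path, the "IncludeVidPid" value name, and the example
    # vendor/product IDs are assumptions -- check VMware's USB filtering
    # documentation for your client version, and run this as an administrator.
    import winreg

    VID_PID = "vid-0079_pid-0011"   # placeholder; read your controller's IDs from Device Manager
    KEY_PATH = r"SOFTWARE\Policies\VMware, Inc.\VMware VDM\Client\USB"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        winreg.SetValueEx(key, "IncludeVidPid", 0, winreg.REG_SZ, VID_PID)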

The hardware for the servers was not very intense: a couple of Dell 1950s running ESX 5.1, with a Supermicro server running Nexenta as shared storage presenting 1 TB of NFS.  For the VDI hosts I put in two Cisco UCS C200 M2s.  By adding a single STEC ZeusIOPS, a single spinning disk to house the desktops, and 96 GB of RAM, we were able to build a rather robust VDI setup, allowing for about 100 desktops, all deployed with NexentaVSA for View.

By adding the JNes Nintendo emulator to the Windows 7 base image and using VMware View linked clones, we have our own mini arcade.

 

Finally talking about my side project: vCloud and VDI in a Box

Over the past few weeks, I have been working on a side project with one of the Nexenta partners to prepare for the Intel Developer Forum in San Francisco this week. The partner, Cirracore, is based in Atlanta, works heavily with Equinix and Telx, and offers a few managed private and public cloud solutions. One of these solutions is based on the Intel Modular Server chassis (IMS). If you have not checked out this chassis, it is probably one of the most engineered but least publicized pieces of hardware I have seen in years. First I will give you an idea of what the chassis is made of, then cover the two solutions we are releasing this week: vCloud in a Box and VDI/SMB in a Box. Read more

How to make VDI easier to deploy? Take out the SAN, Take out the steps with Nexenta VSA for View

Deploying VDI should be an easy task, and with a proof of concept or trial it normally is.  The problems start to show up when you move from a small deployment to an enterprise rollout: disk I/O, throughput, management, and performance monitoring start to have a significant impact.  Nexenta, after working with VMware, has released its second product as a way to ease that transition.  NexentaVSA for View (NV4V) is first an orchestration engine and second a performance tool.

 

Read more

Off to a new adventure…

Today marks my last day as a solutions architect for a reseller.  I have been working on and off for resellers since college, and very infrequently have I seen a technology that I thought could be a true game changer.  The last one I can think of was VMware, and as many of you are aware, that has played out well for me over the past few years.  While I have learned so much working for both Convergence Technology (@convergencetech) and Clearpath Solutions (@clearpathsg) in the reseller world, I got an offer too good to turn down.  Make sure to go follow them both, as they have a lot of smart guys (and girls).  So, after much thought, it was time to trust my gut again.

Starting on Monday I will be the Federal / Mid-Atlantic Solutions Architect / Sales Engineer for Nexenta (@nexenta).  For those of you not familiar with Nexenta, they have been listed as the fastest-growing storage startup ever.  The interesting thing about that is that they are actually a software company.  Nexenta was started in 2004 and came out of stealth in 2008.  As a small, growing company focused on the virtualization, big data, and storage space, I felt they were a great fit.  NexentaStor is the flagship product, built on the ZFS filesystem.  It allows everyone from SMBs to enterprises to use commodity hardware with NexentaStor layered on top as fully functional enterprise-class SAN and NAS units.  The feature set is much larger than I would want to put in this one post, so expect more to come as I get ramped up.

The great part is that I will still be able to be heavily involved with VMware.  I will be joining as the third member of Nexenta’s virtualization sniper team (I just made the name up, but I like it), alongside Theron Conrey (@theronconrey) and Tom Howarth (@tom_howarth), to help Nexenta continue to grow its relationship and integration with virtualization products.

I am really excited and looking forward to this next step, my first with a manufacturer.  Thank you to all who have supported me along the way, and here’s to a great new adventure.
