End of Year SMB Tech Survey

Every year we see survey results posted by Gartner and just about every known trade rag saying what the next year will hold: what will be hot, and what will be relegated to the trash bin.

This year, I have been asked to pass along a survey to my readers in conjunction with Ivy Worldwide. Ivy is a social media firm that I have worked with for the past few years. The great part of the survey is that I get to publish the results right here when I get them back, not just from my readers but from readers of many blogs around the world, and you also get a chance to win $250. Who doesn’t like free money?

Click on the graphic to be taken to the survey, and I look forward to sharing the results.

SMB Survey

Top 25 Autocorrects of 2012

Last year I was able to grab the Top 25 Autocorrects from www.damnyouautocorrect.com to share with friends who could not view it. This year I went through and found some of my favorites. There are always tons of lists each year, so hopefully you enjoy these. Bear in mind this is PG-13 at a minimum, so do not read it in public or if you are underage.

If you missed last year’s, click here.

Read more

2012: the year that storage died

Sure, the title was meant to be inflammatory, but at the same time I am seeing one of the most dramatic shifts in enterprise storage in the last 10 years. Some history would probably help here. I began my career in IT 15 years ago; in 1997 major companies ran their entire businesses on either a mainframe or a midrange system, and green screens ruled the world. We barely had email, and it was surely not a collaboration suite. At the time, I was a systems admin and spent days and often nights working with the large direct-attached storage systems for either the midrange or the Windows environments. We slowly moved into shared storage, often for a single system. Our Exchange server had a shared set of disks for the cluster, and the same goes for SQL, but we didn’t dare move the midrange systems (AS/400 at the time) to a shared storage solution.

Around 2001, I was insistent with my management that we should have a shared solution for both open systems (Windows and Linux) and our iSeries, but got amazing pushback. The more we virtualized, the more traction I was able to get. It probably helped that many of the midrange systems were being replaced by monolithic Sun and Windows boxes, so the IBM purists had less traction. About this same time, you saw IBM start to transform itself into a services and software company, the move that Sun never realized it needed to make. With the vast growth of virtualization came the rise of EMC and then startups like NetApp. Over the next five years you would see shared storage become the accepted, go-to platform. As our data growth has exploded, so has the size of the arrays we use to store the massive amounts of data.
So if we have massive data growth, how can I say that storage has died? The answer is simple: I can’t. But what I can say is that the way we address storage has changed. We are reverting back to the direct-attached storage days, with a few exceptions. In the direct-attached days, the big reason for keeping the drives local was that the data was all controlled by the software. It was software-defined storage, just no one called it that. Today we are seeing the same move back to software-defined storage. The major cloud players have all found that users want a choice of where their data goes. vCloud Director now has storage profiles. OpenStack already lets you have tiers of storage. Object-based storage is leading the way to moving data between entities without the need for a set structure. Hadoop and Gluster are saying that where the data lives does not matter and that you should concentrate on how you process it.
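
To make the tiers-of-storage idea concrete, here is a minimal sketch, assuming the OpenStack python-cinderclient library is available; the credentials, endpoint, and backend name are placeholders, not anything from a real environment:

```python
# Minimal sketch: defining a storage "tier" in OpenStack Cinder and placing a
# volume on it. Credentials, auth URL, and backend name are placeholders.
from cinderclient.v2 import client

cinder = client.Client("admin", "secret", "demo",
                       "http://keystone.example.com:5000/v2.0")

# Create a volume type that represents the tier and pin it to a backend.
gold = cinder.volume_types.create("gold")
gold.set_keys({"volume_backend_name": "ssd-pool"})

# Any volume requested with this type lands on the matching backend.
cinder.volumes.create(size=100, name="app-data", volume_type="gold")
```
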
So where does that leave us? We need to look at hardware vendors, right? After all, they control the drives, and we want to make sure our data integrity stays high and that we can control where we place our data. I argue that we should only be looking at the hardware vendors to give us a place to put data, not a way to control it. The software-defined storage of today allows me to add data integrity, portability, and speed with whatever hardware I want. We have 4TB drives spinning faster than most personal computer drives, and solid state drives that come with 5-year warranties and in sizes approaching a TB. Now we need the likes of HP, Dell, and IBM to press the manufacturers, the Sanminas and Quantas among others, to produce for density and environmental factors. HP announced with the Gen8 servers that the hardware RAID controllers on their servers could hold more disks and process at a faster speed. But what about when I want to control the data? Where is my dense, dumb JBOD? Dell has started to trend toward higher density with the 3020 60-drive JBOD. Well, almost: they say you have to have the 3260 to manage the JBOD. Seems like a conflict to me.
If we can start to get all our data controlled by software, on the hardware we want, with the best density rates, we can keep moving forward to a point where storage as we know it may very well die.

Finally talking about my side project: vCloud and VDI in a Box

Over the past few weeks, I have been working on a side project with one of the Nexenta partners to prepare for the Intel Developer Forum in San Francisco this week. The partner, Cirracore, based in Atlanta, works heavily with Equinix and Telx and offers a few managed private and public cloud solutions. One of these solutions is based on the Intel Modular Server Chassis (IMS). If you have not checked out this chassis, it is probably one of the most engineered but least publicized pieces of hardware I have seen in years. First I will give you an idea of what the chassis is made of, then cover the two solutions we are releasing this week: vCloud in a Box and VDI/SMB in a Box. Read more

Follow Along with VMworld via the vExpert Daily

For the second year in a row, I have been given the distinct pleasure of anchoring the VMware Communities TV lineup at VMworld. If you are not able to make VMworld, why not spend less than an hour a day and hear some of the leaders in the virtualization community talk about everything that is announced and all there is to see at the conference? If you have more time, keep watching for some great tech talks throughout the day. Our format is simple: a very casual conversation each day starting at 10 AM PST with myself and 3-4 VMware vExperts. The topics will be varied and will probably cover everything from the latest VMware releases, to the best releases from vendors, to how the welcome reception and evening events went. Take a look below at the listing of the vExperts currently signed up, and make sure to click on their names to follow them on Twitter. You never know who else might stop by, though.

To follow along with all the VMworld Communities TechTalks, bookmark http://vmwaretechtalks.com/. The redirect will be updated as soon as the site goes live.

Read more

How to Create “Add to Calendar” Links within a blog

I am hosting a live podcast in a few weeks (blog announcement upcoming) and I wanted to find a way to let people add the date and time to their calendars with a simple click instead of having to enter the information themselves. After a lot of searching, all the posts seemed to say you had to add an event calendar plugin, which would then be yet another plugin on the WordPress site, and things like that. Not exactly what I was looking for. I wanted something as simple as this:

Here’s a reminder to read TheSolutionsArchitect.net when the podcast announcement comes out:
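
In case it is useful to anyone building the same thing, below is a minimal sketch of one way to generate such a link, assuming the Google Calendar event-template URL format; the title, dates, and details are placeholders rather than the actual podcast information.

```python
# Minimal sketch: build a "click to add to calendar" link for a blog post.
# Assumes the Google Calendar event-template URL (the calendar "render"
# endpoint); the title, dates, and details below are placeholders.
from urllib.parse import urlencode

event = {
    "action": "TEMPLATE",
    "text": "Live podcast - TheSolutionsArchitect.net",  # event title
    "dates": "20130115T180000Z/20130115T190000Z",         # start/end, UTC
    "details": "Reminder to tune in to the live podcast.",
    "location": "http://thesolutionsarchitect.net",
}

# Keep the slash between the start and end dates unencoded.
url = "https://www.google.com/calendar/render?" + urlencode(event, safe="/")

# Paste the printed anchor tag into the WordPress editor in HTML view.
print('<a href="{0}" target="_blank">Add to Calendar</a>'.format(url))
```
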

Read more

How to make VDI easier to deploy? Take out the SAN and take out the steps with Nexenta VSA for View

Deploying VDI should be an easy task, and with a proof of concept or trial it normally is. The problems start to show up when you move from a small deployment to an enterprise rollout. Problems like disk I/O, throughput, management, and performance monitoring start to have a significant impact. Nexenta, after working with VMware, has released its second product as a way to ease that transition. NexentaVSA for View (NV4V) is first an orchestration engine and second a performance tool.

 

Read more

HP gives the All-In-One iMac look with interchangeable parts (with video)

HP has released its Z1 workstation. All-in-one devices have been a double-edged sword, but HP touts the Z1 as the first true all-in-one workstation. With the ability to self-service parts, run full Xeon processors and ECC memory, and even run mirrored 2.5″ drives, the Z1 gives you full workstation flexibility without the footprint of a tower chassis. A few nice features include USB 3.0, an additional display port for dual monitors, the 27″ glossy monitor, and a locking case. The last is most important if you want the flexibility of using the included internal USB port: you could secure a USB drive inside the case and still have the portable data of a USB drive.

Below is a short video of the system showing how you can open and service the unit.

 

Want to buy that out-of-print book? HP preserves and shares some of the great works of literature.

HP released the latest of its cloud offerings today, the result of HP’s printing group trying to figure out how to print more in an era of less paper. HP Labs initially looked at a small project that digitized some old books. The thought was that while a publisher will not print a single copy of a book, with current digital presses and a collection of digitized books a printer now can. The move into the cloud was simply a way to support the project on a large scale. Bookprep.com lets you search for, find, and preview rare, out-of-print, and hard-to-find books on every topic imaginable.

Read more

Did HP dedupe the efforts of EMC and leapfrog past with StoreOnce Catalyst?

HP Discover started today, and with it comes the barrage of new product announcements. One of the first is for the latest upgrade of the HP StoreOnce deduplication portfolio. EMC is clearly the targeted competitor. The DataDomain and Avamar portfolios are considered by many to be best of breed, and looking at target-side deduplication, few have rivaled the DataDomain portfolio.

Just this past week, DataDomain released the DD990, at the time the fastest-throughput archive disk device on the market with a 31 TB/hr throughput rate. This is all done with a single-head unit, not the much-touted Global Deduplication Array (GDA) that was released just last year. This same unit will give you remote replication to 270 sites while completing up to 570 concurrent backup jobs. This by itself is a great product, and one that will work in even the largest environments, but what happens if you also want source-side deduplication?

In comes Avamar. In my experience, it is the single backup application where, when customers are asked if they like their backup solution, they actually respond “Yes.” Avamar requires a separate grid, and if you want to replicate from a remote site you may end up with a design similar to ones I built a few years ago: you deploy a client on an endpoint, back up to a small Avamar grid or Avamar Virtual Edition (AVE), and then replicate that branch office back to a main grid at the centralized datacenter.

This all sounds great unless you want to have all your data in a single place and in a single backup solution. If you have been in EMC circles, you have heard about the possibility of Avamar backing up to a DataDomain appliance for years, but as of yet they are still separate systems.

In comes StoreOnce. Today’s announcement for StoreOnce comes in two parts: the first is performance and the second is usability. For performance, HP went directly after the DD990 with the latest rendition of the B6200 Backup System. This new solution supports up to 100 TB/hour of backups and up to 40 TB/hour of restores. Personally, I think that second number is just as important, if not more so.

So what did HP do to increase usability? Federated Deduplication is HP’s answer to Avamar. While still requiring integration with either Symantec OST or DataProtector 7, StoreOnce Catalyst allows an application server to handle a local backup to any of the approved backup targets and then replicate, without rehydrating the data, to a similar device at the datacenter. You now get source-side deduplication with Catalyst at a remote site and target-side dedupe with the main StoreOnce system.

See it in action here: http://goo.gl/gNltj

The question that remains is whether this will really take a bite out of the enterprise backup market, or whether it will just force EMC to make the move of combining Avamar with DataDomain. I guess it remains to be seen, but the performance alone would make me think twice before purchasing a new backup device.

 

 
