Refresh your test and lab VMs without using snapshots

I am often building out machines whose configurations I want to change, but I would like a quick way to revert them to their original state. I could do this with snapshots, but I don't want to deal with snapshots growing large or forgetting that I took one. This is where the "Independent disk" comes into play. Cormac Hogan did a nice job summarizing independent disks on his blog, but his example really focuses on the backup scenario, and mine is slightly different. This is a lab, and I don't care about backups. I want a lab system that, no matter what a user does, I can put back the way I set it up with little to no effort, possibly even across multiple machines with a script. The user should still be able to make changes, and even reboot the system from within the OS or via VMware Tools integrated reboots. Here's how I did it…

Step 1.

Build the VM with all the virtual disks you would like and install an OS.  Keep the defaults as you add disks.  If you add additional disks before powering on, make sure they are set to the defaults as shown below.

[Screenshot: default disk settings in the Edit Settings dialog]

Step 2.

Make any configuration changes you need within the core OS.  If you want things on secondary disks to be static when reset, make those changes now also.

Step 3.

Power off the VM and open Edit Settings.  Go to each of the virtual disks and change its mode to “Independent – Nonpersistent”.

[Screenshot: disk mode set to Independent – Nonpersistent]
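For reference, the same change shows up in the VM's .vmx file as the disk's mode key. A minimal sketch, assuming the disk sits at SCSI 0:0 (adjust the device name for each of your disks):

```
scsi0:0.mode = "independent-nonpersistent"
```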

Step 4.

Power on the VM.

You can now make any changes you want to the VM, even allowing users to make changes that span multiple disks.  To reset the VM to the clean state that you built, simply go to vCenter and power cycle the VM.
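For the "multiple machines with a script" case, a power cycle (reset) is something you can automate. Here is a minimal sketch using pyVmomi; the vCenter hostname, credentials, and the `lab-` naming convention are all hypothetical, and assume your lab VMs share a name prefix:

```python
# Sketch: reset every lab VM in one pass so their
# independent-nonpersistent disks revert to the clean state.

def lab_vms(names, prefix="lab-"):
    """Pick out the machines to power-cycle by a (hypothetical) name prefix."""
    return sorted(n for n in names if n.startswith(prefix))

def reset_lab(host, user, pwd, prefix="lab-"):
    # pyVmomi is imported lazily so the pure helper above can be
    # used without a vCenter connection.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host=host, user=user, pwd=pwd)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        targets = lab_vms([vm.name for vm in view.view], prefix)
        for vm in view.view:
            if vm.name in targets:
                # Reset = power cycle, which discards all writes on
                # independent-nonpersistent disks.
                vm.ResetVM_Task()
    finally:
        Disconnect(si)

# Example call (hypothetical credentials):
# reset_lab("vcenter.example.com", "administrator@vsphere.local", "secret")
```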



Note: If you have any snapshots, you cannot change the disk mode.  You will need to delete all snapshots and consolidate before you can set this.

How to emulate 10 Gbps NIC in a VMware Fusion VM

I am working on deploying some new VMs to demo some of the latest Nexenta products, but I found one issue instantly.  The deployment requires 10 Gbps networking, and since it is all internal to my MacBook I assumed this would be easy.  Unfortunately, VMware Fusion 7 does not have a graphical way (that I can find) to change the NIC type.  It turns out this is very easy, so I thought I would share the process.  Here goes…

  1. Build the VM with whatever number of NICs you need.
  2. Power off the VM
  3. Quit VMware Fusion
  4. Go to the location of the virtual machines
  5. Right-click on the VM you want to edit
  6. Select “Show Package Contents”
  7. Right-click on the .vmx file and open with TextEdit
    • Make sure you open TextEdit's Preferences and uncheck “Smart Quotes”; if you don't, TextEdit will swap in curly quotes and corrupt the VMX file
  8. By default all the NICs will show as e1000, and you need to change them to vmxnet3.  Find this line: ethernet0.virtualDev = “e1000” and change the e1000 to vmxnet3
  9. Save the file
  10. Open Fusion and Start the VM
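If you do this often, the edit in step 8 is easy to script, which also sidesteps the smart-quotes problem entirely. A sketch in Python (the file path is whatever your VM bundle contains; run it only while Fusion is quit):

```python
# Sketch: switch every NIC in a Fusion .vmx file from e1000 to
# vmxnet3, avoiding TextEdit and its smart quotes altogether.

import re
from pathlib import Path

def set_vmxnet3(vmx_path):
    """Rewrite every ethernetN.virtualDev line to vmxnet3.

    Returns the number of NIC entries that were changed.
    """
    path = Path(vmx_path)
    text = path.read_text()
    # Match ethernet0.virtualDev = "e1000", ethernet1..., etc.
    new_text, count = re.subn(
        r'(ethernet\d+\.virtualDev\s*=\s*")e1000(")',
        r'\1vmxnet3\2',
        text)
    path.write_text(new_text)
    return count

# Example (hypothetical bundle path):
# set_vmxnet3("/Users/me/Virtual Machines/demo.vmwarevm/demo.vmx")
```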

That's it!

Testing the MyVMware iOS App

VMware has released a mobile client for their newly released MyVMware page.  For years, one of the biggest issues with VMware has been its confusing licensing and user management.  With the release of MyVMware, many of these issues have been resolved.  One thing I will enjoy is the ability to grab a license key directly from my phone when I need it.  After working for a reseller and now a vendor, both big VMware partners, I often need to test software, and it can be a pain to go grab a license key from the portal only to find I can't cut and paste and have to type the key in.  Now I can open the app, grab a key, and still type it in, but it is much quicker.  I took a few screenshots of the app and listed them below so you can get an idea of what it can do.

1. Start by logging in:

2. You then have to approve the EULA (Surprise!)

3. You will then see your profile

4. You now have to pick your folders (whatever ones you have created on the MyVMware website)

5. Once you go into the folder you can see the products under it.

6. Then select the product and click Next (you have to do that each time, which is kind of annoying) and you will see the license keys

Click the power icon at the top to log out, or the gear to set your refresh and timeout intervals.

vCenter Operations Gets a Facelift

VMware announced today at VMworld Europe the vCenter Operations Management Suite. Those of us who are familiar with the product set will see lots of significant changes, including the addition of vCenter Infrastructure Navigator 1.0 and a new user interface. The vCenter Ops Suite delivers an integrated performance, capacity, and configuration management solution for your vSphere environment, letting you continuously monitor for compliance and health. This release is the first integrated suite and allows seamless upgrades between Standard (Ops Manager only), Advanced (includes Chargeback), and Enterprise (includes Configuration Management). The suite is scheduled to be available for deployment in late 2011 or early 2012.

How vCenter Operations kept my Tier 1 app virtual

Virtualization admins have become the last line of defense when something is not working right. Networking teams will claim it is overuse of the hardware, developers will claim it is the hardware, and the storage team will claim they just provide disk. So how can a virtualization admin clearly show the relationships between different systems and hosts, and their needs for CPU, memory, network, and storage? The answers to all these questions already sit in the logs and data collected by vCenter, but without an easy way to interpret that data it is useless. In comes vCenter Operations.

vCenter Operations is a natural way for virtualization administrators to enter into a full-scope virtualization monitoring platform. As an integrator, we have found that each time we deploy a new or upgraded vSphere environment, adding Operations Standard to the kit gives the admins on site a great overview of their environment.

From the main dashboard you can see vCenter systems, datacenters, clusters, and drill all the way down to an individual VM. The next question is how this helped us with a Tier 1 app.

I was recently at a client site that was having significant speed issues in their SharePoint environment. As a multi-tier application, SharePoint needs both a web front end and a database back end. We got word that SharePoint was running slowly. Luckily we had just installed vCenter Operations, and when we checked the console we noticed that two of the ESX hosts were in the red category. After double-clicking on an ESX host, we saw that network usage was the issue, with 100% of the network capacity in use.

The great part was that, looking further down the ESX host's page, we found the Child Objects. With a single child object showing red, I was able to drill down to the exact machine.

Once we drilled down to the machine, we found that the SharePoint database server was using 100% of the network capacity. We then looked at the other ESX host showing red and found that it, too, had only a single machine with a red status: the SharePoint web front end. We backed out of the vCenter Operations console (which is conveniently integrated into the Virtual Infrastructure Client) and migrated the web server to the same ESX host as the SQL server. We gave vCenter Operations a couple of minutes to make sure the data was up to date, and we were back to a healthy green environment across the board.

Now that we knew the network load could impact the rest of the environment when the two servers were split, we simply set a DRS affinity rule that forces the two servers to always stay on the same ESX host. SharePoint search and document retrieval times dropped almost immediately. Needless to say, convincing the powers that be that a simple add-on product is sometimes worth its weight in gold beats hours upon hours of troubleshooting. The next option would have been to move SharePoint back to a physical environment, meaning new hardware costing a minimum of twice the price of a simple monitoring and correlation product.
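For readers who want to script the same keep-together rule rather than click through the client, here is a rough pyVmomi sketch. The cluster and VM names are hypothetical stand-ins for the SQL and web guests above, and this assumes an authenticated service instance `si`:

```python
# Sketch: add a DRS keep-together (affinity) rule so two VMs
# always run on the same host. Names here are illustrative only.

def affinity_rule_name(*vm_names):
    """Deterministic rule name so reruns can find the same rule."""
    return "keep-together-" + "-".join(sorted(vm_names))

def keep_together(si, cluster_name, vm_names):
    # pyVmomi is imported lazily so the pure helper above can be
    # used without a vCenter connection.
    from pyVmomi import vim

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == cluster_name)
    vms = [vm for vm in cluster.resourcePool.vm if vm.name in vm_names]

    rule = vim.cluster.AffinityRuleSpec(
        name=affinity_rule_name(*vm_names),
        enabled=True,
        vm=vms)
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
    # modify=True merges the new rule into the existing cluster config.
    cluster.ReconfigureComputeResource_Task(spec, modify=True)

# Example call (hypothetical names):
# keep_together(si, "Prod-Cluster", ["sharepoint-sql", "sharepoint-web"])
```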
