Blog

PowerCLI – Get a List of VMs from a List of Datastores

I’ve found plenty of examples for pulling a list of VMs from a Datastore in PowerCLI. My requirement, however, was this: given a list of LUNs, determine which of their Datastores did not contain any VMs. Running a single-Datastore script once for each Datastore would not have been any faster than browsing each Datastore in the GUI.

I have a programming background from college, and I still found the learning curve with PowerShell to be steep. Adding PowerCLI into the mix immediately upon diving in did not help. Some documentation and script examples are already obsolete due to deprecated cmdlets and cmdlet options in PowerCLI; specifically, the Get-Datastore cmdlet frequently returned this error: "Passing values to this parameter through a pipeline is deprecated and will be removed in a future release." The website http://psvmware.wordpress.com/tag/get-vms-that-reside-on-datastore/ provided an essential piece of code, but I could not pass an array of values directly into the Get-Datastore cmdlet via a pipe "|":

 

Get-Content "TextFile" | (Get-Datastore -Name 'datastore_name').Extensiondata.Vm | %{(Get-View -Id $_.toString()).name}

 

I used a ForEach loop to pull the Datastore names out of the file one at a time and pass each one individually to the Get-Datastore cmdlet. Here is my modification in full:

 

$Result = ForEach ($Datastore in (Get-Content $ListofDatastores)) { 

(Get-Datastore -Name $Datastore).Extensiondata.Vm|%{(Get-View -Id $_.toString()).name}}
Echo $Result
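
For reference, the same per-name lookup can also be written as a pipeline using ForEach-Object and Get-VM's -Datastore parameter. This is only a minimal sketch of an alternative, not the method used in the script below, and behavior may vary between PowerCLI versions:

Get-Content $ListofDatastores | ForEach-Object {
# Resolve each datastore by name, then list the names of the VMs registered on it
Get-VM -Datastore (Get-Datastore -Name $_) | Select-Object -ExpandProperty Name
}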

 

Here is the entire script. It will request all required information to connect to vCenter, produce the desired output, and save the output to a text file.

<#
This Script will take a text file of line separated Datastores and output the VMs they contain.
#>

Param(
[Parameter(Mandatory=$true, Position=0, HelpMessage="vCenter Server?")]
[string] $vCenter,
[Parameter(Mandatory=$true, Position=1, HelpMessage="User DOMAIN\USER")]
[string] $UserName,
[Parameter(Mandatory=$true, Position=2, HelpMessage="Password?")]
[Security.SecureString] $Password,
[Parameter(Mandatory=$true, Position=3, HelpMessage="Enter .txt File Name of the List of Datastores")]
[string] $ListofDatastores
)

# Convert the SecureString password to plain text so it can be passed to Connect-VIServer
$pw = [Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($Password))

Connect-VIServer -Server $vCenter -User $UserName -Password $pw

$Result = ForEach ($Datastore in (Get-Content $ListofDatastores)) {
# Print the datastore name as a header, then the names of the VMs it contains
Echo ""
Echo $Datastore
Echo ""
(Get-Datastore -Name $Datastore).Extensiondata.Vm|%{(Get-View -Id $_.toString()).name}
}
Echo $Result
Echo ""
Echo $Result | Out-File VMsOnDatastore.txt
Echo "Results stored in VMsOnDatastore.txt"
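
Saved as, say, Get-VMsOnDatastores.ps1 (the file name and values below are only illustrative), the script can be run from a PowerCLI session as follows; the SecureString password is prompted for interactively since it is a mandatory parameter:

.\Get-VMsOnDatastores.ps1 -vCenter vcenter01.lab.local -UserName "LAB\admin" -ListofDatastores .\datastores.txt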

VNX 5000 Series File Pool Deployment

Automatic Volume Manager (AVM) has always been too simplistic, and never flexible enough when I’m deploying storage for File on an EMC VNX SAN. Manually allocating block storage to the Data Mover isn’t an intuitive task, and contains too many decision points for a novice engineer. This guide takes the EMC VNX Best Practices Guide and breaks it down into a generic tutorial for deploying a File Pool for basic workloads.

 

Click Here to Download the Guide in PDF Format. 

Delete Unisphere Logs

Taken from EMC Support Primus 284357

I’ve been looking for this solution for a long time, since I started working with the VNX array. A knowledge base article was finally published, and I immediately wanted to help share the information. 

 

Problem: Unable to delete alerts from the Alerts page in Unisphere for VNX. Whenever an alert is deleted, another one with a different time or date appears instead. This is an extreme annoyance when a large number of alerts are generated as a result of an outage, e.g. the naviagent respawning and generating several alerts per minute.

 

Without the ability to easily highlight multiple alerts in the GUI, this CLI method is a relief.

 

Fix:

1. SSH/telnet into the primary Control Station as nasadmin or root. We will be creating a file for the user nasadmin.

2. Change directory to /nas/log/webui. This directory rsyncs with another log directory under /nbsnas.

[nasadmin@CS0 /]# cd /nas/log/webui

3. Delete all alert_log files. I would suggest using tar to back them up first (see the example after these steps).

[nasadmin@CS0 /]# rm alert_log*

4. Recreate the alert_log file and apply the appropriate ownership/permissions.

 [nasadmin@CS0 /]# touch alert_log

[nasadmin@CS0 /]# chmod 664 alert_log

[nasadmin@CS0 /]# chown nasadmin:nasadmin alert_log
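
For the backup suggested in step 3, a simple tar archive is enough; the destination path here is just an example and can be anywhere with free space:

[nasadmin@CS0 /]# tar -czvf /home/nasadmin/alert_log_backup.tar.gz alert_log*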

VNX 5000 Series Install Preconfigure Upgrade

The VNX 5100, 5300 and 5500 arrays with dual control station setups currently do not fully support installation, pre-configuration and upgrade procedures through the GUI. This guide is meant to help those who are not experienced with deploying a VNX array and will mainly focus on the dual control station setup.

 

Click Here to Download the Guide in PDF Format

Big Changes

For years after my visit to France, I didn’t think I would be happy unless I lived in Europe. I found Boston in 2009 and fell in love with the city immediately; it’s not exactly Europe, but I’ve been visiting consistently ever since. I found a studio apartment in Brighton, a neighborhood in Boston a few minutes from downtown. It’s a perfect location for me. I hope to (tentatively) move in on April 1st.

Things have been rocky at my job for a year now. A hostile manager, contract re-compete, and finding my preferred niche have all contributed to me looking for a new position. I’ve accepted a position as a delivery consultant for AdvizeX out of an office in Burlington, MA starting April 4th. My (personal) focus will be virtualization implementation with Microsoft and VMware products. With virtualization software, I can consolidate hundreds of physical servers into a finite few for customers.

I’m pursuing many dreams, and making them a reality. I’m leaving Maryland permanently. I’ve lived in my home state my entire life (with the exception of college), and it’s time to see the world. Moving to a big city has been a goal of mine for as long as I can remember. I’m achieving professional success as I pursue a job with few compromises and more desirable responsibilities. All of this is the result of years of hard work; I’ve earned it.

“Come on, guys, this is our time!”

ESXi Multi-Homed Networking Produces HA Confusion

Problem:

When turning on or reconfiguring HA for a node, the configuration hangs for several minutes and then bombs.

 

Possible Errors:

“HA agent has an error : cmd addnode failed for primary node: Internal AAM Error – agent could not start. : Unknown HA error”

vmkping cannot reach the management network of another host or its own

 

vSphere OS: ESXi 4.0 or 4.1

 

Tried These Steps First:
kb.vmware.com/kb/1001596

kb.vmware.com/kb/1007234

 

My Scenario:

I had been using ESX 4.1 for a couple of months when I decided to make the switch to ESXi before my environment went into production. The idea behind my network configuration was to use as few subnets as possible, since this was only a two-host infrastructure.

OLD Network:

Network Switch

Host1 Management Network 10.0.10.50/16

Host2 Management Network 10.0.10.100/16

 

Private Data iSCSI switch

Host1 10.0.10.55/24 10.0.11.55/24 10.0.10.56/24 10.0.11.56/24

Host2 10.0.10.105/24 10.0.11.105/24 10.0.10.106/24 10.0.11.106/24

Physical Switch 10.0.10.1/24

MD3000i 10.0.10.15/24 10.0.11.15/24 10.0.10.16/24 10.0.11.16/24

 

When using ESX, this configuration worked fine without hiccups. All traffic designed to use the Service Console went out and came back in without issue. After migrating a host to ESXi, HA would no longer configure; all other cluster features including DRS and vMotion would still function properly.

 

Explanation: The most obvious difference between ESX and ESXi is the port type each hypervisor uses for its management network (ESX uses a Service Console port; ESXi uses a VMkernel port). While HA is designed to automatically use the management network for all communication, it’s important to realize that all ports are listening for traffic. This means that if your VMkernel ports are on the same subnet, traffic may go out one NIC and come back on another. On ESX, management network traffic goes out the Service Console and comes back on the Service Console; on ESXi in a multi-homed network configuration, however, it’s possible for management network traffic to come back on a different NIC, and that causes problems.

In the situation above, the management networks and the iSCSI traffic are technically on different subnets, but the /16 mask on the management VMkernel port means its local-subnet route in the routing table covers both networks. This is demonstrated in the console output below:

#esxcfg-route -l

VMKernel Routes:

Network      Netmask         Gateway        Interface
10.0.11.55   255.255.255.0   Local Subnet   vmk2
10.0.10.55   255.255.255.0   Local Subnet   vmk1
10.0.10.50   255.255.0.0     Local Subnet   vmk0
default      0.0.0.0         10.0.0.1       vmk0

 

But why, in this example, does only HA fail to configure? The answer is in the HA health check script, which was updated in ESXi 4.0. The script runs every 30 seconds (by default) and whenever HA is configured on a node. The script is set to fail HA configuration in certain multi-homed network environments ("a feature, not a bug"). This is by design, to ensure stability when sending out heartbeats: in a production environment you can’t afford to have heartbeat packets coming in on the wrong NIC, because if a series of packets is lost, you may end up with a whole lot of powered-down VMs.

Solution:

I reconfigured the iSCSI network as follows:

NEW Network:

Network Switch

Host1 Management Network 10.0.10.50/16

Host2 Management Network 10.0.10.100/16

 

Private Data iSCSI switch (according to Dell documentation)

Host1 10.0.11.55/24 10.0.12.55/24 10.0.11.56/24 10.0.12.56/24

Host2 10.0.11.105/24 10.0.12.105/24 10.0.11.106/24 10.0.12.106/24

Physical Switch 10.0.11.1/24

MD3000i 10.0.11.15/24 10.0.12.15/24 10.0.11.16/24 10.0.12.16/24

 

While the iSCSI traffic isn’t completely outside the management network’s /16 subnet, this new configuration was sufficient for the health check script. VMware best practice suggests the two networks should use entirely different IP addressing schemes, e.g. iSCSI traffic on a 192.168.x.x network.
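
After reconfiguring, a quick sanity check from each host’s console is to confirm that the other host’s management address still answers through the VMkernel stack and that the routing table looks the way you expect, then re-enable HA. The address below follows the layout above and is only illustrative:

#vmkping 10.0.10.100

#esxcfg-route -l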

 

This solution was found by me, Daniel Helm. My routing table theory was confirmed by a Dell tech rep in a reproduced environment, and the HA health check script explanation was given by a Dell VCDX.