PowerShell Debate: Write-Verbose As Opposed To Writing Comments

Recently, I was working out some problems in a script I'm developing in my Azure environment, and a thought occurred to me while I was working: is commenting in my scripts a waste of time?

While I was building my script, I was using a lot of Write-Verbose messages so that I could monitor the commands as they executed by running the script with the -Verbose parameter.  This was especially helpful in my If/Else and Try/Catch statements, so I could see which path my script was heading down and make sure it was doing what it was supposed to do.  I also added some extra bits here and there to make sure that my variables were being populated as I expected them to be.  Very quickly I found that I was duplicating the work I was putting into commenting my code with my Write-Verbose statements.  This is what led me to the thought that perhaps comments aren't really the way to go.
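
To show the pattern concretely, here's a minimal sketch (the function, server, and service names are made up for illustration) of how those Write-Verbose messages end up sitting exactly where the explanatory comments would have gone:

function Restart-LabService {
    [CmdletBinding()]
    param (
        [string]$ComputerName,
        [string]$ServiceName
    )

    Write-Verbose "Attempting to restart $ServiceName on $ComputerName"
    Try {
        Get-Service -Name $ServiceName -ComputerName $ComputerName -ErrorAction Stop |
            Restart-Service -ErrorAction Stop
        Write-Verbose "Restart of $ServiceName succeeded"
    }
    Catch {
        Write-Verbose "Restart failed: $($PSItem.Exception.Message)"
        Throw
    }
}

# Nothing extra is displayed on a normal run; add -Verbose to see the trail
Restart-LabService -ComputerName 'Server01' -ServiceName 'Spooler' -Verbose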

If you've coded a robust function, there should almost never be a time when an admin has to dig into your scripts.  This means that the only time someone is ever going to look at it is when something goes wrong.  Instead of tearing through your code to figure out at what point things fall apart, you can far more quickly and easily execute the script with -Verbose and see the last command that ran before everything hit the fan.  After all, if it's a script that has worked well for a long time, it's more likely that something changed in your environment than that the script just broke for no good reason.

I still use commenting, but really more for my own sanity, such as tracking which curly bracket is closing which statement, and of course when adding my Comment Based Help!  But for the most part, I'm seeing less reason to use commenting over Write-Verbose, because of the usability that the latter supplies.  I'd actually like to hear what other people think about this.  Let me know in the comments below!

PowerShell – Automating Server Builds In Azure – Pt. 1 – Basic Builds

During this scripting session, I am working on a system that is running PowerShell 5.0 (February 2015 release).

I started with a very simple goal towards learning Azure and Desired State Configuration: to be able to rapidly deploy a series of machines required for giving demonstrations at user group meetings and internal company discussions.  In order to get my Azure environment to this point, I figured that I would need to learn the following:

  • Build a single deployment using a one-line command.
  • Build a series of constants to feed the one-line command, ensuring consistency in my deployments.
  • Build a series of basic parameterized inputs that give the command the information needed to deploy the server into a service group for a given purpose (IIS server, print server, etc.).
  • Expand the scope of the script to build a specified number of servers into a service group, or add a number of servers to an existing service group.
  • Build a second command to tear down an environment cleanly when it is no longer required.

Once complete, I will stand up a DSC environment to explore the possibility of leveraging my provisioning scripts to deploy my standardized configurations to a number of different deployments based on the designated purpose.

Before I begin, it should be noted that I had an issue with the New-AzureQuickVM cmdlet returning an error regarding the CurrentStorageAccountName not being found.  A quick Googling took me to Stephen Owen’s blog on how to resolve this error.  You’ll want to read up on this post and update your Azure subscription info if necessary.  You’ll likely have to.
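
For reference, the gist of the fix is to point your subscription at one of your storage accounts.  Here's a hedged sketch (the subscription and storage account names are placeholders; the linked post has the full details):

# List the storage accounts available in the current subscription
Get-AzureStorageAccount | Select-Object -ExpandProperty StorageAccountName

# Tell the subscription which storage account to use for VM disks
Set-AzureSubscription -SubscriptionName 'MySubscription' -CurrentStorageAccountName 'mystorageaccount'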

The ‘One-Liner’

Building a one-line command is pretty easy, provided that you have some necessary things configured before deploying.  For my purposes, I’ll be using the New-AzureQuickVM cmdlet.  At a bare minimum for a Windows deployment, you’ll need the following bits of information:

  • The specification that you want to deploy a Windows server (-Windows).
  • The location of your preferred Azure datacenter (-Location).
  • The name of the image you wish to deploy (-ImageName).
  • The name of the server you wish to deploy (-Name).
  • The name of the Azure Cloud Service to deploy the server into (-ServiceName).
  • The user name of the administrator account (-AdminUserName).
  • The password of the administrator account (-Password).
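
Put together, the assembled command looks roughly like the sketch below, using the values we'll land on later in this post; '<ImageName>' stands in for the long image-name string we haven't looked up yet:

New-AzureQuickVM -Windows -Location 'West US' -ImageName '<ImageName>' -Name 'TestSrv' -ServiceName 'LWINerd' -AdminUserName 'LWINAdmin' -Password 'b0b$y3rUncle'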

But before we can assemble the command, we need to gather some information first, such as the values for Location and ImageName.  Getting the location is fairly straightforward.  Using the Get-AzureLocation cmdlet, you can get a listing of the datacenters that are available globally.

Get-AzureLocation
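
If you just want the location names themselves, you can trim the output down (a quick sketch, assuming the output objects expose a Name property, as the classic Azure module's do):

Get-AzureLocation | Select-Object -ExpandProperty Name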

For our purposes, we'll use the West US datacenter location.  Now, to look up the image name, we'll use the Get-AzureVMImage cmdlet, since we're not using any custom images.

Get-AzureVMImage

Now you’ll find that when you run this, it’s going to pull all of the images available through Azure; 470 at the time of this writing to be exact!  So we’re going to try to pare this down a bit.

(Get-AzureVMImage).where({$PSItem.Label -like "*Windows Server 2012 R2*"}) | Measure-Object

Well, almost; that still leaves more images than we need.  But if we whittle the Label filter down a bit further, to "Windows Server 2012 R2 Datacenter"…

Ah!  Better.  Now that we've gotten down to the base server images, we can take a look and find that, just like in the image gallery of the New Virtual Machine wizard in Azure, there are three differently dated versions available to choose from.  For the purposes of our implementation, we'll just grab the latest image.

What we're looking to grab from here is the long string of characters that is the image name, which fulfills the ImageName requirement for our command.  So let's go ahead and snag that bit for our $BaseImage variable, and add our $Location variable as well.

$Location = 'West US'
$BaseImage = (Get-AzureVMImage).where({$PSItem.Label -like "*Windows Server 2012 R2 Datacenter*" -and $PSItem.PublishedDate -eq "2/11/2015 8:00:00 AM" })
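
If you'd rather not hard-code the publish date, a hedged alternative is to sort on the PublishedDate property and grab the newest match instead:

# Alternative sketch: take the most recently published matching image
$BaseImage = (Get-AzureVMImage).where({$PSItem.Label -like "*Windows Server 2012 R2 Datacenter*"}) |
    Sort-Object -Property PublishedDate -Descending |
    Select-Object -First 1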

So now we can build our command.  But I’d like to not have my commands run off the screen, so let’s do some splatting!

$Location = 'West US'
$BaseImage = (Get-AzureVMImage).where({$PSItem.Label -like "*Windows Server 2012 R2 Datacenter*" -and $PSItem.PublishedDate -eq "2/11/2015 8:00:00 AM" })
$AzureArgs = @{
 Windows = $True
 ServiceName = 'LWINerd'
 Name = 'TestSrv'
 ImageName = $BaseImage.ImageName
 Location = $Location
 AdminUserName = 'LWINAdmin'
 Password = 'b0b$y3rUncle'
}
New-AzureQuickVM @AzureArgs
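
One caveat worth flagging: the admin password is sitting in the script in plain text.  A small, hedged tweak is to prompt for it at run time instead; Read-Host returns a plain string here, which is what the -Password parameter expects:

# Prompt for the password rather than storing it in the script
$AzureArgs.Password = Read-Host -Prompt 'Admin password for the new VM'
New-AzureQuickVM @AzureArgs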

And now we see that we have a new service in Azure…

And a new VM is spinning up!

And now we have our base script for building VMs in Azure.  Next post, we’ll be looking at creating availability sets through PowerShell, as well as assigning our VMs to specific virtual network address spaces, bringing in some parameterization and more!

***EDIT*** – For some reason I was previously under the impression that you had to create an availability set if you wanted to have multiple machines co-existing in the same cloud service.  This is not the case, but I'll be exploring the creation of availability sets in my code next week nonetheless.

Exploring Azure and Desired State Configuration Through PowerShell

I’ve decided to test out the possibility of using an Azure environment for carrying the tools necessary for some of my presentations.  The goal is to build out an Azure server instance that stores a number of DSC configurations that I can use to spin up different environment scenarios on the fly.  It also gives me a good excuse to finally get heads-down and learn Azure and DSC.

The other side of the coin is to see if Azure itself can actually be utilized as an affordable alternative to building out a virtual lab at home.  So while my posts may have less code for the next few weeks, I’m hoping that all of the work will pay off in some creative coding that will include some examples of spinning up resources in Azure and then applying DSC templates to them for your digestion.

For the moment, I'm using a free trial of Azure, which is available by signing up on the Azure website.  At the time of writing this, I was granted a 30-day trial with $200 in credits to use as I saw fit.

There's a lot of good information out there regarding configuring your first Azure environment; I'm leaning on a handful of blogs and guides, along with some light reading for DSC.

Of course, as I’m new to Azure and DSC, I’ll be happy to have people point out any gaps or improvements I can make, so please feel free to comment!

PowerShell – Use SCCM Automated Reports to Create Software Update Rollups

A while back, I showed off how you could use PowerShell to create Software Update packages and deployments for SCCM.  I was asked about my process for grabbing outstanding updates and adding them into the package.  I’ve since automated this method, and thought I’d share an update with the class.

SCCM gives you the ability to create subscriptions that automatically generate reports and either email them or deposit them on a file share.  For the purpose of rollup creation, I use 'Compliance 3 – Update group (per update)', found under 'Software Updates – A Compliance'.  The process is simple: create a software update group containing all of the updates you track, then right-click on the report, click Create Subscription, and fill out the necessary fields.  Make sure you select the report to be delivered by Windows File Share, and to overwrite an existing file with a newer version.

Once you’ve completed the process and SCCM has generated its first report, you can use it to mine for the data you want.  First, we’ll need to import the CSV file.

Import-Csv "\\Server01\Patching\Critical and Security Updates All Workstations.csv"

As you will very quickly find, you’re not going to immediately get the data you want.  This is because the canned report adds 6 rows of header data that’s completely unusable for our purposes.  Not to worry though!  It can easily be remedied using Get-Content and re-exporting the data you want to a new CSV.

 Get-Content "\\server01\Patching\Critical and Security Updates All Workstations.csv" | Select-Object -Skip 6 | Out-File "\\server01\Patching\OutstandingWksUpds.csv"
 (Import-csv -Path "\\server01\Patching\OutstandingWksUpds.csv")
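
As an aside, if you'd rather not write out an intermediate file at all, a hedged one-pass sketch of the same idea is to skip the junk rows and convert the remaining text straight to objects:

# Skip the 6 header rows and parse the rest in a single pipeline
$report = Get-Content "\\server01\Patching\Critical and Security Updates All Workstations.csv" |
    Select-Object -Skip 6 |
    ConvertFrom-Csv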

Either way, this gives us a little something more to work with.  But we're dealing with a few hundred lines of information, so let's thin this out a bit.  For our purposes, I'm going to just look for updates for Windows components and Office on my workstations.  So we'll filter out the following as a base:

  • Updates that are missing on 0 machines
  • Updates that have Server in the description
  • Updates that are applicable to applications we don't wish to include (Lync, SQL, OneDrive, etc.).

So now we’ll have a filter statement that looks like this:

(Import-Csv -Path "\\Server01\Patching\OutstandingWksUpds.csv").where({$PSItem.Details_Table0_Missing -ne 0 -and $PSItem.Details_Table0_Vendor0 -notlike "*Server*" -and $PSItem.Details_Table0_Vendor0 -notlike "*Lync*" -and $PSItem.Details_Table0_Vendor0 -notlike "*Skydrive*" -and $PSItem.Details_Table0_Vendor0 -notlike "*Onedrive*" -and $PSItem.Details_Table0_Vendor0 -notlike "*Sharepoint*"}) | Select-Object Details_Table0_Vendor0,Details_Table0_Missing
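
If that one-liner is a bit much to read, the same filter can be restructured so the exclusions live in a single array; this is just a sketch of the identical logic, matched against one regex built from the list:

# Keep the application exclusions in one place and build a single match pattern
$exclude = 'Server', 'Lync', 'Skydrive', 'Onedrive', 'Sharepoint'
$pattern = ($exclude | ForEach-Object { [regex]::Escape($PSItem) }) -join '|'

(Import-Csv -Path "\\Server01\Patching\OutstandingWksUpds.csv").where({
    $PSItem.Details_Table0_Missing -ne 0 -and
    $PSItem.Details_Table0_Vendor0 -notmatch $pattern
}) | Select-Object Details_Table0_Vendor0,Details_Table0_Missing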

There's a lot packed into that filter, but once you've filtered out the things you don't want, you'll wind up with a nicely ordered list.

Now that we have the updates, we can incorporate them into a Software Update Group using some of my previous code.

Important note: If you attempt to run the first line of the script while you're mapped to your SCCM PSDrive, the Out-File will fail with the error "Cannot open file because the current provider (AdminUI.PS.Provider\CMSite) cannot open a file."  So make sure you're not in your CM PSDrive until you're ready to execute your CM cmdlets.
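
If you want the script to guard against that on its own, one hedged option is to hop over to a filesystem drive just for the file operations and then hop back:

# Make sure we're on a filesystem drive before the Out-File, then return to wherever we were
Push-Location C:\
Get-Content "\\Server01\Patching\Critical and Security Updates All Workstations.csv" |
    Select-Object -Skip 6 |
    Out-File "\\Server01\Patching\OutstandingWksUpds.csv"
Pop-Location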

# Strip the 6 header rows from the SCCM report and re-export a clean CSV
Get-Content "\\Server01\Patching\Critical and Security Updates All Workstations.csv" | Select-Object -Skip 6 | Out-File "\\Server01\Patching\OutstandingWksUpds.csv"

# Import the clean CSV and filter down to the updates we actually want to roll up
$update = (Import-Csv -Path "\\Server01\Patching\OutstandingWksUpds.csv").where({$PSItem.Details_Table0_Missing -ne 0 -and $PSItem.Details_Table0_Vendor0 -notlike "*Server*" -and $PSItem.Details_Table0_Vendor0 -notlike "*Lync*" -and $PSItem.Details_Table0_Vendor0 -notlike "*Skydrive*" -and $PSItem.Details_Table0_Vendor0 -notlike "*Onedrive*" -and $PSItem.Details_Table0_Vendor0 -notlike "*Sharepoint*"}) | Select-Object Details_Table0_Vendor0,Details_Table0_Missing

# Load the Configuration Manager module and move to the site's PSDrive
$PowerShellPath = "\\Server01\C$\Program Files\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1"
$CMSiteCode = "A00"
Import-Module $PowerShellPath
CD ("$CMSiteCode" + ":")

# Create the update group, then add each outstanding update from the report to it
$UpdateGroupArgs = @{
    'Name'        = 'Workstations Update Rollup'
    'Description' = 'Created with Powershell!'
    'UpdateID'    = '16802522','16801092','16803970'
}
New-CMSoftwareUpdateGroup @UpdateGroupArgs

ForEach ($upd in $update) {
    Add-CMSoftwareUpdateToGroup -SoftwareUpdateName $upd.Details_Table0_Vendor0 -SoftwareUpdateGroupName $UpdateGroupArgs.Name
}

Execute the script, and then check your Software Update Groups and you’ll find you have a basic rollup package to test and deploy to your systems.
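
If you'd like to verify from the same session (a quick check, assuming you're still sitting on the CM site drive), you can pull the new group back by name:

Get-CMSoftwareUpdateGroup -Name 'Workstations Update Rollup'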

Using the same methods, you can streamline your process for creating new Software Update groups for your monthly deployments as well.  Nothing beats a set of eyeballs to review the list and make sure you’ve got what you want to push, but importing a list of outstanding updates and adding them straight to an update group sure as heck beats selecting them individually through the console.