Generating Azure Storage Tokens On the Fly With PowerShell

As I talked about in last week’s blog post, it’s important to ensure that the files you store in blob storage are secured from public eyes. But how do you allow your automation to access them when needed? That’s where a Shared Access Signature (SAS) token comes into play.

A SAS token is essentially an authorized URI that grants the person or object using it rights to access the object that you are otherwise concealing from the world. You can specify the amount of time that the URI is valid for; the protocol that is allowed; and the specific permissions to the object (read, write, delete). Once the time has elapsed, the URI is no longer valid and the object is not accessible.
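To make that concrete, here’s the general shape of a blob SAS URI. This is illustrative only; the account, container, blob, and signature values are placeholders:

```
https://<account>.blob.core.windows.net/<container>/<blob>
    ?sv=<storage service version>
    &st=<start time, UTC>
    &se=<expiry time, UTC>
    &sr=b          (resource type: b = blob)
    &sp=r          (permissions: r = read)
    &spr=https     (allowed protocol)
    &sig=<signature>
```

Everything after the `?` is the token itself; without it, a request to the blob URL on a private container is denied.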

Let me show you how this works!

After we’ve logged into Azure and set the appropriate subscription context, we need to get the resource group and storage account that our blob object lives in:

PS BlogScripts:> $StorageAccount = Get-AzureRmStorageAccount -ResourceGroupName 'nrdcfgstore' -Name 'nrdcfgstoreacct'

Once we’ve got our storage account, we can acquire the storage account key, just as we did in the last post:


$StorageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $StorageAccount.ResourceGroupName -Name $StorageAccount.StorageAccountName)[0]

And then once we have our key, we can get the storage context and access our container:


$StorContext = New-AzureStorageContext -StorageAccountName $StorageAccount.StorageAccountName -StorageAccountKey $StorageKey.Value
$Containers = Get-AzureStorageContainer -Context $StorContext -Name 'json'

And now we can get our object inside of the container:

 $TargetObject = (Get-AzureStorageBlob -Container $Containers.Name -Context $StorContext).where({$PSItem.Name -eq 'AzureDSCDeploy.json'})

And finally, we can get our SAS token URI. Note that I’m using HttpsOnly for the protocol, r (read-only) for the permission, setting an immediate start time, and limiting the validity window to one hour with the ExpiryTime parameter. This ensures that the object will only be accessible over HTTPS, and only for one hour after the command is run.


$SASToken = New-AzureStorageBlobSASToken -Container $Containers.Name -Blob $TargetObject.Name -Context $StorContext -Protocol 'HttpsOnly' -Permission r -StartTime (Get-Date) -ExpiryTime (Get-Date).AddHours(1) -FullUri

So by comparison, if I tried to access the direct URL of the object, this is what I’ll get:

However, with my SAS Token URL, I can successfully read the file:

And we’re done!

“So where is this useful in automation?” you may ask. Well I’ll be showing you exactly how next week when we take the code that we’ve built for the last couple of weeks and use it to deploy an Azure template via Azure automation.

See you then!

Managing Azure Blob Containers and Content with PowerShell

I do a lot of work in Azure writing and testing ARM templates.  Oftentimes I deal with a lot of parameters that need to access resources that already exist in Azure: things such as Azure Automation credentials, Key Vault objects, and so on.  To streamline my testing process, I’ll often create an Azure runbook to run the deployment template, pulling in the necessary objects as they’re needed.

Of course, this requires putting the template in a place that’s secure, and that Azure Automation can easily get to it.  This means uploading my templates to a location, and then creating a secure method of access.  This week, I’ll show you how to do the former process – with the latter coming next week.  Then later on, I’ll be walking you through how to create a runbook to access these resources and do your own test deployments!

First, let’s log in to our AzureRM instance in PowerShell and select our target subscription.  Once we’re done, we’re going to get our target resource group and storage account:

$Subscription = 'LastWordInNerd'
Add-AzureRmAccount
$SubscrObject = Get-AzureRmSubscription -SubscriptionName $Subscription
Set-AzureRmContext -SubscriptionObject $SubscrObject

$ResourceGroupName = 'nrdcfgstore'
$StorageAccountName = 'nrdcfgstoreacct'

$StorAcct = Get-AzureRmStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAccountName

Now that we have our storage account object, we’re going to retrieve the storage account key for use with the classic Azure storage commands:

$StorKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $StorAcct.ResourceGroupName -Name $StorAcct.StorageAccountName).where({$PSItem.KeyName -eq 'key1'})

I know it’s not the most intuitive thing to think of, but if you take a look, there are currently no AzureRM cmdlets for accessing blob stores.  What we can do, however, is use the storage key that we’ve retrieved and pass it in to the appropriate Azure commands to get the storage context.  Here’s how:

Let’s go ahead and log in to our Azure classic instance and select the same target subscription.    Once you’re logged in, you can use the New-AzureStorageContext cmdlet and pass the storage key we just retrieved from AzureRM.  This allows us to use the AzureRM storage account in the ASM context.

Add-AzureAccount

$AzureSubscription = ((Get-AzureSubscription).where({$PSItem.SubscriptionName -eq $SubscrObject.Name}))
Select-AzureSubscription -SubscriptionName $AzureSubscription.SubscriptionName -Current

$StorContext = New-AzureStorageContext -StorageAccountName $StorAcct.StorageAccountName -StorageAccountKey $StorKey.Value

Now that we have a usable storage context, let’s create our blob store by using the New-AzureStorageContainer cmdlet with the -Context parameter to get at our storage account:

$ContainerName = 'json'
Try{

$Container = Get-AzureStorageContainer -Name $ContainerName -Context $StorContext -ErrorAction Stop

}

Catch [System.Exception]{

Write-Output ("The requested container doesn't exist. Creating container "+$ContainerName)

$Container = New-AzureStorageContainer -Name $ContainerName -Context $StorContext -Permission Off

}

I decided to write this as a Try/Catch statement so that if the container doesn’t exist, it will go ahead and create one for me.  It works great for implementations where I might be working with a new customer and I forget to configure the storage account to where I need it.  Also, if you notice, I’ve set the Public Access to Private by setting the Permission parameter to Off.  Once again, a little counter-intuitive.
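For reference, here’s how I understand the -Permission values map to the portal’s public access levels (the container name is the one from above):

```powershell
# Off       = Private: no anonymous access (what we want here)
# Blob      = Anonymous read access to individual blobs, but no container listing
# Container = Anonymous read and list access to the entire container
New-AzureStorageContainer -Name 'json' -Context $StorContext -Permission Off
```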

Now, if our script created the container, we can look at the storage account in the portal and see that it’s available:

But we’ve also captured the object on creation, which you can see here:

So now that we have our container, all we have to do is select our target and upload the file:

$FilesToUpload = Get-ChildItem -Path .\ -Filter *.json
ForEach ($File in $FilesToUpload){

    Set-AzureStorageBlobContent -Context $StorContext -Container $Container.Name -File $File.FullName -Force -Verbose

}

And we get the following return:

Now that we’ve uploaded our JSON template to a blob store, we can use it in automation.  But first, we’ll need to be able to generate Shared Access Signature (SAS) Tokens on the fly for our automation to securely access the file.  Which is what we’ll be talking about next week!

You can find the script for this discussion on my GitHub.

What’s In An Azure Subscription ID?

“Can I be hacked if someone has my Azure Subscription ID?”

“Is my Azure Subscription ID the key to the kingdom?”

I’ve had this conversation a number of times with colleagues and clients alike.  What is this ID that Azure assigns to your account, and can it be leveraged to gain access to your subscription?  Not really.  So let’s take a look at what an Azure Subscription ID is, how it works, and how it should be handled.

An Azure Subscription ID is a GUID – a globally unique identifier – that identifies your subscription and the underlying services.  When someone hears this, they immediately think of it in the same regard as a user account, but it’s really not.  What it is, is directions to a container of the services that you want to access, if you have the permissions to do so.  In order to access a particular subscription ID, you need to do the following:

  • Be authenticated to Azure (through the portal, CLI, or PowerShell).
  • Have your Microsoft Azure or Active Directory ID assigned the permissions to view the subscription ID.

Let’s test this.

Here’s a subscription ID for you to play with:

$UnknownID = 'f2007bbf-f802-4a47-9336-cf7c6b89b378'

Looks pretty unassuming.  So I’m going to see if I can look at the properties of this subscriptionID without authenticating to Azure.

PS C:\WINDOWS\system32> $UnknownID = 'f2007bbf-f802-4a47-9336-cf7c6b89b378'

PS C:\WINDOWS\system32> Get-AzureRmSubscription -SubscriptionId $UnknownID
Get-AzureRmSubscription : Run Login-AzureRmAccount to login.
At line:1 char:1
+ Get-AzureRmSubscription -SubscriptionId $UnknownID
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [Get-AzureRmSubscription], PSInvalidOperationException
    + FullyQualifiedErrorId : InvalidOperation,Microsoft.Azure.Commands.Profile.GetAzureRMSubscriptionCommand
 

PS C:\WINDOWS\system32>

Well…that gave me bupkus.  So let’s authenticate and try again.

PS C:\WINDOWS\system32> Get-AzureRmSubscription -SubscriptionId $UnknownID
Get-AzureRmSubscription : Subscription f2007bbf-f802-4a47-9336-cf7c6b89b378 was not found in tenant . Please verify 
that the subscription exists in this tenant.
At line:1 char:1
+ Get-AzureRmSubscription -SubscriptionId $UnknownID
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [Get-AzureRmSubscription], PSArgumentException
    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Profile.GetAzureRMSubscriptionCommand
 

PS C:\WINDOWS\system32>

So I log into Azure and try again to resolve the subscription ID against the tenants that I have access to, and it returns an error stating that there is no such subscription in my tenant.  So, through both unauthenticated and authenticated means, I cannot see any information pertinent to this subscription ID.

So, let’s try using our preferred internet search provider.  If you’ve tried this, you’ll actually get search hits, because this is my subscription ID (one that I use for just about all of my Azure examples).  However, you’ll find that the only things that come up are links to my articles.  There is nothing from an Azure standpoint that is publicly available when searching for this ID, not even publicly available blob URIs.

Shameless plug: Read my articles. I put a lot of love into those.

So what have we figured out so far?

  • No information in Azure that is tied to your SubscriptionID is made publicly available by search.
  • No information in Azure that is tied to any SubscriptionID is made available unauthenticated.
  • No information in Azure that is tied to a SubscriptionID is made available to you if you are authenticated with an account that does not have permissions to view that SubscriptionID.

So what do we need?  User creds.  If you have access to a user credential that has admin rights to a subscription (or multiple subscriptions), you don’t even need the SubscriptionID.

PS C:\WINDOWS\system32> Get-AzureRmSubscription


Name     : ProdSub
Id       : 1a8c783b-3317-4535-8f12-5066eec9094c
TenantId : 1f9d2d05-2bef-4f58-8f74-697e76e704db
State    : Enabled

Name     : LastWordInNerd
Id       : f2007bbf-f802-4a47-9336-cf7c6b89b378
TenantId : 96b32bac-743d-49bb-adff-7552b2d86956
State    : Enabled

Notice that after I authenticated to Azure, I was able to use the Get-AzureRmSubscription command to get the entire list of subscriptions that I have access to.  If I have the admin credentials, I have the metaphorical keys to the castle, or multiple castles.  After I have those credentials, I use the subscription ID (which I now have) to put myself into the context of the Azure subscription.  I’m telling Azure, “I want to work in THIS subscription,” and it takes me there.
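Switching into that context is a one-liner once you’re authenticated; a minimal sketch, using the example subscription ID from above:

```powershell
# Assumes Add-AzureRmAccount has already been run with credentials
# that have rights to this subscription.
Set-AzureRmContext -SubscriptionId 'f2007bbf-f802-4a47-9336-cf7c6b89b378'
```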

What you really need to protect are your credentials.  This can easily be handled with multi-factor authentication.  Use it.  At the very least, privileged accounts should have it enabled by default.  According to the 2017 Verizon Data Breach Investigations Report, 81% of hacking-related breaches leveraged stolen and/or weak passwords.

If you haven’t enabled multi-factor authentication in your environment yet, and you’ve already gone to (or are planning on going to) the cloud, a subscription ID is the least of your concerns.

PowerShell – Using Try Catch To Help With Decision Making In A Script

Recently, while working on my scripts for rolling out server deployments in Azure, I came across an interesting issue with a cmdlet throwing a terminating error when I wasn’t expecting one.

I was attempting to use the Get-AzureService cmdlet to verify if the cloud service that I specified already existed or not.  It was necessary to check its existence in case VMs had already been deployed to the service and we were adding machines to the pool.  If it didn’t exist, I would add script logic to create the cloud service before deploying the VMs.  So when I execute:

Get-AzureService -ServiceName 'LWINPRT'

…it returns the following terminating error:

TryCatch1

Now, I expected the service to not be there, because I haven’t created it, but I didn’t expect the cmdlet to terminate in a way that would stop the rest of the script from running.  Typically, when using a command to look for something, it doesn’t throw an error if it can’t find it.  For example, when I look to see if a VM exists in the service:

Get-AzureVM -ServiceName 'LWINPRT' -Name 'LWINPRT01'

I get the following return:

TryCatch2

While the error wasn’t expected, it’s certainly not a show-stopper.  We just have to rethink our approach.  So instead of a ForEach statement looking for a null-value, why don’t we instead look at leveraging Try-Catch?

The Try-Catch-Finally blocks are what allow you to catch .NET exception errors in PowerShell, and they provide you with a means to alert the user and take corrective action if needed.  You can read about them here, or there’s an exceptional article by Ashley McGlone on using them.  So we’ll go ahead and set this up to test.

    Try {
        Get-AzureService -ServiceName 'LWINPRT' -ErrorAction Stop 
        }#EndTry

    Catch [System.Exception]
        {
        Write-Host "An error occurred"
        }#EndCatch

And we execute…

TryCatch3

And we get a return!  But I don’t want an error in this case.  What I want is to create the cloud service if it doesn’t exist.  So let’s do this instead:

    Try {
        Get-AzureService -ServiceName 'LWINPRT' -ErrorAction Stop 
        }#EndTry

    Catch [System.Exception]
        {
        New-AzureService 'LWINPRT' -Location "West US"
        }#EndCatch

And we execute this…

TryCatch4

And now we get the service created.  And we can now see it in our Azure console:

TryCatch5

Now we can use this Try block to check whether a cloud service exists, knowing that if it can’t find the cloud service it will throw a terminating error.  And when it does, we can use the Catch block to create the missing service.  Decision made.

PowerShell – Automating Server Builds In Azure – Pt. 3 – Finish And Function

Over the last couple of weeks, we’ve taken our simple Azure VM creation script and expanded its versatility to support standardization in an automated fashion.  Now we’re going to add some finishing touches to make this a function that includes some scalability and added functionality before we turn our eyes towards the DSC portion of our role-based deployments.

Of course, because of some of the functionality that we’ll be adding in the script, we’re going to be jettisoning that easy stuff that was New-AzureQuickVM in favor of New-AzureVM.  New-AzureVM offers us a lot more flexibility to build our VMs, including the ability to statically assign an IP address during the configuration.  So to wrap up this portion of our Azure exploration, we’ll be:

  • Adding logic to verify that your Azure account token is valid.
  • Checking the predefined subnets’ address pools for available addresses and assigning them to the machine
  • Adding logic to deploy multiple VMs for a given role simultaneously.
  • Adding in our comment-based help and building our script into a function.

First step, let’s add in our comment-based help.  Aside from it being a community best-practice, it’s helpful to whomever you’re intending to use this script to understand what it is you’ve created and how it works.  So in it goes.

AzurePt3-1

We’ll go ahead and call this function New-AzureRoleDeployment.  Along with adding the block to set this as a function, we’re going to go ahead and leverage the Begin, Process, and End blocks as well.  The bulk of our previously existing script will reside in the Process block.  In the Begin block, I’m going to add some code to verify that there is an Azure Account configured for the PowerShell instance, and to execute the Add-AzureAccount cmdlet if no Azure account is signed in.  I’m using Get-AzureService to verify that the account’s authentication token is current, because Get-AzureAccount doesn’t readily give up that information.  Get-AzureService will throw an exception if it’s not current.

***NOTE*** – I was previously using Get-AzureSubscription, but found that this didn’t provide a consistent result.  I’ve updated the script to reflect the use of Get-AzureService instead.

    BEGIN {
        Write-Verbose "Verifying Azure account is logged in."
        Try{
            Get-AzureService -ErrorAction Stop
            }#EndTry

        Catch [System.Exception]{
            Add-AzureAccount
            }#EndCatch

    }#EndBEGIN

We’ll also add in a quick Write-Verbose message in the End block to state that the function finished.  We could omit the End block altogether, or use it to clean up our login with the Remove-AzureAccount cmdlet, but depending on how you’ve set up your Azure account on the system, you could wind up creating more work for yourself after running this function.  I’d recommend doing some reading up on how the Remove-AzureAccount cmdlet works before deciding if it’s something you want to add.

    END {

        Write-Verbose "New-Deployment tasks completed."

        }#EndEND

Now let’s do some modifications to the script to allow us to add a number of systems instead of a single system at a time.  This is going to require us to work with one of my favorite PowerShell features – math!  First, let’s update our parameter block with a Quantity parameter.

    Param (

        [Parameter(Mandatory=$True)]
        [ValidateSet('IIS','PSWA','PRT','DC')]
        [string]$Purpose,

        [Parameter(Mandatory=$True)]
        [int]$Quantity,

        [switch]$Availability

    )#End Param

Now, we’ll find our original code for creating the numbering portion of our server names.

$CountInstance = (Get-AzureVM -ServiceName $ConvServerName).where({$PSItem.InstanceName -like "*$ConvServerName*"}) | Measure-Object
$ServerNumber = ($CountInstance.Count + 1)
$NewServer = ($ConvServerName + ("{0:00}" -f $ServerNumber))
Write-Verbose "Server name $NewServer generated.  Executing VM creation."
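To see what that format operator is doing, here’s the numbering logic in isolation, with hypothetical values standing in for the Azure query:

```powershell
$ConvServerName = 'LWINPRT'    # hypothetical converged service name
$ExistingCount  = 2            # pretend two VMs already exist in the service
$ServerNumber   = $ExistingCount + 1
$NewServer      = $ConvServerName + ('{0:00}' -f $ServerNumber)
$NewServer                     # LWINPRT03
```

The '{0:00}' format string pads the number to two digits, so the series runs 01, 02, and so on without breaking the naming convention.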

We’re going to modify this code by changing the ServerNumber variable to FirstServer.  To make this easier, I use the Replace function in ISE (CTRL + H) to change all of the references to ServerNumber at once.  Next, we need to figure out the last server in the series.  Logically, you would think that this would just be the Quantity variable plus the FirstServer.  However, this doesn’t work exactly as expected.  For example, if we run:

$CountInstance = (Get-AzureVM -ServiceName 'LWINPRT').where({$PSItem.InstanceName -like "*LWINPRT*"}) | Measure-Object

We get a return of 0, because the cloud service doesn’t currently exist.  So that our server number series starts at the next available number rather than 0 (or the highest number already allocated), we have to do this:

$FirstServer = ($CountInstance.Count + 1)

And if we execute our two lines of code, then the FirstServer variable will equal 1.  Now, we’ll go ahead and create a Quantity variable with the value of 3 and add the FirstServer and Quantity together.

$Quantity = 3
$LastServer = $FirstServer + ($Quantity)

Now, if we check the LastServer variable, we get a value of 4.  The problem comes up when we build the range:

$Range = $FirstServer..$LastServer

We get the following array of values in the Range variable.

AzurePt3-4

So now, while we’ve requested 3 machines, our logic will tell PowerShell to build 4.  We rectify it by subtracting 1 from the Quantity, like so:

            $CountInstance = (Get-AzureVM -ServiceName $ConvServerName).where({$PSItem.InstanceName -like "*$ConvServerName*"}) | Measure-Object
            $FirstServer = ($CountInstance.Count + 1)
            $LastServer = $FirstServer + ($Quantity - 1)
            $Range = $FirstServer..$LastServer

AzurePt3-5

And now we have the appropriate range.  Next, we’re going to add a new switch block under our existing one to help set us up for assigning a static address in the subnet that the new systems will be assigned in.  So first let’s create the block with the output variable VNet:

            Switch ($Purpose){
            'IIS' {$VNet = '10.0.0.32'};
            'PSWA' {$VNet = '10.0.0.16'};
            'PRT' {$VNet = '10.0.0.48'};
            'DC' {$VNet = '10.0.0.64'}            

            }#Switch

Notice that I’m using the same Purpose parameter.  No sense in requiring our user to enter information needlessly when we can pull it from a single source.

Because of how we need to craft our command to build a VM with the New-AzureVM cmdlet (you’ll see in a minute), we can no longer use a single argument list as before.  So instead we’re going to take what we had before…

                #Standard arguments to build the VM  
                $AzureArgs = @{

                    'ServiceName' = $ConvServerName
                    'Name' = $NewServer
                    'InstanceSize' = 'Basic_A1'
                    'SubnetNames' = $Purpose
                    'VNetName' = 'LWINerd'
                    'ImageName' = $BaseImage.ImageName
                    'AdminUserName' = 'LWINAdmin'
                    'Password' = 'b0b$yerUncl3'
                }#EndAzureArgs

…and we’re going to update it like so:

           #Standard arguments to build the VM  
           $InstanceSize = 'Basic_A1'
           $VNetName = 'LWINerd'
           $ImageName = $BaseImage.ImageName
           $AdminUserName = 'LWINAdmin'
           $Password = 'b0b$yerUncl3'

Now we’re going to use our VNet switch to test the subnet, check the available addresses, and get the first one available to assign.  Also, I’m adding in some Write-Verbose statements so I can verify that the variables that I need to have created are actually being generated by my script.

      $AvailableIP = Test-AzureStaticVNetIP -VNetName $VNetName -IPAddress $VNet
      $IPAddress = $AvailableIP.AvailableAddresses | Select-Object -First 1

      Write-Verbose "Subnet is $VNet"
      Write-Verbose "Image used will be $ImageName"
      Write-Verbose "IPAddress will be $IPAddress"

As before, we’re going to use the presence of the Availability parameter to determine our path here.  The biggest change will be with our actual creation command.  Instead of a quick one-liner, we’ll instead be moving through the pipe, creating a new VM configuration object, adding the necessary information, assigning the static IP, and finally kicking off the build.

If($Availability.IsPresent){
                    
Write-Verbose "Availability set requested.  Building VM with availability set configured."
                    
Try{
    Write-Verbose "Verifying if server name $NewServer exists in service $ConvServerName"
    $AzureService = Get-AzureVM -ServiceName $ConvServerName -Name $NewServer
        If (($AzureService.InstanceName) -ne $NewServer){

            New-AzureVMConfig -Name $NewServer -InstanceSize $InstanceSize -ImageName $ImageName -AvailabilitySetName $ConvServerName | 
            Add-AzureProvisioningConfig -Windows -AdminUsername $AdminUserName -Password $Password | 
            Set-AzureSubnet -SubnetNames $Purpose | 
            Set-AzureStaticVNetIP -IPAddress $IPAddress | 
            New-AzureVM -ServiceName $ConvServerName -VNetName $VNetName
        }#EndIf

        Else {Write-Output "$NewServer already exists in the Azure service $ConvServerName"
                
        }#EndElse

}#EndTry

Catch [System.Exception]{$ErrorMsg = $Error | Select-Object -First 1
    Write-Verbose "VM Creation failed.  The error was $ErrorMsg"
}#EndCatch

}#EndIf

The process is repeated for the Else statement in the event that the Availability parameter is not selected.

Else{
                    
        Write-Verbose "No availability set requested.  Building VM."
                    
    Try{
                        
        Write-Verbose "Verifying if server name $NewServer exists in service $ConvServerName"
                        
        $AzureService = Get-AzureVM -ServiceName $ConvServerName -Name $NewServer
        If (($AzureService.InstanceName) -ne $NewServer){
            New-AzureVMConfig -Name $NewServer -InstanceSize $InstanceSize -ImageName $ImageName | 
            Add-AzureProvisioningConfig -Windows -AdminUsername $AdminUserName -Password $Password | 
            Set-AzureSubnet -SubnetNames $Purpose | 
            Set-AzureStaticVNetIP -IPAddress $IPAddress | 
            New-AzureVM -ServiceName $ConvServerName -VNetName $VNetName
        }#EndIf

        Else {Write-Output "$NewServer already exists in the Azure service $ConvServerName"
        }#EndElse

    }#EndTry

    Catch [System.Exception]{$ErrorMsg = $Error | Select-Object -First 1
                                Write-Verbose "VM Creation failed.  The error was $ErrorMsg"
    }#EndCatch

}#EndElse

Now we’ll go ahead and execute our new code to create three new VMs destined to be print servers.

New-AzureRoleDeployment -Purpose PRT -Quantity 3 -Availability -Verbose

AzurePt3-6

Success!  Now we can deploy any number of servers to our designated subnets, configure them with a statically assigned IP address, and assign them to an availability set, all from a simple one-liner!  Now I’m off to do some more reading and research on Desired State Configuration so we can continue our automated deployment track!

You can download the full script at the TechNet Script Center for review.

Using Azure to Keep Moving Your Career Forward

As admins and engineers, it’s often left on us to gain the knowledge and experience needed to further our careers.  Even if we’re lucky enough to work for a company that will pay for some training, it’s often directly related to the position that you’re currently working.  For large environments with silo’d IT groups, this means that you’ll likely get trained on one or two products that you already have experience working in.

Sure, getting that training is cool, and hopefully you’ll learn some things that you didn’t know before (and hopefully make you more efficient), but what if you want to expand your horizons to move to a different position or just gain a broader understanding of how everything works?

WhyAzure

A lot of the talk these days is about cloud, and if you’re in a Microsoft environment, that means Azure.  Furthermore, PowerShell has moved beyond being a simple system management tool to a tool for handling configuration management and deployment of applications, among other things.  Desired State Configuration, released with PowerShell v4.0 just 16 months ago, has already celebrated its 10th resource kit wave.  PowerShell 5.0, slated to launch with Windows 10, will provide application deployments using OneGet.  Even though I’ve been pretty hot and heavy on the PowerShell track for a while now, I’m still feeling pretty far behind.

PowerShell is a management platform that has absolutely taken off in the last couple of iterations, and there’s no indication from Redmond that it’s going to slow down anytime soon.  New PowerShell cmdlets are made available in wave updates to Windows, and other applications are following suit by releasing new or enhanced product-specific cmdlets in cumulative update releases.  So for those that haven’t started learning PowerShell, you might want to consider taking your IT education into your own hands.

But let’s step away from the PowerShell discussion for the moment and talk about those other applications and operating systems themselves.  Companies are increasingly relying on us to be knowledgeable about many new apps and server platforms the moment they hit RTM.  But getting VMs spun up in a lab or non-production environment, and scheduling time during work hours to do it, is pretty close to impossible.  So how do you begin to overcome those challenges?

For a long time, I used a home PC as a lab environment, leveraging Microsoft Virtual PC, and later, Hyper-V.  But I find that as I do more presentations, I need more flexibility than carrying around a massive PC with me, and my Surface just doesn’t have enough power to support five or six VM instances.  Even if you take presentation out of the equation, there’s still the question of managing legal server licenses and software, or tearing down and rebuilding an environment every 90 days if you’re using evals.  So I decided to try out Microsoft’s Azure service to see what it could offer me from a learning perspective, as well as a presentation point.

The Pros

Well first off, you’re going to be directly learning a technology that you’re going to have to eventually learn to deal with.  Whether it’s in Microsoft’s cloud or your own internal one, my gut tells me that Azure is going to be the management platform for Windows Server for many moons to come.  On top of that, you’ll have access to the latest versions of Windows Server and many applications, depending on what subscription level you’re running, all without having to manage as many licenses as you were previously dealing with.

Want to try something new?  Spin up a new VM in minutes.  If your experiment blows up the machine, you can delete it and start all over again without having to build a machine from scratch.  Getting underway is super fast and easy.

You’re also dealing with products that those exam prep books are talking about!  You can build up your environment along with the study guide and get underway to your next certification in hopefully minimal time.

WhyAzure2

Finally, you can access it pretty much anywhere you have an internet connection.  So if you’ve got a presentation to head out to, or you’re on the road and want to test out a theory or new configuration, you can do so through RDP or PowerShell Remoting.

The Cons

It costs money.  Not a lot mind you; especially if you’re careful.  Microsoft basically charges you for what you’re using, so if you shut down your VMs when they’re not in use, it won’t cost you as much.  Though a couple of times, I have managed to leave a large number of VMs on overnight, and that wound up costing me about $6 for the overnight mistake.  But if you’re careful, you can keep the bill under $50 a month USD.

It’s internet-based.  So if you’re unable to access the internet from your location (or they have a slow connection), you can’t get to your environment.  From a presentation perspective, this is becoming less and less of an issue, but still something you’ll want to check in on when presenting at a new locale.

In the End

It’s cool to say that you’ve built out your own infrastructure from scratch and blah blah blah…  Actually, who are we kidding?  Nobody thinks it’s cool or fun; even other people that do it themselves.  It’s a lot of work to maintain, and a total pain in the ass to lug around!  The cost of an hour or two of your monthly salary can save you tons of headaches and give you a foundation of new technology that everyone’s talking about.  It might not be free, but it’s been my experience that if you’re not willing to make a financial commitment to your own career to get further ahead, then maybe it’s time to consider investing in a new direction.

PowerShell – Automating Server Builds In Azure – Pt. 2 – Rolling Servers To Their Silos

During this scripting session, I’ll be working on a system that is running PowerShell 5.0 (February 2015 release).

So now that I’ve put together a basic script for building out a server in Azure, I want to do even more by making the script and my environment more versatile.  We’re going to add some parameterization that will build a naming standard for the hostname and cloud service, as well as give our script the ability to deploy the server into a preconfigured virtual network subnet.

Ultimately, my goal will be to deploy a system and apply a DSC configuration using a single script that will configure the system using standardized settings based on the role that I’ve selected initially.  But let’s concentrate on the basics first.

You might have noticed, using the previous script, that there are some things that are configured (or not) for your Azure server.  For example, the VM is built as a Standard A1 instance (1 core, 1.75 GB RAM).  It also deposits the machine in your root network instead of any subnets you may have configured.  We’re going to remedy that today.

Before I start with updating my script, I’m creating some virtual networks in my Azure environment to assign systems to.  At the time of this writing, handling this in my PowerShell script requires a little more work than what I’d like to do, so I’m creating my virtual networks through the Azure UI.  These newly created subnets will use a naming convention that will be recognizable to my script with minimal work.

[Image: Azure2-6]

Now on to the scripting!  First, I’m going to create a parameter for the server role.  This will be the core variable that determines a number of settings for us.  Second, I’m going to create a switch for creating an availability group (more on this later).  I’m also adding the [CmdletBinding()] attribute for use later.

[CmdletBinding()]
Param (

    [Parameter(Mandatory=$True)]
    [ValidateSet('IIS','PSWA','PRT')]
    [string]$Purpose,

    [switch]$Availability

)#End Param

Now I’m going to create the switch that works with my Purpose parameter.  This switch will determine the future role of the server, its naming convention, what subnet to add the system to, and the DSC configuration to apply to it at a later time.  So basically, everything.

Switch ($Purpose){
    'IIS'  {$_Purpose = 'IIS'}
    'PSWA' {$_Purpose = 'PSWA'}
    'PRT'  {$_Purpose = 'PRT'}
}#Switch

Now we’ll also add our standardized naming prefix, set up our server naming, and the default Azure location we want to use.

$RootName = "LWIN"
$ConvServerName = ($RootName + $_Purpose)
$Location = "West US"

So when our script executes and we specify PSWA for the Purpose parameter, $ConvServerName comes out as LWINPSWA:

[Image: Azure2-1]

And I’m also going to add some Write-Verbose data here for troubleshooting purposes if anything goes south later on.

Write-Verbose "Environment is $_Purpose"
Write-Verbose "Root name is $RootName"
Write-Verbose "Service will be $ConvServerName"
Write-Verbose "Datacenter location will be $Location"
If($Availability.IsPresent){Write-Verbose "Server will be assigned to $ConvServerName availability group."}

Since I’m building a fairly basic environment for now, I’m going to silo things by server role.  But before we deploy the machine, we’re going to check to see if a cloud service exists, and if not, create it using our $ConvServerName label.  For some reason, if you attempt to retrieve a service name that doesn’t exist, Azure throws a terminating error.  So we’re going to handle this with a Try/Catch statement, and leverage that to create the cloud service if it runs into this error.

Try 
    {Write-Verbose "Checking to see if cloud service $ConvServerName exists."
    Get-AzureService -ServiceName $ConvServerName -ErrorAction Stop 
    }#EndTry

Catch [System.Exception]
    {Write-Verbose "Cloud service $ConvServerName does not exist.  Creating new cloud service."
    New-AzureService $ConvServerName -Location $Location
    }#EndCatch
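As an aside, if you’d rather not lean on the terminating error at all, the classic Azure module also ships Test-AzureName, which returns $True when a cloud service name is already taken.  A sketch of that alternative (same module and cmdlets as above; treat it as illustrative rather than a drop-in):

```powershell
# Alternative to the Try/Catch: Test-AzureName -Service returns $True when
# the cloud service name already exists, $False when it is still available.
If (-not (Test-AzureName -Service $ConvServerName)) {
    Write-Verbose "Cloud service $ConvServerName does not exist.  Creating new cloud service."
    New-AzureService -ServiceName $ConvServerName -Location $Location
}
```

Note that Test-AzureName checks the name globally across Azure rather than just your subscription, which suits us here since cloud service names are globally unique.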

Now that we’ve created the cloud service, we’ll go ahead and create the host name that we’ll be using.  Since I’ll be rolling out servers in numerical order, I’m going to add some logic in to count the number of existing servers in the cloud service (if any) and create the next instance based on count.

$CountInstance = (Get-AzureVM -ServiceName $ConvServerName).where({$PSItem.InstanceName -like "*$ConvServerName*"}) | Measure-Object
$ServerNumber = ($CountInstance.Count + 1)
$NewServer = ($ConvServerName + ("{0:00}" -f $ServerNumber))
Write-Verbose "Server name $NewServer generated.  Executing VM creation."
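To make the numbering concrete, here’s the format string in isolation (a standalone sketch; LWINIIS is just the example prefix built earlier from $RootName and $_Purpose):

```powershell
# "{0:00}" zero-pads the first format argument to two digits.
$ConvServerName = 'LWINIIS'
$ServerNumber   = 3          # one more than the count of existing VMs
$NewServer      = $ConvServerName + ("{0:00}" -f $ServerNumber)
# $NewServer now contains 'LWINIIS03'
```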

So let’s add in our arguments table from last week and our boot image location.  We’re making some modifications to last week’s script to accommodate some of the automation we’re performing.  We’re specifying the Basic_A1 instance size, assigning the machine to a pre-configured subnet, and using our $ConvServerName variable to determine the service to put the machine into.

$BaseImage = (Get-AzureVMImage).where({$PSItem.Label -like "*Windows Server 2012 R2 Datacenter*" -and $PSItem.PublishedDate -eq "2/11/2015 8:00:00 AM" })

$AzureArgs = @{

    'ServiceName' = $ConvServerName
    'Name' = $NewServer
    'InstanceSize' = 'Basic_A1'
    'SubnetNames' = $_Purpose
    'VNetName' = 'LWIN.Azure'
    'ImageName' = $BaseImage.ImageName
    'AdminUserName' = 'LWINAdmin'
    'Password' = 'b0b$yerUncl3'
}
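One thing worth flagging before we move on: the admin password is sitting in the script in clear text.  New-AzureQuickVM wants a plain string, but you can at least prompt for it at run time rather than committing it to disk.  A small sketch using the standard Get-Credential cmdlet:

```powershell
# Prompt for the admin credentials at run time instead of hard-coding them,
# then overwrite the placeholder entries in the splatting table.
$Cred = Get-Credential -UserName 'LWINAdmin' -Message 'Azure VM admin credentials'
$AzureArgs['AdminUserName'] = $Cred.UserName
$AzureArgs['Password']      = $Cred.GetNetworkCredential().Password
```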

Now for our VM creation, we’re going to add some logic in to verify whether or not the machine already exists in the service (just in case!), and add a little error handling in case things get a little ugly.  We’ll also wrap this in an If statement for handling the build with and without the availability parameter selected.

If($Availability.IsPresent){
    Write-Verbose "Availability set requested.  Building VM with availability set configured."
    Try{
        Write-Verbose "Verifying if server name $NewServer exists in service $ConvServerName"
        $AzureService = Get-AzureVM -ServiceName $ConvServerName -Name $NewServer
            If (($AzureService.InstanceName) -ne $NewServer){
                New-AzureQuickVM -Windows @AzureArgs -AvailabilitySetName $ConvServerName
            }#EndIf
            Else {Write-Output "$NewServer already exists in the Azure service $ConvServerName"}#EndElse
        }
    Catch [System.Exception]{$ErrorMsg = $Error | Select-Object -First 1
                                Write-Verbose "VM Creation failed.  The error was $ErrorMsg"}#EndCatch
}#EndIf
Else{
        Write-Verbose "No availability set requested.  Building VM."
    Try{
        Write-Verbose "Verifying if server name $NewServer exists in service $ConvServerName"
        $AzureService = Get-AzureVM -ServiceName $ConvServerName -Name $NewServer
            If (($AzureService.InstanceName) -ne $NewServer){
                New-AzureQuickVM -Windows @AzureArgs
            }#EndIf
            Else {Write-Output "$NewServer already exists in the Azure service $ConvServerName"}#EndElse
        }
    Catch [System.Exception]{$ErrorMsg = $Error | Select-Object -First 1
                                Write-Verbose "VM Creation failed.  The error was $ErrorMsg"}#EndCatch
}#EndIf
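As a side note, the two branches above differ only by the -AvailabilitySetName argument.  One way to collapse the duplication (just a sketch, using the same cmdlets as above) is to extend the splatting table conditionally and keep a single build path:

```powershell
# Conditionally add the availability set to the splatting table instead of
# duplicating the entire verification/creation logic per branch.
If ($Availability.IsPresent) {
    $AzureArgs['AvailabilitySetName'] = $ConvServerName
}

Try {
    Write-Verbose "Verifying if server name $NewServer exists in service $ConvServerName"
    $AzureService = Get-AzureVM -ServiceName $ConvServerName -Name $NewServer
    If ($AzureService.InstanceName -ne $NewServer) {
        New-AzureQuickVM -Windows @AzureArgs
    }
    Else {
        Write-Output "$NewServer already exists in the Azure service $ConvServerName"
    }
}
Catch [System.Exception] {
    $ErrorMsg = $Error | Select-Object -First 1
    Write-Verbose "VM Creation failed.  The error was $ErrorMsg"
}
```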

So now we’ll go ahead and save our script and execute…

.\ServerDepl.ps1 -Purpose IIS -Availability -Verbose

[Image: Pt2PicA]

And success!  So let’s check and verify that we have our service:

[Image: Pt2PicB]

And that we have our VM.

[Image: Pt2PicC]

And now we can check our VM config and verify that we have an availability group and the correct network.

[Image: Pt2PicD]

Tada!

Next week I’ll be putting some of the finishing touches on this script to make it a bit more versatile.  And hopefully in the following week after that, I’ll be able to show off a little of what I’ve learned of DSC before heading out to the PowerShell Summit this April.  Stay tuned!

PowerShell – Automating Server Builds In Azure – Pt. 1 – Basic Builds

During this scripting session, I am working on a system that is running PowerShell 5.0 (February 2015 release).

I started with a very simple goal towards learning Azure and Desired State Configuration: to be able to rapidly deploy a series of machines required for giving demonstrations at user group meetings and internal company discussions.  In order to get my Azure environment to this point, I figured that I would need to learn the following:

  • Build a single deployment using a one-line command.
  • Build a series of constants to provide the one-line command to ensure consistency in my deployments.
  • Build a series of basic parameterized inputs to provide the command the necessary information to deploy the server into a service group for a given purpose (IIS server, print server, etc.)
  • Expand the scope of the script to build a specified number of servers into a service group, or add a number of servers to an existing service group.
  • Build a second command to tear down an environment cleanly when it is no longer required.

Once complete, I will stand up a DSC environment to explore the possibility of leveraging my provisioning scripts to deploy my standardized configurations to a number of different deployments based on the designated purpose.

Before I begin, it should be noted that I had an issue with the New-AzureQuickVM cmdlet returning an error regarding the CurrentStorageAccountName not being found.  A quick Googling took me to Stephen Owen’s blog on how to resolve this error.  You’ll want to read up on this post and update your Azure subscription info if necessary.  You’ll likely have to.
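For reference, the fix described there boils down to assigning a default storage account to your subscription (a sketch; the subscription and storage account names below are placeholders for your own):

```powershell
# Give the subscription a default storage account so New-AzureQuickVM
# knows where to create the VM's disks.  Both names are placeholders.
Set-AzureSubscription -SubscriptionName 'MyAzureSubscription' `
    -CurrentStorageAccountName 'mystorageaccount'
```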

The ‘One-Liner’

Building a one-line command is pretty easy, provided that you have some necessary things configured before deploying.  For my purposes, I’ll be using the New-AzureQuickVM cmdlet.  At a bare minimum for a Windows deployment, you’ll need the following bits of information:

  • A flag specifying that you want to deploy a Windows server (-Windows)
  • The location of your preferred Azure datacenter (-Location)
  • The image name you wish to deploy (-ImageName)
  • The name of the server you wish to deploy (-Name)
  • The Azure Cloud Service name to deploy the server into (-ServiceName)
  • The user name of the administrator account (-AdminUserName)
  • The password of the administrator account (-Password)

But before we can assemble the command, we need to gather some information, such as values for Location and ImageName.  Getting the location is fairly straightforward: using the Get-AzureLocation cmdlet, you can get a listing of the datacenters that are available globally.

Get-AzureLocation

[Image: Azure2]

For our purposes, we’ll use the West US datacenter location.  Now to look up the image name, we’ll use Get-AzureVMImage, since we’re not using any custom images.

Get-AzureVMImage

Now you’ll find that when you run this, it’s going to pull all of the images available through Azure; 470 at the time of this writing to be exact!  So we’re going to try to pare this down a bit.

[Image: Azure3]

(Get-AzureVMImage).where({$PSItem.Label -like "*Windows Server 2012 R2*"}) | Measure-Object

[Image: Azure4]

Well, almost.  But if we whittle it down a bit further by matching the full image label…

(Get-AzureVMImage).where({$PSItem.Label -like "*Windows Server 2012 R2 Datacenter*"}) | Measure-Object

[Image: Azure5]

Ah!  Better.  Now that we’ve gotten down to the base server load, we can take a look and we’ll find that, like the images available in the image gallery in the New Virtual Machine wizard in Azure, you have three different dated versions available to choose from.  For the purposes of our implementation, we’ll just grab the latest image.

[Image: Azure6]

What we’ll be looking to grab from here is the long string of characters that is the image name to fulfill the image requirement for our command.  So let’s go ahead and snag this bit for our BaseImage variable and add our $Location variable as well.

$Location = 'West US'
$BaseImage = (Get-AzureVMImage).where({$PSItem.Label -like "*Windows Server 2012 R2 Datacenter*" -and $PSItem.PublishedDate -eq "2/11/2015 8:00:00 AM" })
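Pinning the exact PublishedDate works today, but it will quietly break when Microsoft retires that particular image.  A more durable variation (same cmdlet, just sorted) always grabs the newest matching build:

```powershell
# Sort the matching images newest-first and take the top one, rather than
# hard-coding a PublishedDate that will eventually disappear from the gallery.
$BaseImage = (Get-AzureVMImage).where({$PSItem.Label -like "*Windows Server 2012 R2 Datacenter*"}) |
    Sort-Object -Property PublishedDate -Descending |
    Select-Object -First 1
```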

So now we can build our command.  But I’d like to not have my commands run off the screen, so let’s do some splatting!

$Location = 'West US'
$BaseImage = (Get-AzureVMImage).where({$PSItem.Label -like "*Windows Server 2012 R2 Datacenter*" -and $PSItem.PublishedDate -eq "2/11/2015 8:00:00 AM" })
$AzureArgs = @{
 Windows = $True
 ServiceName = 'LWINerd'
 Name = 'TestSrv'
 ImageName = $BaseImage.ImageName
 Location = $Location
 AdminUserName = 'LWINAdmin'
 Password = 'b0b$y3rUncle'
}
New-AzureQuickVM @AzureArgs

And now we see that we have a new service in Azure…

[Image: Azure7]

And a new VM is spinning up!

[Image: Azure8]

And now we have our base script for building VMs in Azure.  Next post, we’ll be looking at creating availability sets through PowerShell, as well as assigning our VMs to specific virtual network address spaces, bringing in some parameterization and more!

***EDIT*** – For some reason I was previously under the impression that you have to create an availability set if you wanted to have multiple machines co-existing in the same cloud service.  This is not the case, but I’ll be exploring the creation of availability sets in my code next week nonetheless.