2 PowerShell Features I was Surprised to Love

After talking about features I don’t want to talk about anymore, I thought I would turn my attention to a couple of things in PowerShell that I initially felt were mistakes but have had a change of heart about.

For the most part, I think the PowerShell team does a fantastic job in terms of language design. They have made some bold choices in a few places, but time and time again their choices seem to me like the correct choices.

The two features I’m talking about today were things that, when I first heard about them, I thought “I’ll never use that”. Time has shown me that my reactions were hasty.

Module Auto-loading

I really like to be explicit about what I’m doing when I write a script. I like explicitly importing modules into a script.  Knowing where the cmdlets used in a script come from is a big part of the learning process.  As you read scripts (you do read scripts, don’t you?), you can slowly expand your knowledge base as you start looking into functionality implemented in different modules.  Another big advantage to explicitly importing modules into a script is that you’re helping to define the set of dependencies of the script.  “Oh, I need to have the SQLServer module installed to run this script…I thought it looked like a SQLPS script!”.  Since cmdlets can have similar names, explicitly loading the module can make it clear what’s going on.

When I saw that PowerShell 3.0 introduced module auto-loading the first thing I thought was “I wonder how I can turn that off”, followed closely by “I’m always going to turn that off on every system I use”.
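
For the record, turning it off is just a preference variable away:

# 'All' is the default; 'ModuleQualified' only auto-loads when you use module-qualified command names
$PSModuleAutoloadingPreference = 'None'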

I hadn’t met PowerShell 3.0 yet, though.  The number of cmdlets jumped from several hundred to over two thousand.  Knowing what cmdlets came from which modules became a much harder problem.  There were so many more cmdlets (aided by cdxml modules) that keeping track was difficult.

Module auto-loading was a logical solution to the “too many modules and cmdlets” problem.  I find myself depending on it almost every time I write a script.

I do like to explicitly import modules (either with import-module or via the module manifest) if I’m using something unusual, though.

Collection Properties

I don’t know if there’s an official name for this feature. Bruce Payette in PowerShell in Action calls this a “fallback dot operator”.  The idea is that you can use dot-notation against a collection to retrieve a collection of properties of the objects in the collection.  Since that was probably as hard to read as it was to write, here’s an example:

$filenames = (dir C:\temp).FullName

Clearly, an Array doesn’t have a FullName property, right?  And we already had 2 ways (the “old” way and the “aha” way) to do this:

$filenames = dir c:\temp | foreach-object {$_.FullName}
$filenames = dir c:\temp | select-object -expandProperty FullName

I like being able to use dot-notation against an expression: PowerShell simply takes the object the expression evaluates to and applies the dot operator to it. It does require that you add some parentheses, but that’s a small price to pay for not having to introduce another variable. One of my scripting maxims is that the less you write, the less you debug. More variables means more places to make mistakes (like misspelling), so I like this approach.

Using dot-notation to “fall back” from the collection to the members creates a bit of a semantic issue (or at least it messed up my head). When you see $variable.property, you no longer know what’s going on. You can be certain that there is some kind of property reference happening, but it isn’t clear whether there is collection unrolling happening at the same time.

In practice, this feature eliminates the need to check whether I got back one result or many; I can use the same notation for single or multiple objects. (Side note: this is reminiscent of the “fake” .Length and .GetEnumerator() members that PowerShell 3.0 added to objects.) It’s very concise and reduces the use of pipelines (which helps performance).
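
For example (assuming C:\temp exists), the notation doesn’t change whether you get back one object or fifty:

# one match or many, same code either way
$filenames = (dir C:\temp\*.log).FullName

# and in 3.0+, even a single object answers .Count
(Get-Item C:\temp).Count    # 1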

Well, those were 2 things in PowerShell I was surprised to love. What about PowerShell delights you? Let me know in the comments!

–Mike

Generating All Case Combinations in PowerShell

At work, a software package that I’m dealing with requires that lists of file extensions for whitelisting or blacklisting be case-sensitive. I’m not sure why this is the case (no pun intended), but it’s not the only piece of software I’ve run into with this issue.

What that means is that if you want to block .EXE files, you need to include 8 different variations of EXE (exe, exE, eXe, eXE, Exe, ExE, EXe, EXE). It wasn’t too hard to come up with those, but what about ps1xml? That’s 64 combinations (though since the digit ‘1’ has no case, only 32 of them are distinct).

For fun, I wrote a small PowerShell function to generate a list of the different possibilities. It does this by looking at all of the binary numbers with the same number of bits as the extension, interpreting a 0 as lowercase and a 1 as uppercase.

Here it is:

function Get-ExtensionCases {
    param([string]$ext = 'exe')

    # $vars[0] is the all-lowercase version, $vars[1] the all-uppercase
    $vars = $ext.ToLower(), $ext.ToUpper()

    # pre-calculate the powers of two used for bit testing
    $powers = 0..$ext.Length | foreach { [math]::Pow(2, $_) }

    # each $i is a bit mask: bit $_ set means "uppercase the character at position $_"
    foreach ($i in 0..([math]::Pow(2, $ext.Length) - 1)) {
        (0..($ext.Length - 1) | foreach { $vars[($i -band $powers[$_]) / $powers[$_]][$_] }) -join ''
    }
}

I pre-calculate the relevant powers of two in $powers, since we use them over and over again. I also build the lowercase and uppercase versions once at the beginning and use some (admittedly gross) indexing to pick the right character from one or the other.

Here’s the output for exe:
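
exe
Exe
eXe
EXe
exE
ExE
eXE
EXE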

It was a fun few minutes. Watching longer output scroll by can even be somewhat mesmerizing.

Let me know what you think

–Mike

P.S. PowerShellStation was named one of the top 50 PowerShell blogs! Thanks to everyone for stopping by and listening to my rambling.

Hyper-V HomeLab Intro

So I’ve been playing with Hyper-V for a while now. If you recall, it was one of my 2016 goals to build a virtualization lab.

I’ve done that, building out the base Microsoft Test Lab Guide several times:

  • Manually (clicking in the GUI)
  • Using PowerShell commands (contained in the guides)
  • Using Lability and PS-AutoLab-Env

I was also fortunate enough to be a technical development editor for Learn Hyper-V in a Month of Lunches, which should be released this fall.

One thing that I’ve found is that being able to spin up a VM quickly is really nice.  With the Hyper-V cmdlets, that’s pretty easy.

Spinning up a machine from scratch and building a bootable image is not as easy.  Fortunately there are some tools to help.

In this post, I’m going to share a simple function I’ve written to help me get things built faster.

The goal of the function is to take the following information:

  • Which ISO to use
  • Which edition from the ISO to select
  • The Name of the VM (and VHDX)
  • How much memory
  • How many CPUs

With that information, it converts the Windows image from the ISO to a VHDX, creates a VM with the right specs using that VHDX, sets up the networking (or starts to, anyway), and starts the VM.

The bulk of the interesting work is done by Convert-WindowsImage, a function that pulls the correct image from an ISO and creates a virtual disk.

There are some problems with that script (read the Q&A on the Technet site and you’ll see what I mean).  The main one is when it tries to find the edition you ask for (by number or name).  The code is in lines 4087-4095, and should look like this:

$Edition | ForEach-Object -Process {

    $Edtn = $PSItem

    if ([Int32]::TryParse($Edtn, [ref]$null)) {
        $openImage = $openWim[[Int32]($Edtn)]
    } else {
        $openImage = $openWim[$Edtn]
    }

There’s a more recent copy of the function on GitHub, but it has slightly different parameters and seems to be stale as well (according to the page it’s on). I’ve got an email out to find the “live” version.

With that, here’s my function:

function New-BootableVM {
    param($ISOPath = 'E:\isos\2012R2_x64_EN_Eval.iso',
        $Edition,
        $Name,
        $MemoryInGB,
        $vCPUs,
        [switch]$Stopped)

    # hardcoded for now: the virtual switch and where the VHDX lives
    $switch = 'LabNet'
    $vhdPath = "c:\users\Public\Documents\Hyper-V\Virtual hard disks\$Name.vhdx"

    # pull the requested edition off the ISO and build a bootable dynamic VHDX
    Convert-WindowsImage -SourcePath $ISOPath -Edition $Edition -VHDPath $vhdPath -VHDFormat VHDX -VHDType Dynamic -SizeBytes 8GB

    # create the VM around the new disk, then size the CPUs and wire up networking
    $vm = New-VM -Name $Name -MemoryStartupBytes ($MemoryInGB * 1GB) -VHDPath $vhdPath -Generation 2
    Set-VMProcessor -VM $vm -Count $vCPUs
    Add-VMNetworkAdapter -VM $vm -SwitchName $switch

    if (!$Stopped) {
        Start-VM -VM $vm
    }
    $vm
}
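
A typical call looks something like this (the name and edition here are just examples; the edition has to match one that actually exists on your ISO):

New-BootableVM -Edition 'ServerStandardEval' -Name 'DC1' -MemoryInGB 2 -vCPUs 2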

Once the function is done running (assuming it didn’t have any issues), a VM will be created and ready for you. You will need to accept the license, set the locale, and set the administrator’s password, but that only takes a minute. I’ll be adding functions (or adding to this function) to take care of those as well as things like renaming the guest, joining a domain, copying files to the drive, etc.

It’s still a work in progress, so you will see some hardcoded values. Hopefully you can see what’s going on and adapt it to your needs.

I’ll be writing more as I play more with Hyper-V, DSC, and containers.

Let me know what you think

–Mike

PowerShell Topics I’m Ready to Stop Talking About

Part of me wants to know every bit of PowerShell there is. Knowing that about myself, I don’t have much of an input filter. If the content is PowerShell-related, I’m interested.

When it comes to sharing, however, there’s clearly got to be a point at which I shouldn’t be talking about something.  Here are a few items that I’ve spoken or taught about that I think are going to get pulled from my routine.


  1. The TRAP statement
  2. Obscure Operators
  3. Filters
  4. Tee-Object
  5. (bonus) Workflows

Let’s go through them one by one and see why.  And yes, I know that I’m talking about them, but this should be the last time (and this time I mean it).

The TRAP statement

The trap statement is the error handling statement that made the cut for v1.0 of PowerShell.  If you weren’t a PowerShell user at that time you probably haven’t ever used it, favoring TRY/CATCH/FINALLY.

Instead of being a block-structured statement like TRY, TRAP worked in a scope, and functioned like a VB ON ERROR GOTO. The rules for program flow after a TRAP statement (which I’ve long forgotten) made understanding code that used TRAP into…a trap.
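
If you’ve never seen one, here’s a minimal sketch. The continue tells PowerShell to resume at the statement after the one that failed:

function Test-Trap {
    trap { "Trapped: $_"; continue }    # scope-level handler, VB style
    Get-Item C:\NoSuchFile -ErrorAction Stop
    'Still running, thanks to the continue'
}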

The advice I have given students in the past is, “If you stumble upon some code that uses TRAP, look for other code.”

Obscure Operators

PowerShell has a lot of operators, and that’s a good thing.  On the other hand, I’m not sure why I need to tell people about every single operator.  Some of the operators, though, are obscure enough that I haven’t used them in any language more than a handful of times in the last thirty years.  Candidates for expulsion (from discussion, not from the language) include:

  • -SHL, -SHR    (I guess someone does bitwise shifting, but I haven’t ever needed this except in machine language)
  • *=, /=, %=      (I can see what these do, but I don’t ever do much arithmetic so don’t find the need for these “shorthand” operators)

Filters

Filters are another PowerShell 1.0 topic.  They are one of the ways to use the pipeline for input without using advanced functions and parameter attributes.  They’re pretty slick, but are easily replaced with an advanced function with a process block.  In the last 5 years, I’ve only seen filters used once (by Rob Campbell at a user group meeting).
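
For reference, here’s a filter next to its advanced-function equivalent (illustrative names, nothing more):

# the filter's body runs once per pipeline object, like a process block
filter Get-FullName { $_.FullName }

function Get-FullName2 {
    [CmdletBinding()]
    param([Parameter(ValueFromPipeline)]$InputObject)
    process { $InputObject.FullName }
}

# dir C:\temp | Get-FullName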

Tee-Object

I generally consider the -Object cmdlets to be the backbone of PowerShell. They allow you to deal with pipeline objects “once-and-for-all” and not write a bunch of plumbing code in every function. For that reason, I like to talk about all of them. Tee-Object, however, might get sent to an appendix, because I don’t see anyone using it and don’t use it myself. This one might be changing as we see (being optimistic) people with more Linux backgrounds submitting PowerShell code. They use tee, right? I find that the -OutVariable common parameter serves most of the need I would have for Tee-Object, so it makes this list.
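
Here’s what I mean, side by side (a made-up pipeline, but the pattern is the point):

# Tee-Object: capture the objects mid-stream and keep them flowing
Get-Process | Tee-Object -Variable procs | Where-Object CPU -gt 100

# -OutVariable: same capture, one less cmdlet in the pipeline
Get-Process -OutVariable procs | Where-Object CPU -gt 100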

And finally,

Workflows

Workflows sound awesome. When you talk about workflows you get to use adjectives like “robust” and “resilient”. And don’t get me wrong, foreach -parallel is pretty sweet.

On the other hand, writing PowerShell in the workflow-subset of PowerShell is tricky.  Remembering what needs to be an inlinescript and how to use/access variables in each kind of block is not fun.
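
A taste of what I mean, for anyone who hasn’t been down this road (a minimal sketch; workflow requires Windows PowerShell 3.0 or later):

workflow Test-Workflow {
    $a = 1
    # workflow variables don't flow into an InlineScript on their own;
    # you have to reach for $using:
    InlineScript { $using:a + 1 }
}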

I haven’t ever used workflows for anything interesting, and have only heard a few examples of them being used by coworkers.  Those examples could mostly be summed up by “I needed parallel”.

It won’t be hard for me to stop talking about workflows, as I’ve never really talked about them.


Before I get flamed because I included/excluded your favorite topic, these are just for me.  If you like one of these, sell it!  You might convince me to change my mind.  Is there something that you think should fade away?  Let me know what it is.  I might be able to change your mind.


–Mike

An Unexpected Parameter Alias

I’ve always said that if you want to learn something really well, teach it to someone. I’ve been doing internal PowerShell training for several years at my company. I’m very grateful for the opportunity for a number of reasons, but in this post I’m going to call out something I learned on a recent trip to our San Diego office.

When I’m starting to talk about cmdlets, I usually use get-childitem for the simple reason that almost everyone knows what the DOS DIR command does.  It gives us a point of reference to compare and contrast cmdlets with.

I mentioned the -Recurse switch and explained that it was analogous to the /S switch in DIR, but one person in the class didn’t quite get the context switch.  When he did one of the examples, he tried get-childitem -s.  I told him that he needed to use -Recurse, to which he replied “But it works!”.

I always keep a pad of paper when I’m teaching so I can write down anything puzzling (it happens in almost every class).  When the class took a break, I opened a fresh PowerShell session and tried it.

Of course, it worked.

Now, to determine why it worked.

First of all, I thought that parameter disambiguation would have been a problem because of the -System parameter. It wasn’t.

Then, I realized that the PowerShell team must have included a “legacy alias” for the -Recurse parameter, similar to how they include cmdlet aliases to ease the transition from DOS or *NIX (dir, ls, ps, cat, etc.).  I don’t think I’ve ever heard anyone mention legacy aliases for parameters, though.

PowerShell easily verifies that this is the case:
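
(Get-Command Get-ChildItem).Parameters['Recurse'].Aliases    # returns: s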

Of course, I verified this on my work computer.  As I sit here writing on my home laptop, it didn’t list any aliases until I updated help.  Blogging is a lot like teaching in that you’re bound to find surprises whenever you try to explain something.

Anyway, this was a fun discovery for me.

Can you think of any other parameter aliases that are there for legacy reasons?  I might have to try to work up a script to find candidates.

Let me know what you think in the comments.

-Mike

PowerShell Parameter Disambiguation and a Surprise

When you’re learning PowerShell one of the first things you will notice is that you don’t have to use the full parameter name.  This is because of something called parameter disambiguation.

When it works

For instance, instead of saying Get-ChildItem -Recurse, you can say Get-ChildItem -R. Get-ChildItem only has one (non-dynamic) parameter that starts with the letter ‘R’. Since only one parameter matches, PowerShell figures you must mean that one. As a side note, dynamic parameters like -ReadOnly are created at run-time and are treated a bit differently.

When it doesn’t work

This doesn’t always work, though. An easy example is with Get-Service. You can’t say Get-Service -In because you haven’t specified enough of the parameter name for PowerShell to work out what parameter you meant. With Get-Service, both -Include and -InputObject start with -In, so PowerShell can’t tell which of these you meant.

Here’s the error message. Notice that it included a couple of other parameters as possibilities:

AmbiguousParameter error

Trying it ourselves

Let’s write a quick function to make sure we understand what’s going on.

function test-param {
    Param($da, $de)
    $true
}

Calling this function with test-param -d gives us the same kind of error as before:
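
test-param -d

# Parameter cannot be processed because the parameter name 'd' is ambiguous.
# Possible matches include: -da -de.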

Interestingly (this is the surprise), if we make this an advanced function (and we should almost always do that), something strange happens.

function test-param {
    [CmdletBinding()]
    Param($da, $de)
    $true
}

Remember that one of the benefits of having an advanced function is that it now supports common parameters (like -debug).

When we call it with test-param -de, however, we don’t get an ambiguous parameter message! It’s asking for a value for -de!

So, even though we got a couple of common parameters in the error message for Get-Service -In, -de binds cleanly here. My best explanation is that an exact match on a parameter name beats prefix matching, so -de goes straight to $de and the -Debug common parameter never enters the disambiguation for this function.

Not earth-shattering, but something to take note of.

If I’m missing something (and it’s entirely possible), let me know in the comments.

-Mike

P.S. Remember that it is a best practice to spell out parameter names fully when writing a script. Abbreviations (and aliases) are considered fair game at the command line, though.

When the PowerShell pipeline doesn’t line up

The PowerShell Pipeline

One of the defining features of PowerShell is the object-oriented pipeline.  The ability to “wire-up” parameters to the pipeline and allow objects (or properties) to be automatically assigned to them allows us to write code that is often variable-free.

By “variable-free”, I mean that instead of doing something like this:

$services = Get-Service *SQL*
foreach ($service in $services) {
    Stop-Service -Name $service.Name
}

we can write things like this:

Get-Service *SQL* | Stop-Service

There’s nothing wrong with the first script. It is logically laid out, it is clear what’s going on, and it accomplishes the same goal. On the other hand, by introducing more variables (and more statements), we have added many more places where we can make mistakes.

When possible, you should write your functions so that they allow pipeline input wherever it makes sense.

When it doesn’t work

I was helping a co-worker with a script the other day and we found something unusual. The module he was using (open-source) allowed pipeline input, but it didn’t work quite right. The library (which dealt with processes running on specified computers) allowed you to pipe objects into the Stop function, but instead of using the objects as-is, it only used the PID from each object. The problem was that the Stop function then prompted for a ComputerName for each object, even though the incoming objects had a property containing that value.

The solution was to hand-wire the pipeline like this:

Get-RemoteProcess <criteria> | foreach {
    Stop-RemoteProcess -ID $_.PID -ComputerName $_.ComputerName
}

(note that these are not the actual function/parameter names…I’m not writing this to shame the original module author)

If the pipeline support had been implemented more reasonably, that could have been written like this:

Get-RemoteProcess <criteria> | Stop-RemoteProcess
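
For what it’s worth, the fix on the module side is small. Here’s a sketch of the parameter declarations that would make the one-liner work (again, not the real names):

function Stop-RemoteProcess {
    [CmdletBinding()]
    param(
        # bind both values from the incoming object's properties
        [Parameter(ValueFromPipelineByPropertyName)]
        [Alias('PID')]
        [int]$Id,

        [Parameter(ValueFromPipelineByPropertyName)]
        [string]$ComputerName
    )
    process {
        # ...stop process $Id on $ComputerName here...
    }
}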

As I said before, not supporting the pipeline (correctly) introduces places where we can make mistakes. And if you’re like me, you will make mistakes in those places.

–Mike

Great Books for PowerShell Ideas

I get asked a lot about what PowerShell books people should be reading. The easy answer is, “It depends”.

If you’re looking for a tutorial book (or two) to get you started with PowerShell, the only answer I give is “Learn PowerShell in a Month of Lunches”, followed by “Learn PowerShell Toolmaking in a Month of Lunches”. There are other good books in this space (including one I wrote), but these are by far the best I’ve found.

If you’re looking for a reference book, I generally recommend Bruce Payette’s “PowerShell in Action”. It has a new version coming out soon (April?) and I can hardly wait. Besides that book, “PowerShell in Depth” (by Jones, Hicks, and Siddaway) is also a safe bet.

If you’ve got the basics of PowerShell down, and are looking for ideas for how to do something, here are some books that aren’t mentioned as often, but are indispensable:

  1. PowerShell Cookbook (Lee Holmes)
  2. PowerShell Deep Dives (several)
  3. PowerShell for Developers (Doug Finke)

What are your book recommendations? Did I miss something essential?

-Mike

Some small PowerShell bugs I’ve found lately

I love PowerShell. Everyone who knows me knows that. Recently, though, I seem to be running into more bugs. I’m not complaining, because PowerShell does tons of amazing things and the problems I’m encountering don’t have a huge impact. With that said, here they are.

Pathological properties in Out-GridView

PowerShell has always allowed us to use properties with names that aren’t kosher. For instance, we can create an object that has properties with spaces and symbols in the name like this:

$obj=[pscustomobject]@{'test property #1'='hello'}

This capability is essential, since we often find ourselves importing a CSV file that we don’t have any control over. (As an exercise, look at the expanded CSV output from schtasks.exe). To access those properties we can use quotes where most languages don’t like them.

$obj.'test property #1'

Or we can use variables (again, something most languages won’t let you do easily):

$prop='test property #1'; $obj.$prop

A friend called me last week with an interesting issue which turned out to be related to this kind of behavior. He had a SQL query which renamed output columns in “pathological” ways. When he piped the output of the SQL to Out-GridView, the ugly columns showed up in the output, but the columns were empty.

Here’s a minimal case to reproduce the issue:

[pscustomobject]@{'test property.'='hello'} | out-gridview

The problem here is that the property name ends with a dot. Here’s a UserVoice entry that explains that Out-GridView doesn’t like property names that end in whitespace, either. I added a comment about dots for completeness’ sake.
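
Until that’s fixed, renaming the offending property on the way through works fine; a calculated property does the trick:

[pscustomobject]@{'test property.'='hello'} |
    Select-Object @{n='test property'; e={$_.'test property.'}} |
    Out-GridView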

Formatting remote Select-String output

Another minor issue I’ve run into is that deserialized Select-String output doesn’t format nicely. The issue looks to be that the format.ps1xml for MatchInfo objects uses a custom ToString() method that doesn’t survive the serialization. What happens is that you just get blank lines instead of any helpful output. The objects are intact, though; all of the properties are there. So using the output is fine; it’s just the formatting that’s broken. Here’s a minimal example:

"hello`r`n"*6 | Out-File c:\temp\testFile.txt
write-host 'Local execution'
select-string -Path c:\temp\testfile.txt -Pattern hello -SimpleMatch  

write-host 'Remote execution'
invoke-command -ScriptBlock{ select-string -Path c:\temp\testfile.txt -Pattern hello -SimpleMatch} -ComputerName localhost   
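
If you just need readable output in the meantime, selecting the properties yourself sidesteps the broken formatting:

invoke-command -ScriptBlock { select-string -Path c:\temp\testfile.txt -Pattern hello -SimpleMatch } -ComputerName localhost |
    Select-Object Filename, LineNumber, Line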

I didn’t find anything close on UserVoice, so I posted a new entry.

Neither of these caused any real problem, but they were fun to dig into.

What bugs have you found lately? Have you reported them?

-Mike

February STLPSUG Meeting

I had the privilege of sharing again at the STLPSUG. February’s meeting was at Model Technologies, and Jason Rutherford was a great host.

I spoke on being a good citizen on the pipeline, both for output and input. Basically, best practices for pipeline output (which is fairly straightforward), and techniques for accepting pipeline input (including $input, filters, and parameter attributes).

The group was a bit more advanced than usual, which was cool. There was a lot of fun heckling (I’ll give you $5 if you put $input in the process block, for instance) and a lot of participation from everyone.

As usual, after the presentation the talk turned into a giant DevOps discussion.

If you live anywhere near St. Louis and haven’t attended one of these meetings, I highly recommend them. Mike Lombardi has done a great job keeping the group moving and focused.

You can find out about upcoming meetings on meetup.com.

P.S. My friend and co-worker Ian was able to come with me this time. Made the drive a lot more fun, and he had a good time, too.