Get-Learning – Introducing a new series of PowerShell Posts

I’ve been blogging here since 2009. In that time, I’ve tried to focus on surprising topics, or at least topics that were things I had recently learned or encountered.

One big problem with that approach is that it makes it much more difficult to produce content.

I really enjoy writing, and I’m teaching PowerShell very frequently (a bit less than 10% of my time at work) so I’m in contact with basic PowerShell topics all the time.

With that in mind, I’m going to start writing PowerShell posts that are more geared towards beginning scripters.

The series, for which I’ll be creating an “index page”, will be called Get-Learning. I hope to write at least 2 or 3 posts in this series each week for the next several months.

If you have any suggestions for topics, drop me a line.

For now, though, watch this space.


Calling Extension Methods in PowerShell

A quick one because it’s Friday night.

I recently found myself translating some C# code into PowerShell.  If you’ve done this, you know that most of it is really routine.  Change the order of some things, change the operators, drop the semicolons.

In a few places you have to do some adjusting, like changing using scopes into try/finally with .Dispose() in the finally.

But all of that is pretty straightforward.

Then I ran into a method that wasn’t showing up in the tab-completion.  I hit the dot, and it wasn’t in the list.

I had found… an extension method!

Extension Methods

In C# (and other managed languages, I guess), an extension method is a static method of a class whose first parameter is declared with the keyword this.

For instance,

public static class MyExtClass {
    public static int NumberOfEs(this string TheString) {
        return TheString.Length - TheString.Replace("e", "").Length;
    }
}

Calling this method in C# goes like this: “hello”.NumberOfEs().

It looks like this method (which is in the class MyExtClass) is actually a string method with no parameters.

Extension Methods in PowerShell

Unfortunately, PowerShell doesn’t do that magic for you. In PowerShell, you call it just as it’s written: as a static method of a different class.

So, in PowerShell, we would do the following:

$code = @'
public static class MyExtClass {
    public static int NumberOfEs(this string TheString) {
        return TheString.Length - TheString.Replace("e", "").Length;
    }
}
'@
Add-Type -TypeDefinition $code
[MyExtClass]::NumberOfEs('hello')   # returns 1


Note that I’ve included the C# code in a here-string and used add-type to compile it on the fly.

The point is, when translating extension method calls into PowerShell, you need to find the extension class (in this case MyExtClass) and call the static method directly.
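The same pattern covers the extension methods you are most likely to meet in practice: LINQ’s, which are defined on the System.Linq.Enumerable class. As a quick sketch (this example is mine, not from the original C# code I was translating):

```powershell
# C# would call the extension method as: numbers.Where(n => n % 2 == 0)
# PowerShell calls the static method on the class that defines it instead:
$numbers = [int[]](1, 2, 3, 4, 5, 6)
$even = [System.Linq.Enumerable]::Where($numbers, [Func[int,bool]]{ param($n) $n % 2 -eq 0 })
[System.Linq.Enumerable]::ToArray($even)    # 2, 4, 6
```

Same idea: find the defining class, pass the “this” argument explicitly as the first parameter.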

You learn something every day.


Deciphering PowerShell Syntax Help Expressions

In my last post I showed several instances of the syntax help that you get when you use get-help or -? with a cmdlet.

For instance, the syntax block at the top of a cmdlet’s help output shows how the different parameters can be used when calling the cmdlet.

If you’ve never paid any attention to these, the notation can be difficult to work out.  Fortunately, it’s not that hard.  There are only five different possibilities.  In the following, I will be referring to a parameter called Foo, of type [Bar].

  • An optional parameter that can be used by position or name:
[[-Foo] <Bar>]
  • An optional parameter that can only be used by name:
[-Foo <Bar>]
  • A required parameter that can be used by position or name:
[-Foo] <Bar>
  • A required parameter that can only be used by name:
-Foo <Bar>
  • A switch parameter (switches are always optional and can only be used by name):

[-Foo]
[-Foo <SwitchParameter>]  # odd, but you may see this form in the help sometimes
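To see these patterns in the wild, look at the (abbreviated) syntax for Get-ChildItem. Here -Path and -Filter are optional and usable by position, while -Recurse and -Force are switches:

```
Get-ChildItem [[-Path] <string[]>] [[-Filter] <string>] [-Recurse] [-Force] [<CommonParameters>]
```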

So, in the example above we see that we have

  • parm1, which is a parameter of type Object (i.e. no type specified), is optional and can be used by name or position
  • parm2, which is a parameter of type Object, is optional and can only be used by name
  • parm3, which is a parameter of type Object, is optional and can only be used by name
  • parm4, which is a parameter of type Object, is optional and can only be used by name

With some practice, you will be reading more complex syntax examples like a pro.

Let me know if this helps!


Specifying PowerShell Parameter Position

Positional Parameters

Whether you know it or not, if you’ve used PowerShell, you’ve used positional parameters. In the following command the argument (c:\temp) is passed to the -Path parameter by position.

cd c:\temp

The other option for passing a parameter would be to pass it by name like this:

cd -path c:\temp

It makes sense for some commands to allow you to pass things by position rather than by name, especially in cases where there would be little confusion if the names of the parameters are left out (as in this example).

What confuses me, however, is code that looks like this:

function Test-Position{
    [CmdletBinding()]
    Param(
        [Parameter(Position=0)]$parm1,
        [Parameter(Position=1)]$parm2,
        [Parameter(Position=2)]$parm3,
        [Parameter(Position=3)]$parm4
    )
}

In this parameter declaration, we’ve explicitly assigned positions to the first four parameters, in order.

Why is that confusing? Well, by default, all parameters are available by position and the default order is the order the parameters are defined. So assigning the Position like this makes no difference (or sense, for that matter).

It gets worse!

Even worse than being completely unnecessary, I would argue that specifying positions like this is a bad practice.

One “best practice” in PowerShell is that you should (almost) always use named parameters. The reason is simple. It makes your intention clear. You intend to bind these arguments (values) to these specific parameters.

By specifying positions for all four parameters (or not specifying any) you’re encouraging the user of your cmdlet to write code that goes against best practice.

What should I do?

According to the help (about_Functions_CmdletBindingAttribute), you should use the PositionalBinding optional argument to the CmdletBinding() attribute, and set it to $false. That will cause all parameters to default to not be allowed by position. Then, you can specify the Position for any (hopefully only one or two) parameters you wish to be used by position.

For instance, this will only allow $parm1 to be used by position:

function Test-Position{
    [CmdletBinding(PositionalBinding=$false)]
    Param(
        [Parameter(Position=0)]$parm1,
        $parm2,
        $parm3,
        $parm4
    )
}

Looking at the help for this function we see that this is true:

Because parm1 is in brackets ([-parm1]) we know that that parameter name can be omitted. The other parameter names are not bracketed (although the entire parameters/arguments are), so they are only available by name.
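Spelled out, that syntax line reads roughly like this (the types show as Object because none were declared):

```
Test-Position [[-parm1] <Object>] [-parm2 <Object>] [-parm3 <Object>] [-parm4 <Object>] [<CommonParameters>]
```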

But wait, it gets easier

Even though the help says that all parameters are positional by default, it turns out that using Position on one parameter means that you have to use it on any parameters you want to be accessed by position.

For instance, in this version of the function I haven’t specified PositionalBinding=$False in the CmdletBinding attribute, but only the first parameter is available by position.

function Test-Position2{
    [CmdletBinding()]
    Param(
        [Parameter(Position=0)]$parm1,
        $parm2,
        $parm3,
        $parm4
    )
}

Here’s the syntax help:

That’s interesting to me, as it seems to contradict what’s in the help.  Specifically, the help says that all parameters are positional.  It then says that in order to disable this default, you should use the PositionalBinding parameter.  This shows that you don’t need to do that, unless you don’t want any positional parameters.

As a final example, just to make sure we understand how the Position value is used, consider the following function and syntax help:

function Test-Position3{
    [CmdletBinding()]
    Param(
        $parm1,
        [Parameter(Position=1)]$parm2,
        [Parameter(Position=0)]$parm3,
        $parm4
    )
}

By including Position on 2 of the parameters, we’ve ensured that the other two parameters are only available by name. Also, the assigned positions differ from the order that the parameters are defined in the function, and that is reflected in the syntax help.

I don’t think about parameter position a lot, but to write “professional” cmdlets, it is one of the things to consider.



Missing the Point with PowerShell Error Handling

I’ve been using PowerShell for about 10 years now.  Some might think that 10 years makes me an expert.  I know that it really means I have more opportunities to learn.  One thing that has occurred to me in the last 4 or 5 months is that I’ve been missing the point with PowerShell error handling.


PowerShell Error Handling 101

First, PowerShell has try/catch/finally, like most imperative languages have in the last 15 years or so.  At first glance, there’s not much to see. I usually give an example that looks something like this:

try {
   $divisor = 0
   1 / $divisor   # something that throws
   write-verbose 'it worked'
} catch {
   write-verbose "An error happened : $_"
} finally {
   write-verbose 'Time to clean up'
}

Running that script with $VerbosePreference set to Continue would output
VERBOSE: An error happened : Attempted to divide by zero.
VERBOSE: Time to clean up

At this point in the explanation, most people with a development background of any kind are likely nodding their heads.

And now for something completely different

The next example shows that all is not as expected:

try {
  $results = get-wmiobject -class Win32_ComputerSystem -computername Localhost,NOSUCHCOMPUTER
} catch {
  write-verbose "An error happened : $_"
}

Most people are surprised to see red error text on the screen and the nice message nowhere to be found.

Anyone with much experience with PowerShell knows that some (most?) PowerShell cmdlets output error records (not exceptions) in some cases, and that try/catch doesn’t “catch” these error records. In PowerShell parlance, exceptions are terminating errors, and error records are non-terminating errors.

My explanation for why the PowerShell team created non-terminating errors is this:
Imagine you managed a farm of 1000 computers. What would be the odds of all 1000 of them responding correctly to a get-wmiobject call? If anyone in the class optimistically says anything other than “slim to none”, up the number to 10,000 and repeat.

With standard “programming semantics” (i.e. exceptions, terminating errors), a call to 1000 computers which failed on any of them would immediately throw an exception and leave the try block. At that point, all positive results are lost.

As a datacenter manager, is that how you want your automation engine to work? I don’t think so.

With non-terminating errors, the correct results are returned from the cmdlet and error records are output to the error stream. The error stream can be inspected to see what went wrong, and you still get the output.

Where I missed the point

What I’ve been teaching (and I’m not alone) is that the solution is to use the -ErrorAction common parameter to cause the non-terminating error to be a terminating error. That means that we can use try/catch, but it also means that we need to introduce a loop.

Adding the try/catch and -ErrorAction, it looks something like this:

foreach($comp in $computers){
   try {
     $results += get-wmiobject -class Win32_ComputerSystem -computername $comp -ErrorAction Stop
   } catch {
     write-verbose "Something went wrong with $comp : $_"
   }
}

Before saying anything else, let me say this…it works.

Unfortunately, it misses the point.

An aside

If you ever find yourself writing code that sidesteps something that the PowerShell team put in place, you should take a step back and see if you’re doing the right thing. The PowerShell team is really, really smart, so if you’re working around them, you probably missed the point (like I did).

Why this is missing the point

One thing that people often miss about PowerShell cmdlets is how often they let you pass lists as arguments. The -ComputerName parameter is one such place. By passing a list of computers to Get-WMIObject, you let PowerShell execute the command against all of those computers “at the same time”. There is overhead, and it’s not multi-threaded, but since most of the work is being done on other machines, you really do get a huge performance increase. It might take five times longer to hit 100 machines than a single machine, but it won’t be anything like 100 times slower.

By introducing a loop, we’ve guaranteed that the time it takes will be at least 100 times as long, because each cmdlet execution is being done in sequence. Using an array (or list) as the argument would allow most of the work to be done more or less in parallel.

That’s not to mention the fact that now we’ve taken on the responsibility of adding the individual results into a collection.  Not a big deal, but anywhere you write more code is a place to have more bugs.

So what’s the right way to do this?

In my opinion, a much better way to do this kind of activity would be to continue to pass the list, but use the -ErrorAction and -ErrorVariable parameters in conjunction to get the best of both worlds. It would look something like this:

try {
  $results = get-wmiobject -class Win32_ComputerSystem -computername $computers -ErrorAction SilentlyContinue -ErrorVariable Problems
} catch {
  write-verbose "Something went wrong : $_"
}
foreach($errorRecord in $Problems){
  write-verbose "An error occurred here : $errorRecord"
}

With this construction, we’re only calling get-wmiobject once, so we get the speed of parallel execution. By using -ErrorAction SilentlyContinue, we won’t have any error records (non-terminating errors) written to the error stream. That means, no red text in our output. By the way, SilentlyContinue will write the error records to the $Error automatic variable. If you don’t want that, you can use -ErrorAction Ignore instead.

The “key” to making this technique work is -ErrorVariable Problems. This collects all of the non-terminating errors output by the command, and puts them in the variable $Problems (remember to leave the $ off when using -ErrorVariable). Since I have those in a variable, I can loop through them after I get the results and do whatever I need to with them.
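Each entry in $Problems is a full ErrorRecord object, not just a message, so the loop can pull out whichever details matter to you. A sketch:

```powershell
foreach($errorRecord in $Problems){
    # An ErrorRecord carries the underlying exception and an error category,
    # among other details, so you can report or branch on them individually.
    '{0}: {1}' -f $errorRecord.CategoryInfo.Category, $errorRecord.Exception.Message
}
```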

Finally (no pun intended), I put the cmdlet call in a try/catch in case it throws an exception (for instance, out of memory).

So, to summarize, I get the speed of only calling the cmdlet once, and I also get to do something with the errors on an individual basis.

I’m sure someone in the community is teaching this pattern, but I don’t remember seeing it.

What do you think?


Why WMI instead of CIM?

I use Get-WMIObject (even though the WMI cmdlets are deprecated) for a couple of reasons. First, my work environment doesn’t have WinRM enabled on our laptops by default. I teach error handling before remoting, so at this point using CIM cmdlets with -ComputerName causes errors that are even harder to explain. Also, my first memorable exposure to non-terminating errors was with WMI cmdlets.

One unfortunate problem with using the WMI cmdlets is that the error records they emit do not contain the offending computer name. I filed an issue in the appropriate place, but was told that it was too late for the WMI cmdlets. Once the general principle of non-terminating errors is understood, substituting CIM cmdlets is an easy sell. Also, it’s a good reason for people to make the switch.

Lots of Recent User Group Activity!

There has been a lot of PowerShell activity in Missouri lately.

I started the Southwest Missouri PSUG in June and have had 4 successful meetings covering the following topics:

  • June – organizational
  • July – Error Handling
  • August – Pester
  • September – DSC

I also spoke at the St. Louis PSUG in August (on Error Handling).  Ken Maglio spoke in September on accessing Web services (especially RESTful services).

I was privileged to speak at the Springfield .NET UG last week and gave a “developer’s overview of PowerShell”.  BTW, it’s hard for me to try to sum up PowerShell and only talk for an hour.  Had a great time, though.

Coming in December, I will be speaking at the Northwest Arkansas Developers Group (PowerShell-related topic TBD).

And, as an exciting addition, the Kansas City PSUG had their first meeting in September!  I hope to be able to get up that way for a meeting or two before the year is out.

I really enjoy the energy and enthusiasm that I see in all of these groups and love to speak or listen to talented speakers in the community.


Celebrating Fake Internet Points in the PowerShell Community

This week, I (finally) hit 10,000 points on StackOverflow. On some level, I know it’s just fake internet points, but it’s a nice milestone.

Like everyone I know in IT, I often find useful answers to questions I have on StackOverflow. Since there are so many answered questions on that site, I generally don’t even need to ask the question, just search for it instead.

When I talk to people about StackOverflow, I always mention the awesome PowerShell presence there. Usually, if you ask a “good” question, you will have lots of people competing to quickly provide answers that are not only correct, but are also informative and helpful. I’m constantly amazed by the character of the PowerShell community. We’re all about getting things done and sharing what we use to succeed with others. I’m proud to be a part of this wonderful community.

And that brings me to the part of this “celebration” that isn’t fake.

In addition to the reputation score, StackOverflow also shows a “people reached” statistic. That means (by StackOverflow’s calculations, at least) that almost a million people have viewed my answers (and questions). That’s a bit overwhelming. I can’t tell you how many times I talk to people about PowerShell and they tell me that they’ve used one of my answers. A million people, though, is more than I can fathom.

For what it’s worth, I’m going to keep on writing, teaching, answering, and speaking about PowerShell. Maybe I’ll hit 2 million.



P.S.  When I speak about StackOverflow, I also mean to include ServerFault, which is the sysadmin-oriented site in the same family.  PowerShell questions pop up on both, but more often on the significantly more popular StackOverflow.

Voodoo PowerShell – VisioBot3000 Lives Again!

Back in January I wrote a post about how VisioBot3000 had been broken for a while, and my attempts to debug and/or diagnose the problem.

In the process of developing a minimal example that illustrated the “breakage”, I noticed that accessing certain Visio object properties caused the code to work, even if the values of those properties were not used at all.

It’s been almost six months now, and I have no idea why that code makes any difference. So instead of letting VisioBot3000 die, I decided to take the easy route, and incorporate the “nonsense” code in the VisioBot3000 module.

If you look at the latest commit (as of this writing), the New-VisioContainer function (in VisioContainer.ps1) starts with the following single line of nonsense:


In that code, I’m using a module-level reference to the Visio application, getting the active document from it, and retrieving the first page. And then I’m throwing away the reference that I just retrieved. The only thing that I can imagine is doing anything is the Pages[1] call. It’s possible that the COM object is doing something internally in addition to pulling back the first page, but that’s grasping at straws.
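For context, the line looked roughly like this. The variable name below is my stand-in; the actual module uses its own internal reference to the Visio application:

```powershell
# Voodoo: fetch the first page of the active document, then discard it.
# Removing this line breaks New-VisioContainer, and I have no idea why.
$null = $VisioApplication.ActiveDocument.Pages[1]
```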

And that’s why I call this Voodoo PowerShell. I’m using code that I don’t understand because I get what I want from it. It’s a meaningless ritual. I hate including it, but I hate that the module has been largely unchanged for a year even worse.

I will be trying to make more regular updates to VisioBot3000 in the near future, and will be presenting on it at the second SWMO PSUG meeting scheduled for next week.

Let me know what your thoughts are.


Get-Command, Aliases, and a Bug

I stumbled across some interesting behavior the other day as I was demonstrating something that I understand pretty well.

[Side note…this is a great way to find out things that you don’t know…confidently explain how something works, and demo it.]

I was asked to give an overview of how modules work in PowerShell. I’ve been writing and using modules since PowerShell 2.0 came out (2009?) so I didn’t think there was anything (at least anything basic) that I wasn’t comfortable with. Not to say that there aren’t module concepts I’m not super-clear on, but the basics should have been all worked out.

After explaining the concepts of modules (encapsulating functions, variables, aliases) and showing how PowerShell knows where to look for modules, I turned to an example module I had written.

I won’t replicate that module here, because the contents don’t really matter. I’ve boiled the “weirdness” into a simple example and it looks like this:

function Get-Thing{
    'Got the thing'    # the body doesn't matter for this demo
}

new-alias -Name MyDir -Value Get-ChildItem
new-alias -Name Func  -Value Get-Thing
new-alias -Name File -Value Get-Item
new-alias -Name Get-TheThing -Value Get-Thing
Export-ModuleMember -Function * -Alias *

If you save that as SampleModule.psm1 (in a same-named folder in the PSModulePath), you will be able to play along with me.

I showed the group that Import-Module was able to import the module using the name only, not requiring the user to know what path the module was installed into. Then, I thought “I’ll show them how to use Get-Command to find the items that were imported from the module!”

Get-Command -module SampleModule

Imagine my surprise when I saw the following output:

Only one of the four aliases showed up.

Further qualifying the Get-Command call by specifying the CommandType (which should logically show fewer results) showed all four.

As a side note, I was also able to see the other aliases by using the -All switch, even though the help for -All says it is used for showing commands that are hidden due to naming collisions.
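To summarize the behavior, assuming the SampleModule above is installed on the PSModulePath:

```powershell
Import-Module SampleModule
Get-Command -Module SampleModule                      # functions, but only one of the aliases
Get-Command -Module SampleModule -CommandType Alias   # all four aliases
Get-Command -Module SampleModule -All                 # everything, via the -All switch
```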

This isn’t a huge thing, but I did go ahead and add it to the PowerShell User Voice here. I’ve reproduced it in 5.0 and 5.1. I wouldn’t be surprised if it has been this way for a while.

What do you think? Surprised by this result?

Let me know in the comments.