Getting Data From the Middle of a PowerShell Pipeline

Pipeline Output


If you’ve used PowerShell for very long, you know how to get values out of a pipeline.

$values= a | b | c

Nothing too difficult there.

Where things get interesting is if you want to get data from the middle of the pipeline. In this post I’ll give you some options (some better than others) and we’ll look briefly at the performance of each.

Method #1

First, there’s the lovely and often overlooked Tee-Object cmdlet. You can pass the name of a variable (i.e. without the $) to the -Variable parameter and the values coming into the cmdlet will be written to that variable.

For instance:

Get-ChildItem c:\ -Recurse | 
    Select-Object -Property FullName,Length | 
    Tee-Object -Variable Files | 
    Sort-Object -Property Length -Descending

After this code has executed, the variable $Files will contain the filenames and lengths before they were sorted.  Note that Tee-Object’s -Append switch only applies when you tee to a file with -FilePath; there’s no built-in way to append to the -Variable target.

Tee-Object is easy to use, but it’s an entire command that’s essentially not doing anything “productive” in the pipeline. If you need to get values from multiple places in the pipeline, each would add an additional Tee-Object segment to the pipeline. Yuck.

Method #2

If the commands you’re using in the pipeline are advanced functions or cmdlets (and you’re only writing advanced functions and cmdlets, right?), you can use the -OutVariable common parameter to send the output of the command to a variable.  Just like with Tee-Object, you only want to use the name of the variable.

If you’re dealing with cmdlets or advanced functions, this is the easiest and most flexible solution. Getting values from multiple places would just involve adding -OutVariable parameters to the appropriate places.

Get-ChildItem c:\ -Recurse | 
    Select-Object -Property FullName,Length -OutVariable Files | 
    Sort-Object -Property Length -Descending 

This has the benefit of one less command in the pipeline, so that’s a nice bonus. If you want to append to an existing variable, here you would use a plus (+) in front of the variable name (like +Files).
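For example (using Get-Process purely as an illustration):

```powershell
Get-Process -Name p* -OutVariable Procs | Out-Null    # creates $Procs
Get-Process -Name s* -OutVariable +Procs | Out-Null   # +Procs appends to $Procs
$Procs.Count                                          # total across both calls
```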

Method #3

This method is simply to break the pipeline at the point you want to get the values and assign to a variable. Then, pipe the variable to the “remainder” of the pipeline. Nothing crazy. Here’s the code.

$files=Get-ChildItem c:\ -Recurse | 
    Select-Object -Property FullName,Length 
$files | Sort-Object -Property Length -Descending 

If you want to append, you could use the += operator instead of the assignment operator.

If you want to capture multiple “stages” in the pipeline, you could end up with a bunch of assignments and not much pipeline left.

Method #4

This method is similar to method #3, but uses the fact that assignment statements are also expressions. It’s easier to explain after you’ve seen it, so here’s the code:

($files=Get-ChildItem c:\ -Recurse | 
    Select-Object -Property FullName,Length) | 
    Sort-Object -Property Length -Descending 

Notice how the first part of the pipeline (and the assignment) are inside parentheses? The value of the assignment expression is the value that was assigned, so this has the benefit of getting the variable set and passing the values on to the remainder of the pipeline.

If you want to get multiple sets of values from the pipeline, you would need to nest these parenthesized assignments multiple times. Statements like this can only be used as the first part of a pipeline, so don’t try something like this:

Get-ChildItem c:\ -Recurse | 
    Select-Object -Property FullName,Length | 
    ($Sortedfiles=Sort-Object -Property Length -Descending) 


I used the benchmark module from the gallery to measure the performance of these 4 techniques. I limited the number of objects to 1000 and staged those values in a variable to isolate the pipeline code from the data-gathering.

$files=dir c:\ -ErrorAction Ignore -Recurse | select-object -first 1000

$sb1={$files | select-object FullName,Length -OutVariable v1 | sort-object Length -Descending}
$sb2={$files | select-object FullName,Length | tee-object -Variable v2| sort-object Length -Descending}
$sb3={$v3=$files | select-object FullName,Length; $v3 | sort-object Length -Descending}
$sb4={($v4=$files | select-object FullName,Length) | sort-object Length -Descending}
Measure-These -ScriptBlock $sb1,$sb2,$sb3,$sb4 -Count 100 | Format-Table

Title/no. Average (ms) Count   Sum (ms) Maximum (ms) Minimum (ms)
--------- ------------ -----   -------- ------------ ------------
        1     98.60119   100   9860.119     131.7581      87.6203
        2    120.32475   100 12032.4754     150.4985     104.6586
        3    100.92144   100 10092.1436     132.2665      90.0685
        4     98.48383   100   9848.383     135.5229      84.7717

The results aren’t particularly interesting. Tee-Object is about 20% slower than the rest, but other than that they’re all about the same. I’m a little bit disappointed, but 20% isn’t that big of a difference, and the cleaner syntax and flexibility of -OutVariable don’t cost anything (in my opinion).

BTW, those timings are for Windows PowerShell 5.1. The numbers for PowerShell 6.0 (Core) are similar:

Title/no. Average (ms) Count  Sum (ms) Maximum (ms) Minimum (ms)
--------- ------------ -----  -------- ------------ ------------
        1    120.97498    10 1209.7498     136.1319     112.0041
        2     139.9865    10  1399.865      147.659     132.1466
        3    128.86957    10 1288.6957     148.0096     115.0421
        4    119.44978    10 1194.4978     142.9651     109.1328

Here we see slightly less spread (17%), but all of the numbers are a bit higher.

I’ll probably continue to use -OutVariable.

What about you?

PowerShell Reflection-Lite

N.B. This is just a quick note to relate something I ran into in the last couple of weeks. Not an in-depth discussion of reflection.


Reflection is an interesting meta-programming tool. Using it, we can find (among other things) a constructor or method that matches whatever criteria we want including name, # of parameters, types of parameters, public/private, etc. As you can imagine, using reflection can be a chore.

I have never had to use reflection in PowerShell. Usually, `Get-Member` is enough to get me what I need.

Dynamic Commands in PowerShell

I have also talked before about how PowerShell lets you be dynamic in ways that are remarkably easy.

For instance, you can invoke an arbitrary command with arbitrary arguments with a command object (from `Get-Command`), and a hashtable of parameter/argument mappings simply using `& $cmd @params`.
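Here’s a minimal sketch (the command and arguments are arbitrary):

```powershell
$cmd = Get-Command Get-ChildItem
$params = @{ Path = 'C:\Windows'; Filter = '*.exe' }

# Splat the hashtable as named parameters against the command object
& $cmd @params
```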

That’s crazy easy. Maybe I’ve missed that kind of functionality in other languages and it’s been there, but I don’t think so. At least not often.

I had also seen that the following work fine:

$hash=@{A=1;B=1}
$prop='A'

#use the key as a property
$hash['A'] -eq $hash.A

#use a variable as a property name
$hash['A'] -eq $hash.$prop

#use a string literal as a property name
$hash['A'] -eq $hash.'A'

What I found

I was working on some dynamic WPF stuff (posts coming this week, I promise) and needed to add an event handler to a control. The problem was that the specific event I was adding a handler for was a parameter. In case you didn’t know, adding an event handler to a WPF control looks something like this (we’ll use the TextChanged event):
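Something along these lines, assuming $textbox holds a WPF TextBox:

```powershell
# Add_<EventName> is the method WPF objects expose for attaching a handler
$textbox.Add_TextChanged( { Write-Host 'The text changed!' } )
```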


Or, if you prefer, you can omit the parentheses if the only parameter is a scriptblock:
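Again assuming $textbox is a WPF TextBox:

```powershell
$textbox.Add_TextChanged{ Write-Host 'The text changed!' }
```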


The issue was that the name of the method is different for each event. I thought “Oh, no! I’m going to have to use reflection”.

But then I thought…I wonder if PowerShell has already taken care of this. I tried the following:

# $control, $eventName, and $action were parameters 
$control."Add_$eventName"($action)

I figured that the worst that could happen was that it would blow up and I’d dig out a reference on using reflection (and probably translate from C#).

Instead, it worked like a charm.

Chalk another win up for the PowerShell team. In case you hadn’t noticed, they do good work.


No PowerShell Goals for 2018

After three years (2015, 2016, 2017) of publishing yearly goals, I’ve decided to not do that this year.

One reason is that I’ve not done a great job of keeping these goals in the forefront of my mind, so they haven’t (for the most part) been achieved.

I definitely fell off the wagon a few times in terms of keeping up with regular posting here. 27 posts last year, so about one every 2 weeks. I’d like to get to where I’m posting twice per week.

I did not work on any new projects (writing, video course, etc.) throughout the year.

In 2017 I’ve been working on:

  • VisioBot3000 – now in the PSGallery
  • Southwest Missouri PowerShell User Group (SWMOPSUG) – meeting since June
  • Speaking at other regional groups (STL and NWA)

Recently (mostly in 2018), I’ve also been working on:

  • PowerShell Hosting
  • WPF in PowerShell (without XAML)

I’m going to try to get back on the ball and post twice a week. Weekly goals rather than yearly…that way if I mess up a week, I can still succeed the next one. 🙂


Visio Constants in VisioBot3000

One of the great things about doing Office automation (that is, COM automation of Office apps) is that all of the examples are filled with tons of references to constants. A goal of VisioBot3000 was to make using those constants as easy as possible.

I mentioned the issue of having so many constants to deal with in a post over 18 months ago, but haven’t ever gotten around to showing how VisioBot3000 gives you access to some (most?) of the Visio constants.

First, here’s a snippet of code from that post:

$connector.CellsSRC(1,23,10) = 16
$connector.CellsSRC(1,23,19) = 1 

which includes the following constants (except that it uses the values rather than the names):

  • visSectionObject = 1
  • visRowShapeLayout = 23
  • visSLORouteStyle = 10
  • visLORouteCenterToCenter = 16
  • visSLOLineRouteExt = 19
  • visLORouteExtStraight = 1

VisioBot3000 Constants Take 1

So, the straight-forward thing to do would be to define a variable for each of the constants like this:

$visSectionObject = 1
$visRowShapeLayout = 23
$visSLORouteStyle = 10
$visLORouteCenterToCenter = 16
$visSLOLineRouteExt = 19
$visLORouteExtStraight = 1

With those definitions (in the module, of course), we could re-write the code above like this:

$connector.CellsSRC($visSectionObject,$visRowShapeLayout,$visSLORouteStyle) = $visLORouteCenterToCenter
$connector.CellsSRC($visSectionObject,$visRowShapeLayout,$visSLOLineRouteExt) = $visLORouteExtStraight

That looks a lot less cryptic, or at least the constant names go a long way towards helping explain what’s going on.

Even better, if we had all of the Visio constants defined this way it would make translating example code (and recorded macro code) a lot easier.

Wait…there’s a problem.

There are over 2000 constants defined. Guess how long it takes PowerShell to parse and execute over 2000 assignment statements? Too long, as in several seconds.
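You can get a rough feel for this yourself (the generated variable names here are made up):

```powershell
# Build one big script of assignment statements, then time parsing and execution
$script = (1..2637 | ForEach-Object { "`$visFake$_ = $_" }) -join "`n"
Measure-Command { Invoke-Expression $script }
```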

VisioBot3000 Constants Take 2

So, my original approach was to take a list of Visio constants from a web page which doesn’t exist anymore (it redirects to an Idera site now). Some friendly PowerShell person had gone to the trouble of creating PowerShell assignment statements for each of the Visio constants, at least up to a point. I ended up adding a couple of dozen, but that’s a drop in the bucket. There were a whopping 2637 assignment statements.

When I added that code to the VisioBot3000 module, importing the module now took a noticeable amount of time. Another fun issue is that we just added 2637 variables to the session, which causes some fun with tab completion (think $vis….waiting…waiting).

Since that wasn’t a really good solution, I thought about what would be better.

My first thought was an enumeration (enum in PowerShell). I thought briefly about lots of enumerations (one for each “set” of constants), but quickly discarded that idea as unusable.

A single enumeration would be an ok solution, but it would mean that VisioBot3000 only worked on PowerShell 5.0 or above. No dice (at least at this point).

I decided instead to create a single object called $Vis that had each constant as a property.

To do that, I needed a hashtable with 2637 members. I ended up using an ordered hashtable, but that doesn’t really change much.

The code in question is in the VisioConstants.ps1 file in VisioBot3000. It looks something like this:

$VIS = [ordered]@{
#Public Enum VisArcSweepFlags 
ArcSweepFlagConcave = 0 
ArcSweepFlagConvex = 1 
#End Enum 
#Public Enum VisAutoConnectDir 
AutoConnectDirDown = 2 
AutoConnectDirLeft = 3 
AutoConnectDirNone = 0 
AutoConnectDirRight = 4 
AutoConnectDirUp = 1 
#End Enum 

# SNIP !!!!

#Public Enum VisDiagramServices
ServiceNone = 0
ServiceAll = -1
ServiceAutoSizePage = 1
ServiceStructureBasic = 2
ServiceStructureFull = 4
ServiceVersion140 = 7
ServiceVersion150 = 8
#End Enum 
}

$VIS=New-Object PSCustomObject -Property $VIS

It turns out that the code runs quite a bit faster than the individual assignments and now there’s only one variable exposed. Intellisense is a bit slow to load the first time you do $Vis., but it’s pretty quick after that.

The re-written code above looks like this after this change:

$connector.CellsSRC($vis.SectionObject,$vis.RowShapeLayout,$vis.SLORouteStyle) = $vis.LORouteCenterToCenter
$connector.CellsSRC($vis.SectionObject,$vis.RowShapeLayout,$vis.SLOLineRouteExt) = $vis.LORouteExtStraight

It’s a tiny bit less direct, but it’s really easy to get used to.

In the spirit of full disclosure, I should mention that a few of the refactored names of constants had to be quoted in order to be valid syntax in PowerShell, like these:

'1DBeginX' = 0 
'1DBeginY' = 1 
'1DEndX' = 2 
'1DEndY' = 3 

That’s enough for now. I think I’m about to start back to work on VisioBot3000…any ideas on features that would be nice to have (or are clearly missing)?


Get-Learning : Why PowerShell?

As the first installment in this series, I want to go back to the topic I wrote on in my very first blog post back in 2009. In that post, I talked about why PowerShell (1.0) was something that I was interested enough in to start blogging.

Many of the points I mentioned there are still relevant, so I’ll repeat them now. Here are some of the things that made PowerShell awesome to me in 2009:

  • Ability to work with multiple technologies in a seamless fashion (.NET, WMI, AD, COM)
  • Dynamic code for quick scripting, strongly-typed code for production code (what Bruce Payette calls “type-promiscuous”)
  • High-level language constructs (functions, objects)
  • Consistent syntax
  • Interactive environment (REPL loop)
  • Discoverable properties/functions/etc.
  • Great variety of delivered cmdlets, even greater variety of community cmdlets and scripts
  • On a similar note, a fantastic community that shares results and research
  • Extensible type system
  • Everything is an object
  • Powerful (free) tools like PowerGUI, PSCX, PowerShell WMI Explorer, PowerTab, PrimalForms Community Edition, and many, many more. (ok…I don’t use any of these anymore)
  • Easy embedding in .NET apps including custom hosts.
  • The most stable, well-thought-out version 1.0 product I’ve ever seen Microsoft produce.
  • An extremely involved, encouraging community.

Of those things, the only ones that aren’t very relevant are the “free tools” (those tools aren’t relevant, but there are a lot of other new, free ones), and the 1.0 comment.

Since it’s been almost 11 years now, instead of talking about 1.0, let’s talk about now.

Microsoft has placed PowerShell at the focus of its automation strategy. Instead of being a powerful tool with a passionate community, it is now a central tool behind nearly everything that is managed on the Windows platform. And given the imminent release of PowerShell Core, it will soon be (officially) available on OSX and Linux to provide some cross-platform functionality for those who want it. In 2009 you could leverage PowerShell to get more stuff done. Now, in 2017, you can’t get much done without touching PowerShell.

Finally, PowerShell is a part of so many solutions now, including most (all?) of the management UIs and APIs coming out of Microsoft in the last several years. Microsoft is relying on PowerShell to be a significant part of their products. Other companies are doing the same, delivering PowerShell modules along with their products. They do this because it is a proven system for powerful automation.

Why PowerShell? Because it’s awesome.

Why PowerShell? Because it’s everywhere.

Why PowerShell? Because it’s proven.

And my final point, which hasn’t changed since I talked about it in 2009 is that PowerShell is fun!

Are you looking to start your PowerShell learning journey? Maybe you have already started and are looking for pointers. Perhaps you’ve got quite a bit of experience and you just want to fill in some gaps.

Follow along with me and get-learning!


Get-Learning – Introducing a new series of PowerShell Posts

I’ve been blogging here since 2009. In that time, I’ve tried to focus on surprising topics, or at least topics that were things I had recently learned or encountered.

One big problem with that approach is that it makes it much more difficult to produce content.

I really enjoy writing, and I’m teaching PowerShell very frequently (a bit less than 10% of my time at work) so I’m in contact with basic PowerShell topics all the time.

With that in mind, I’m going to start writing PowerShell posts that are more geared towards beginning scripters.

The series, for which I’ll be creating an “index page”, will be called Get-Learning. I hope to write at least 2 or 3 posts in this series each week for the next several months.

If you have any suggestions for topics, drop me a line.

For now, though, watch this space.


Calling Extension Methods in PowerShell

A quick one because it’s Friday night.

I recently found myself translating some C# code into PowerShell.  If you’ve done this, you know that most of it is really routine.  Change the order of some things, change the operators, drop the semicolons.

In a few places you have to do some adjusting, like changing using scopes into try/finally with .Dispose() in the finally.

But all of that is pretty straightforward.

Then I ran into a method that wasn’t showing up in the tab-completion.  I hit the dot, and it wasn’t in the list.

I had found…an extension method!

Extension Methods

In C# (and other managed languages, I guess), an extension method is a static method of a class whose first parameter is declared with the keyword this.

For instance,

public static class MyExtClass {
    public static int NumberOfEs (this string TheString) {
        return TheString.Length - TheString.Replace ("e", "").Length;
    }
}
Calling this method in C# goes like this: “hello”.NumberOfEs().

It looks like this method (which is in the class MyExtClass) is actually a string method with no parameters.

Extension Methods in PowerShell

Unfortunately, PowerShell doesn’t do that magic for you. In PowerShell, you call it just like it’s written, a static method of a different class.

So, in PowerShell, we would do the following:

$code=@'
public static class MyExtClass {
    public static int NumberOfEs (this string TheString) {
        return TheString.Length - TheString.Replace ("e", "").Length;
    }
}
'@
Add-Type -TypeDefinition $code 


Note that I’ve included the C# code in a here-string and used add-type to compile it on the fly.

The point is, when translating extension method calls into PowerShell, you need to find the extension class (in this case MyExtClass) and call the static method directly.
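With the class compiled, the PowerShell call looks like this:

```powershell
[MyExtClass]::NumberOfEs('hello')   # returns 1, since 'hello' contains one 'e'
```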

You learn something every day.


Deciphering PowerShell Syntax Help Expressions

In my last post I showed several instances of the syntax help that you get when you use get-help or -? with a cmdlet.

For instance:
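Here’s a representative syntax block for a hypothetical Test-Something command (the parameter names match the discussion below):

```
Test-Something [[-parm1] <Object>] [-parm2 <Object>] [-parm3 <Object>] [-parm4 <Object>] [<CommonParameters>]
```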

This help is showing how the different parameters can be used when calling the cmdlet.

If you’ve never paid any attention to these, the notation can be difficult to work out.  Fortunately, it’s not that hard.  There are only a few different possibilities.  In the following, I will be referring to a parameter called Foo, of type [Bar].

  • An optional parameter that can be used by position or name:
[[-Foo] <Bar>]
  • An optional parameter that can only be used by name:
[-Foo <Bar>]
  • A required parameter that can be used by position or name:
[-Foo] <Bar>
  • A required parameter that can only be used by name:
-Foo <Bar>
  • A switch parameter (switches are always optional and can only be used by name):
[-Foo]

[-Foo <SwitchParameter>]  #odd, but you may see this form for a switch in the help sometimes

So, in the example above we see that we have

  • parm1, which is a parameter of type Object (i.e. no type specified), is optional and can be used by name or position
  • parm2, which is a parameter of type Object, is optional and can only be used by name
  • parm3, which is a parameter of type Object, is optional and can only be used by name
  • parm4, which is a parameter of type Object, is optional and can only be used by name

With some practice, you will be reading more complex syntax examples like a pro.

Let me know if this helps!


Specifying PowerShell Parameter Position

Positional Parameters

Whether you know it or not, if you’ve used PowerShell, you’ve used positional parameters. In the following command the argument (c:\temp) is passed to the -Path parameter by position.

cd c:\temp

The other option for passing a parameter would be to pass it by name like this:

cd -path c:\temp

It makes sense for some commands to allow you to pass things by position rather than by name, especially in cases where there would be little confusion if the names of the parameters are left out (as in this example).

What confuses me, however, is code that looks like this:

function Test-Position{
[CmdletBinding()]
Param([Parameter(Position=0)]$parm1,
      [Parameter(Position=1)]$parm2,
      [Parameter(Position=2)]$parm3,
      [Parameter(Position=3)]$parm4)
}

In this parameter declaration, we’ve explicitly assigned positions to the first four parameters, in order.

Why is that confusing? Well, by default, all parameters are available by position and the default order is the order the parameters are defined. So assigning the Position like this makes no difference (or sense, for that matter).

It gets worse!

Even worse than being completely unnecessary, I would argue that specifying positions like this is a bad practice.

One “best practice” in PowerShell is that you should (almost) always use named parameters. The reason is simple. It makes your intention clear. You intend to bind these arguments (values) to these specific parameters.

By specifying positions for all four parameters (or not specifying any) you’re encouraging the user of your cmdlet to write code that goes against best practice.

What should I do?

According to the help (about_Functions_CmdletBindingAttribute), you should use the PositionalBinding optional argument to the CmdletBinding() attribute, and set it to $false. That will cause all parameters to default to not be allowed by position. Then, you can specify the Position for any (hopefully only one or two) parameters you wish to be used by position.

For instance, this will only allow $parm1 to be used by position:

function Test-Position{
[CmdletBinding(PositionalBinding=$false)]
Param([Parameter(Position=0)]$parm1,
      $parm2,
      $parm3,
      $parm4)
}

Looking at the help for this function we see that this is true:
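The syntax portion looks something like this:

```
Test-Position [[-parm1] <Object>] [-parm2 <Object>] [-parm3 <Object>] [-parm4 <Object>] [<CommonParameters>]
```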

Because parm1 is in brackets ([-parm1]) we know that that parameter name can be omitted. The other parameter names are not bracketed (although the entire parameters/arguments are), so they are only available by name.

But wait, it gets easier

Even though the help says that all parameters are positional by default, it turns out that using Position on one parameter means that you have to use it on any parameters you want to be accessed by position.

For instance, in this version of the function I haven’t specified PositionalBinding=$False in the CmdletBinding attribute, but only the first parameter is available by position.

function Test-Position2{
[CmdletBinding()]
Param([Parameter(Position=0)]$parm1,
      $parm2,
      $parm3,
      $parm4)
}

Here’s the syntax help:
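It looks something like this:

```
Test-Position2 [[-parm1] <Object>] [-parm2 <Object>] [-parm3 <Object>] [-parm4 <Object>] [<CommonParameters>]
```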

That’s interesting to me, as it seems to contradict what’s in the help.  Specifically, the help says that all parameters are positional.  It then says that in order to disable this default, you should use the PositionalBinding parameter.  This shows that you don’t need to do that, unless you don’t want any positional parameters.

As a final example, just to make sure we understand how the Position value is used, consider the following function and syntax help:

function Test-Position3{
[CmdletBinding()]
Param(                       $parm1,
      [Parameter(Position=1)]$parm2,
                             $parm3,
      [Parameter(Position=0)]$parm4)
}
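Assuming Position 0 was assigned to $parm4 and Position 1 to $parm2, the syntax help comes out something like this:

```
Test-Position3 [[-parm4] <Object>] [[-parm2] <Object>] [-parm1 <Object>] [-parm3 <Object>] [<CommonParameters>]
```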

By including Position on 2 of the parameters, we’ve ensured that the other two parameters are only available by name. Also, the assigned positions differ from the order that the parameters are defined in the function, and that is reflected in the syntax help.

I don’t think about parameter position a lot, but to write “professional” cmdlets, it is one of the things to consider.



Missing the Point with PowerShell Error Handling

I’ve been using PowerShell for about 10 years now.  Some might think that 10 years makes me an expert.  I know that it really means I have more opportunities to learn.  One thing that has occurred to me in the last 4 or 5 months is that I’ve been missing the point with PowerShell error handling.


PowerShell Error Handling 101

First, PowerShell has try/catch/finally, like most imperative languages have in the last 15 years or so.  At first glance, there’s not much to see. I usually give an example that looks something like this:

try {
   $x = 1/0   #do something here
   write-verbose 'it worked'
} catch {
   write-verbose "An error happened : $_"
} finally {
   write-verbose 'Time to clean up'
}
Running that script with $VerbosePreference set to Continue would output
VERBOSE: An error happened : Attempted to divide by zero.
VERBOSE: Time to clean up

At this point in the explanation, most people with a development background of any kind are likely nodding their heads.

And now for something completely different

The next example shows that all is not as expected:

try {
  $results=get-wmiobject -class Win32_ComputerSystem -computername Localhost,NOSUCHCOMPUTER
} catch {
  write-verbose "An error happened : $_"
}

Most people are surprised to see red error text on the screen and the nice message nowhere to be found.

Anyone with much experience with PowerShell knows that some (most?) PowerShell cmdlets output error records (not exceptions) in some cases, and that try/catch doesn’t “catch” these error records. In PowerShell parlance, exceptions are terminating errors, and error records are non-terminating errors.
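A quick way to see the difference for yourself (any cmdlet that emits non-terminating errors will do):

```powershell
# Non-terminating: red error text appears, and 'caught' is never printed
try { Get-ChildItem C:\NoSuchFolder } catch { 'caught' }

# -ErrorAction Stop promotes it to a terminating error, so the catch block runs
try { Get-ChildItem C:\NoSuchFolder -ErrorAction Stop } catch { 'caught' }
```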

My explanation for the why the PowerShell team created non-terminating errors is this:
Imagine you managed a farm of 1000 computers. What would be the odds of all 1000 of them responding correctly to a get-wmiobject call? If anyone in the class optimistically says anything other than “slim to none”, up the number to 10,000 and repeat.

With standard “programming semantics” (i.e. exceptions, terminating errors), a call to 1000 computers which failed on any of them would immediately throw an exception and leave the try block. At that point, all positive results are lost.

As a datacenter manager, is that how you want your automation engine to work? I don’t think so.

With non-terminating errors, the correct results are returned from the cmdlet and error records are output to the error stream. The error stream can be inspected to see what went wrong, and you still get the output.

Where I missed the point

What I’ve been teaching (and I’m not alone) is that the solution is to use the -ErrorAction common parameter to cause the non-terminating error to be a terminating error. That means that we can use try/catch, but it also means that we need to introduce a loop.

Adding the try/catch and -ErrorAction, it looks something like this:

foreach($comp in $computers){
   try {
     $results+=get-wmiobject -class Win32_ComputerSystem -computername $comp -ErrorAction Stop
   } catch {
     write-verbose "Something went wrong with $comp : $_"
   }
}

Before saying anything else, let me say this…it works.

Unfortunately, it misses the point.

An aside

If you ever find yourself writing code that sidesteps something that the PowerShell team put in place, you should take a step back and see if you’re doing the right thing. The PowerShell team is really, really smart, so if you’re working around them, you probably missed the point (like I did).

Why this is missing the point

One thing that people often miss about PowerShell cmdlets is how often they let you pass lists as arguments. The -ComputerName parameter is one such place. By passing a list of computers to Get-WMIObject, you let PowerShell execute the command against all of those computers “at the same time”. There is overhead, and it’s not multi-threaded, but since most of the work is being done on other machines, you really do get a huge performance increase. It might take five times longer to hit 100 machines than a single machine, but it won’t be anything like 100 times slower.

By introducing a loop, we’ve guaranteed that the time it takes will be at least 100 times as long, because each cmdlet execution is being done in sequence. Using an array (or list) as the argument would allow most of the work to be done more or less in parallel.

That’s not to mention the fact that now we’ve taken on the responsibility of adding the individual results into a collection.  Not a big deal, but anywhere you write more code is a place to have more bugs.

So what’s the right way to do this?

In my opinion, a much better way to do this kind of activity would be to continue to pass the list, but use the -ErrorAction and -ErrorVariable parameters in conjunction to get the best of both worlds. It would look something like this:

try {
  $results=get-wmiobject -class Win32_ComputerSystem -computername $computers -ErrorAction SilentlyContinue -ErrorVariable Problems
} catch {
  write-verbose "Something went wrong : $_"
}
foreach($errorRecord in $Problems){
  write-verbose "An error occurred here : $errorRecord"
}

With this construction, we’re only calling get-wmiobject once, so we get the speed of parallel execution. By using -ErrorAction SilentlyContinue, we won’t have any error records (non-terminating errors) written to the error stream. That means, no red text in our output. By the way, SilentlyContinue will write the error records to the $Error automatic variable. If you don’t want that, you can use -ErrorAction Ignore instead.

The “key” to making this technique work is -ErrorVariable Problems. This collects all of the non-terminating errors output by the command, and puts them in the variable $Problems (remember to leave the $ off when using -ErrorVariable). Since I have those in a variable, I can loop through them after I get the results and do whatever I need to with them.

Finally (no pun intended), I put the cmdlet call in a try/catch in case it throws an exception (for instance, out of memory).

So, to summarize, I get the speed of only calling the cmdlet once, and I also get to do something with the errors on an individual basis.

I’m sure someone in the community is teaching this pattern, but I don’t remember seeing it.

What do you think?


Why WMI instead of CIM?

I use Get-WMIObject (even though WMI cmdlets are deprecated) for a couple of reasons. First, my work environment doesn’t have WinRM enabled on our laptops by default. I teach error handling before remoting, so at this point using CIM cmdlets with -ComputerName causes errors that are even harder to explain. Also, my first memorable exposure to non-terminating errors was with WMI cmdlets.

One unfortunate problem with using WMI cmdlets is that the error records that it emits do not contain the offending computername. I filed an issue in the appropriate place, but was told that it was too late for WMI cmdlets. Once the general principle of non-terminating errors is understood, substituting CIM cmdlets is an easy sell. Also, it’s a good reason for people to make the switch.