PowerShellStation.com update

I just changed the syntax highlighting used by the site (to SyntaxHighlighter Evolved). One reason is that it’s much easier to use.

I have tried to go through the older posts and update the markup with the proper tags so the new plugin highlights them correctly. If you notice one that doesn’t look quite right, let me know.

Mike

Best Practices Update and some Scripting Games thoughts

Just a quick note to let you know that I haven’t given up on writing about PowerShell best practices. A few things, though, have derailed my thinking:

  • I thought my first “best practice” was a no-brainer. After I wrote it, I got to thinking about what actual benefit there is to sticking to single quotes rather than using double quotes. Perhaps it makes sense to use double quotes all the time, unless you specifically don’t want interpolation or control characters.
  • The 2013 Scripting Games started. Reading the comments by the community regarding the scripts has been a real eye-opener about how people feel about different topics. I think I’ll probably wait until the games are over and try to compile a list of what everyone seems to agree on.

With regard to the Scripting Games, if you haven’t gotten involved with them it’s not too late. There are still 2 events left (I think). Even if you don’t feel up to competing, looking at over a hundred different implementations of the same problem will definitely get your brain working on some new stuff to try in your scripts. Maybe some technique you hadn’t really used before (splatting? parameter validation? pipeline input? comment-based help?). Take some time to read through some of the entries and at the very least you’ll start to develop an opinion on what “good” means in a script. If you do enter, don’t worry too much about the judging. The point values have been “evolving” over time and the important thing (to me) is the constructive comments I’ve received on my scripts. Some of the comments haven’t been accurate (or helpful), but hey, you get what you pay for.

My hat is definitely off to Don Jones and the rest of the PowerShell.org folks for hosting this. If you’ve been watching the forums at all, you can tell that they’re working hard to make it successful. If you’ve looked at scripts, you know that they’ve added a lot of awesome functionality on the judging side for how the commenting and scoring is handled.

Looking forward to event 5.

Mike

PowerShell Best Practice #1 – Use Single Quotes

I’m going to kick off this series with a no-brainer.

In PowerShell, there are 2 ways to quote strings: single quotes (') or double quotes ("). This is probably not a surprise to you if you’ve seen PowerShell scripts before.

A “best practice” in PowerShell is that you should always default to using single-quotes unless you specifically need to use double-quotes. The reasons to use double-quotes are:

  • To enable substitution in the string (variables or expressions)
  • To utilize escape sequences in the string (introduced by backtick `)
  • To simplify embedding single-quotes in the string (without doubling the single quotes)
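
For example, here’s a quick sketch of each case:

$name = 'World'

#1. Substitution only happens inside double quotes
"Hello, $name"        #Hello, World
'Hello, $name'        #Hello, $name (literal)

#2. Escape sequences (backtick) are only special inside double quotes
"Column1`tColumn2"    #a tab between the words
'Column1`tColumn2'    #a literal backtick and t

#3. Embedding single quotes
"It's easy"           #no doubling needed
'It''s easy'          #inside single quotes, the quote must be doubled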

I have to admit, I find myself getting lazy about this and switching between types of quotes with no rhyme or reason. In fact, sometimes I see that I’m using double-quotes as the default just in case I end up doing variable substitution. In my opinion, however, this is not something I should be doing.

Here’s a post from Don Jones about quoting.

Anyone disagree with this one?

PowerShell Best Practices

I’ve seen several posts on PowerShell best practices, and even read Ed Wilson’s book on the subject. There is some commonality in the lists in the obvious places (verb-noun, output objects, format your code nicely), and some disagreement in other areas (code signing, for example). I also see a great amount of variation in use of aliases and whether or not to name every parameter. Looking at code in various blogs shows yet another view of what common practices are (whether those are “best” or not is another question).

I’ve been thinking about “best practices” in PowerShell for a long time, and I come at it backwards. I’m really a “proof-of-concept” person. I’ve got a background in Mathematics, so my tendency is to implement something to the point where it works (for some value of “works”) and move on. Polishing scripts and focusing on quality are, unfortunately, things I’ve never really invested much in.

At work lately, I’ve started to spend some time (a few days a month) doing PowerShell training, and I’m really enjoying myself. As I’m teaching, though, I’m trying to instill in my students a love of PowerShell, and the skills they need to implement quality scripts. And to do that, I have to think about what quality means for me.

Fortunately, I recently read Don Jones and Jeffery Hicks’ new book, Learn PowerShell Toolmaking in a Month of Lunches. This book focuses almost entirely on the practice of making powerful, high-quality, reusable functions in PowerShell and I recommend it highly to anyone who uses PowerShell. It is very different from any other PowerShell book in that it isn’t a tutorial on the language or on how to use certain cmdlets to accomplish tasks.

With all this going around in my head, I’m trying to formulate a list of best practices, and I think that there’s a continuum in what should be recommended. Practices range from “required” (use meaningful variable names!) to “likely to start a religious war” (set tabs to 4 spaces, or braces should be on their own line).

Since I’ve already spent this much text just rambling, I’m thinking that it’s too late to actually start listing my thoughts out, but I’ll try to do that in the next few days. I’d really like to hear some community feedback (pro/con) on various ideas, since I know that there will never be a “final list”.

Let me know what you think.

Mike

PowerShell Splatting Tricks

If you’ve never heard of splatting in PowerShell or possibly read about it but never used it, you should probably consider it.  Briefly, splatting is the ability to package up parameters into a hashtable and use the hashtable to supply the parameters to a function call. The parameters which are passed into a function automatically populate a hashtable called $PSBoundParameters. Note that to “splat” a hashtable you use an @ in place of the normal $. So to pass $PSBoundParameters, you’d use @PSBoundParameters. If this isn’t making sense, please refer to the code example below.
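
Here’s the idea in miniature (Get-ChildItem is just a convenient example):

#package the parameters up in a hashtable...
$params = @{ Path = 'C:\Windows'; Filter = '*.exe' }

#...then splat them with @ instead of $
Get-ChildItem @params    #equivalent to Get-ChildItem -Path C:\Windows -Filter *.exe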

Why would you want to do this? I can think of a couple of instances where the functionality is very useful.

First, consider a function which calls several other, related functions. If the parameters for the “inner” functions are the same (or similar), splatting can make the resulting function calls very easy.

For example, assume we have functions which start and stop a “widget” (with some options, of course). In order to write a restart function, we can simply pass the $PSBoundParameters hashtable on to the start/stop functions.
The code could look something like this:

function start-item{
param([switch]$option1,
      [switch]$option2,
      [switch]$option3)
      #start the item using the provided options
}
function stop-item{
param([switch]$option1,
      [switch]$option2,
      [switch]$option3)
      #Stop the item using the provided options
}
function restart-item{
param([switch]$option1,
      [switch]$option2,
      [switch]$option3)
      #restart the item using the provided options
      #note, the hashtable with parameters passed to this function is called $PSBoundParameters
      stop-item @PSBoundParameters
      start-item @PSBoundParameters
}

Note that with 3 switch parameters there are 2^3 = 8 possible combinations, so without splatting you’d have to write 8 different if/then branches to pass the appropriate switches to these functions.
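
Just to make that pain concrete, here’s a sketch of how the no-splatting version (call it restart-item-nosplat, a hypothetical name) might start:

function restart-item-nosplat{
param([switch]$option1,
      [switch]$option2,
      [switch]$option3)
      #one branch per combination of switches...
      if($option1 -and $option2 -and $option3){
         stop-item -option1 -option2 -option3
         start-item -option1 -option2 -option3
      }
      elseif($option1 -and $option2){
         stop-item -option1 -option2
         start-item -option1 -option2
      }
      #...and six more branches to cover the remaining combinations
}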

Another example of splatting comes up when writing a proxy function: you will often be adding or removing parameters from a given function, and you will need to adjust the hashtable accordingly before passing it into the “wrapped” function.

The first several times I used splatting, I was simply passing $PSBoundParameters to a subordinate function. In many cases, though, you’ll be constructing your own hashtable or modifying $PSBoundParameters in order to supply parameters to another function.
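
For instance, here’s a minimal sketch of that pattern (the -LogPath parameter is hypothetical): the outer function takes one extra parameter, strips it out of $PSBoundParameters, and splats the rest on to the inner functions from the earlier example.

function restart-item{
param([switch]$option1,
      [switch]$option2,
      [string]$LogPath)   #hypothetical extra parameter the inner functions don't accept
      if($LogPath){ "restarting item" | out-file $LogPath -append }
      #remove the parameter that stop-item/start-item don't understand
      [void]$PSBoundParameters.Remove('LogPath')
      stop-item @PSBoundParameters
      start-item @PSBoundParameters
}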

A couple of corner-cases which I haven’t ever seen discussed are:

  1. How are switch parameters handled in a hashtable since there isn’t really a value?
  2. Can I mix splatting with normal parameter passing?

To answer these questions, we’ll write a simple function (all best-practices are out the window) and try it out.

function show-splat{
param($name,[switch]$hello)
  $PSBoundParameters | out-string
  if($hello){
    write-host "Hello!"
  }
  write-host $name
}

Calling it as such gives us the answers:

PS C:\> $parms=@{hello=$true}
PS C:\> show-splat -name Mike @parms
Key                            Value
---                            -----
name                           Mike
hello                          True

Hello!
Mike

We see clearly that the switch ($hello) is simply passed as a boolean value, and we were able to mix a named parameter (-name) with splatting.  Mixing the two could be useful if you had a lengthy command-line and wanted to specify options for it using splatting or if you only wanted to pass a selection of parameters on to another function.

Anyway, splatting is a powerful technique which can easily simplify your functions.  Let me know if you have situations where it seems appropriate.

A Remoting Issue with PowerShell 3 Beta

I’ve been doing some thinking about PowerShell Remoting for a project at work and realized that I hadn’t ever set up remoting on my “home” laptop. I’m not in a domain, so remoting configuration is a bit different. In any case, I would be using the same machine as source and target of the remoting call, so how could it go wrong?

First of all, VMWare had set up some network adapters and placed them in a public profile. Enable-PSRemoting doesn’t like that. It was an easy google (bing?) to fix and Enable-PSRemoting succeeded.

I then issued this:

invoke-command -scriptblock { get-process | select-object -first 10 } -computername localhost

Imagine my surprise when the result was this:

Could not find file 'C:\Windows\System32\WindowsPowerShell\v1.0\Event.Format.ps1xml'.
    + CategoryInfo          : OpenError: (:) [], RemoteException
    + FullyQualifiedErrorId : PSSessionStateBroken

I searched the internet for this, but only found one hit that was close, and that was a bug report for nuget.

It seems that the PowerShell engine running the remote payload is looking for a formatting file that doesn’t exist. To work around this, I simply copied an existing Format.ps1xml file (I chose Registry.format.ps1xml because it was the smallest), removed the signature from it, and changed the name of the view (so it wouldn’t change any output).
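
In case it’s useful, the workaround looks roughly like this (the edits themselves are manual, and you’ll need an elevated prompt to write to that directory):

#copy the smallest existing format file to the name the remote engine is looking for
copy-item "$PSHOME\Registry.format.ps1xml" "$PSHOME\Event.Format.ps1xml"

#then edit Event.Format.ps1xml by hand:
#  - remove the signature block (the large "SIG # Begin signature block" comment at the end)
#  - rename the <Name> element of each view so it doesn't collide with the real registry views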

It’s not a big bug, and it’s a beta so I’m not worried. Just thought I’d share my workaround.

-Mike

Speeding up PowerShell Webcast by Dr. Tobias Weltner

If you’ve done much looking around, you know that there’s an awful lot of great information about PowerShell available on the web. The community that has formed around this product is one of its strengths. You’re probably familiar with the name Tobias Weltner. His Master-PowerShell e-book has long been a resource that I’ve turned to for examples and explanations. I recently watched a webcast that Dr. Weltner did as part of a series of webcasts at idera.com. The title of the webcast is “Speeding up PowerShell: Multithreading”. When I got the announcement, I thought it was going to be about using the [System.Threading] namespace. Boy, was I wrong.

The talk starts off by discussing times when it might make sense to avoid using the pipeline. Once you see the material, it makes perfect sense. He then moves on to using PowerShell jobs to perform tasks, discussing the pros and cons of that approach. Finally, he talks about using the Runspace class to run separate PowerShell instances. It uses the .NET classes directly, but still manages to be very readable, very approachable PowerShell. He provides several examples in each section (including a function that executes a PowerShell scriptblock with a timeout, something I’d never seen before).
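
As a taste of the jobs-based approach, here’s my own rough sketch of a scriptblock-with-timeout function (not Dr. Weltner’s implementation):

function invoke-withtimeout{
param([scriptblock]$script,[int]$seconds=10)
    $job = start-job -scriptblock $script
    #wait-job returns the job if it finishes in time, and nothing on a timeout
    if(wait-job $job -timeout $seconds){
        receive-job $job
    }
    else{
        stop-job $job
        write-warning "scriptblock timed out after $seconds seconds"
    }
    remove-job $job -force
}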

All in all, this was easily the best webcast on PowerShell I’ve ever watched. Unlike most PowerShell videos I’ve seen, it wasn’t targeting a beginner, but someone who already knows the basics of scripting and wants to learn more. The techniques he presents are, as I have said, very straightforward and explained very well. I can already think of several examples of code that I’m probably going to be writing in the near future based on this presentation.

Importing Modules using -AsCustomObject

I recently got thinking about the -AsCustomObject switch for the Import-Module cmdlet. I have seen it several times in discussions of implementing “classes” in PowerShell. Here’s a typical (i.e. trivial) example:

#module adder.psm1
function add-numbers($x,$y){
   return $x+$y
}

With that module, we can do the standard module stuff:

PS> import-module adder
PS> add-numbers 1 2
3

Ok, that was way too basic. Here’s something a lot closer to the topic at hand:

PS> $adder=import-module adder -ascustomobject
PS> $adder | gm

   TypeName: System.Management.Automation.PSCustomObject

Name        MemberType   Definition                    
----        ----------   ----------                    
Equals      Method       bool Equals(System.Object obj)
GetHashCode Method       int GetHashCode()             
GetType     Method       type GetType()                
ToString    Method       string ToString()             
add-numbers ScriptMethod System.Object add-numbers(); 

PS> $adder.add-numbers(1,2)
Unexpected token '-numbers' in expression or statement.
At line:1 char:11
PS>  $adder."add-numbers"( 1, 2)
3

There are several interesting things to notice about this example. First of all, note that the add-numbers function has become a scriptmethod on the $adder object. As the help topic for import-module states, the members of the custom object are the (exported) members of the module. When we try to call the add-numbers method, we find that our decision to use the verb-noun naming convention has bitten us: the hyphen isn’t legal in method-call syntax. To use the method, we need to enclose the offending method name in quotes (both single and double work fine). Note that since this is a method, we need to use commas to separate the arguments to the function.

A second thing to note is that since this is a method, not a function, we can’t skip arguments.

PS> $adder."add-numbers"(,2)

Note that we could definitely do

add-numbers -y 2

if we had used a normal import-module. Granted, in this case there would be no need to.

What if we try to fix the quotation issue by adding an alias (say, AddNumbers) to the module and exporting it?

function add-numbers($x,$y){
   return $x+$y
}
new-alias addNumbers add-numbers
export-modulemember -Function * -Variable * -alias *

Here’s what we find:

PS> $adder=import-module adder -ascustomobject -force
PS> $adder | gm

   TypeName: System.Management.Automation.PSCustomObject

Name        MemberType   Definition                    
----        ----------   ----------                    
Equals      Method       bool Equals(System.Object obj)
GetHashCode Method       int GetHashCode()             
GetType     Method       type GetType()                
ToString    Method       string ToString()             
add-numbers ScriptMethod System.Object add-numbers(); 

PS> get-alias AddNumbers

Capability      Name                             ModuleName                                                 
----------      ----                             ----------                                                       
Script          addNumbers -> add-numbers        adder

Hey! Our alias is missing. Unfortunately, it got imported into the global scope (possibly hiding another function). Note that I used the -force switch to make sure that we re-import it if it was already loaded.

When I first read about the -asCustomObject switch, I could see myself using it to import modules that had conflicting function names, and using the custom objects to call the methods in question. However, consider a function with a large number of switches or parameters. With an “-ascustomobject” object, you would need to supply all of the switches and parameters positionally, since method syntax doesn’t allow named parameters. And what about a function which uses parametersets? As it turns out, scriptmethods don’t seem to use parametersets. Here’s a function to demonstrate (I added it to the adder module):

function test-psets{
param([Parameter(ParameterSetName="Set1")]$x,
      [Parameter(ParameterSetName="Set2")]$y)
      switch ($PsCmdlet.ParameterSetName){
         "Set1"  {write-host "we're using Set1"}
         "Set2"  {write-host "we're using Set2"}
         default {write-host "don't know what parameter set we're in"}
      }
      write-host "we had better be using $($PsCmdlet.ParameterSetName)"
}

Calling that function on a custom object looks like this:

PS> $adder=import-module adder -ascustomobject -force
PS> $adder."test-psets"(1)   #should use pset 1, since we're only using the first parameter
don't know what parameter set we're in
we had better be using 
PS> $adder."test-psets"(1,2)  #shouldn't be valid, since they're different parametersets
don't know what parameter set we're in
we had better be using
PS> #Sanity check to make sure the function works 
PS> import-module adder 
PS> test-psets -x 1
we're using Set1
we had better be using Set1

PS> test-psets -y 1
we're using Set2
we had better be using Set2

PS> test-psets -x 1 -y 2
test-psets : Parameter set cannot be resolved using the specified named parameters.
At line:1 char:1
+ test-psets -x 1 -y 2
+ ~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [test-psets], ParameterBindingException
    + FullyQualifiedErrorId : AmbiguousParameterSet,test-psets


So it seems that functions that utilize parametersets are going to be a lot less useful with -AsCustomObject imports.

As I mentioned, there are several examples floating around concerning creating new objects (or classes, depending on your perspective) using modules and this option. Given the drawbacks I’ve noted in this article, I think I’m going to stay away from that particular use case.

What do you think? Did I miss something important? Please let me know what your opinion is.

-Mike

Aggregation In PowerShell (and another pointless function)

I’ve been doing a lot of thinking about “idiomatic PowerShell” since my last post and my thinking led me to an idea that I haven’t actually used, but seems like the kind of thing that people would do in PowerShell.

If I were writing a script that needed to get a “bunch of things” from somewhere (perhaps several different sources) and return all of them, I might be tempted to do something like this. Please forgive my PowerShell pseudocode:

function get-stuff{
param($parm1)
    $results=@()
    foreach ($source in $sources){
        $results += ($source | where { $_ -and "Some condition exists" })
    }
    foreach ($source in $someothersources){
        $results += ($source | where { $_ -and "Some condition exists" })
    }
    return $results
}

I’ve used several permutations of that kind of code, using arrays of some sort to collect the results as I go along and eventually returning the collection from the function. I’m not sure that there’s anything wrong with doing it this way; that is, it’s unlikely to cause you any real issues.

On the other hand, it’s more idiomatic (i.e., more in the style of the PowerShell language) to do something like this (again, pardon the pseudocode):

function get-stuff{
param($parm1)
    foreach ($source in $sources){
        $source | where { $_ -and "Some condition exists" }
    }
    foreach ($source in $someothersources){
        $source | where { $_ -and "Some condition exists" }
    }
}

All I’m doing here is sending the output of the inner statements (which are pipelines) to the output stream of the function. Note that there’s no need for a variable to accumulate the results in. Using the output stream makes this function work more like the built-in cmdlets in PowerShell, as it won’t block the pipeline.

The only thing that I have against this code is that it goes against rule #2 that I wrote last time about writing values to the output stream. I said there that if you were going to write to the output stream, you should explicitly use write-output. We could modify the code above to use write-output, but that would involve using parentheses (around the pipelines), messing up the flow of the code, and even blocking the pipeline while the expressions in the parentheses were collected (as an argument to write-output).
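
For reference, the explicit write-output version would look something like this. Note that each parenthesized pipeline has to run to completion before write-output sees anything:

function get-stuff{
param($parm1)
    foreach ($source in $sources){
        #the parentheses collect the entire inner pipeline before write-output runs
        write-output ($source | where { $_ -and "Some condition exists" })
    }
    foreach ($source in $someothersources){
        write-output ($source | where { $_ -and "Some condition exists" })
    }
}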

That brings me to what I was saying about “another pointless function”. About a year ago I wrote a post about the identity function, which doesn’t really do anything except return its input. It’s a really useful function for creating lists and such, allowing you to skip a bunch of punctuation. It’s not a pointless function, but it’s not one that gets much press, either. I was thinking about how to make the “pipeline” version of the code work nicely without making it ugly, and thought of the following function.

function out-output{
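    #emit each pipeline object straight to the output stream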
    process{ $_ }
} 

Like the identity function (or ql, as I’ve seen it referred to), out-output doesn’t do anything but emit the values it is given. Out-output, however, gets its values from the pipeline rather than the argument list. This function allows us to be explicit about our intent to use the output stream.

function get-stuff{
param($parm1 )
    foreach ($source in $sources){
        $source | where { $_ -and "Some condition exists"  } | out-output
    }
    foreach ($source in $someothersources){
        $source | where { $_ -and "Some condition exists"  } | out-output
    }
}

I’m not sure if this is a good idea, and I know that it’s just adding a tiny bit of processing to the script. My thought is that making the operation of the script explicit is worth it in the long run.

What do you think?

-mike