Voodoo PowerShell – VisioBot3000 Lives Again!

Back in January I wrote a post about how VisioBot3000 had been broken for a while, and my attempts to debug and/or diagnose the problem.

In the process of developing a minimal example that illustrated the “breakage”, I noticed that accessing certain Visio object properties caused the code to work, even if the values of those properties were not used at all.

It’s been almost six months now, and I have no idea why that code makes any difference. So instead of letting VisioBot3000 die, I decided to take the easy route, and incorporate the “nonsense” code in the VisioBot3000 module.

If you look at the latest commit (as of this writing), the New-VisioContainer function (in VisioContainer.ps1) starts with the following single line of nonsense:

[void]$script:Visio.ActiveDocument.Pages[1]

In that code, I’m using a module-level reference to the Visio application, getting the active document from it, and retrieving the first page. And then I’m throwing away the reference that I just retrieved. The only thing that I can imagine is doing anything is the Pages[1] call. It’s possible that the COM object is doing something internally in addition to pulling back the first page, but that’s grasping at straws.

And that’s why I call this Voodoo PowerShell. I’m using code that I don’t understand because I get what I want from it. It’s a meaningless ritual. I hate including it, but I hate even more that the module has sat largely unchanged for a year.

I will be trying to make more regular updates to VisioBot3000 in the near future, and will be presenting on it at the second SWMO PSUG meeting scheduled for next week.

Let me know what your thoughts are.

–Mike

Get-Command, Aliases, and a Bug

I stumbled across some interesting behavior the other day as I was demonstrating something that I understand pretty well.

[Side note…this is a great way to find out things that you don’t know…confidently explain how something works, and demo it.]

I was asked to give an overview of how modules work in PowerShell. I’ve been writing and using modules since PowerShell 2.0 came out (2009?) so I didn’t think there was anything (at least anything basic) that I wasn’t comfortable with. Not to say that there aren’t module concepts I’m not super-clear on, but the basics should have been all worked out.

After explaining the concepts of modules (encapsulating functions, variables, aliases) and showing how PowerShell knows where to look for modules, I turned to an example module I had written.

I won’t replicate that module here, because the contents don’t really matter. I’ve boiled the “weirdness” into a simple example and it looks like this:

function Get-Thing{
}

new-alias -Name MyDir -Value Get-ChildItem
new-alias -Name Func  -Value Get-Thing
new-alias -Name File -Value Get-Item
new-alias -Name Get-TheThing -Value Get-Thing
Export-ModuleMember -Function * -Alias *

If you save that as a SampleModule.psm1 file (and put it in a same-named folder in the PSModulePath), you will be able to play along with me.

I showed the group that Import-Module was able to import the module using the name only, not requiring the user to know what path the module was installed into. Then, I thought “I’ll show them how to use Get-Command to find the items that were imported from the module!”

Get-Command -module SampleModule

Imagine my surprise when I saw the output: only one of the four aliases showed up.

Further qualifying the Get-Command call by specifying the CommandType (which should logically show fewer results) showed all four.

As a side note, I was also able to see the other aliases by using the -All switch, even though the help for -All says it is used for showing commands that are hidden due to naming collisions.
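If you’d rather not create the file, you can stand up the same module in memory with New-Module and run the comparison yourself (whether the short list reproduces may depend on your PowerShell version):

```powershell
# Build the sample module in memory instead of from a .psm1 file
New-Module -Name SampleModule {
    function Get-Thing {}
    New-Alias -Name MyDir -Value Get-ChildItem
    New-Alias -Name Func  -Value Get-Thing
    New-Alias -Name File  -Value Get-Item
    New-Alias -Name Get-TheThing -Value Get-Thing
    Export-ModuleMember -Function * -Alias *
} | Import-Module

Get-Command -Module SampleModule                       # what I saw: only one alias (plus the function)
Get-Command -Module SampleModule -CommandType Alias    # all four aliases
Get-Command -Module SampleModule -All                  # everything, via the naming-collision switch
```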

This isn’t a huge thing, but I did go ahead and add it to the PowerShell User Voice here. I’ve reproduced it in 5.0 and 5.1. I wouldn’t be surprised if it has been this way for a while.

What do you think? Surprised by this result?

Let me know in the comments.

–Mike

2 PowerShell Features I was Surprised to Love

After talking about features I don’t want to talk about anymore I thought I would turn my attention to a couple of things in PowerShell that I initially felt were mistakes but have had a change of heart about.

For the most part, I think the PowerShell team does a fantastic job in terms of language design. They have made some bold choices in a few places, but time and time again their choices seem to me like the correct choices.

The two features I’m talking about today were things that, when I first heard about them, I thought “I’ll never use that”. Time has shown that my reaction was hasty.

Module Auto-loading

I really like to be explicit about what I’m doing when I write a script. I like explicitly importing modules into a script.  Knowing where the cmdlets used in a script come from is a big part of the learning process.  As you read scripts (you do read scripts, don’t you?), you can slowly expand your knowledge base as you start looking into functionality implemented in different modules.  Another big advantage to explicitly importing modules into a script is that you’re helping to define the set of dependencies of the script.  “Oh, I need to have the SQLServer module installed to run this script…I thought it looked like a SQLPS script!”.  Since cmdlets can have similar names, explicitly loading the module can make it clear what’s going on.

When I saw that PowerShell 3.0 introduced module auto-loading the first thing I thought was “I wonder how I can turn that off”, followed closely by “I’m always going to turn that off on every system I use”.

I hadn’t met PowerShell 3.0 yet, though.  The number of cmdlets jumped from several hundred to over two thousand.  Knowing what cmdlets came from which modules became a much harder problem.  There were so many more cmdlets (aided by cdxml modules) that keeping track was difficult.

Module auto-loading was a logical solution to the “too many modules and cmdlets” problem.  I find myself depending on it almost every time I write a script.

I do like to explicitly import modules (either with import-module or via the module manifest) if I’m using something unusual, though.
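And if you do want the fully explicit behavior back, auto-loading is controlled by a preference variable rather than a switch you have to hunt for:

```powershell
# 'All' (the default) auto-loads a module whenever one of its commands is used.
# 'ModuleQualified' auto-loads only for module-qualified calls like
# SqlServer\Invoke-Sqlcmd, and 'None' disables auto-loading entirely.
$PSModuleAutoLoadingPreference = 'ModuleQualified'
```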

Collection Properties

I don’t know if there’s an official name for this feature. Bruce Payette in PowerShell in Action calls this a “fallback dot operator”.  The idea is that you can use dot-notation against a collection to retrieve a collection of properties of the objects in the collection.  Since that was probably as hard to read as it was to write, here’s an example:

$filenames = (dir C:\temp).FullName

Clearly, an Array doesn’t have a FullName property, right?  And we already had 2 ways (the “old” way and the “aha” way) to do this:

$filenames = dir c:\temp | foreach-object {$_.FullName}
$filenames = dir c:\temp | select-object -expandProperty FullName

I like being able to use dot-notation against an expression, which just considers the object which is the result of the expression and applies the dot-operator to it. It does require that you add some parentheses, but that’s a small price to pay for not having to introduce another variable. One of my scripting maxims is that the less you write, the less you debug. More variables means more places to make mistakes (like misspelling), so I like this approach.

Using dot-notation to “fall back” from the collection to the members creates a bit of a semantic issue (or at least it messed up my head). When you see $variable.property, you no longer know what’s going on. You can be certain that there is some kind of property reference happening, but it isn’t clear whether there is collection unrolling happening at the same time.

In practice, it eliminates the need to check whether I got back one object or several: the same notation works for both. (Side note: this is reminiscent of PowerShell 3.0 adding “fake” .Length and .GetEnumerator() members to scalar objects.) It’s very concise and reduces the use of pipelines (which can help performance).
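One detail that helps with the semantic confusion: the fallback only happens when the collection itself lacks the member. A minimal illustration:

```powershell
$files = [pscustomobject]@{ Name = 'a.txt' },
         [pscustomobject]@{ Name = 'b.txt' }

# The array has no Name member, so PowerShell falls back to the elements
$files.Name     # a.txt, b.txt

# The array DOES have a Count property, so no fallback happens here
$files.Count    # 2
```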

Well, those were 2 things in PowerShell I was surprised to love. What about PowerShell delights you? Let me know in the comments!

–Mike

Generating All Case Combinations in PowerShell

At work, a software package that I’m dealing with treats its lists of file extensions for whitelisting or blacklisting as case-sensitive. I’m not sure why this is the case (no pun intended), but it is not the only piece of software I’ve used with this issue.

What that means is that if you want to block .EXE files, you need to include 8 different variations of EXE (exe, exE, eXe, eXE, Exe, ExE, EXe, EXE). It wasn’t too hard to come up with those, but what about ps1xml? 64 variations.

For fun, I wrote a small PowerShell function to generate a list of the different possibilities. It does this by looking at all of the binary numbers with the same number of bits as the extension, interpreting a 0 as lower-case and 1 as upper case.

Here it is:

function Get-ExtensionCases {
    param([string]$ext = 'exe')

    # Index 0 is the all-lower version, index 1 the all-upper version
    $vars = $ext.ToLower(), $ext.ToUpper()

    # Pre-calculate the powers of two so we don't recompute them in the loop
    $powers = 0..$ext.Length | ForEach-Object { [math]::Pow(2, $_) }

    # Treat each $i as a bit mask: bit n chooses the case of character n
    foreach ($i in 0..([math]::Pow(2, $ext.Length) - 1)) {
        (0..($ext.Length - 1) | ForEach-Object {
            $vars[($i -band $powers[$_]) / $powers[$_]][$_]
        }) -join ''
    }
}

I pre-calculate the relevant powers of two in $powers, since we use them over and over again. I also do the upper/lower once at the beginning and do some (gross) indexing to get the proper one.

Here’s the output for exe: all eight variations, from exe up through EXE.

It was a fun few minutes. Watching longer output scroll by can even be somewhat mesmerizing.
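For comparison, here’s the same idea written with the -shl operator and [char]::ToUpper instead of the two pre-built strings and the division trick (Get-CaseCombination is just my name for this variant):

```powershell
function Get-CaseCombination {
    param([string]$ext = 'exe')

    # Each value of $i is a bit mask; bit n set means upper-case character n
    foreach ($i in 0..([math]::Pow(2, $ext.Length) - 1)) {
        -join (0..($ext.Length - 1) | ForEach-Object {
            if ($i -band (1 -shl $_)) { [char]::ToUpper($ext[$_]) }
            else                      { [char]::ToLower($ext[$_]) }
        })
    }
}

Get-CaseCombination exe   # exe, Exe, eXe, EXe, exE, ExE, eXE, EXE
```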

Let me know what you think

–Mike

P.S. PowerShellStation was named one of the top 50 PowerShell blogs! Thanks to everyone for stopping by and listening to my rambling.

Hyper-V HomeLab Intro

So I’ve been playing with Hyper-V for a while now.  If you recall, it was one of my 2016 goals to build a virtualization lab.

I’ve done that, building out the base Microsoft Test Lab Guide several times:

  • Manually (clicking in the GUI)
  • Using PowerShell commands (contained in the guides)
  • Using Lability and PS-AutoLab-Env

I was also fortunate enough to be a technical development editor for Learn Hyper-V in a Month of Lunches, which should be released this fall.

One thing that I’ve found is that being able to spin up a VM quickly is really nice.  With the Hyper-V cmdlets, that’s pretty easy.

Spinning up a machine from scratch and building a bootable image is not as easy.  Fortunately there are some tools to help.

In this post, I’m going to share a simple function I’ve written to help me get things built faster.

The goal of the function is to take the following information:

  • Which ISO to use
  • Which edition from the ISO to select
  • The Name of the VM (and VHDX)
  • How much memory
  • How many CPUs

With that information, it converts the windows image from the ISO to a VHDX, creates a VM with the right specs and using the VHDX, sets up the networking (or starts to, anyway), and starts the VM.

The bulk of the interesting work is done by Convert-WindowsImage, a function that pulls the correct image from an ISO and creates a virtual disk.

There are some problems with that script (read the Q&A on the Technet site and you’ll see what I mean).  The main one is when it tries to find the edition you ask for (by number or name).  The code is in lines 4087-4095, and should look like this:

                $Edition | ForEach-Object -Process {

                    $Edtn = $PSItem
    
                    if ([Int32]::TryParse($Edtn, [ref]$null)) {
                        $openImage = $openWim[[Int32]($Edtn)]    
                    } else {
                        $openImage = $openWim[$Edtn]
                    }    

There’s a more recent copy of the function on github, but it has slightly different parameters and seems to be stale as well (according to the page it’s on). I’ve got an email out to find the “live” version.

With that, here’s my function:

function New-BootableVM {
    param($ISOPath = 'E:\isos\2012R2_x64_EN_Eval.iso',
        $Edition,
        $Name,
        $MemoryInGB,
        $vCPUs,
        [switch]$Stopped)

    $switch  = 'LabNet'
    $vhdpath = "c:\users\Public\Documents\Hyper-V\Virtual hard disks\$Name.vhdx"

    # Build a bootable VHDX from the requested edition on the ISO
    Convert-WindowsImage -SourcePath $ISOPath -Edition $Edition -VHDPath $vhdpath -VHDFormat VHDX -VHDType Dynamic -SizeBytes 8GB

    # Create the VM around the new disk and wire up CPU, memory, and network
    $vm = New-VM -Name $Name -MemoryStartupBytes ($MemoryInGB * 1GB) -VHDPath $vhdpath -Generation 2
    Set-VMProcessor -VM $vm -Count $vCPUs
    Add-VMNetworkAdapter -VM $vm -SwitchName $switch

    if (-not $Stopped) {
        Start-VM -VM $vm
    }
    $vm
}

Once the function is done running (assuming it didn’t have any issues), a VM will be created and ready for you. You will need to accept the license, set the locale, and set the administrator’s password, but that only takes a minute. I’ll be adding functions (or adding to this function) to take care of those as well as things like renaming the guest, joining a domain, copying files to the drive, etc.

It’s still a work in progress, so you will see some hardcoded values. Hopefully you can see what’s going on and adapt it to your needs.

I’ll be writing more as I play more with Hyper-V, DSC, and containers.

Let me know what you think

–Mike

PowerShell Topics I’m Ready to Stop Talking About

Part of me wants to know every bit of PowerShell there is.  Knowing that about myself, I don’t have much of an input filter: if the content is PowerShell-related, I’m interested.

When it comes to sharing, however, there’s clearly got to be a point at which I shouldn’t be talking about something.  Here are a few items that I’ve spoken or taught about that I think are going to get pulled from my routine.

 

  1. The TRAP statement
  2. Obscure Operators
  3. Filters
  4. Tee-Object
  5. (bonus) Workflows

Let’s go through them one by one and see why.  And yes, I know that I’m talking about them, but this should be the last time (and this time I mean it).

The  TRAP statement

The trap statement is the error handling statement that made the cut for v1.0 of PowerShell.  If you weren’t a PowerShell user at that time you probably haven’t ever used it, favoring TRY/CATCH/FINALLY.

Instead of being a block-structured statement like TRY, TRAP worked in a scope, and functioned like a VB ON ERROR GOTO.  The rules for program flow after a TRAP statement (which I’ve long forgotten) made understanding code that used TRAP into….a trap.

The advice I have given students in the past is, “If you stumble upon some code that uses TRAP, look for other code.”
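For anyone who never met it, here’s TRAP next to the TRY/CATCH we use today. The `continue` keyword is the part that trips people up: it swallows the error and resumes at the statement after the one that failed.

```powershell
function Use-Trap {
    # Scope-level handler; there's no block structure to tell you what it covers
    trap { "trapped: $_"; continue }

    Get-Item 'no-such-file-here' -ErrorAction Stop
    'still running after the error'
}

function Use-TryCatch {
    # Block-structured: the braces show exactly what is protected
    try {
        Get-Item 'no-such-file-here' -ErrorAction Stop
    } catch {
        "caught: $_"
    }
    'still running after the error'
}
```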

Obscure Operators

PowerShell has a lot of operators, and that’s a good thing.  On the other hand, I’m not sure why I need to tell people about every single operator.  Some of the operators, though, are obscure enough that I haven’t used them in any language more than a handful of times in the last thirty years.  Candidates for expulsion (from discussion, not from the language) include:

  • -SHL, -SHR    (I guess someone does bitwise shifting, but I haven’t ever needed this except in machine language)
  • *=, /=, %=      (I can see what these do, but I don’t ever do much arithmetic so don’t find the need for these “shorthand” operators)

Filters

Filters are another PowerShell 1.0 topic.  They are one of the ways to use the pipeline for input without using advanced functions and parameter attributes.  They’re pretty slick, but are easily replaced with an advanced function with a process block.  In the last 5 years, I’ve only seen filters used once (by Rob Campbell at a user group meeting).
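In case you’ve never seen one, here’s a filter and its advanced-function replacement side by side (the names are mine):

```powershell
# The 1.0 way: a filter's body runs once per pipeline object, as $_
filter Select-ItemName { $_.Name }

# The modern equivalent: an advanced function with a process block
function Select-ItemName2 {
    [CmdletBinding()]
    param([Parameter(ValueFromPipeline)]$InputObject)
    process { $InputObject.Name }
}

[pscustomobject]@{ Name = 'a.txt' } | Select-ItemName    # a.txt
```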

Tee-Object

I generally consider the -Object cmdlets to be the backbone of PowerShell.  They allow you to deal with pipeline objects “once-and-for-all” and not write a bunch of plumbing code in every function.  For that reason, I like to talk about all of them.  Tee-Object, however, might get sent to an appendix, because I don’t see anyone using it and don’t use it myself.  This one might be changing as we see (being optimistic) people with more Linux backgrounds submitting PowerShell code.  They use tee, right?  I find that the -OutVariable common parameter serves most of the need I would have for Tee-Object, so it makes this list.
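To make that comparison concrete, both of these keep a copy of the stream while letting it flow on to the next command:

```powershell
# Tee-Object: explicitly split the pipeline into a variable
1..5 | Tee-Object -Variable teed | Measure-Object | Out-Null

# -OutVariable: the common parameter captures output in passing
Write-Output (1..5) -OutVariable captured | Measure-Object | Out-Null

$teed.Count       # 5
$captured.Count   # 5
```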

And finally,

Workflows

Workflows sound awesome.  When you talk about workflows you get to use adjectives like “robust” and “resilient”.  And don’t get me wrong, Foreach-Object -Parallel is pretty sweet.

On the other hand, writing PowerShell in the workflow-subset of PowerShell is tricky.  Remembering what needs to be an inlinescript and how to use/access variables in each kind of block is not fun.

I haven’t ever used workflows for anything interesting, and have only heard a few examples of them being used by coworkers.  Those examples could mostly be summed up by “I needed parallel”.

It won’t be hard for me to stop talking about workflows, as I’ve never really talked about them.

 

Before I get flamed because I included/excluded your favorite topic, these are just for me.  If you like one of these, sell it!  You might convince me to change my mind.  Is there something that you think should fade away?  Let me know what it is.  I might be able to change your mind.

 

–Mike

An Unexpected Parameter Alias

I’ve always said that if you want to learn something really well, teach it to someone.  I’ve been doing internal PowerShell training for several years at my company.  I’m very grateful for the opportunity for a number of reasons, but in this post I’m going to call out something I learned on a recent trip to our San Diego office.

When I’m starting to talk about cmdlets, I usually use get-childitem for the simple reason that almost everyone knows what the DOS DIR command does.  It gives us a point of reference to compare and contrast cmdlets with.

I mentioned the -Recurse switch and explained that it was analogous to the /S switch in DIR, but one person in the class didn’t quite get the context switch.  When he did one of the examples, he tried get-childitem -s.  I told him that he needed to use -Recurse, to which he replied “But it works!”.

I always keep a pad of paper when I’m teaching so I can write down anything puzzling (it happens in almost every class).  When the class took a break, I opened a fresh PowerShell session and tried it.

Of course, it worked.

Now, to determine why it worked.

First of all, I thought that parameter disambiguation would have been a problem because of the -System parameter.  It turned out not to be.

Then, I realized that the PowerShell team must have included a “legacy alias” for the -Recurse parameter, similar to how they include cmdlet aliases to ease the transition from DOS or *NIX (dir, ls, ps, cat, etc.).  I don’t think I’ve ever heard anyone mention legacy aliases for parameters, though.

PowerShell easily verifies that this is the case:
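Here’s the check I ran (minus the screenshot). Parameter aliases live in the command metadata:

```powershell
# The aliases defined on Get-ChildItem's -Recurse parameter
(Get-Command Get-ChildItem).Parameters['Recurse'].Aliases

# Or survey every Get-ChildItem parameter that has at least one alias
(Get-Command Get-ChildItem).Parameters.Values |
    Where-Object { $_.Aliases } |
    Select-Object Name, Aliases
```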

Of course, I verified this on my work computer.  Writing this on my home laptop, though, it didn’t list any aliases until I updated help.  Blogging is a lot like teaching in that you’re bound to find surprises whenever you try to explain something.

Anyway, this was a fun discovery for me.

Can you think of any other parameter aliases that are there for legacy reasons?  I might have to try to work up a script to find candidates.

Let me know what you think in the comments.

-Mike

PowerShell Parameter Disambiguation and a Surprise

When you’re learning PowerShell one of the first things you will notice is that you don’t have to use the full parameter name.  This is because of something called parameter disambiguation.

When it works

For instance, instead of saying Get-ChildItem -Recurse, you can say Get-ChildItem -R.  Get-ChildItem only has one (non-dynamic) parameter that starts with the letter ‘R’.  Since only one parameter matches, PowerShell figures you must mean that one.  As a side note, dynamic parameters like -ReadOnly are created at run-time and are treated a bit differently.

Here’s the error message.  Notice that it included a couple of other parameters as possibilities:

AmbiguousParameter error

When it doesn’t work

This doesn’t always work, though. An easy example is with Get-Service. You can’t say Get-Service -In because you haven’t specified enough of the parameter name for PowerShell to work out what parameter you meant.  With Get-Service, both -Include and -InputObject start with -In, so PowerShell can’t tell which of these you meant.

Trying it ourselves

Let’s write a quick function to make sure we understand what’s going on.

function test-param{
Param($da,$de)
$true
}

Calling this function with test-param -d gives us the same kind of error as before:
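You can reproduce it at the prompt (repeating the function so the snippet stands alone; the exact wording of the message varies by version):

```powershell
function test-param {
    Param($da, $de)
    $true
}

# -d is a prefix of both -da and -de, so parameter binding fails
try {
    test-param -d 1
} catch {
    $_.FullyQualifiedErrorId    # e.g. AmbiguousParameter,test-param
}
```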

Interestingly (and this is the surprise), if we make this an advanced function (and we should almost always do that), something strange happens.

function test-param{
[CmdletBinding()]
Param($da,$de)
$true
}

Remember that one of the benefits of having an advanced function is that it now supports common parameters (like -debug).

When we call it with test-param -de, however, we don’t get an ambiguous parameter message! It’s asking for a value for -de!

So, even though we got a couple of common parameters in the error message for Get-Service -In, the -Debug common parameter isn’t considered in the disambiguation for this function.

Not earth-shattering, but something to take note of.

If I’m missing something (and it’s entirely possible), let me know in the comments.

-Mike

P.S.  Remember that it is a best practice to spell out parameter names fully when writing a script.  Abbreviating (and aliases) are considered fair game for the command-line, though.

When the PowerShell pipeline doesn’t line up

The PowerShell Pipeline

One of the defining features of PowerShell is the object-oriented pipeline.  The ability to “wire-up” parameters to the pipeline and allow objects (or properties) to be automatically assigned to them allows us to write code that is often variable-free.

By “variable-free”, I mean that instead of doing something like this:

$services = Get-Service *SQL*
foreach ($service in $services) {
    Stop-Service -Name $service.Name
}

we can write things like this:

Get-Service *SQL* | Stop-Service

There’s nothing wrong with the first script. It is logically laid out, it is clear what’s going on, and it accomplishes the same goal. On the other hand, by introducing more variables (and more statements), we have added many more places where we can make mistakes.

When possible, you should write your functions so that they allow pipeline input wherever it makes sense.

When it doesn’t work

I was helping a co-worker with a script the other day and we found something unusual.  The module he was using (open-source) allowed pipeline input, but it didn’t work quite right.  The library (which dealt with processes running on specified computers) allowed you to pipe objects into the Stop function, but instead of using the objects as-is, it only used the PID from each object.  The problem with that was that the Stop function then prompted for a computername for each object, although the incoming objects had properties which contained that value.

The solution was to hand-wire the pipeline like this:

Get-RemoteProcess <criteria> | ForEach-Object {
    Stop-RemoteProcess -ID $_.PID -ComputerName $_.ComputerName
}

(note that these are not the actual function/parameter names…I’m not writing this to shame the original module author)

If the pipeline support had been implemented more reasonably, that could have been written like this:

Get-RemoteProcess <criteria> | Stop-RemoteProcess
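For reference, the fix on the module side is just a few parameter attributes. Here’s a sketch using the disguised names from above (so: not the real module’s code): ValueFromPipelineByPropertyName binds each incoming object’s properties to the matching parameters, and an alias maps the PID property onto -ID.

```powershell
function Stop-RemoteProcess {
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipelineByPropertyName)]
        [Alias('PID')]
        [int]$ID,

        [Parameter(ValueFromPipelineByPropertyName)]
        [string]$ComputerName
    )
    process {
        # The real function would stop the process; this just shows the binding
        "Stopping $ID on $ComputerName"
    }
}

[pscustomobject]@{ PID = 42; ComputerName = 'Server1' } | Stop-RemoteProcess
```

With that wiring in place, the one-liner above just works.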

As I said before, not supporting the pipeline (correctly) introduces places where we can make mistakes. And if you’re like me, you will make mistakes in those places.

–Mike