Scope delimiter in interpolated strings

I’ve been meaning to write about this for a while.  It’s a simple thing that broke some code from PowerShell 1.0.  Yes, I still have some code running in production that was written back before 2.0 came out. And before I go any further let me say that PowerShell has done a remarkable job in keeping backward compatibility. I very rarely have old code break due to new PowerShell features or parsing.

Anyway, when writing messages out to the screen to show what’s going on in a script, I would often use a pattern like this:

write-host "$setting1 and $setting2"

This code upgraded just fine and is not a problem.

Where I ran into a problem was when I varied the pattern slightly.  The following code is not so happy:

write-host "`$setting1:$setting2"

This was valid 1.0 code, but it doesn’t run in 2.0 or above.

[screenshot: the error PowerShell 2.0 gives for this line]

The problem stems from the addition of scope labels for variables in 2.0. To refer to scoped variables, you prefix the name of the variable with the scope modifier (local, global, script, private) followed by a colon. So the parser is seeing $setting1:$setting2 and thinking that “setting1” is a scope modifier.
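
For reference, a scope-qualified variable looks like this (the variable names are just examples):

$global:config    # "config" from the global scope
$script:counter   # "counter" from the script scope
$local:temp       # "temp" from the current (local) scope
$private:value    # "value", visible only in the scope where it's defined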

Easy workarounds for this are adding a space before the colon, escaping the colon with a backtick, or wrapping $setting1 in a subexpression, $().
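
For example, any of these parse fine in 2.0 and later:

write-host "$setting1 :$setting2"     # space before the colon (note: the space appears in the output)
write-host "$setting1`:$setting2"     # backtick-escape the colon
write-host "$($setting1):$setting2"   # wrap the first variable in a subexpression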

Have you run into this before? What other problems have you found in old code running in newer versions of PowerShell?

–Mike

Quick Tip – Avoid abbreviating parameter names!

Looking at some of the solutions to the July Scripting Games problems (here), I noticed that several of them used abbreviations for parameter names.  For instance:

gwmi win32_operatingsystem -co @(".")

I understand that this is a competition of sorts and that part of the challenge is to get a solution with the smallest number of characters, but I realized that I really, really don’t like abbreviated parameter names.

To be clear, these are fine on the command line (as are aliases, for instance), but I really want to avoid using parameter-name abbreviations in my code. For one thing, since PowerShell allows you to shorten a parameter name as much as you want as long as it remains unambiguous, there is no single "short form" for a given parameter. In the code above, for example, -co could just as well have been -com, -comp, or -compu. That leads to inconsistent code and, in my opinion, reduces readability.
Second, parameter abbreviations are not necessarily stable across PowerShell versions. It's entirely possible, for instance, that a parameter starting with "co" could be added in the next version of PowerShell, which would make the abbreviation ambiguous.  At that point, the code is broken (as well as not very readable).
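
Spelled out in full, the call above would look like this; -co resolves to -ComputerName here, and the @(".") array wrapper isn't needed for a single computer:

Get-WmiObject -Class Win32_OperatingSystem -ComputerName '.'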

I know this isn't a huge deal, but I wanted to get my thoughts out here.

Let me know what you think.

p.s.  Somehow this got published without the ending. I just now noticed and updated it so it didn't end in the middle of a sentence.

-Mike

Cleaning the Path – A PowerShell One-liner

I’m not super crazy about writing one-liners in PowerShell, but I ran across a fun problem which was quick to write as a one-liner.  I’ll give that here with a little explanation, and follow up in a couple of days with a more polished advanced function solution.

Anyway, the problem was that I was working on a computer and happened to take a look at the PATH environment variable and saw a lot of directories in the path that were no longer valid. Apparently efforts to clean up the machine (e.g. removing old Visual Studio and SQL Server installs) didn’t include fixing the path.
To see if you have this problem, you can easily see your PATH with the following line of PowerShell (and no, this isn't the one-liner):

$env:Path

When I saw the output (which included 48 different folders) I knew I needed to fix it.
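
If you're curious how many entries you have, a quick count works:

($env:Path -split ';').Count
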
Since Test-Path is an easy way to see if a folder exists, I quickly wrote the following to see which entries were bad:

$env:path -split ';' | where {!(Test-Path $_)}

That listed 11 that were bad, but I also got an error because apparently there were some adjacent semicolons, meaning that $_ was set to an empty string which Test-Path didn’t like.
A quick addition made it not complain:

$env:path -split ';' | where {$_ -and !(Test-Path $_)}

This gave me the list of directories that I needed to eliminate. Reversing the logic a bit to get the directories I want to retain looked like this:

$env:path -split ';' | where {$_ -and (Test-Path $_)}

Looking better, but now I notice that some directories are listed more than once ($PSHOME, for example, is listed 6 times).
Adding a quick uniqueness check:

$env:path -split ';' | where {$_ -and (Test-Path $_)} | select-object -unique

That gives a much better list.
I then added -Join to paste these back together.

($env:path -split ';' | where {$_ -and (Test-Path $_)} | select-object -unique) -join ';'

And that looks like a good value for $env:Path.
If I just needed to set it for the current session, I could do this:

$env:path = ($env:path -split ';' | where {$_ -and (Test-Path $_)} | select-object -unique) -join ';'

But that isn’t particularly useful. Before I do anything permanent, though, I should save the current PATH somewhere. We can use the System.Environment class to set environment variables for the machine and copy the existing PATH to a new environment variable.

[System.Environment]::SetEnvironmentVariable('Path_SAVED',($env:path),'Machine')

The final one-liner is this (and it’s not pretty):

[System.Environment]::SetEnvironmentVariable('Path', ($env:path -split ';' | where {$_ -and (Test-Path $_)} | select-object -unique) -join ';', 'Machine')
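
If that's hard on the eyes (it is for me), here's the same logic spread over a few lines. The behavior should be identical; note that writing a Machine-level environment variable requires an elevated session:

# Same logic as the one-liner, just easier to read.
# Writing to the 'Machine' scope requires an elevated (administrator) session.
$newPath = ($env:path -split ';' |
    where { $_ -and (Test-Path $_) } |
    select-object -unique) -join ';'
[System.Environment]::SetEnvironmentVariable('Path', $newPath, 'Machine')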

Ok…that was a fun exercise. PowerShell continually impresses me with how easy it is to do things.

Got any good ideas for ways to improve this?

Let me know in the comments, and watch for the follow-up article in a couple of days about rewriting this as a real advanced function.

-Mike

PowerShell Summit 2015 North America Videos!!

If you, like me, aren't fortunate enough to be at the PowerShell Summit going on right now in Charlotte, NC, you can at least watch/listen to the videos of the sessions.

I’ve watched a couple already and even though it’s not as good as being there, it’s still really good.

The quality of the information in the presentations so far has been awesome.

The videos can be found on the PowerShell.org channel on YouTube. They are slides and audio, so you don't get to watch the presenters, but that doesn't really diminish the value.

Here’s the link:
https://www.youtube.com/channel/UCqIw7UUwC5fUBFXYX68aMrQ

PowerShell will not fix all of your problems

I’m definitely guilty of using PowerShell in situations where it’s not the best answer. Some of that is curiosity (can I make it work) and some of it is stubbornness (I bet I can make it work). But I never want to give the impression that PowerShell is “fixing” my problems.

For instance, if you don’t have defined processes or clear requirements, trying to apply automation is going to end up an exercise in frustration. You’ll be asking “why did it do that?” when the answer is clearly that the script is written to do things that way.

So if you’re in over your head and know that you need automation to give you some leverage to get out of your bad situation, the first step is almost never to throw PowerShell into the mix. The first step should always be to make sure that you have a well-defined process. If that means that you continue manually for a bit so you can get everyone on-board with the process that’s fine. Once the process is defined, scripting it with PowerShell (or whatever is your automation tool of choice) will be much easier and the results more predictable.

Will PowerShell solve all of your problems? No.

Can PowerShell automate the solutions to problems that you have a process to handle? Definitely.

Perhaps you’re so busy you can’t get a handle on things enough to specify a full solution. That definitely happens and I don’t want to give the impression that you have to have 100% of things under control to apply automation to the mix. What you can do, though, is find a small subset of the problems you’re dealing with that are simple. Maybe that’s only 10% of your work and it doesn’t seem like it would be worth automating. If you automated that 10%, though, you’d get almost an hour each day back to enable you to focus on the things that are really eating up your time. And since the 10% is “simple”, it shouldn’t be difficult to automate, at least compared to the rest of your work.

Something else that I've found is that once you have automated the simple cases, more and more things begin to fall into that classification. Once you've got a solution that's proven, it's easy to build on that to start pulling in some of the more complex tasks. Pretty soon you will find that you have some free time on your hands.

The point is that you can use automation to gain traction when it doesn’t seem like you’re making any headway. Once you get traction, you can accomplish a lot on your own. With PowerShell, you can accomplish a lot in a repeatable way, accurately, and in many cases without human intervention.

What do you think?

Mike

My PowerShell goals for 2015

I’m not much on New Year’s resolutions but I’ve seen a few people post their PowerShell-related goals and thought I’d jump on that bandwagon.

Here are a few things I want to get accomplished this year:

1. 50 blog posts
2. New release of SQLPSX
3. Separate release of ADOLIB
4. Second book (maybe in a different format, like Pluralsight?)
   (if you missed it, my first book was released late last year here)
5. Teach 10 PowerShell classes at work
6. Work through the IIS and AD Month of Lunches books
7. Build a virtualization lab at home and practice Hyper-V and VMware
8. Do something cloudy (no idea what)

That sounds like a full plate for me. If you have any suggestions for posts (or series of posts :-) ) that would be awesome!

Mike

Packt’s $5 eBook Bonanza and what I’ve been doing all year

Early this year I was contacted by Packt Publishing to see if I had any interest in writing a PowerShell book. After I got up off the floor and thought about it a bit, I decided that it was something I wanted to do. I have spent the majority of the year struggling with my undisciplined, procrastinating nature and finally have hardcopies of my book in hand.  It has been a fun, rewarding process and I might just be hooked.  More on that to come.  :-)

The book is called "PowerShell Troubleshooting Guide", and its focus is on understanding the PowerShell language and engine in order to give you more "traction" when coding and let you spend less time debugging.

Here’s the great part. Just like last year, Packt is having their $5 eBook Bonanza, where all eBooks and videos are only $5. The sale is going until January 6, 2015, so you have some time.

I'm looking forward to hearing your thoughts on the content I have chosen.

–Mike

PSModulePath issue with 5.0 Preview

At work, I have a library of modules stored on a network share. In order to make things work well when I’m not on the network, I include the network share in my PSModulePath, but later in the PSModulePath I point to a local copy of the library.
Since installing the 5.0 preview (which I love, btw), I’ve seen some really strange errors, like this one:
[screenshot: an error complaining that the set-variable cmdlet is being redefined]
Obviously, I am not redefining the set-variable cmdlet in my scripts. I’ve had similar kinds of errors with clear-host and other “core” cmdlets. FWIW, the cmdlets that error while loading the profile seem to work fine after everything is done loading. Clearing nonexistent paths out of the PSModulePath makes the errors go away.
If you have to include network shares in your PSModulePath, I would recommend adding them in your profile, using Test-Path to make sure they are available before making the modification.
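
Something like the following in your profile would do it; the UNC path here is a placeholder, so substitute your actual share:

# \\server\share\PSModules is a placeholder for your real module share
$networkModules = '\\server\share\PSModules'
if (Test-Path $networkModules) {
    $env:PSModulePath += ";$networkModules"
}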

I’ll chalk this one up to it being pre-release software. It’s encouraging to see the PowerShell team continue to deliver new and exciting features with the speed that they have.

-Mike

Pump up your PowerShell with ISESteroids

I’ve mentioned before that although there are several free PowerShell development environments, I always seem to come back to using the ISE. With each release, the ISE becomes more stable and functional. With the other tools, I always seem to bump up against bugs that keep me from enjoying the many features they provide.

I was excited when I heard that Tobias Weltner was in the process of releasing a new version of his ISESteroids product. The 1.0 product had a number of useful features, but the 2.0 version (which is still in beta) is crammed so full of features that it’s hard to comprehend. And best of all, it feels like a natural extension of the ISE, so I don’t have to relearn anything.

The trial package can be downloaded from here. It comes packaged as a zip file, and the download page has clear instructions on how to unblock the file and extract it to the appropriate place on your hard drive. Once it's there, you start ISESteroids by simply importing the module:

import-module ISESteroids

The first thing you will notice is that the toolbar just got fancy. Here’s the initial toolbar:

[screenshot: the initial toolbar]

Clicking the down-arrow on the left brings up another toolbar:

[screenshot: the second toolbar]

Clicking the settings button (the gear) brings up a drop-down panel:

[screenshot: the settings drop-down panel]

At the bottom of the screen, you will see that the status bar is no longer so bare (it usually only has the line/col and zoom slider):

[screenshot: the enhanced status bar]

The menus are similarly enhanced. I’ll just show you the file menu to give you some idea of the kinds of changes:

[screenshot: the enhanced File menu]

Opening profile scripts (including both console and ISE as well as allhosts) and printing are two huge pluses!

Looking through the new toolbar buttons and the menus (almost all of which have new entries), I was like a kid in a candy store. Here are some of the highlights:

  • Built-in versioning and comparing (using a zip file that sits next to your script)
  • A variable watch window (one of the main reasons I occasionally stray from the ISE)
  • Code refactoring
  • Code risk analysis
  • Code signing (and cert generation)
  • A Navigation bar (search for strings or functions)
  • A Pop-out console (super handy on multiple monitors)
  • Run code in a new console (or 2.0, or 32-bit) from a button
  • Brace-matching
  • Show whitespace

This is barely scratching the surface. In the few days that I’ve used ISESteroids, the main thing that I have noticed is that it is not in my way. Even with gadgets turned on and all of it updating in realtime, I don’t notice a lag or any kind of performance hit. The features feel like they were built in to the ISE. The product is still a beta, so some of the features aren’t connected or don’t have documentation, but even with these shortcomings the experience is still something that is hard to imagine.

Opening a script, you immediately see feedback about problems (squiggle underlining) and references (small text just above each function declaration). I've zoomed in on this function definition so you can see the "3 references":

[screenshot: a function definition with its "3 references" annotation]

Clicking on the “3 references” brings up a “pinnable” reference window:

[screenshot: the pinnable reference window]

If you place the cursor on one of the underlined sections, you get instructions in the status bar about what the problem is and have an opportunity to fix it there or everywhere in your script:

[screenshot: a squiggle-underlined problem]

[screenshot: the status-bar explanation and fix options]

The "variable monitor add-on" (usually called a watch window) is one of the reasons that I occasionally stray to one of the other editors. No need to do that now!

[screenshot: the variable monitor add-on]

It's not so obvious in the screenshot, but there's a button on the left side just under the title (Variables) which clears all user-defined variables. I've wanted something like that for debugging a number of times. Clearing variables between troubleshooting runs can really help out.

One other “random” thing that I just found is accessed by right-clicking on the filename in the editor. In the “stock” ISE, you don’t get any menu at all. Look at all of the options now:
[screenshot: the file tab right-click menu]

I haven't come close to showing all of the features that are included. In fact, while preparing for this post I took over 70 screenshots of different features in action. I'll take pity on you and not go through every one of them individually. Rest assured that you'll find ISESteroids to be amazingly helpful right out of the box (so to speak) and be delighted often as you continue to encounter new features. The features seem to be well thought out and are implemented very smoothly.

Since this is a beta product it’s not all sunshine and roses. I did encounter one ISE crash which I think was related to ISESteroids, and a few of the features didn’t work or didn’t match the documentation. That didn’t stop me from showing everyone around me how cool it was.  They were all suitably impressed.

I heartily recommend ISESteroids for every PowerShell scripter. The ISE with ISESteroids feels like a Version 10.0 product instead of a 2.0 product. It can be downloaded from the PowerTheShell site. A trial version is available, or licenses can be purchased.

My hat is off to Tobias Weltner, who has now been featured twice in my blog (here is the previous instance). Both times I have been very happy to see what he is providing and I can’t wait to see what he has coming up next.

–Mike

Why Use PowerShell?

After a presentation about PowerShell at a recent user group meeting, one of the attendees asked, in effect, why he should bother learning PowerShell. He has been in IT for a long time and has seen lots of different approaches to automation.

I was somewhat taken aback. I expected these kinds of questions 5 years ago. I wasn’t surprised 3 or 4 years ago when I heard questions like this. But PowerShell has been around for 7 years now, and it is clearly Microsoft’s go-forward automation technology. I’m not quite ready to seriously say “Learn PowerShell or learn to say ‘Would you like fries with that'”, but I definitely feel that not learning PowerShell is a serious detriment to a career in IT.

With every new product release, more and more of the Microsoft stack is wired up with PowerShell on the inside. PowerShell gives a common vocabulary for configuring, manipulating, querying, monitoring, and integrating just about anything you can think of.

PowerShell gives us a powerful platform for coding, with hooks in the environment for building reusable tools both in script, and in managed code. The language is built from the ground up to be flexible and extensible with a vision of the future of Microsoft technology that is not knee-jerk, but long-term.

Personally, I use PowerShell for all of these things, but also because I truly enjoy scripting in PowerShell. I am able to spend more of my time engaging the problems I deal with and less time dealing with scaffolding. I can create tools that I can leverage in flexible ways and share easily.

The best part is, programming is fun again.

Mike