PowerShell will not fix all of your problems

I’m definitely guilty of using PowerShell in situations where it’s not the best answer. Some of that is curiosity (can I make it work) and some of it is stubbornness (I bet I can make it work). But I never want to give the impression that PowerShell is “fixing” my problems.

For instance, if you don’t have defined processes or clear requirements, trying to apply automation is going to end up an exercise in frustration. You’ll be asking “why did it do that?” when the answer is clearly that the script is written to do things that way.

So if you’re in over your head and know that you need automation to give you some leverage to get out of your bad situation, the first step is almost never to throw PowerShell into the mix. The first step should always be to make sure that you have a well-defined process. If that means that you continue manually for a bit so you can get everyone on-board with the process that’s fine. Once the process is defined, scripting it with PowerShell (or whatever is your automation tool of choice) will be much easier and the results more predictable.

Will PowerShell solve all of your problems? No.

Can PowerShell automate the solutions to problems that you have a process to handle? Definitely.

Perhaps you’re so busy you can’t get a handle on things enough to specify a full solution. That definitely happens and I don’t want to give the impression that you have to have 100% of things under control to apply automation to the mix. What you can do, though, is find a small subset of the problems you’re dealing with that are simple. Maybe that’s only 10% of your work and it doesn’t seem like it would be worth automating. If you automated that 10%, though, you’d get almost an hour each day back to enable you to focus on the things that are really eating up your time. And since the 10% is “simple”, it shouldn’t be difficult to automate, at least compared to the rest of your work.

Something else that I've found is that once you have automated the simple cases, more and more things begin to fall into that classification. Once you've got a solution that's proven, it's easy to build on it to start pulling in some of the more complex tasks. Pretty soon you will find that you have some free time on your hands.

The point is that you can use automation to gain traction when it doesn’t seem like you’re making any headway. Once you get traction, you can accomplish a lot on your own. With PowerShell, you can accomplish a lot in a repeatable way, accurately, and in many cases without human intervention.

What do you think?

Mike

My PowerShell goals for 2015

I’m not much on New Year’s resolutions but I’ve seen a few people post their PowerShell-related goals and thought I’d jump on that bandwagon.

Here are a few things I want to get accomplished this year:

1. 50 blog posts
2. New release of SQLPSX
3. Separate release of ADOLIB
4. Second book (maybe in a different format, like Pluralsight?)
   (if you missed it, my first book was released late last year here)
5. Teach 10 PowerShell classes at work
6. Work through the IIS and AD month of lunches books
7. Build a virtualization lab at home and practice Hyper-V and VMware
8. Do something cloudy (no idea what)

That sounds like a full plate for me. If you have any suggestions for posts (or series of posts :-) ) that would be awesome!

Mike

Packt’s $5 eBook Bonanza and what I’ve been doing all year

Early this year I was contacted by Packt Publishing to see if I had any interest in writing a PowerShell book. After I got up off the floor and thought about it a bit, I decided that it was something I wanted to do. I have spent the majority of the year struggling with my undisciplined, procrastinating nature and finally have hardcopies of my book in hand.  It has been a fun, rewarding process and I might just be hooked.  More on that to come.  :-)

The book is called "PowerShell Troubleshooting Guide", and its focus is on understanding the PowerShell language and engine in order to give you more "traction" when coding and to let you spend less time debugging.

Here’s the great part. Just like last year, Packt is having their $5 eBook Bonanza, where all eBooks and videos are only $5. The sale is going until January 6, 2015, so you have some time.

I'm looking forward to hearing your thoughts on the content I have chosen.

–Mike

PSModulePath issue with 5.0 Preview

At work, I have a library of modules stored on a network share. In order to make things work well when I’m not on the network, I include the network share in my PSModulePath, but later in the PSModulePath I point to a local copy of the library.
Since installing the 5.0 preview (which I love, btw), I’ve seen some really strange errors, like this one:
[Screenshot: error message]
Obviously, I am not redefining the set-variable cmdlet in my scripts. I’ve had similar kinds of errors with clear-host and other “core” cmdlets. FWIW, the cmdlets that error while loading the profile seem to work fine after everything is done loading. Clearing nonexistent paths out of the PSModulePath makes the errors go away.
If you have to include network shares in your PSModulePath, I would recommend adding them in your profile, using test-path to make sure that they are available before making the modification.
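
As a rough sketch of that profile check (the share path below is just a placeholder for wherever your module library lives):

# Only add the network module share to PSModulePath when it is actually reachable
$networkModules = '\\fileserver\PSModules'
if (Test-Path $networkModules) {
    $env:PSModulePath += ";$networkModules"
}

If the share isn't reachable, the profile simply skips it and you fall back to the local copy later in the path.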

I’ll chalk this one up to it being pre-release software. It’s encouraging to see the PowerShell team continue to deliver new and exciting features with the speed that they have.

-Mike

Pump up your PowerShell with ISESteroids

I’ve mentioned before that although there are several free PowerShell development environments, I always seem to come back to using the ISE. With each release, the ISE becomes more stable and functional. With the other tools, I always seem to bump up against bugs that keep me from enjoying the many features they provide.

I was excited when I heard that Tobias Weltner was in the process of releasing a new version of his ISESteroids product. The 1.0 product had a number of useful features, but the 2.0 version (which is still in beta) is crammed so full of features that it’s hard to comprehend. And best of all, it feels like a natural extension of the ISE, so I don’t have to relearn anything.

The trial package can be downloaded from here. It comes packaged as a zip file, and the download page has clear instructions on how to unblock the file and get it extracted to the appropriate place on your hard drive. Once it's there, you start ISESteroids by simply importing the module:

import-module ISESteroids

The first thing you will notice is that the toolbar just got fancy. Here’s the initial toolbar:

[Screenshot: initial toolbar]

Clicking the down-arrow on the left brings up another toolbar:

[Screenshot: expanded toolbar]

Clicking the settings button (the gear) brings up a drop-down panel:

[Screenshot: settings drop-down panel]

At the bottom of the screen, you will see that the status bar is no longer so bare (it usually only has the line/col and zoom slider):

[Screenshot: enhanced status bar]

The menus are similarly enhanced. I’ll just show you the file menu to give you some idea of the kinds of changes:

[Screenshot: File menu]

Opening profile scripts (including both console and ISE as well as allhosts) and printing are two huge pluses!

Looking through the new toolbar buttons and the menus (almost all of which have new entries), I was like a kid in a candy store. Here are some of the highlights:

  • Built-in versioning and comparing (using a zip file that sits next to your script)
  • A variable watch window (one of the main reasons I occasionally stray from the ISE)
  • Code refactoring
  • Code risk analysis
  • Code signing (and cert generation)
  • A Navigation bar (search for strings or functions)
  • A Pop-out console (super handy on multiple monitors)
  • Run code in a new console (or 2.0, or 32-bit) from a button
  • Brace-matching
  • Show whitespace

This is barely scratching the surface. In the few days that I've used ISESteroids, the main thing that I have noticed is that it is not in my way. Even with gadgets turned on and all of it updating in real time, I don't notice a lag or any kind of performance hit. The features feel like they were built into the ISE. The product is still a beta, so some of the features aren't connected or don't have documentation, but even with these shortcomings the experience is still remarkable.

Opening a script, you immediately see feedback about problems (squiggle underlining) and references (small text just above the function declaration). I've zoomed in on this function definition so you can see the "3 references":

[Screenshot: function definition showing "3 references"]

Clicking on the “3 references” brings up a “pinnable” reference window:

[Screenshot: pinnable reference window]

If you place the cursor on one of the underlined sections, you get instructions in the status bar about what the problem is and have an opportunity to fix it there or everywhere in your script:

[Screenshot: squiggle underlining]

[Screenshot: fix options in the status bar]

The “variable monitor addon” (usually called a watch window) is one of the reasons that I occasionally stray to one of the other editors.  No need to do that now!

[Screenshot: variable monitor addon]

It's not so obvious in the screenshot, but there's a button on the left side, just under the title (Variables), which clears all user-defined variables. I've wanted something like that for debugging a number of times. Clearing variables between troubleshooting runs can really help out.

One other “random” thing that I just found is accessed by right-clicking on the filename in the editor. In the “stock” ISE, you don’t get any menu at all. Look at all of the options now:
[Screenshot: file tab context menu]

I haven't come close to showing all of the features that are included. In fact, while preparing for this post I took over 70 screenshots of different features in action. I'll take pity on you and not go through every one of them individually. Rest assured that you'll find ISESteroids to be amazingly helpful right out of the box (so to speak) and be delighted often as you continue to encounter new features. The features seem to be well thought out and are implemented very smoothly.

Since this is a beta product it’s not all sunshine and roses. I did encounter one ISE crash which I think was related to ISESteroids, and a few of the features didn’t work or didn’t match the documentation. That didn’t stop me from showing everyone around me how cool it was.  They were all suitably impressed.

I heartily recommend ISESteroids for every PowerShell scripter. The ISE with ISESteroids feels like a Version 10.0 product instead of a 2.0 product. It can be downloaded from the PowerTheShell site. A trial version is available, or licenses can be purchased.

My hat is off to Tobias Weltner, who has now been featured twice in my blog (here is the previous instance). Both times I have been very happy to see what he is providing and I can’t wait to see what he has coming up next.

–Mike

Why Use PowerShell?

After a presentation about PowerShell at a recent user group meeting, one of the attendees asked, in effect, why he should bother learning PowerShell. He has been in IT for a long time and has seen lots of different approaches to automation.

I was somewhat taken aback. I expected these kinds of questions 5 years ago. I wasn’t surprised 3 or 4 years ago when I heard questions like this. But PowerShell has been around for 7 years now, and it is clearly Microsoft’s go-forward automation technology. I’m not quite ready to seriously say “Learn PowerShell or learn to say ‘Would you like fries with that'”, but I definitely feel that not learning PowerShell is a serious detriment to a career in IT.

With every new product release, more and more of the Microsoft stack is wired up with PowerShell on the inside. PowerShell gives a common vocabulary for configuring, manipulating, querying, monitoring, and integrating just about anything you can think of.

PowerShell gives us a powerful platform for coding, with hooks in the environment for building reusable tools both in script, and in managed code. The language is built from the ground up to be flexible and extensible with a vision of the future of Microsoft technology that is not knee-jerk, but long-term.

Personally, I use PowerShell for all of these things, but also because I truly enjoy scripting in PowerShell. I am able to spend more of my time engaging the problems I deal with and less time dealing with scaffolding. I can create tools that I can leverage in flexible ways and share easily.

The best part is, programming is fun again.

Mike

It’s 10 O’Clock. Do you know where your servers are?

Ok…that's a strange title, but let me finish before you decide it's lame. (On a side note, I'm a dad, so my humor tends to run in that direction naturally).

I see lots of examples in books and on the web about how to use pipeline input to functions. I’m not talking about how to implement pipeline input in your own advanced functions, but rather examples of using pipeline input with existing cmdlets.
The examples invariably look like this:

'server1','server2' | get-somethingInteresting -blah -blah2

This is a good thing. The object-oriented pipeline is in my opinion the most distinguishing feature of PowerShell, and we need to be using the pipeline in examples to keep scripters from falling back into their pre-PowerShell habits. There is an aspect of this that concerns me, though.

How many of you are dealing with a datacenter comprised of two servers? I’m guessing that if you only had two servers, you probably wouldn’t be all gung-ho about learning PowerShell, since it’s possible to manage two of almost anything without needing to resort to automation. Not to say that small environments are a bad fit for PowerShell, but just that in such a situation you probably wouldn’t have a desperate need for it.
How would you feel about typing that example in with five servers instead of two? You might do that (out of stubbornness), but if it were 100, you wouldn’t even consider doing such a thing. For that matter, what made you pick those specific two servers? Would you be likely to pick the same two a year from now? If your universe is anything like mine, you probably wouldn’t be looking at the same things next week, let alone next year.
My point is that while the example does show how to throw strings onto the pipeline to a cmdlet, and though the point of the example is the cmdlet rather than the details of the input, it feels like we’re giving a wrong impression about how things should work in the “real world”.

As an aside, I want to be very clear that I’m not dogging the PowerShell community. I feel that the PowerShell community is a very vibrant group of intelligent individuals who are very willing to share of their time and efforts to help get the word out about PowerShell and how we’re using it to remodel our corners of the world. We also are fortunate to have a group of people who are invested so much that they’re not only writing books about PowerShell, they’re writing good books. So to everyone who is working to make the PowerShell cosmos a better place, thanks! This is just something that has occurred to me that might help as well.

Ok..back to the soapbox.

If I’m not happy about supplying the names of servers on the pipeline like this, I must be thinking of something else. I know…we can store them in a file! The next kind of example I see is like this:

Get-content c:\servers.txt | get-somethingInteresting -blah -blah2

This is a vast improvement in terms of real-world usage. Here, we can maintain a text file with the list of our servers and use that instead of constant strings in our script. There’s some separation happening, which is generally a good thing (when done in moderation :-)). I still see some problems with this approach:

  • Where is the file? Is it on every server? Every workstation? Anywhere I’m running scripts in scheduled tasks or scheduled jobs?
  • What does the file look like? In this example it looks like a straight list of names. What if I decide I need more information?
  • What if I don’t want all of the servers? Do I trust pattern matching and naming conventions?
  • What if the file moves? I need to change every script.

I was a developer for a long time and a DBA for a while as well. The obvious answer is to store the servers in a table! There’s good and bad to this approach as well. I obviously can store more information, and any number of servers. I can also query based on different attributes, so I can be more flexible.

  • Do I really want to manage database connections in every script?
  • What about when the SQL Server (you are using SQL Server, right?) gets replaced? I have to adjust every script again!
  • Database permissions?
  • I have to remember what the database schema looks like every time I write a script?

What about querying AD to get the list? That would introduce another dependency, but with AD cmdlets I should be able to do what I need. But…

  • What directory am I going to hit (probably the same one most of the time, but what about servers in disconnected domains?)
  • Am I responsible for all of the computers in all of the OUs? If not, how do I know which ones to return?
  • Does AD have the attributes I need in order to filter the list appropriately?
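
For what it's worth, a query along those lines might look something like this (the filter, OU, and domain below are placeholders, not from the original post; adjust them for your environment):

Get-ADComputer -Filter 'OperatingSystem -like "*Server*"' -SearchBase 'OU=Servers,DC=example,DC=com' -Properties OperatingSystem |
    Select-Object Name, OperatingSystem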

At this point you’re probably wondering what the right answer is. The problem is that I don’t have the answer. You’re going to use whatever organizational scheme makes the most sense to you. If your background is like mine, you’ll probably use a database. If you’ve just got a small datacenter, you might use a text file or a csv. If you’re in right with the AD folks, they’ve got another solution for you. They all work and they all have problems. You’ll figure out workarounds for the stuff you don’t like. You’re using PowerShell, so you’re not afraid.

Now for the payoff: Whatever solution you decide to use, hide it in a function.

You should have a function that you always turn to called something like "Get-XYZComputer", where XYZ is an abbreviation for your company. When you write that function, give it parameters that will help you filter the list according to the kinds of work that you're doing in your scripts. Some easy examples are to filter based on name (a must), on OS, the role of the server (web server, file server, etc.), or the geographical location of the server (if you have more than one datacenter). You can probably come up with several more, but it's not too important to get them all to start with. As you use your function you'll find that certain properties keep popping up in where-object clauses downstream from your new get-function, and that's how you'll know when it's time to add a new parameter.

The insides of your function are not really important. The important thing is that you put the function in a module (or a script file) and include it using import-module or dot-sourcing in all of your scripts.
Now, you’re going to write code that looks like this:

Get-XYZComputer -servertype Web | get-somethinginteresting

A couple of important things to do when you write this function. First of all, make sure it outputs objects. Servernames are interesting, but PowerShell lives and breathes objects. Second of all, make sure that the name of the server is in a property called “Computername”. If you do this, you’ll have an easier time consuming these computer objects on the pipeline, since several cmdlets take the computername parameter from the pipeline by propertyname.
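
Here's a minimal sketch of what such a function might look like, assuming the server metadata lives in a CSV file (the path, column names, and parameters below are placeholders for whatever scheme you settle on):

function Get-XYZComputer {
    param(
        [string]$Name = '*',
        [string]$ServerType = '*',
        [string]$Location = '*'
    )
    # servers.csv is assumed to have ComputerName, ServerType, and Location columns
    Import-Csv '\\fileserver\inventory\servers.csv' |
        Where-Object { $_.ComputerName -like $Name -and
                       $_.ServerType -like $ServerType -and
                       $_.Location -like $Location }
}

Because Import-Csv emits objects and the column is named ComputerName, cmdlets that bind ComputerName from the pipeline by property name (Test-Connection, for example) can consume the output directly.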

If you’re thinking this doesn’t apply to you because you only have five servers and have had the same ones for years, what is it that you’re managing?

  • Databases?
  • Users?
  • Folders?
  • WebSites?
  • Widgets?

If you don't have a function or cmdlet to provide your objects you're in the same boat. If you do, but it doesn't provide you with the kind of flexibility you want (e.g. it requires you to provide a bunch of parameters that don't change, or it doesn't give you the kind of filtering you want), you can still use this approach. By customizing the acquisition of domain objects, you're making life easier for yourself and anyone who needs to use your scripts in the future. By including a reference to your company in the cmdlet name, you're making it clear that it's custom for your environment (as opposed to using proxy functions to graft in the functionality you want). And if you decide to change how your data is stored, you just change the function.

So…do you know where your servers are? Can you use a function call to get the list without needing to worry about how your metadata is stored? If so, you’ve got another tool in your PowerShell toolbox that will serve you well. If not, what are you waiting for?
Let me know what you think.

–Mike

A PowerShell Puzzler

It has been said that you can write BASIC code in any language. When I look at PowerShell code, I tend to see a lot of code that looks like transplanted C# code. It’s easy to get confused sometimes, since C# and PowerShell syntax are similar, and when you are dealing with .NET framework objects the code is often nearly identical. Most of the time, though, the differences between the semantics are small and there aren’t a lot of surprises.

I recently found one case, however, that stumped me for a while. What makes it more painful is that I found it while conducting a PowerShell training session and was at a loss to explain it at the time. Please read the following line and try to figure out what will happen without running the code in a PowerShell session.

$services=get-wmiobject -class Win32_Service -computername localhost,NOSUCHCOMPUTER -ErrorAction STOP

.
.
.
.
You’re thinking about this, right?
.
.
.
.
.
.
Once you’ve thought about this for a few minutes, throw it in a command-line somewhere and see what it does.

The first thing (I think) that’s important to notice is that the behavior is completely different from anything that you will see in any other language (at least in my experience).

In most languages, if you have an assignment statement and a function call one of three things will happen:

  1. The assignment statement is successful (i.e. the variable will be set to the result of the function call)
  2. The function call will fail (and throw an exception), leaving the variable unchanged
  3. The assignment could fail (due to type incompatibility), leaving the variable unchanged

In PowerShell, though, we see a 4th option.

  • The function call succeeds for a while (generating output) and then fails, leaving the variable unchanged but sending output to the console (or to be captured by an enclosing scope).

Here’s what the output looks like when it’s run (note: I abbreviated some to make the command fit a line):
[Screenshot: output showing the localhost services]

Not shown in the screenshot is that at the end of the list of localhost services is the expected exception.

How this makes sense is that an assignment statement in PowerShell assigns the final results of the pipeline on the RHS to the variable on the LHS. In this case, the pipeline started generating output when it used the localhost parameter value. As is generally the case with PowerShell cmdlets, that output was not batched. When the get-wmiobject cmdlet tried to use the NOSUCHCOMPUTER value for the ComputerName parameter, it obviously failed and since we specified -ErrorAction Stop, the pipeline execution immediately terminated by throwing an exception. Since we didn’t reach the “end” of the pipeline, the assignment never happens, but there is already output in the output stream. The rule for PowerShell is that any data in the output stream that isn’t captured (by piping it to a cmdlet, assigning it, or casting to [void]) is sent to the console, so the localhost services are sent to the console.
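
If you do want to keep the partial output, one option (just a sketch, not something from the original post) is to let an enclosing subexpression do the capturing, for example with try/catch:

$services = $(
    try {
        get-wmiobject -class Win32_Service -computername localhost,NOSUCHCOMPUTER -ErrorAction Stop
    }
    catch {
        write-warning "WMI query failed: $_"
    }
)

Because the assignment now captures everything the subexpression emits, the localhost services that were generated before the exception end up in $services instead of spilling to the console, and the failure shows up as a warning.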

It all makes sense if you’re wearing your PowerShell goggles (note to self—buy some PowerShell goggles), but if you’re trying to interpret PowerShell as any other language this behavior is really unexpected.

Let me know what you think. Does this interpretation make sense or is there an easier way to see what’s happening here?

-Mike

PowerShell-Specific Code Smells: Building output using +=

Before I cover this specific code smell, I should probably explain one thing. The presence of code smells doesn't necessarily mean that the code in question isn't functional. In the example I gave last time (the extra-long method), there's no reason to think that a method doesn't work just because it is a thousand lines long. There are lots of examples of code that is not optimally written but works fine nonetheless. The focus here is that you're causing more work: either up front, because the code is longer or more complicated than necessary, or later on, when someone (maybe you?) needs to maintain the code.

With that said, we should talk about aggregating output using a collection object and the += compound assignment operator. This is such a common pattern in programming languages that it’s a hard thing not to do in PowerShell, but there are some good reasons not to. To help understand what I mean, let’s look at some sample code.

function get-sqlservices {
    param($computers)
    foreach ($computer in $computers){
        # accumulate the results in a collection (the pattern under discussion)
        $output += get-wmiobject -class Win32_Service -filter "Name like 'SQL%'" -computername $computer
    }
    return $output
}

$mycomputers='localhost','127.0.0.1',$env:COMPUTERNAME
measure-command{
get-sqlservices -computers $mycomputers | select -first 1
}

Before we discuss this code let me be clear: this is not great code for several reasons. For the purposes of discussion, though, let’s just look at how the output is handled. As I mentioned, this is how you’d do something like this in most programming languages and it works fine. On my laptop it ran in 723 milliseconds. If we change the list of computers to a longer list it takes considerably longer:

$mycomputers=('localhost','127.0.0.1',$env:COMPUTERNAME) * 100

Days : 0
Hours : 0
Minutes : 0
Seconds : 51
Milliseconds : 486
Ticks : 514863437
TotalDays : 0.000595906755787037
TotalHours : 0.0143017621388889
TotalMinutes : 0.858105728333333
TotalSeconds : 51.4863437
TotalMilliseconds : 51486.3437

Changing the function to send the output to the pipeline looks like this:

function get-sqlservices2 {
    param($computers)
    foreach ($computer in $computers){
        # send each result directly to the output pipeline instead of collecting it
        get-wmiobject -class Win32_Service -filter "Name like 'SQL%'" -computername $computer
    }
}

$mycomputers=('localhost','127.0.0.1',$env:COMPUTERNAME) * 100
measure-command{
get-sqlservices2 -computers $mycomputers | select -first 1
}

Days : 0
Hours : 0
Minutes : 0
Seconds : 0
Milliseconds : 478
Ticks : 4782609
TotalDays : 5.53542708333333E-06
TotalHours : 0.00013285025
TotalMinutes : 0.007971015
TotalSeconds : 0.4782609
TotalMilliseconds : 478.2609

The code doesn't look much different. The only changes are that we're not assigning the output of the get-wmiobject cmdlet to anything and we don't have an explicit return. This is a point of confusion for most people who come to PowerShell from a traditional imperative language (C#, Java, VB, etc.). In a PowerShell script, any value that isn't "captured", either by assigning it to a variable or piping it somewhere, is added to the output pipeline. The "return value" of the function is the combination of all such values and the value in a return statement (if present). So in this case, the output of the new function is the same as the output of the first; changing it to use the pipeline didn't change the value at all.

So why is this considered a code smell? The reason is that the second script runs faster than the first did. In fact, it runs faster with 300 computers (100 copies of the list of 3) than the first did with 3 computers. Why is it so much faster? In PowerShell 3.0, the implementation of select-object was changed to stop the pipeline after the number of objects requested in the -first parameter. In other words, even though we passed 300 servers to the function, it stopped after it got the first result back from get-wmiobject on the first server.

You're not always going to be using -first, but even when you're not, the values in the pipeline are available to downstream cmdlets before the function finishes (as long as you don't use +=). If you're simply sending the output to the console, you will begin to see the results immediately rather than having to wait. Another issue arises when your aggregating function throws an exception before it's done: if you never reach the return statement, you won't see any results at all, and being able to see the results up to the point of the error would probably help you track down where the error was. What if there were thousands of servers (or your dataset was considerably larger for some other reason)? Your process would eat memory as it built a huge collection; with pipeline output there's no reason for the process to be using much memory at all. Finally, with pipeline output there's one less thing to keep track of. One less variable means one less place to make a mistake (accidentally using = at some point instead of +=, misspelling the variable name, etc.).
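
If you want to see that pipeline-stopping behavior for yourself, here is a minimal illustration (not from the original example):

measure-command {
    1..10000 | foreach-object { start-sleep -milliseconds 50; $_ } | select-object -first 1
}

On PowerShell 3.0 or later this finishes in a fraction of a second instead of roughly eight minutes, because select-object stops the upstream pipeline as soon as it has the one object it asked for.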

I hope you can see that with PowerShell, following this common pattern is not a good thing.

Let me know what you think.

Mike

PowerShell-Specific Code Smells

A code smell is something you find in source code that may indicate that there’s something wrong with the code. For instance, seeing a function that is over a thousand lines gives you a clue that something is probably wrong even without looking at the specific code in question. You could think of code smells as anti-“Best Practices”. I’ve been thinking about these frequently as I’ve been looking through some old PowerShell code.

I’m going to be writing posts about each of these, explaining why they probably happen and how the code can be rewritten to avoid these “smells”.

A few code smells that are specific to PowerShell that I’ve thought of so far are:

  1. Missing Param() statements
  2. Artificial “Common” Parameters
  3. Unapproved Verbs
  4. Building output using +=
  5. Lots of assignment statements
  6. Using [object] parameters to allow different types

Let me know if you think of others. I’ll probably expand the list as time goes on.

-Mike