In case you haven’t heard, the PowerShell Community Extensions (PSCX) and SQL PowerShell Extensions (SQLPSX) projects have both recently released version 2.0 (and each followed shortly after with quick bug fixes). Both 2.0 releases are module-based and include advanced functions to solve lots of frequently encountered problems. If you’ve never used these toolsets, I would recommend giving them a try.
Passing Predicates as Parameters in PowerShell
This is just a quick trick that I figured out today. I had a process that manipulated a dataset, and I needed to be able to change the process to allow me to filter the data that was processed. Also, it wasn’t clear exactly what kind of filter would specifically be needed in any given scenario.
Normally, I would just filter the data using where-object and pass it into the function in question. The problem here was that the data retrieval was somewhat cumbersome, and I didn’t want to push that complexity outside of the function. And since the filtering criteria weren’t clear-cut, I couldn’t (and didn’t want to) use a bunch of switches and parameters along with a nest of if/else conditions.
What I wanted was to pass a predicate (an expression that evaluates to true or false depending on whether I want a row in the dataset) into the function. Essentially, I wanted to insert a where-object into the middle of the function.
Amazingly, PowerShell allows me to do that. The code looks a bit strange to me at first, but it works very well and isn’t complicated at all.
Here’s an example:
function process-data{
    Param(
        [scriptblock]$filter = {$true}
    )
    #retrieve the data
    #filter the data
    $data = $data | where-object $filter
    #process the data
}

process-data -filter {$_.UpdatedDateTime -gt (get-date '1/1/2010')}
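Since the parameter is just a scriptblock, predicates can also be stored in variables and reused, or even combined. Here’s a quick sketch against the same process-data function (the Status column is a made-up example):

```powershell
# Hypothetical reusable predicates for the process-data function above
$recent  = { $_.UpdatedDateTime -gt (get-date).AddDays(-7) }
$flagged = { $_.Status -eq 'Flagged' }   # Status is an assumed column name

# Pass whichever predicate fits the scenario
process-data -filter $recent
process-data -filter $flagged

# Or compose them into a single predicate; $_ flows into the invoked scriptblocks
process-data -filter { (& $recent) -and (& $flagged) }
```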
It’s not earth-shattering, but I think this will come in handy in several places for me.
Let me know what you think.
-Mike
Checking a Field for NULL in PowerShell
It’s been a long time (over 2 months) since I last posted. I’ll try to get back into a rhythm of posting at least weekly. Anyway, this is something that occurred to me at work when writing a script.
I usually avoid nullable columns, but sometimes it makes sense for date fields to be null (rather than using sentinel values like 1/1/1900). In this case, I had a nullable date column and I needed to check in PowerShell whether the field was in fact null or not. In SQL, I would have just used an IS NULL, or used the IsNull() function to replace the null value with something a little easier to deal with. My first (feeble) attempt was to do this:
if (!$_.completedDate){
    # it's null
}
Unfortunately for me, that doesn’t work. Next, I used this (which worked, but wasn’t very satisfactory either):
if ($_.completedDate.ToString() -eq ''){
    # it's null
}
Realizing that I was being stupid, I googled “PowerShell SQL NULL” and after looking at several pages which didn’t really address the issue, I found this. A little work to change it into a function, and voilà.
function is-null($value){
    return [System.DBNull]::Value.Equals($value)
}
A few quick tests and this is what I wanted. Now, my code looks like this:
if (is-null $_.completedDate){
    # it's null
}
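The reason the first attempt fails is that a null column comes back as [System.DBNull]::Value, which is a real object, not $null. Assuming the is-null function above, a few quick console checks illustrate the distinction:

```powershell
# DBNull.Value is an actual object, so it isn't "falsy" like $null is
is-null ([System.DBNull]::Value)   # True
is-null $null                      # False - $null is not DBNull
is-null 'text'                     # False - a real value is not DBNull
```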
I find it hard to believe I haven’t written this function before (or seen it).
By the way…be watching the SQL PowerShell Extensions project. Chad released version 2.1, which includes SQL mode for the ISE (really nice). I know he and several others are collaborating on an update which should be out sometime soon.
-Mike
The PowerShell Bug That Wasn’t, and More Package Management
Have you ever tracked down a bug, been confident that you had found the root of your problems, only to realize shortly afterward that you missed it completely?
What I posted yesterday as a bug in PowerShell (having to do with recursive functions, dot-sourcing, and parameters) seemed during my debugging session to clearly be a bug. After all, I watched the parameter value change from b to a, didn’t I? Sure did. And in almost every language I’ve ever used, that would be a bug. On the other hand, PowerShell is the only language that I know of that has dot-sourcing. Here’s a much simpler code example which shows my faulty thinking:
function f($x){
    if ($x -eq 1){
        write-host $x
        . f ($x+1)
        write-host $x
    }
}
f 1
Here, we have a simple “recursive function” which uses dot-sourcing to call itself. In my mind, how this would have worked is as follows:
- We call the function, passing 1 for $x
- The if condition is true, so it prints 1 and calls the function, passing 2 for $x
- In the inner call, the if condition is false, so nothing happens
- We pop back to the calling frame, where $x is 1 and print it
If it weren’t for that pesky dot operator, that would have been accurate.
The problem is, the dot operator changes the scoping of the inner call. Here’s what the about_operators help topic has to say about the dot-sourcing operator:
Description: Runs a script so that the items in the script are part of the calling scope.
Which is not a surprise…really. The reason I was using the dot operator in my package management code was to make sure that functions defined in the scripts it was calling would be included in the existing scope, rather than their script scope. The problem was one of nearsightedness. I was so focused on the fact that the dot sourcing was making the functions part of the caller’s scope that I didn’t consider that variable declarations (including parameters) would also be in the caller’s scope.
So, the correct interpretation of the above script is:
- We call the function, passing 1 for $x
- The if condition is true, so it prints 1 and calls the function, passing 2 for $x
- The parameter is named $x, so $x is set to 2 (overwriting the $x that was set to 1)
- In the inner call, the if condition is false, so nothing happens
- We pop back to the calling frame, where $x is now 2, and print 2
The trick here is that the function f dot-sourced something that set $x to 2. The fact that it was f is incidental. It didn’t have to be.
Maybe this example will make it more clear:
function f($x){
    write-host $x
    . g
    write-host $x
}
function g{
    $x = "Hello, World!"
}
f 1
If we were doing this without dot-sourcing, we would expect to see the number 1 printed out twice. However, since we dot-sourced g, the assignment in the function body of g happens in the scope of f. In other words, it’s as if the $x=”Hello, World!” were executed inside f. Thus, the output of this script is 1, followed by “Hello, World!”.
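The contrast is easy to see by swapping the dot operator for the call operator (&), which runs the function in its own child scope. This sketch is the same example with that one change:

```powershell
function f($x){
    write-host $x
    & g            # call operator: g gets its own child scope
    write-host $x  # $x in f is untouched, so this still prints 1
}
function g{
    $x = "Hello, World!"   # creates a local $x inside g's scope only
}
f 1
# prints 1 twice; "Hello, World!" never escapes g's scope
```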
So, it wasn’t a bug, it was just me not being thorough in applying my understanding of dot-sourcing.
Now, on with Package Management.
First, to fix the problem caused by the parameter being overwritten (which it is; it’s just that it’s expected to be). I hadn’t worked out a way to fix the problem before I went to bed last night, but as I was rolling this stuff around in my head (which is when I figured out that it wasn’t really a bug), I thought of a simple solution. Since we can expect that sometimes the $filename parameter in the require (and reload) function will be overwritten by a value in the dot-sourced script, we just need to make sure we’re done using it at that point. So, I simply made the assignment to the dictionary before dot-sourcing. Here’s the updated code:
$global:loaded_scripts=@{pkg_utils='INITIAL'}
function require($filename){
    if (!$global:loaded_scripts[$filename]){
        $global:loaded_scripts[$filename]=get-date
        . scripts:$filename.ps1
    }
}
function reload($filename){
    $global:loaded_scripts[$filename]=get-date
    . scripts:$filename.ps1
}
To add modules, we need to do a few extra things:
- We need to detect if we’re running in 2.0 or not
- We need to see if there is a module with the given name
- We need to see if the module is already loaded or not (in the case of require…it won’t matter for reload)
Fortunately, none of those are very difficult. Here’s the updated code (including modules). I even added some comments to make the flow more clear:
$global:loaded_scripts=@{pkg_utils='INITIAL'}
function require($filename){
    if ($global:loaded_scripts[$filename]){
        # this function has already loaded this (script or module)
        return
    }
    if ($psversiontable){
        # we're in 2.0
        if (get-module $filename -listavailable){
            # the module exists in the module path
            $global:loaded_scripts[$filename]=get-date
            import-module $filename
            return
        }
    }
    # it wasn't a module...so dot-source the script
    $global:loaded_scripts[$filename]=get-date
    . scripts:$filename.ps1
}
function reload($filename){
    if ($psversiontable){
        # we're in 2.0
        if (get-module $filename -listavailable){
            # the module exists in the module path
            $global:loaded_scripts[$filename]=get-date
            import-module $filename
            return
        }
    }
    # it wasn't a module...so dot-source the script
    $global:loaded_scripts[$filename]=get-date
    . scripts:$filename.ps1
}
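With the module support in place, callers use require the same way regardless of how a library is packaged. A usage sketch (the library names here are hypothetical):

```powershell
. require sqlpsx     # found by get-module -listavailable, so import-module runs
. require utils      # no such module, so scripts:utils.ps1 is dot-sourced
. require utils      # already recorded in $loaded_scripts, so nothing happens
. reload utils       # forces scripts:utils.ps1 to be dot-sourced again
```

Note that require still has to be dot-sourced itself, so that any script it dot-sources lands in the caller’s scope; import-module doesn’t need that, but it doesn’t hurt.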
That’s it for today. Let me know what you think.
-Mike
Package Management and a PowerShell Bug
UPDATE: I have worked out how the behavior described at the end of this post is not a bug, but in fact just PowerShell doing what it’s told. Don’t have time to explain right now, but I’ll write something up later today. I also worked out how to “fix” the behavior.
For a long time now, I’ve been dissatisfied with what I call “package management” in PowerShell. Those of you who know me will be shocked that anything in PowerShell is less than perfect in my eyes, but this is one place that I feel let down. Modules in 2.0 remedy the situation somewhat, but it still isn’t quite what I want or am used to in other languages.
Let me give an example. In VB.NET, if you need to use the functions in an assembly, you put “Imports AssemblyName” at the top of your script. In C#, you would have “Using AssemblyName”. In Python, there would be “Import Something”.
In PowerShell 1.0, you had nothing. In 2.0, you could create a module manifest which would specify either RequiredModules or ScriptsToProcess (or several other things to do upon loading the module). The problems I see with using the module manifest are:
- What if I’m not writing a module? There’s no such thing as a “script manifest”
- What if the script or module that is required performs some initialization that should only be done once per session?
- What if the script or module that is required performs expensive initialization?
Because of these reasons (and because I only started using 2.0 when it went RTM) I wrote a couple of quick functions to do what I thought made sense.
$global:loaded_scripts=@{pkg_utils='INITIAL'}
function require($filename){
    if (!$global:loaded_scripts[$filename]){
        . scripts:$filename.ps1
        $global:loaded_scripts[$filename]=get-date
    }
}
function reload($filename){
    . scripts:$filename.ps1
    $global:loaded_scripts[$filename]=get-date
}
To use these you need to create a psdrive called scripts: with code like this (probably in your profile):
New-PSdrive -name scripts -PSprovider filesystem -root PathToYourLibraries | Out-Null
Then, also in your profile, you’ll want to dot-source the file you put these functions in (for example, package_tools.ps1):
. scripts:package_tools.ps1
Once you have those set up, you can dot-source the require function to make sure that a script has been loaded as such:
. require somelibrary
I have the functions I use divided by “subject” into several library scripts, and make sure that at the top of each script, I use “. require” to ensure that any prerequisites are already loaded.
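A library script using that convention might start like this sketch (the script and function names are hypothetical):

```powershell
# scripts:sql_utils.ps1 - functions for the "SQL" subject
. require string_utils   # prerequisite library, loaded only if not already loaded
. require date_utils     # another prerequisite

function get-databaselist($servername){
    # ...functions for this subject go here...
}
```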
Now for the PowerShell bug (which took me a long time to track down).
Create 2 files, a.ps1 and b.ps1 in your scripts: directory.
# a.ps1
write-host "this is script a"
# b.ps1
write-host "this is script b"
write-host "this script loads a"
. require a
After dot-sourcing package_tools, run the following commands:
. require b
You should get output that looks something like this:
this is script b
this script loads a
this is script a
Everything looks good until you inspect the $global:loaded_scripts variable:
ps> $loaded_scripts

Name                           Value
----                           -----
a                              1/19/2010 11:23:09 PM
package_tools                  INITIAL
Although b.ps1 was indeed dot-sourced (you can see the output), and the only code-path through the require function that would dot-source it would also add an entry to $loaded_scripts, there is no such entry. The problem is that when b.ps1 called the require function (to load a.ps1), the $filename variable in the calling context (where it should have been “b”) was overwritten by the call with “a” as a parameter. Walking through the code in a debugger confirms the problem.
Have you ever seen problems with recursion and dot-sourcing in PowerShell? Can you see any way around the problem I’ve described? For instance, saving the $filename in a variable and restoring it after the dot-source call in require doesn’t help, because the same code-path is followed in the recursive call, and that variable is overwritten as well.
Even with this bug, I find the require function (and reload, which I didn’t discuss, but always loads the script in question) to be very helpful. I also have extended them to include importing modules, if they exist. I’ll discuss them in my next post, coming soon.
-Mike
P.S. Here‘s a question I posted to StackOverflow.com about these functions back in November of 2008.
SQL PowerShell Extensions (SQLPSX) 2.0 Released
The first module-based release of the SQL PowerShell Extensions (SQLPSX) was released recently on CodePlex. It features very handy wrappers for most of the SMO objects used to manipulate SQL Server metadata, SSIS packages, Replication, and (new in the 2.0 release) an ADO.NET module which I wrote based on the code in this post. There’s also a data-collection process and Reporting Services reports to help you get your SQL Server installations under control.
Chad Miller, the driving force behind SQLPSX, has put a lot of effort into this release, and you’ll find really good examples of advanced functions (with comment-based help, even).
If you deal with SQL Server in any way, you’ll almost certainly be able to use this set of modules to streamline your scripting experience (and probably learn something about SMO in the process).
You can find the release here.
Get-EventLog and Get-WMIObject
Recently, we had an occasion to write a process to read event logs on several sql servers to try to determine login times for different sql and Windows logins. Since we have begun using PowerShell v2.0, and since get-eventlog now has a -computername parameter, it seemed like an obvious solution.
The event message we were interested in looked something like “Login succeeded for user ‘UserName’ ….”. The code we were trying to use was:
get-eventlog -computername $servername -logname Application -message "Login succeeded for user*" -after ((get-date).AddDays(-1))
I expected that, given a date parameter and a leading string to match, the search wouldn’t be too bad, but this ended up taking several minutes per server. As there are over a hundred servers to scan, that didn’t work well for us.
We ended up falling back to get-wmiobject.
$BeginDate=[System.Management.ManagementDateTimeConverter]::ToDmtfDateTime((get-date).AddDays(-1))
get-wmiobject -class win32_ntlogevent -computerName $servername -filter "(EventCode=18453) and (LogFile='Application') and (TimeGenerated>'$BeginDate')"
Cons:
- We have to encode the date parameter (instead of using a nice datetime parameter like get-eventlog has)
- We have to write a WQL where-clause to match the parameters
Pros:
- We get to use the event code (rather than a string match)
- The code is orders of magnitude faster (39 servers in 13 minutes as a test case)
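One mitigation for the date-encoding con: the same converter class works in both directions, so the DMTF-formatted TimeGenerated strings that come back can be turned into real DateTime values for reporting. A sketch, using the same $servername and $BeginDate as above:

```powershell
get-wmiobject -class win32_ntlogevent -computerName $servername `
    -filter "(EventCode=18453) and (LogFile='Application') and (TimeGenerated>'$BeginDate')" |
  select-object ComputerName,
    @{Name='Generated';Expression={
        # convert the DMTF string back into a DateTime
        [System.Management.ManagementDateTimeConverter]::ToDateTime($_.TimeGenerated)}},
    Message
```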
I think that you might have a positive experience using get-eventlog if you need to scan a range of time (for instance if you’re reporting on what happened on the server), but if you need to look for a specific event (or set of events) you’re probably going to want to use get-wmiobject.
-Mike
Writing your own PowerShell Hosting App (the epilog)
As I mentioned before, I have created a CodePlex project to track the development of a WPF PowerShell host using AvalonDock and AvalonEdit.
It’s still in the very beginning stages, but it’s comparable to the code I used in this tutorial series (except that it’s using different technologies, all of which I’m new to).
PowerShellWorkBench will eventually include:
- Treeview controls
- Node/Edge Graphs (using the GraphXL library)
- Context-menus based on powershell ETS
- Whatever you think of and submit
If you’re interested in contributing to PowerShellWorkBench, drop me a line (mike).
-Mike
[EDIT]: The windows forms-based powershell workbench project can be downloaded here.
Writing your own PowerShell Hosting App (part 6…the final episode)
Before we proceed with putting powershell objects in a treeview (which I promised last time), I need to explain some changes I have made to the code.
- Refactoring the InvokeString functionality out of the menu item event
- Merging the error stream into the output stream
- Replacing the clear-host function with a custom cmdlet
First, we had been calling the invoke method in the OnClick event of the menu item. While that works fine as a proof-of-concept, we’re going to need that functionality elsewhere, so it’s a simple matter to extract the logic into a function as follows:
Sub RunToolStripMenuItem1Click(sender As Object, e As EventArgs)
    InvokeString(txtScript.Text)
End Sub

Private Sub InvokeString(strScript As String)
    Dim ps As PowerShell = PowerShell.Create()
    ps.Runspace = r
    ps.AddScript(strScript)
    ps.AddCommand("out-default")
    ps.Commands.Commands.Item(ps.Commands.Commands.Count-1).MergeUnclaimedPreviousCommandResults = _
        PipelineResultTypes.Error + PipelineResultTypes.Output
    Dim output As Collection(Of PSObject)
    output = ps.Invoke()
End Sub
The key line in this new InvokeString method is the one that merges the error stream into the output stream (so that errors we throw with our new cmdlets will show up in the console). We’ll still need to update our PSHostUserInterface class to handle the WriteError method, but that’s pretty easy (as are the debug, verbose, and warning methods):
Public Overloads Overrides Sub WriteErrorLine(value As String)
    MainForm.PowerShellOutput.AppendText("ERROR:" + value + vbCrLf)
End Sub

Public Overloads Overrides Sub WriteDebugLine(message As String)
    MainForm.PowerShellOutput.AppendText("DEBUG:" + message + vbCrLf)
End Sub

Public Overloads Overrides Sub WriteProgress(sourceId As Long, record As System.Management.Automation.ProgressRecord)
    Throw New NotImplementedException()
End Sub

Public Overloads Overrides Sub WriteVerboseLine(message As String)
    MainForm.PowerShellOutput.AppendText("VERBOSE:" + message + vbCrLf)
End Sub

Public Overloads Overrides Sub WriteWarningLine(message As String)
    MainForm.PowerShellOutput.AppendText("WARNING:" + message + vbCrLf)
End Sub
With that, we can see that the built-in clear-host isn’t going to work:
ERROR:Exception setting "CursorPosition": "The method or operation is not implemented."
ERROR:At line:8 char:16
ERROR:+ $Host.UI.RawUI. <<<< CursorPosition = $origin
ERROR:    + CategoryInfo          : InvalidOperation: (:) [], RuntimeException
ERROR:    + FullyQualifiedErrorId : PropertyAssignmentException
ERROR:
ERROR:Exception calling "SetBufferContents" with "2" argument(s): "The method or operation is not implemented."
ERROR:At line:9 char:33
ERROR:+ $Host.UI.RawUI.SetBufferContents <<<< ($rect, $space)
ERROR:    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
ERROR:    + FullyQualifiedErrorId : DotNetMethodException
ERROR:
You can see that, by default, “clear-host” is a function that relies on the RawUI class in the host (using rectangles, and filling with spaces, it looks like). We really don’t want that kind of access in our interface, so we’re going to replace this function with a cmdlet that simply clears the textbox.
That brings up another “benefit” of writing your own GUI host…the ability to implement cmdlets without writing SnapIns. With 2.0, you can write advanced functions (and I encourage you to do that), but with 1.0 you didn’t have that option. With your own host, you get to add cmdlets without the pain of a SnapIn installer. The two things we need to do are:
- Create a cmdlet class to do the work
- Add the cmdlet to the runspace configuration
When we replace the clear-host function, we’re going to also want to remove the existing function, but that’s not typical. Here’s the code:
First, the cmdlet class (I usually put all of the cmdlets in the same file, rather than having a single file for each class, but that’s just a preference):
Imports System.Management.Automation
Imports System.ComponentModel

<Cmdlet(VerbsCommon.Clear, "Host")> _
Public Class ClearHost
    Inherits Cmdlet

    Protected Overrides Sub EndProcessing()
        MainForm.PowerShellOutput.Clear()
    End Sub
End Class
To add the cmdlet to the runspace (and remove the function), I added these lines after the r.Open() call:
InvokeString("remove-item function:clear-host") r.RunspaceConfiguration.Cmdlets.Prepend(New CmdletConfigurationEntry("clear-host",GetType(ClearHost),Nothing)) r.RunspaceConfiguration.Cmdlets.Update()
Now, finally, on to the promised treeview manipulation. I want the cmdlet to be fairly simple, allowing you to specify the name of the label of the new node, and optionally the label of the parent node and an object to attach to the node (we’ll put it in the tag property of the treenode). We’ll also need to expose the treeview control in a shared member of the form (since the cmdlet doesn’t have a reference to the specific window we instantiate).
First, here’s the cmdlet. I’ve tried to make the code as simple as possible, so there are no tricks involved.
<Cmdlet(VerbsCommon.New, "TreeNode")> _
Public Class NewTreeNode
    Inherits Cmdlet

    Private _nodename As String = ""
    Private _parentnodename As String = ""
    Private _object As PSObject = Nothing

    <Parameter()> _
    Public Property NodeName() As String
        Get
            Return _nodename
        End Get
        Set(ByVal value As String)
            _nodename = value
        End Set
    End Property

    <Parameter()> _
    Public Property ParentNodeName() As String
        Get
            Return _parentnodename
        End Get
        Set(ByVal value As String)
            _parentnodename = value
        End Set
    End Property

    <Parameter()> _
    Public Property PSObject() As PSObject
        Get
            Return _object
        End Get
        Set(ByVal value As PSObject)
            _object = value
        End Set
    End Property

    Protected Overloads Overrides Sub EndProcessing()
        MyBase.EndProcessing()
        Dim _node As TreeNode
        Dim _parent As TreeNode
        _parent = PWBUIHandling.FindNodeInTree(_parentnodename, MainForm.Tree.Nodes)
        If _parent Is Nothing Then
            _node = MainForm.Tree.Nodes.Add(_nodename, _nodename)
        Else
            _node = _parent.Nodes.Add(_nodename, _nodename)
        End If
        _node.Tag = _object
    End Sub
End Class
In the form, we’ll need to add a treeview (I also added a second splitter to help organize the UI, but that’s obviously not necessary). Adding the shared property, setting it, and adding the cmdlet to the runspace complete the changes:
Public Partial Class MainForm
    Public Shared PowerShellOutput As TextBox
    Public Shared Tree As TreeView
    Private host As New PowerShellWorkBenchHost
    Private r As Runspace = RunspaceFactory.CreateRunspace(host)

    Public Sub New()
        ' The Me.InitializeComponent call is required for Windows Forms designer support.
        Me.InitializeComponent()
        PowerShellOutput = txtOutput
        Tree = treeView1
        r.ThreadOptions = PSThreadOptions.UseCurrentThread
        r.Open()
        InvokeString("remove-item function:clear-host")
        r.RunspaceConfiguration.Cmdlets.Prepend(New CmdletConfigurationEntry("clear-host", GetType(ClearHost), Nothing))
        r.RunspaceConfiguration.Cmdlets.Append(New CmdletConfigurationEntry("new-treenode", GetType(NewTreeNode), Nothing))
        r.RunspaceConfiguration.Cmdlets.Update()
    End Sub
With that, let’s see how it works:
Obviously, I haven’t built an application that’s ready for use, but I think it is a good example of how you can use the PowerShell APIs to create a scriptable environment that you can customize. And the fact that the code written to make it happen is less than 200 lines is a testament to the useful nature of the API (actual hand-coded lines, that is, there are about 400 lines in the whole project).
What’s next? I think I’ll stop on the tutorial and segue into the CodePlex project I’m starting (it should be live in the next week or two). In it, you should find things like:
- Syntax Highlighting (thanks to AvalonEdit)
- Advanced docking interface (thanks to AvalonDock)
- Tab Expansion
- Custom pop-up menus for UI objects (like the nodes in the tree, for example)
- Whatever else I (or anyone who wants to contribute) think of
-Mike
P.S. I just realized that I forgot to include the FindNodeInTree function that the cmdlet called. I hate that the treeview class doesn’t include a Find method. Here’s the code:
Function FindNodeInTree(nodename As String, nodes As TreeNodeCollection) As TreeNode
    Dim rtn As TreeNode = Nothing
    If nodes.ContainsKey(nodename) Then
        Return nodes(nodename)
    Else
        For Each node As TreeNode In nodes
            rtn = FindNodeInTree(nodename, node.Nodes)
            If rtn IsNot Nothing Then
                Return rtn
            End If
        Next
    End If
    Return rtn
End Function
Writing your own PowerShell Hosting App (part 5)
In the last post, we got to the point that we were actually using the new host objects that we implemented, but we still hadn’t provided anything more than trivial implementations (throwing an exception) for the methods that make a custom host useful, e.g. the write-* functions.
Before we do that, we need to discuss interaction between PowerShell (the engine) and Windows Forms (though we would have had the same issue with WPF). In PowerShell 1.0, the engine creates its own thread to run the Invoke() method, and doesn’t provide a way to change that thread’s apartment model, which is MTA. The reason that is important is that to interact safely with Windows Forms (or WPF), you need to be in the same thread. The bottom line is that when using the 1.0 object interface, you can’t directly interact with the window environment. Which means that any hopes you had of writing some simple code to append text to the textbox in the WriteHost method are going to be dashed. Unless, of course, you use the 2.0 object model. The designers realized the shortcoming, and in 2.0 they allow you to change the child thread to STA.
So now we have a couple of choices. As I mentioned in part 3, I was purposely using the 1.0 object model, since 2.0 wasn’t final, and the 1.0 methods would work fine in a 2.0 install. One thing we could easily do is switch the code to 2.0, set the threading model to STA, and go on our merry way. Another approach would be to have the Host objects interact with the user interface indirectly. One way to do that would be to simply have the host methods package their arguments into an object, and add the object into a queue that is consumed in a timer event handler on the form. This works quite nicely, and provides an easy separation between the host and the interface.
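The queue-and-timer approach mentioned above might look something like this sketch (the OutputQueue and timer names are hypothetical; the idea is that the host only touches a locked queue, and the form drains it on the UI thread):

```vbnet
' Shared, locked queue that the host writes to from the pipeline thread
Public Shared OutputQueue As New Queue(Of String)

' In PSHostUserInterface: enqueue instead of touching the textbox directly
Public Overloads Overrides Sub Write(value As String)
    SyncLock OutputQueue
        OutputQueue.Enqueue(value)
    End SyncLock
End Sub

' In the form: a timer Tick event drains the queue on the UI thread
Sub TmrPumpTick(sender As Object, e As EventArgs)
    SyncLock OutputQueue
        While OutputQueue.Count > 0
            txtOutput.AppendText(OutputQueue.Dequeue())
        End While
    End SyncLock
End Sub
```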
For now, though, for the sake of simplicity (and to keep the code from getting longer than anyone would care to read), we’ll just use the 2.0 object model. As I mentioned in part 4, I plan to create a project on Codeplex for a more complete host than I can really create in a tutorial. It will include code to keep the host and interface separate (which I think I like better).
Here is the revised code in the form to use the 2.0 model (I’ve moved some of the declarations out of the Click method because the objects don’t need to be recreated each time):
Public Partial Class MainForm
    Public Shared PowerShellOutput As TextBox
    Private host As New PowerShellWorkBenchHost
    Private r As Runspace = RunspaceFactory.CreateRunspace(host)

    Public Sub New()
        ' The Me.InitializeComponent call is required for Windows Forms designer support.
        Me.InitializeComponent()
        PowerShellOutput = txtOutput
        r.ThreadOptions = PSThreadOptions.UseCurrentThread
        r.Open()
    End Sub

    Sub RunToolStripMenuItem1Click(sender As Object, e As EventArgs)
        Dim ps As PowerShell = PowerShell.Create()
        ps.Runspace = r
        ps.AddScript(txtScript.Text)
        ps.AddCommand("out-default")
        Dim output As Collection(Of PSObject)
        output = ps.Invoke()
    End Sub
End Class
The line of code that will allow the host methods to interact with the form is:
r.ThreadOptions=PSThreadOptions.UseCurrentThread
A few other changes that should be noted are:
- Adding a shared member PowerShellOutput to use in the host to update the textbox
- Switching from out-string to out-default (now that we’re handling host output, we can let the default behavior send the objects in the pipeline to the host)
- Removing the loop through the output (because of the previous point)
With that being said, I’ll make another comment. If you’re trying to follow along with this series (as in, you have an editor open and are copying code in and trying it as you go), you’ll want to make sure you set a breakpoint on each of the throw statements in the host classes. If you don’t do this, you won’t know what methods you need to implement (except by trial and error). I have spent several hours debugging when the breakpoints would have showed me the problem immediately. Please learn from my mistakes.
Now, we can finally get to coding the output routines. We obviously need to implement some write* method, but there are several. To figure out which one, I tried to run write-host “hello” and dir (fairly simple commands) and it turns out that we need to implement these methods for those to work:
- WriteLine
- Write –both versions
- PSHostRawUserInterface.ForegroundColor
- PSHostRawUserInterface.BackgroundColor
I really wasn’t expecting the color properties to come into play until we started passing them to the write-host cmdlet (which is why I lost so much time debugging). Here are the implementations I’m using for now. Note that we’re somewhat limited by the choice of a textbox (rather than a more fully-featured control) for output.
In PSHostUserInterface:
Public Overloads Overrides Sub Write(value As String)
    If value = vbLf Then
        MainForm.PowerShellOutput.AppendText(vbCrLf)
    Else
        MainForm.PowerShellOutput.AppendText(value)
    End If
End Sub

Public Overloads Overrides Sub Write(foregroundColor As ConsoleColor, backgroundColor As ConsoleColor, value As String)
    MainForm.PowerShellOutput.AppendText(value)
End Sub

Public Overloads Overrides Sub WriteLine(value As String)
    MainForm.PowerShellOutput.AppendText(value + vbCrLf)
End Sub
and in PSHostRawUserInterface:
Public Overloads Overrides Property ForegroundColor() As ConsoleColor
    Get
        Return ConsoleColor.Black
    End Get
    Set
        Throw New NotImplementedException()
    End Set
End Property

Public Overloads Overrides Property BackgroundColor() As ConsoleColor
    Get
        Return ConsoleColor.White
    End Get
    Set
        Throw New NotImplementedException()
    End Set
End Property
With those changes, we are (finally) using the custom host for output. Here’s an obligatory screenshot:
So what’s next? Obviously, we should fill in appropriate implementations for the other write-* functions. Other than write-progress, they shouldn’t prove any challenge. Write-progress, on the other hand, would really look nice as a progressbar (possibly in a status bar?). There are a few other things to consider:
- clear-host is implemented as a function which uses the RawUI class to do its work…that probably won’t work in our GUI app
- Colors (if you choose to implement them) are going to be specified using the ConsoleColor class (which is different from the Color class used by Windows Forms)
- Profiles….do you want to load them? Do you want to have a profile specific to your new host?
- What about interacting with the GUI in other ways?
The last point is the main thing that drove me to write my own host. You may be fortunate enough to have a GUI tool to do all of your administration duties, but I suspect that most of us have several tools that we have to switch between to get stuff done. And those tools are probably not powershell-ready. Writing your own host allows you to build your “dream environment”, combining the best features of your favorite tools, and adding script-support in the process.
Next time, we’ll see about doing something different…adding data from powershell into a treeview control (in the host, of course).
As usual, please let me know if you’re enjoying this series.
Mike