The Two Faces of the ISE Run Button (and a bit of bonus evil)

I love the ISE. I’ve used other “environments”, but always end up back in the good old ISE. I do use the awesome ISESteroids module by Tobias Weltner, but most of the time you can find me in the unadorned, vanilla ISE.

With that bit of disclaimer out of the way, there is something that came to my attention recently. The Run button on the toolbar does two different things, although it doesn’t make a big deal about it. The two things are similar enough that it’s easy to miss, and subtle enough that the difference isn’t important most of the time.

The two things are, unsurprisingly, both concerned with running what’s in the current tab. Since it’s the Run button, you’d expect that to be the case.

Face Number 1
The first thing the Run button does is run the code that’s in the current editor tab. It does this by copying the text as input down into the console area. An example is seen in the image below:

You can clearly see that the text in the editor has been copied to the command-line.

Face Number 2

The second thing it does is run the script that’s loaded in the current tab. It doesn’t just run the script, either; it actually dot-sources it (i.e., runs the script in the global scope).
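For reference, dot-sourcing just means invoking the script with a leading dot and a space, which makes it run in the caller’s scope instead of a child scope (the file name here is just a stand-in):

. .\MyScript.ps1   # dot-source: the script's variables and functions land in the current scope
.\MyScript.ps1     # normal invocation: the script runs in its own child scope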

The behavior of the Run button depends entirely on whether the tab has been saved as a script file (.ps1) before. If so, it runs (dot-sources) the script. If not, it executes the text that’s in the tab. Note that in the first screenshot the tab in the ISE says “Untitled.ps1”, which means it has not been saved. In the second, it says “RunButton.ps1”, so it obviously has been saved at that point.

The great thing about this behavior is that you can run stuff without saving it. Once you decide to save it, though (perhaps because you want to debug it), the same button and hotkeys run the script in almost exactly the same way.

If you remember in my last post Blogging and Rubber Duck Debugging, I discussed how sometimes writing a blog post makes things more clear.  Fortunately I usually realize where my thinking has gone wrong before I hit “publish”, but not always.  This post, for instance, has sat in my drafts folder since October of 2014 because I wasn’t sure about it.

I was certain that I had a script which worked differently in the two “modes” of the Run button.  I remember vividly typing the (not very complex) script in my ISE and running it successfully.  I saved the file and gave it to someone else to run “for real”, and it failed.  I tracked the failure down to the fact that I was using scope modifiers (script: or global:) and they acted differently in an unsaved editor versus in a file.  I am unable to reproduce the result now, though, so I am doubting my sanity.  It did seem possible, though, that the script: scope might work differently in an actual script than it does in the global scope.


Typing the above confession paragraph was enough to dislodge the bad thinking!  Rubber duck debugging to the rescue.

Here’s the simplified code that I started to blog about 13 months ago:

$processed = @()
function ProcessValue($value){
  if($global:processed -contains $value){
    "$value already processed"
  } else {
    "Processing $value"
    $global:processed += $value
  }
}
'Value1','Value2','Value3','Value1' | foreach {ProcessValue $_}

The code is pretty simple. It “processes” values as long as they haven’t already been “processed” by the function.

My expectation was that running the script (note the example call at the bottom of the listing) would show that it processed the first three values and then report that “Value1” was already processed. Pretty simple, and that’s what it shows in the ISE when you click Run.

The problem isn’t, in fact, that the Run button works differently depending on whether you’ve saved the file. The script failed when the other user ran it because he “ran” it: he didn’t load it into the ISE and click the Run button, he executed the script. The issue arises because dot-sourcing a script and running it are not the same thing.

To illustrate, here’s what it looks like when you run the script normally:

Notice that it failed to see that value1 had already been processed. Dot-sourcing the script works just like the Run button.

The “bug” in the script is that the first statement doesn’t include a scope modifier when it initializes the $processed variable. When the script is dot-sourced, that first statement runs in the global scope, so the variable is initialized as a list and it all works fine. When you run the script without dot-sourcing it, the initialization happens in the script: scope rather than the global scope, and the line in the function that is supposed to be adding each value to the list is instead concatenating the values as strings. Because of that, the -contains operator never returns true and everything gets processed every time. One more screenshot to confirm that:


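If you’d like to reproduce both behaviors without the ISE, a plain console works too. Assuming the script above is saved as RunButton.ps1 (the file name from the screenshots), the output below follows from the scope explanation:

PS> .\RunButton.ps1     # normal run: the initialization lands in the script: scope
Processing Value1
Processing Value2
Processing Value3
Processing Value1

PS> . .\RunButton.ps1   # dot-sourced, like the Run button: everything is global
Processing Value1
Processing Value2
Processing Value3
Value1 already processed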
So apparently the two faces of the Run button aren’t so bad. So what’s the bit in the title about “bonus evil”? There’s one tiny problem with the Run button: when it dot-sources a file, it doesn’t show the dot-source syntax in the console pane. If you understand what’s going on, that’s not a big deal. If you don’t understand the difference between running and dot-sourcing, you can end up beating your head against the wall trying to figure out what’s happening.

I’m starting to think that using scope modifiers is a code smell. Not necessarily bad, but might point out that something could be done better.

Thanks for sticking with me on this “longer than usual” post. Let me know what you think in the comments!


Blogging and Rubber Duck Debugging

Have you ever heard of Rubber Duck Debugging? The idea is simple. If you’re having trouble debugging code, just put a rubber duck on your desk and explain what’s happening in your code to the duck. Seems absurd, but the act of verbalizing the code situation is usually enough to break the log-jam in your mind and allow you to see the issue.

Another similar technique is “another set of eyes”. I can’t count the number of times I’ve asked someone to look at my code (or had someone ask me to look at theirs) only to find a really simple bug. “I’ve been looking at this for an hour!!!!” A different perspective is all it takes sometimes to spot the problem.

I’ve noticed more than a few times that I start to write a post about something that I think I understand. The more I write, however, the more I feel uncertain. By the three-quarter mark on the post, I save a draft and break out the ISE (or spin up a new virtual machine, or something) and find out that what I thought I knew well enough to share with the world I had completely wrong.

In that way, blogging is like using the entire world as a rubber duck or another set of eyes.

Just a thought I had (almost exactly a year ago) and finally got around to sharing.

Has this ever happened to you? I’d love to hear your stories in the comments.


Why Adolib (and POSH_Ado)?

I’ve realized that in my explanations of Adolib and POSH_Ado, I left something important out: why in the world am I spending all of this time and effort writing database access modules when there are already tools out there (SQLPS, for instance) which work?

The simple answer is that SQLPS is not good enough, for several reasons.

First, SQLPS is part of the SQL Server install, which is a big download. That’s quite a burden to place on a user just to get access to Invoke-SQLCmd.

Second, when I started writing Adolib (and the predecessor which is used at my company), SQLPS was still a snap-in rather than a module. This was in PowerShell 1.0 days, so it was the normal distribution method, but snap-ins were not fun to work with and that made SQLPS even more of a burden.

Third, although Invoke-SQLCmd has a lot of flexibility, it does not allow you to re-use the same connection for multiple commands. You connect (and authenticate) each time you want to run a SQL command. This seems wasteful to me.

Fourth, Invoke-SQLCmd uses strings for variable substitution rather than real parameters, so it’s vulnerable to SQL injection. While the other problems in this list can be overlooked, I have a harder time with this one. I realize that Invoke-SQLCmd is modeled to work like the command-line SQL tools, which explains the string substitution, but there’s no good reason not to also support T-SQL parameters in statements.
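To see why that matters, here’s a quick sketch of the difference (the table, column, and value are made up for illustration):

# String substitution splices the value directly into the statement text:
$name = "X'; DROP TABLE Users; --"
$sql  = "SELECT * FROM Users WHERE LastName = '$name'"   # the injected text is now part of the SQL

# A real T-SQL parameter keeps the value out of the statement entirely (adolib style):
# invoke-sql -sql 'SELECT * FROM Users WHERE LastName = @name' -parameters @{name=$name} -connection $conn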

Finally, the code in Adolib (and to some extent POSH_Ado) is pretty simple. It’s a good, easy to understand example of using .NET classes in PowerShell code. A friend at work who saw Adolib for the first time (reading this post) said that it seemed too easy. Adolib is very easy to use and easy enough to understand that you might find yourself adding features.

I work with SQL Server a lot, and most of the modules I use at work involve reading and/or writing values to SQL. Adolib doesn’t have all of the flexibility that SQLPS gives, but it does use parameters and allows connection re-use. It’s been with me for a long time (8 years?) and the more I use it the more I can’t imagine using anything else.

POSH_Ado is a natural progression from Adolib. If you need to work with multiple database platforms, it’s really nice to have a consistent interface to work with them all. The times I’ve needed this kind of functionality POSH_Ado has been very handy and saved a lot of time.

Have you used Adolib or POSH_Ado? Anything you think needs to be added or changed with either?

I look forward to hearing your opinions.


PowerShell and MySQL : POSH_Ado_MySQL

Using PowerShell and MySQL together with POSH_Ado is just as easy as SQL Server. You’ll need the POSH_Ado and POSH_Ado_MySQL Modules, and use this command to get started:

Import-Module POSH_Ado_MySQL

Once you’ve done that you’ll have the following functions at your disposal:

  • New-MySQLConnection
  • New-MySQLConnectionString
  • New-MySQLCommand
  • Invoke-MySQLCommand
  • Invoke-MySQLQuery
  • Invoke-MySQLStoredProcedure

These functions work just like the ones for SQLServer in AdoLib or POSH_Ado_SQLServer, except that they work with MySQL.
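For example, a quick sanity check might look like this (the server, database, credentials, and table are all placeholders, and the parameter names are assumed to mirror the SQL Server versions shown in earlier posts):

$conn = New-MySQLConnection -server localhost -database test -user root -password P@ssword
Invoke-MySQLQuery -sql 'select * from users limit 1' -connection $conn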

Inside POSH_Ado_MySQL, you’ll see that (just like POSH_Ado_SQLServer), it is simply importing the POSH_Ado module, specifying the MySQL ADO.NET provider name and the prefix (MySQL). Then, it calls the Set-MySQLADONetParameters function to add an option to the connection strings that are generated and to specify that there is no prefix for parameter names.

import-module POSH_Ado -args MySql.Data.MySqlClient -Prefix MySQL -force

# .NET (and PowerShell) do not like zero datetime values by default.  This option helps with that.
Set-MySQLADONetParameters -option @{'Allow Zero Datetime'='true'} -ParameterPrefix ''

Export-ModuleMember *-MySQL*

Hopefully by now you can see the power of the POSH_Ado project:

  1. All of the ADO.NET logic is in one place
  2. Consistent (but distinctly named) cmdlets for working with each platform
  3. Flexibility to set platform-specific options

There are currently 3 other platform-specific POSH_Ado modules: POSH_Ado_Oracle, POSH_Ado_Firebird, and POSH_Ado_DB2. It should be no trouble to create Postgres, SQLite, and OLEDB modules as well.
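For instance, a Postgres module would presumably be just a couple of lines modeled on the MySQL one above (Npgsql is the usual ADO.NET provider for Postgres; whether any platform-specific options are needed is something you’d have to test):

import-module POSH_Ado -args Npgsql -Prefix Postgres -force

Export-ModuleMember *-Postgres*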

Do you have any projects where POSH_Ado could come in handy? What about other platforms to explore?

Let me know your thoughts in the comments.


P.S. POSH_Ado and the platform-specific modules can be found here

Breaking the rules with helper functions

One of my most popular answers on StackOverflow is also one which has a tiny bit of controversy. It involves how to “hide” helper functions in a module in order to keep them from being exported.

Export-ModuleMember Details
In case you’re unfamiliar with how exporting functions from a module works, here are the basic rules:

  1. If there are no Export-ModuleMember statements, all functions are exported
  2. If there are any Export-ModuleMember statements, only the functions named in an Export-ModuleMember statement are exported

In a similar question (which I answered the same way) a couple of other solutions are presented. Those solutions involve invoking the PSParser to find all of the functions and while technically correct, I think they miss the point of the question.

Why hide helper functions?
In the context of a PowerShell module, a helper function is simply a function which supports the functionality of the “public” functions in the module, but isn’t appropriate for use by the end-user. A helper function may implement common logic needed by several functions or possibly interact with implementation details in the module which are abstracted away from the user’s viewpoint. Exporting helper functions provides no benefit for the public, and in fact can cause confusion as these extra functions get in the way of understanding the focus of the module. Thus, it is important to be able to exclude these helper functions from the normal export from the module.

Why it’s hard to hide helper functions
First, it’s not actually hard to hide helper functions, it’s just tedious. All you have to do is list each non-helper function in an Export-ModuleMember statement. Unfortunately, that means if you have 100 functions with only one helper function, you need to list each of the 99 functions in order to hide the single helper function. Also, if you add a function later, you need to remember to add it to the list of exported functions. Not a good prize in my book. The PSParser solutions are correct in that they work, but they are a big block of code that obscures the intent.

My easy solution and the broken rule
My solution is to name helper functions with a VerbNoun convention rather than the standard Verb-Noun convention and use Export-ModuleMember *-* to export all functions named like PowerShell cmdlets are supposed to be. Using a different naming convention is breaking an important rule in the PowerShell community, and you’ll see in the comments on my original answer that someone called me out on it.
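Here’s a minimal sketch of the convention in action (the function names are invented for illustration):

# MyModule.psm1
function Get-Widget {
    param($Name)
    FormatWidget $Name    # call the helper
}

function FormatWidget($name) {   # helper: VerbNoun, no dash, so it won't match *-*
    "Widget: $name"
}

Export-ModuleMember *-*   # exports Get-Widget; FormatWidget stays private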

Why the rule exists (and why I don’t care that I broke it)
PowerShell was designed and delivered as a very discoverable system. That is, you can use PowerShell to find out stuff about PowerShell, and once you know some PowerShell you can leverage that knowledge to use even more PowerShell. The Verb-Noun convention clearly marks PowerShell cmdlets (functions, scripts) as distinctive items, and the verbs are curated to help guide you to the same functionality in different arenas. For instance, my favorite example is the verb Stop. You could easily have used End, Terminate, Kill, or any number of other verbs in place of Stop, but because Stop is the approved verb you know it’s the one to use. Thus, when you start to look at services, you know it’s going to be Stop-Service. When you look at jobs, you know it will be Stop-Job.
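The approved list is itself discoverable, which makes the point nicely:

Get-Verb | Where-Object { $_.Verb -eq 'Stop' }   # Stop is on the approved list; Kill and Terminate are not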

By using Verb-Noun in your functions you make them fit nicely into the PowerShell ecosystem. Running into improperly named commands (either not following the convention or using unapproved verbs) is uncommon, and because of this things work nicely and everyone is happy.

Helper functions are not meant to be discoverable. They exist only in the private implementation of a module, and users never need to know that they exist, let alone try to figure out how to use them. For this reason, I don’t really mind breaking the rule.

I’d rather have this:

Export-ModuleMember *-*

Than this:

Add-Type -Path "${env:ProgramFiles(x86)}\Reference Assemblies\Microsoft\WindowsPowerShell\3.0\System.Management.Automation.dll"

Function Get-PSFunctionNames([string]$Path) {
    $ast = [System.Management.Automation.Language.Parser]::ParseFile($Path, [ref]$null, [ref]$null)
    $functionDefAsts = $ast.FindAll({ $args[0] -is [System.Management.Automation.Language.FunctionDefinitionAst] }, $true)
    $functionDefAsts | ForEach-Object { $_.Name }
}

Export-ModuleMember -Function ( (Get-PSFunctionNames $PSCommandPath) | Where { $_ -ne 'MyPrivateFunction' } )

or this:

$errors = $null
$tokens = [System.Management.Automation.PsParser]::Tokenize($psISE.CurrentFile.Editor.Text, [ref]$errors)
$functions = $tokens `
    | ?{ (($_.Content -eq "Function") -or ($_.Content -eq "Filter")) -and $_.Type -eq "Keyword" } `
    | Select-Object @{"Name"="FunctionName"; "Expression"={
        $tokens[$tokens.IndexOf($_) + 1].Content   # the token after the keyword is the function name
    }}
$functions | ?{ $_.FunctionName -ne "your-excluded-function" }

I’m really interested in feedback on this, since I’m coloring outside the lines. Do you favor practicality, or do you think I should follow the rule?

Let me know your thoughts in the comments.


POSH_Ado : Inside POSH_Ado_SQLServer

In a previous post I introduced the POSH_Ado “project” and explained that it is a way to use the same code-base to access several different database platforms. I illustrated it with some sample calls to a SQL Server database using the POSH_Ado_SQLServer module and promised to show the internals of the module later. The time has come. Here’s how POSH_Ado_SQLServer works:

import-module POSH_Ado -args System.Data.SqlClient -Prefix SQLServer -force

Export-ModuleMember *-SQLServer*

That’s it. The module simply imports the POSH_Ado module, telling it which ADO.NET provider to use (System.Data.SqlClient) and what prefix to use for the imported cmdlets (SQLServer). It then, in turn, exports all of the cmdlets with the SQLServer prefix.

With that tiny bit of effort you get:

  • SQL and NT authenticated connections
  • Parameterized queries and stored procedures
  • Input and output parameters (no in/out parameters yet, though)
  • Ad-hoc or stored connections

What’s missing in this list? I can think of a couple of things (which I need to enter as issues on GitHub):

  • In/out parameters
  • SQL BulkCopy (it’s there in Adolib…just need to copy it to POSH_Ado_SQLServer)

Since the code for POSH_Ado is based on Adolib which targeted SQL Server, it shouldn’t be surprising to see that there’s not much to do to get POSH_Ado to work with SQL Server. In the next “episode”, I’ll connect to MySQL, and the real benefit of POSH_Ado should become apparent.

Let me know what you think in the comments!


PowerShell List Assignment

PowerShell and lists of data go together hand-in-hand. Any time you execute a cmdlet, function, or script, the output is a list of objects which is placed in the output stream for processing.

Assigning a list to a variable is not very interesting, either. You just assign it and it’s done. Like this, for instance:

$files=dir c:\temp

Nothing to see here, we do this every time we use PowerShell. Lists on the right-hand side of the assignment operator are boring.

You might have even seen a trick for swapping variables using a comma on the left-hand side like this:

$a,$b = $b,$a

That’s kind of cool, but it seems like a pretty specific kind of thing to do. Fortunately for us, lists on the left-hand side can do more than this.

As an example, consider this line:

$a,$b = 1,2,3

If you look at $a, you’ll see that it got the 1, and $b got 2 and 3.

We can expand the example:

$a,$b,$c = 1,2,3,4,5,6

Now, $a gets 1, $b gets 2, and $c gets 3,4,5 and 6.

The pattern should be clear. Each variable on the left gets a single object, until the last one which gets all remaining objects. If we have more variables than values, the “extra” variables are $null. If you specify the same variable more than once, it keeps the last corresponding value.
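A couple of quick examples of those last two rules:

$a,$b,$c = 1,2    # $c is $null: more variables than values
$x,$x = 1,2       # $x ends up 2: the last corresponding value wins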

Why is this useful?

Well, if you want to work with a collection but treat the first item specially, now you have an easy way to do that.

$first,$rest = <however you get your collection>
<process $first>
<process $rest>
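For instance, using the directory listing from earlier:

$first,$rest = dir c:\temp
"Special handling for $($first.Name)"
$rest | foreach { "Normal handling for $($_.Name)" }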

Probably not something you’ll do all the time, but it’s another trick in the bag.

Do you have any scenarios where this would be helpful? Let me know in the comments.


Getting Started with POSH_Ado

This is kind of long-winded; if you want, skip down to the code and it should be clear (I hope).

Where the story starts
As I mentioned here, the original adolib module got copied a few times and with cut-and-paste programming got turned into modules for other database platforms. While it was cool that it worked, I felt for a long time that this wasn’t the right solution. It felt like an object-oriented exercise where a parent class (abstract?) was called for to implement the ADO.NET pattern. Then, “child” modules could somehow cause the parent module functions to use the correct platform.

An awesome clue!
When I went looking for a way to build a better solution, I found Invoke-ADOCommand in the PowerShell Community Extensions project. It used ADO.NET providers as a key to implement a “generic” cmdlet for querying different platforms. If I recall correctly (it has been several years now), I think someone on StackOverflow pointed me there.

Going my own way
So I knew the key ingredient in the solution would be ADO.NET providers, but how to use those and preserve the flow of the adolib module? I wanted the new solution to work as closely as possible to the old solution for a few reasons:

  • I have used a module (which I wrote) similar to adolib at work for quite some time and I really like it.
  • I didn’t want people who had used adolib to have to relearn anything.
  • I didn’t want to significantly rewrite the code in adolib.

Introducing POSH_Ado
The solution I came up with is a parameterized module called POSH_Ado which allows you to specify the provider that you want to use. If you compare the functions in POSH_Ado to the corresponding functions in adolib, you will see that they are very similar, with a few extra variables to help deal with variations in the syntax on different platforms, and a modified way of creating new platform-specific objects.

Using POSH_Ado
In order to use POSH_Ado, you need to install the POSH_Ado module, as well as the helper module for the specific platform(s) that you want to access. For the purposes of this article, I will be using POSH_Ado_SQLServer.

Once the modules are installed in an appropriate location, you simply import the platform-specific module. The module in turn imports the POSH_Ado module with a platform-specific prefix and sets options which are appropriate to the platform you’re using.

Shut up! Let’s see some code!
To illustrate, I will recreate the commands from the previous post (about adolib) using POSH_Ado_SQLServer. The only changes here are the “SQLServer” prefix in the cmdlets, and the change from Invoke-SQL to Invoke-SQLServerCommand. Also, I’m lazy, so the text is just copied from the previous post.

Query a database on the local machine (.) with a Windows-authenticated ad-hoc connection:

invoke-SQLServerQuery -sql 'select top 1 * from AdventureWorks2012.Person.Person' -server . 

Query a database on the local machine (.) with a Windows-authenticated persistent connection:

$conn=new-SQLServerConnection -server '.'

invoke-SQLServerQuery -sql 'select top 1 * from AdventureWorks2012.Person.Person' -connection $conn

Note that we can include the database in the connection as well:

$conn=new-SQLServerConnection -server '.' -database AdventureWorks2012
invoke-SQLServerQuery -sql 'select top 1 * from Person.Person' -connection $conn

If we need to use SQL Security, we can supply a -User and -Password (in plaintext :-()

$conn=new-SQLServerConnection -server '.' -User MyUser -Password P@ssword

We can run SQL statements that don’t return rows (like INSERT, UPDATE, and DELETE) with Invoke-SQLServerCommand:

invoke-SQLServerCommand -sql "Update Person.Person set MiddleName='F' where BusinessEntityID=@ID" -parameters @{ID=1} -connection $conn

Here I’m also using SQL parameters. It’s pretty simple. You just include the parameters prefixed with @ in the SQL statement and then provide a -Parameters hashtable including values for each of the parameters you use. Here I only used a single parameter (ID), so there was only one entry in the hashtable. Invoke-SQLServerCommand returns the number of rows affected by the SQL statement, by the way.
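If a statement needs more than one parameter, the hashtable just gets more entries (the values here are as made-up as the first example’s):

invoke-SQLServerCommand -sql "Update Person.Person set MiddleName=@initial where BusinessEntityID=@ID" -parameters @{initial='G'; ID=2} -connection $conn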

There’s a cmdlet for executing Stored Procedures called Invoke-SQLServerStoredProcedure (with parameters that match Invoke-SQLServerQuery):

invoke-SQLServerStoredProcedure -storedProcName sp_who2 -connection $conn

What’s next?
So, all of that work and I’ve basically got a module which works just like adolib did. Why bother? In the next post, I’ll show you:

    What the POSH_Ado_SQLServer module looks like
    How the POSH_Ado_Oracle module is different from POSH_Ado_SQLServer
    All of the other platforms that I’ve tested this code on. :-)

I think once you’ve seen those three things, you’ll agree that this was a worthwhile exercise.

For now, that’s all.

Let me know what you think!


P.S. You can find the source here

Getting Started with ADOLib

In the last post, I gave a quick history of my involvement with SQL and SQLPSX.  A big part of that was the ADOLib module in SQLPSX, which I haven’t ever really explained here.  Since it has been almost 6 years now, I don’t see how I have managed to skip it.

This post should correct that oversight.

First of all, SQLPSX is mainly about administering SQL.  ADOLib, on the other hand, is specifically geared towards using SQL as a service, executing queries and SQL commands as simply as reasonably possible.

To that end, here are some examples of ADOLib usage:

Query a database on the local machine (.) with a Windows-authenticated ad-hoc connection:

invoke-query -sql 'select top 1 * from AdventureWorks2012.Person.Person' -server . 

Query a database on the local machine (.) with a Windows-authenticated persistent connection:

$conn=new-connection -server '.'

invoke-query -sql 'select top 1 * from AdventureWorks2012.Person.Person' -connection $conn

Note that we can include the database in the connection as well:

$conn=new-connection -server '.' -database AdventureWorks2012
invoke-query -sql 'select top 1 * from Person.Person' -connection $conn

If we need to use SQL Security, we can supply a -User and -Password (in plaintext :-()

$conn=new-connection -server '.' -User MyUser -Password P@ssword

We can run SQL statements that don’t return rows (like INSERT, UPDATE, and DELETE) with Invoke-SQL:

invoke-sql -sql "Update Person.Person set MiddleName='F' where BusinessEntityID=@ID" -parameters @{ID=1} -connection $conn

Here I’m also using SQL parameters. It’s pretty simple. You just include the parameters prefixed with @ in the SQL statement and then provide a -Parameters hashtable including values for each of the parameters you use. Here I only used a single parameter (ID), so there was only one entry in the hashtable. Invoke-SQL returns the number of rows affected by the SQL statement, by the way.

There’s a cmdlet for executing Stored Procedures called Invoke-StoredProcedure (with parameters that match Invoke-Query):

invoke-storedprocedure -storedProcName sp_who2 -connection $conn

The final important piece in ADOLib is Invoke-BulkCopy, which is used to move lots of data quickly between ADO providers. You will usually be moving data between different servers (because there are much easier ways to move data within the same server), but in this example we will use the same server as source and destination.
I’ve created a copy of the DatabaseLog table in AdventureWorks2012 and called it DatabaseLog2. DatabaseLog2 is empty, but we can use the following commands to copy the data.

$cmd=new-sqlcommand -connection $conn -sql 'Select * from DatabaseLog'
invoke-bulkcopy -records $cmd -server '.' -table DatabaseLog2

Here you can see that I am creating a command object with the source query but not executing it. I pass it into Invoke-BulkCopy so the .NET SQLBulkCopy class can take care of handling the rows. I also have to provide new connection information to the cmdlet. I’m not sure why I didn’t allow you to pass an established connection in, but I’ve created an issue so I can follow up on that.

There is more in ADOLib (including transaction support and output parameters), but I’ll let you look inside and see them.

Next time I’ll show how these same kinds of things can be accomplished using the new POSH_Ado module.

Let me know what you think in the comments


P.S.  ADOLib can be found here

Executing SQL the right way in PowerShell (revisited)

Almost 6 years ago I wrote this post about how I thought you should go about executing SQL statements and queries in PowerShell. It included a function which allowed you to pass a hashtable of parameter names and values.  This post has been one of the most popular on the site.

I submitted a module to the SQLPSX project called adolib which included a greatly expanded version of this function, along with others that let you invoke commands, stored procedures, and queries, as well as use the .NET SQLBulkCopy class to do high-performance data manipulation.

The SQLPSX project has seen a lot of contributors come and go, but unfortunately there hasn’t been a release in several years and, behind the scenes, not even very much activity.  At the recommendation of members of the PowerShell community (and after discussing it with Chad Miller, the founder of SQLPSX), I copied the project to GitHub.  The repository can be found here.  I’ve opened a few issues for some “low-hanging fruit”, and welcome input from anyone.

One interesting facet of working in SQLPSX on adolib was that it was copied and modified to create MySQLLib and OracleClient modules.  These modules were based on the same code, except instead of using the SQLClient namespace they used the Oracle.DataAccess.Client and MySql.Data.MySqlClient namespaces.  Because ADO.NET works much the same no matter which provider you use, this worked fine.

I wasn’t really satisfied with copy-paste programming, though, and wrote a new version of adolib which allows you to specify the provider as a parameter when importing the module.  I’ve tested the approach with several different providers (SQL Server, MySQL, DB2, Oracle, FireBird, and even Paradox).  The code sat untouched as a fork on the SQLPSX project, so when I moved SQLPSX to GitHub, I decided to remove that fork and create a separate repository for it as POSH_Ado.  Watch for a post or two on using POSH_Ado in the next week or so.