PowerShell and MySQL : POSH_Ado_MySQL

Using PowerShell and MySQL together with POSH_Ado is just as easy as using it with SQL Server. You’ll need the POSH_Ado and POSH_Ado_MySQL modules; use this command to get started:

Import-Module POSH_Ado_MySQL

Once you’ve done that you’ll have the following functions at your disposal:

  • New-MySQLConnection
  • New-MySQLConnectionString
  • New-MySQLCommand
  • Invoke-MySQLCommand
  • Invoke-MySQLQuery
  • Invoke-MySQLStoredProcedure

These functions work just like the ones for SQL Server in AdoLib or POSH_Ado_SQLServer, except that they work with MySQL.
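For example, a quick session might look like this (the server name, credentials, and table here are made up, and I’m assuming the parameters mirror the SQL Server versions shown in my earlier posts):

$conn = New-MySQLConnection -server 'mysqlbox' -user 'MyUser' -password 'P@ssword'
Invoke-MySQLQuery -sql 'select * from world.city limit 5' -connection $conn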

Inside POSH_Ado_MySQL, you’ll see that (just like POSH_Ado_SQLServer) it simply imports the POSH_Ado module, specifying the MySQL ADO.NET provider name and the prefix (MySQL). Then it calls the Set-MySQLADONetParameters function to add an option to the generated connection strings and to specify that parameter names get no prefix.

import-module POSH_Ado -args MySql.Data.MySqlClient -Prefix MySQL -force

# .NET (and PowerShell) do not like zero datetime values by default.  This option helps with that.
# http://dev.mysql.com/doc/refman/5.5/en/connector-net-connection-options.html
Set-MySQLADONetParameters -option @{'Allow Zero Datetime'='true'} -ParameterPrefix ''

Export-ModuleMember *-MySQL*

Hopefully by now you can see the power of the POSH_Ado project:

  1. All of the ADO.NET logic is in one place
  2. Consistent (but distinctly named) cmdlets for working with each platform
  3. Flexibility to set platform-specific options

There are currently 3 other platform-specific POSH_Ado modules: POSH_Ado_Oracle, POSH_Ado_Firebird, and POSH_Ado_DB2. It should be no trouble to create Postgres, SQLite, and OLEDB modules as well.

Do you have any projects where POSH_Ado could come in handy? What about other platforms to explore?

Let me know your thoughts in the comments.


P.S. POSH_Ado and the platform-specific modules can be found here

Breaking the rules with helper functions

One of my most popular answers on StackOverflow is also one which has a tiny bit of controversy. It involves how to “hide” helper functions in a module in order to keep them from being exported.

Export-ModuleMember Details
In case you’re unfamiliar with how exporting functions from a module works, here are the basic rules:

  1. If there are no Export-ModuleMember statements, all functions are exported
  2. If there are any Export-ModuleMember statements, only the functions named in an Export-ModuleMember statement are exported
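A quick sketch of the two rules (the module and function names here are hypothetical):

# widgets.psm1
function Get-Widget { 'widget' }
function Update-Widget { 'updated' }

# With no Export-ModuleMember statements at all, both functions are exported.
# With this statement, only Get-Widget is exported:
Export-ModuleMember -Function Get-Widget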

In a similar question (which I answered the same way) a couple of other solutions are presented. Those solutions involve invoking the PSParser to find all of the functions, and while technically correct, I think they miss the point of the question.

Why hide helper functions?
In the context of a PowerShell module, a helper function is simply a function which supports the functionality of the “public” functions in the module, but isn’t appropriate for use by the end-user. A helper function may implement common logic needed by several functions or possibly interact with implementation details in the module which are abstracted away from the user’s viewpoint. Exporting helper functions provides no benefit for the public, and in fact can cause confusion as these extra functions get in the way of understanding the focus of the module. Thus, it is important to be able to exclude these helper functions from the normal export from the module.

Why it’s hard to hide helper functions
First, it’s not actually hard to hide helper functions, it’s just tedious. All you have to do is list each non-helper function in an Export-ModuleMember statement. Unfortunately, that means if you have 100 functions with only one helper function, you need to list each of the 99 functions in order to hide the single helper function. Also, if you add a function later, you need to remember to add it to the list of exported functions. Not a good prize in my book. The PSParser solutions are correct in that they work, but they are a big block of code that obscures the intent.

My easy solution and the broken rule
My solution is to name helper functions with a VerbNoun convention rather than the standard Verb-Noun convention, and use Export-ModuleMember *-* to export all functions named like PowerShell cmdlets are supposed to be. Using a different naming convention is breaking an important rule in the PowerShell community, and you’ll see in the comments on my original answer that someone called me out on it.
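Here’s what that looks like in a module (hypothetical names again):

# widgets.psm1
function Get-Widget {              # public: standard Verb-Noun
    FormatWidget 'widget1'
}

function FormatWidget($name) {     # helper: VerbNoun, no dash
    "[$name]"
}

Export-ModuleMember *-*            # exports Get-Widget but not FormatWidget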

Why the rule exists (and why I don’t care that I broke it)
PowerShell was designed and delivered as a very discoverable system. That is, you can use PowerShell to find out stuff about PowerShell, and once you know some PowerShell you can leverage that knowledge to use even more PowerShell. The Verb-Noun convention clearly marks PowerShell cmdlets (functions, scripts) as distinctive items, and the verbs are curated to help guide you to the same functionality in different arenas. For instance, my favorite example is the verb Stop. You could easily have used End, Terminate, Kill, or any number of other verbs in place of Stop, but because Stop is the approved verb you know it’s the one to use. Thus, when you start to look at services, you know it’s going to be Stop-Service. When you look at jobs, you know it will be Stop-Job.

By using Verb-Noun in your functions you make them fit nicely into the PowerShell ecosystem. Running into improperly named functions (either not following the convention or using unapproved verbs) is uncommon, and because of this things work nicely and everyone is happy.

Helper functions are not meant to be discoverable. They exist only in the private implementation of a module, and users never need to know that they exist, let alone try to figure out how to use them. For this reason, I don’t really mind breaking the rule.

I’d rather have this:

Export-ModuleMember *-*

Than this:

Add-Type -Path "${env:ProgramFiles(x86)}\Reference Assemblies\Microsoft\WindowsPowerShell\3.0\System.Management.Automation.dll"

Function Get-PSFunctionNames([string]$Path) {
    $ast = [System.Management.Automation.Language.Parser]::ParseFile($Path, [ref]$null, [ref]$null)
    $functionDefAsts = $ast.FindAll({ $args[0] -is [System.Management.Automation.Language.FunctionDefinitionAst] }, $true)
    $functionDefAsts | ForEach-Object { $_.Name }
}

Export-ModuleMember -Function ( (Get-PSFunctionNames $PSCommandPath) | Where { $_ -ne 'MyPrivateFunction' } )

or this:

$errors = $null
$functions = [system.management.automation.psparser]::Tokenize($psISE.CurrentFile.Editor.Text, [ref]$errors) `
    | ?{(($_.Content -Eq "Function") -or ($_.Content -eq "Filter")) -and $_.Type -eq "Keyword" } `
    | Select-Object @{"Name"="FunctionName"; "Expression"={
        # grab the name that follows the Function/Filter keyword
        $psISE.CurrentFile.Editor.Text.Substring($_.Start + $_.Length + 1) -match '[\w\-]+' | Out-Null
        $matches[0]
    }}
$functions | ?{ $_.FunctionName -ne "your-excluded-function" }

I’m really interested in feedback on this, since I’m coloring outside the lines. Do you favor practicality, or do you think I should follow the rule?

Let me know your thoughts in the comments.


POSH_Ado : Inside POSH_Ado_SQLServer

In a previous post I introduced the POSH_Ado “project” and explained that it is a way to use the same code-base to access several different database platforms. I illustrated it with some sample calls to a SQL Server database using the POSH_Ado_SQLServer module and promised to show the internals of the module later. The time has come. Here’s how POSH_Ado_SQLServer works:

import-module POSH_Ado -args System.Data.SqlClient -Prefix SQLServer -force

Export-ModuleMember *-SQLServer*

That’s it. The module simply imports the POSH_Ado module, telling it which ADO.NET provider to use (System.Data.SqlClient) and what prefix to use for the imported cmdlets (SQLServer). It then, in turn, exports all of the cmdlets with the SQLServer prefix.

With that tiny bit of effort you get:

  • SQL and NT authenticated connections
  • Parameterized queries and stored procedures
  • Input and output parameters (no in/out parameters yet, though)
  • Ad-hoc or stored connections

What’s missing in this list? I can think of a couple of things (which I need to enter as issues on GitHub):

  • In/out parameters
  • SQL BulkCopy (it’s there in Adolib…just need to copy it to POSH_Ado_SQLServer)

Since the code for POSH_Ado is based on Adolib which targeted SQL Server, it shouldn’t be surprising to see that there’s not much to do to get POSH_Ado to work with SQL Server. In the next “episode”, I’ll connect to MySQL, and the real benefit of POSH_Ado should become apparent.

Let me know what you think in the comments!


PowerShell List Assignment

PowerShell and lists of data go together hand-in-hand. Any time you execute a cmdlet, function, or script, the output is a list of objects which is placed in the output stream for processing.

Assigning a list to a variable is not very interesting, either. You just assign it and it’s done. Like this, for instance:

$files=dir c:\temp

Nothing to see here, we do this every time we use PowerShell. Lists on the right-hand side of the assignment operator are boring.

You might have even seen a trick for swapping variables using a comma on the left-hand side, like this:

$a,$b = $b,$a

That’s kind of cool, but it seems like a pretty specific kind of thing to do. Fortunately for us, lists on the left-hand side can do more than this.

As an example, consider this line:

$a,$b = 1,2,3

If you look at $a, you’ll see that it got the 1, and $b got 2 and 3.

We can expand the example:

$a,$b,$c = 1,2,3,4,5,6

Now, $a gets 1, $b gets 2, and $c gets 3, 4, 5 and 6.

The pattern should be clear. Each variable on the left gets a single object, until the last one which gets all remaining objects. If we have more variables than values, the “extra” variables are $null. If you specify the same variable more than once, it keeps the last corresponding value.
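To illustrate those last two rules:

$x,$y,$z = 1,2       # more variables than values: $z is $null
$m,$m = 1,2          # same variable twice: $m ends up as 2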

Why is this useful?

Well, if you want to work with a collection but treat the first item specially, now you have an easy way to do that.

$first,$rest = <however you get your collection>
<process $first>
<process $rest>
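As a concrete example (the folder and sort order here are just for illustration):

$first,$rest = dir C:\temp | Sort-Object LastWriteTime -Descending
"Newest file: $($first.Name)"
"$($rest.Count) older files"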

Probably not something you’ll do all the time, but it’s another trick in the bag.

Do you have any scenarios where this would be helpful? Let me know in the comments.


Getting Started with POSH_Ado

This is kind of long-winded; if you want, skip down to the code and it should be clear (I hope).

Where the story starts
As I mentioned here, the original adolib module got copied a few times and with cut-and-paste programming got turned into modules for other database platforms. While it was cool that it worked, I felt for a long time that this wasn’t the right solution. It felt like an object-oriented exercise where a parent class (abstract?) was called for to implement the ADO.NET pattern. Then, “child” modules could somehow cause the parent module functions to use the correct platform.

An awesome clue!
When I went looking for a way to build a better solution, I found Invoke-ADOCommand in the PowerShell Community Extensions project. It used ADO.NET providers as a key to implement a “generic” cmdlet for querying different platforms. If I recall correctly (it has been several years now), I think someone on StackOverflow pointed me there.

Going my own way
So I knew the key ingredient in the solution would be ADO.NET providers, but how to use those and preserve the flow of the adolib module? I wanted the new solution to work as closely as possible to the old solution for a few reasons:

    I have used a module (which I wrote) similar to adolib at work for quite some time and I really like it.
    I didn’t want people who had used adolib to have to relearn anything.
    I didn’t want to significantly rewrite the code in adolib.

Introducing POSH_Ado
The solution I came up with is a parameterized module called POSH_Ado which allows you to specify the provider that you want to use. If you compare the functions in POSH_Ado to the corresponding functions in adolib, you will see that they are very similar, with a few extra variables to help deal with variations in the syntax on different platforms, and a modified way of creating new platform-specific objects.

Using POSH_Ado
In order to use POSH_Ado, you need to install the POSH_Ado module, as well as the helper module for the specific platform(s) that you want to access. For the purposes of this article, I will be using POSH_Ado_SQLServer.

Once the modules are installed in an appropriate location, you simply import the platform-specific module. The module in turn imports the POSH_Ado module with a platform-specific prefix and sets options which are appropriate to the platform you’re using.
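So for SQL Server, that one step is simply:

Import-Module POSH_Ado_SQLServer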

Shut up! Let’s see some code!
To illustrate, I will recreate the commands from the previous post (about adolib) using POSH_Ado_SQLServer. The only changes here are the “SQLServer” prefix in the cmdlets, and the change from Invoke-SQL to Invoke-SQLServerCommand. Also, I’m lazy, so the text is just copied from the previous post.

Query a database on the local machine (.) with a Windows-authenticated ad-hoc connection.

invoke-SQLServerQuery -sql 'select top 1 * from AdventureWorks2012.Person.Person' -server . 

Query a database on the local machine (.) with a Windows-authenticated persistent connection:

$conn=new-SQLServerConnection -server '.'

invoke-SQLServerQuery -sql 'select top 1 * from AdventureWorks2012.Person.Person' -connection $conn

Note that we can include the database in the connection as well:

$conn=new-SQLServerConnection -server '.' -database AdventureWorks2012
invoke-SQLServerQuery -sql 'select top 1 * from Person.Person' -connection $conn

If we need to use SQL Security, we can supply a -User and -Password (in plaintext :-()

$conn=new-SQLServerConnection -server '.' -User MyUser -Password P@ssword

We can run SQL statements that don’t return rows (like INSERT, UPDATE, DELETE) with Invoke-SQLServerCommand:

invoke-SQLServerCommand -sql "Update Person.Person set MiddleName='F' where BusinessEntityID=@ID" -parameters @{ID=1} -connection $conn

Here I’m also using SQL parameters. It’s pretty simple. You just include the parameters prefixed with @ in the SQL statement and then provide a -Parameters hashtable including values for each of the parameters you use. Here I only used a single parameter (ID), so there was only one entry in the hashtable. Invoke-SQLServerCommand returns the number of rows affected by the SQL statement, by the way.

There’s a cmdlet for executing Stored Procedures called Invoke-SQLServerStoredProcedure (with parameters that match Invoke-SQLServerQuery):

invoke-SQLServerStoredProcedure -storedProcName sp_who2 -connection $conn

What’s next?
So, all of that work and I’ve basically got a module which works just like adolib did. Why bother? In the next post, I’ll show you:

    What the POSH_Ado_SQLServer module looks like
    How the POSH_Ado_Oracle module is different from POSH_Ado_SQLServer
    All of the other platforms that I’ve tested this code on. :-)

I think once you’ve seen those three things, you’ll agree that this was a worthwhile exercise.

For now, that’s all.

Let me know what you think!


P.S. You can find the source here

Getting Started with ADOLib

In the last post, I gave a quick history of my involvement with SQL and SQLPSX.  A big part of that was the ADOLib module in SQLPSX, which I haven’t ever really explained here.  Since it has been almost 6 years now, I don’t see how I have managed to skip it.

This post should correct that oversight.

First of all, SQLPSX is mainly about administering SQL Server. ADOLib, on the other hand, is specifically geared towards using SQL as a service, executing queries and SQL commands as simply as possible.

To that end, here are some examples of ADOLib usage:

Query a database on the local machine (.) with a Windows-authenticated ad-hoc connection.

invoke-query -sql 'select top 1 * from AdventureWorks2012.Person.Person' -server . 

Query a database on the local machine (.) with a Windows-authenticated persistent connection:

$conn=new-connection -server '.'

invoke-query -sql 'select top 1 * from AdventureWorks2012.Person.Person' -connection $conn

Note that we can include the database in the connection as well:

$conn=new-connection -server '.' -database AdventureWorks2012
invoke-query -sql 'select top 1 * from Person.Person' -connection $conn

If we need to use SQL Security, we can supply a -User and -Password (in plaintext :-()

$conn=new-connection -server '.' -User MyUser -Password P@ssword

We can run sql statements that don’t return rows (like INSERT, UPDATE, DELETE) with Invoke-SQL:

invoke-sql -sql "Update Person.Person set MiddleName='F' where BusinessEntityID=@ID" -parameters @{ID=1} -connection $conn

Here I’m also using SQL parameters. It’s pretty simple. You just include the parameters prefixed with @ in the SQL statement and then provide a -Parameters hashtable including values for each of the parameters you use. Here I only used a single parameter (ID), so there was only one entry in the hashtable. Invoke-SQL returns the number of rows affected by the SQL statement, by the way.

There’s a cmdlet for executing Stored Procedures called Invoke-StoredProcedure (with parameters that match Invoke-Query):

invoke-storedprocedure -storedProcName sp_who2 -connection $conn

The final important piece in ADOLib is Invoke-BulkCopy, which is used to move lots of data quickly between ADO providers. You will usually be moving data between different servers (because there are much easier ways to move data on the same server), but in this example we will use the same server as source and destination.
I’ve created a copy of the DatabaseLog table in AdventureWorks2012 and called it DatabaseLog2. DatabaseLog2 is empty, but we can use the following commands to copy the data.

$cmd=new-sqlcommand -connection $conn -sql 'Select * from DatabaseLog'
invoke-bulkcopy -records $cmd -server '.' -table DatabaseLog2

Here you can see that I am creating a command object with the source query but not executing it. I pass it into Invoke-BulkCopy so the .NET SQLBulkCopy class can take care of handling the rows. I also have to provide new connection information to the cmdlet. I’m not sure why I didn’t allow you to pass an established connection in, but I’ve created an issue so I can follow up on that.

There is more in ADOLib (including transaction support and output parameters), but I’ll let you look inside and see them.

Next time I’ll show how these same kinds of things can be accomplished using the new POSH_Ado module.

Let me know what you think in the comments


P.S.  ADOLib can be found here

Executing SQL the right way in PowerShell (revisited)

Almost 6 years ago I wrote this post about how I thought you should go about executing SQL statements and queries in PowerShell. It included a function which allowed you to pass a hashtable of parameter names and values.  This post has been one of the most popular on the site.

I submitted a module to the SQLPSX project called adolib which included a greatly expanded version of this function, along with others that let you invoke commands, stored procedures, and queries, as well as use the .NET SQLBulkCopy class to do high-performance data manipulation.

The SQLPSX project has seen a lot of contributors come and go, but unfortunately there hasn’t been a release in several years and, behind the scenes, not even very much activity.  At the recommendation of members of the PowerShell community (and after discussing with Chad Miller, the founder of SQLPSX) I copied the project to GitHub.  The repository can be found here.  I’ve opened a few issues for some “low-hanging fruit”, and welcome input from anyone.

One interesting facet of working in SQLPSX on adolib was that it was copied and modified to create MySQLLib and OracleClient modules.  These modules were based on the same code, except instead of using the SQLClient namespace they used the Oracle.DataAccess.Client and MySql.Data.MySqlClient namespaces.  Because ADO.NET works much the same no matter which provider you use, this worked fine.

I wasn’t really satisfied with copy-paste programming, though, and wrote a new version of adolib which allows you to specify the provider as a parameter when importing the module.  I’ve tested the approach with several different providers (SQL Server, MySQL, DB2, Oracle, FireBird, and even Paradox).  The code sat untouched as a fork on the SQLPSX project, so when I moved SQLPSX to GitHub, I decided to remove that fork and create a separate repository for it as POSH_Ado.  Watch for a post or two on using POSH_Ado in the next week or so.



Scope delimiter in interpolated strings

I’ve been meaning to write about this for a while.  It’s a simple thing that broke some code from PowerShell 1.0.  Yes, I still have some code running in production that was written back before 2.0 came out. And before I go any further let me say that PowerShell has done a remarkable job in keeping backward compatibility. I very rarely have old code break due to new PowerShell features or parsing.

Anyway, when writing messages out to the screen to show what’s going on in a script, I would often use a pattern like this:

write-host "$setting1 and $setting2"

This code upgraded just fine and is not a problem.

Where I ran into a problem was when I varied the pattern slightly.  The following code is not so happy:

write-host "$setting1:$setting2"

This was valid 1.0 code, but it doesn’t run in 2.0 or above.


The problem stems from the addition of scope labels for variables in 2.0. To refer to scoped variables, you prefix the name of the variable with the scope modifier (local, global, script, private) followed by a colon. So the parser is seeing $setting1:$setting2 and thinking that “setting1” is a scope modifier.

Easy workarounds for this are adding a space before the colon or escaping the colon with a backtick. Also, I guess you could use a subexpression, $(), for setting1.
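Here’s what each workaround looks like:

write-host "$setting1 :$setting2"     # space before the colon (adds a space to the output)
write-host "$setting1`:$setting2"     # escape the colon with a backtick
write-host "$($setting1):$setting2"   # wrap setting1 in a subexpression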

Have you run into this before? What other problems have you found in old code running in newer versions of PowerShell?


Quick Tip – Avoid abbreviating parameter names!

Looking at some of the solutions to the July scripting games problems (here) I noticed that several of them used abbreviations for parameter names.  For instance:

gwmi win32_operatingsystem -co @(".")

I understand that this is a competition of sorts and that part of the challenge is to get a solution with the smallest number of characters, but I realized that I really, really don’t like abbreviated parameter names.

To be clear, these are fine on the command-line (as are aliases, for instance), but I really want to avoid using parameter name abbreviations in my code. For one thing, since PowerShell allows you to use as short an abbreviation as you want as long as it is unambiguous, there is not a single “short form” for a given parameter. In the code above, for example, -co could have been -co, -com, -comp, etc. That leads to inconsistent code and reduces readability in my opinion.
Second, parameter abbreviations are not necessarily stable across PowerShell versions. It’s entirely possible, for instance, that another parameter starting with “co” could be added in the next version of PowerShell, which would make the abbreviation ambiguous.  At that point, the code is invalid (as well as not very readable).
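Spelled out with full parameter names, the example above is longer but unambiguous and version-proof:

Get-WmiObject -Class win32_operatingsystem -ComputerName @(".")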

I know this isn’t a huge deal, but wanted to get my thoughts out here.


Let me know what you think.

p.s.  somehow this got published without the ending.  I just now noticed and updated so it didn’t end in the middle of a sentence.



Cleaning the Path – A PowerShell One-liner

I’m not super crazy about writing one-liners in PowerShell, but I ran across a fun problem which was quick to write as a one-liner.  I’ll give that here with a little explanation, and follow up in a couple of days with a more polished advanced function solution.

Anyway, the problem was that I was working on a computer and happened to take a look at the PATH environment variable and saw a lot of directories in the path that were no longer valid. Apparently efforts to clean up the machine (e.g. removing old Visual Studio and SQL Server installs) didn’t include fixing the path.
To see if you have this problem, you can easily see your PATH with the following line of PowerShell (and no, this isn’t the one-liner):

$env:path -split ';'

When I saw the output (which included 48 different folders) I knew I needed to fix it.
Since Test-Path is an easy way to see if a folder exists, I quickly wrote the following to see which entries were bad:

$env:path -split ';' | where {!(Test-Path $_  )}

That listed 11 that were bad, but I also got an error because apparently there were some adjacent semicolons, meaning that $_ was set to an empty string which Test-Path didn’t like.
A quick addition made it not complain:

$env:path -split ';' | where {$_ -and !(Test-Path $_  )}

This gave me the list of directories that I needed to eliminate. Reversing the logic a bit to get the directories I want to retain looked like this:

$env:path -split ';' | where {$_ -and (Test-Path $_  )}

Looking better, but now I notice that some directories are listed more than once ($PSHOME, for example is listed 6 times).
Adding a quick uniqueness check:

$env:path -split ';' | where {$_ -and (Test-Path $_  )}| select-object -unique

That gives a much better list.
I then added -Join to paste these back together.

($env:path -split ';' | where {$_ -and (Test-Path $_  )}| select-object -unique) -join ';'

And that looks like a good value for $env:Path.
If I just needed to set it for the current session, I could do this:

$env:path=($env:path -split ';' | where {$_ -and (Test-Path $_  )}| select-object -unique) -join ';'

But that isn’t particularly useful. Before I do anything permanent, though, I should save the current PATH somewhere. We can use the System.Environment class to set environment variables for the machine and copy the existing PATH to a new environment variable:

# the name of the backup variable is arbitrary
[System.Environment]::SetEnvironmentVariable('PathBackup', $env:Path, 'Machine')
The final one-liner is this (and it’s not pretty):

[System.Environment]::SetEnvironmentVariable('Path', (($env:path -split ';' | where {$_ -and (Test-Path $_)} | select-object -unique) -join ';'), 'Machine')

Ok…that was a fun exercise. PowerShell continually impresses me with how easy it is to do things.

Got any good ideas for ways to improve this?

Let me know in the comments, and watch for the follow-up article in a couple of days about rewriting this as a real advanced function.