Getting Started with POSH_Ado

This is kind of long-winded; if you want, skip down to the code and it should be clear (I hope).

Where the story starts
As I mentioned here, the original adolib module got copied a few times and, with cut-and-paste programming, got turned into modules for other database platforms. While it was cool that it worked, I felt for a long time that this wasn’t the right solution. It felt like an object-oriented exercise where an (abstract?) parent class was called for to implement the ADO.NET pattern, and “child” modules could somehow cause the parent module’s functions to use the correct platform.

An awesome clue!
When I went looking for a way to build a better solution, I found Invoke-ADOCommand in the PowerShell Community Extensions project. It used ADO.NET providers as the key to implementing a “generic” cmdlet for querying different platforms. If I recall correctly (it has been several years now), someone on StackOverflow pointed me there.

Going my own way
So I knew the key ingredient in the solution would be ADO.NET providers, but how could I use them while preserving the flow of the adolib module? I wanted the new solution to work as closely as possible to the old one for a few reasons:

    I have used a module (which I wrote) similar to adolib at work for quite some time, and I really like it.
    I didn’t want people who had used adolib to have to relearn anything.
    I didn’t want to significantly rewrite the code in adolib.

Introducing POSH_Ado
The solution I came up with is a parameterized module called POSH_Ado, which allows you to specify the provider that you want to use. If you compare the functions in POSH_Ado to the corresponding functions in adolib, you will see that they are very similar, with a few extra variables to help deal with syntax variations on different platforms and a modified way of creating new platform-specific objects.

Using POSH_Ado
In order to use POSH_Ado, you need to install the POSH_Ado module, as well as the helper module for the specific platform(s) that you want to access. For the purposes of this article, I will be using POSH_Ado_SQLServer.

Once the modules are installed in an appropriate location, you simply import the platform-specific module. The module in turn imports the POSH_Ado module with a platform-specific prefix and sets options which are appropriate to the platform you’re using.
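For example, a session using the SQL Server helper module might start like this (a sketch: it assumes both modules are somewhere in $env:PSModulePath, and the second import line illustrates the technique rather than quoting the module’s actual source):

# All you do in your session:
Import-Module POSH_Ado_SQLServer

# Inside POSH_Ado_SQLServer, something along these lines happens: the ADO.NET
# provider name is passed to POSH_Ado's param block, and -Prefix renames the
# exported commands (Invoke-Query becomes Invoke-SQLServerQuery)
Import-Module POSH_Ado -ArgumentList 'System.Data.SqlClient' -Prefix SQLServer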

Shut up! Let’s see some code!
To illustrate, I will recreate the commands from the previous post (about adolib) using POSH_Ado_SQLServer. The only changes here are the “SQLServer” prefix in the cmdlet names and the switch from Invoke-SQL to Invoke-Command (which the prefix turns into Invoke-SQLServerCommand). Also, I’m lazy, so the text is just copied from the previous post.

Query a database on the local machine (.) with a Windows-authenticated ad-hoc connection.

invoke-SQLServerQuery -sql 'select top 1 * from AdventureWorks2012.Person.Person' -server . 

Query a database on the local machine (.) with a Windows-authenticated persistent connection.

$conn=new-SQLServerConnection -server '.'

invoke-SQLServerQuery -sql 'select top 1 * from AdventureWorks2012.Person.Person' -connection $conn

Note that we can include the database in the connection as well:

$conn=new-SQLServerConnection -server '.' -database AdventureWorks2012
 
invoke-SQLServerQuery -sql 'select top 1 * from Person.Person' -connection $conn

If we need to use SQL security, we can supply -User and -Password values (in plaintext :-()

$conn=new-SQLServerConnection -server '.' -User MyUser -Password P@ssword

We can run SQL statements that don’t return rows (like INSERT, UPDATE, and DELETE) with Invoke-SQLServerCommand:

invoke-SQLServerCommand -sql "Update Person.Person set MiddleName='F' where BusinessEntityID=@ID" -parameters @{ID=1} -connection $conn

Here I’m also using SQL parameters. It’s pretty simple: you just include the parameters prefixed with @ in the SQL statement and then provide a -Parameters hashtable containing values for each of the parameters you use. Here I only used a single parameter (ID), so there was only one entry in the hashtable. Invoke-SQLServerCommand returns the number of rows affected by the SQL statement, by the way.
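Under the hood, turning that hashtable into ADO.NET parameters takes very little code. Here’s a minimal sketch of the technique (assuming $cmd is the System.Data.SqlClient.SqlCommand built from the -sql argument; this is an illustration, not the module’s actual source):

# Add each entry in the -Parameters hashtable to the command as a SQL parameter
foreach ($name in $parameters.Keys) {
    [void]$cmd.Parameters.AddWithValue("@$name", $parameters[$name])
}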

There’s a cmdlet for executing Stored Procedures called Invoke-SQLServerStoredProcedure (with parameters that match Invoke-SQLServerQuery):

invoke-SQLServerStoredProcedure -storedProcName sp_who2 -connection $conn

What’s next?
So, all of that work and I’ve basically got a module which works just like adolib did. Why bother? In the next post, I’ll show you:

    What the POSH_Ado_SQLServer module looks like
    How the POSH_Ado_Oracle module is different from POSH_Ado_SQLServer
    All of the other platforms that I’ve tested this code on. 🙂

I think once you’ve seen those three things, you’ll agree that this was a worthwhile exercise.

For now, that’s all.

Let me know what you think!

-Mike

P.S. You can find the source here.

Getting Started with ADOLib

In the last post, I gave a quick history of my involvement with SQL and SQLPSX.  A big part of that was the ADOLib module in SQLPSX, which I haven’t ever really explained here.  Since it has been almost 6 years now, I don’t see how I have managed to skip it.

This post should correct that oversight.

First of all, SQLPSX is mainly about administering SQL Server. ADOLib, on the other hand, is specifically geared towards using SQL as a service, executing queries and SQL commands as simply as possible.

To that end, here are some examples of ADOLib usage:

Query a database on the local machine (.) with a Windows-authenticated ad-hoc connection.

invoke-query -sql 'select top 1 * from AdventureWorks2012.Person.Person' -server . 

Query a database on the local machine (.) with a Windows-authenticated persistent connection.

$conn=new-connection -server '.'

invoke-query -sql 'select top 1 * from AdventureWorks2012.Person.Person' -connection $conn

Note that we can include the database in the connection as well:

$conn=new-connection -server '.' -database AdventureWorks2012
 
invoke-query -sql 'select top 1 * from Person.Person' -connection $conn

If we need to use SQL security, we can supply -User and -Password values (in plaintext :-()

$conn=new-connection -server '.' -User MyUser -Password P@ssword

We can run SQL statements that don’t return rows (like INSERT, UPDATE, and DELETE) with Invoke-SQL:

invoke-sql -sql "Update Person.Person set MiddleName='F' where BusinessEntityID=@ID" -parameters @{ID=1} -connection $conn

Here I’m also using SQL parameters. It’s pretty simple. You just include the parameters prefixed with @ in the SQL statement and then provide a -Parameters hashtable including values for each of the parameters you use. Here I only used a single parameter (ID), so there was only one entry in the hashtable. Invoke-SQL returns the number of rows affected by the SQL statement, by the way.

There’s a cmdlet for executing Stored Procedures called Invoke-StoredProcedure (with parameters that match Invoke-Query):

invoke-storedprocedure -storedProcName sp_who2 -connection $conn

The final important piece in ADOLib is Invoke-BulkCopy, which is used to move lots of data quickly between ADO providers. You will usually be moving data between different servers (because there are much easier ways to move data on the same server), but in this example we will use the same server as source and destination.
I’ve created a copy of the DatabaseLog table in AdventureWorks2012 and called it DatabaseLog2. DatabaseLog2 is empty, but we can use the following commands to copy the data.

$cmd=new-sqlcommand -connection $conn -sql 'Select * from DatabaseLog'
invoke-bulkcopy -records $cmd -server '.' -table DatabaseLog2

Here you can see that I am creating a command object with the source query but not executing it. I pass it into Invoke-BulkCopy so the .NET SQLBulkCopy class can take care of handling the rows. I also have to provide new connection information to the cmdlet. I’m not sure why I didn’t allow you to pass an established connection in, but I’ve created an issue so I can follow up on that.

There is more in ADOLib (including transaction support and output parameters), but I’ll let you look inside and explore those yourself.

Next time I’ll show how these same kinds of things can be accomplished using the new POSH_Ado module.

Let me know what you think in the comments

-Mike

P.S. ADOLib can be found here.

Executing SQL the right way in PowerShell (revisited)

Almost 6 years ago I wrote this post about how I thought you should go about executing SQL statements and queries in PowerShell. It included a function which allowed you to pass a hashtable of parameter names and values.  This post has been one of the most popular on the site.

I submitted a module to the SQLPSX project called adolib which included a greatly expanded version of this function, along with others that let you invoke commands, stored procedures, and queries, as well as use the .NET SQLBulkCopy class to do high-performance data manipulation.

The SQLPSX project has seen a lot of contributors come and go, but unfortunately there hasn’t been a release in several years and, behind the scenes, not very much activity. At the recommendation of members of the PowerShell community (and after discussing it with Chad Miller, the founder of SQLPSX), I copied the project to GitHub. The repository can be found here. I’ve opened a few issues for some “low-hanging fruit”, and welcome input from anyone.

One interesting facet of working on adolib in SQLPSX was that it was copied and modified to create the MySQLLib and OracleClient modules. These modules were based on the same code, except that instead of using the System.Data.SqlClient namespace they used the Oracle.DataAccess.Client and MySql.Data.MySqlClient namespaces. Because ADO.NET works much the same no matter which provider you use, this worked fine.

I wasn’t really satisfied with copy-paste programming, though, and wrote a new version of adolib which allows you to specify the provider as a parameter when importing the module. I’ve tested the approach with several different providers (SQL Server, MySQL, DB2, Oracle, Firebird, and even Paradox). The code sat untouched as a fork on the SQLPSX project, so when I moved SQLPSX to GitHub, I decided to remove that fork and create a separate repository for it as POSH_Ado. Watch for a post or two on using POSH_Ado in the next week or so.
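To give you a taste, selecting a platform is just a matter of handing the module a different ADO.NET provider when you import it. A sketch of what that can look like for MySQL (the exact parameter shape and command prefix here are illustrative assumptions):

# Import the parameterized module, selecting the MySQL ADO.NET provider
# and prefixing the exported commands (Invoke-Query becomes Invoke-MySqlQuery)
Import-Module POSH_Ado -ArgumentList 'MySql.Data.MySqlClient' -Prefix MySql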

-Mike

 

Scope delimiter in interpolated strings

I’ve been meaning to write about this for a while. It’s a simple thing that broke some code from PowerShell 1.0. Yes, I still have some code running in production that was written back before 2.0 came out. And before I go any further, let me say that PowerShell has done a remarkable job of keeping backward compatibility. I very rarely have old code break due to new PowerShell features or parsing.

Anyway, when writing messages out to the screen to show what’s going on in a script, I would often use a pattern like this:

write-host "$setting1 and $setting2"

This code upgraded just fine and is not a problem.

Where I ran into a problem was when I varied the pattern slightly.  The following code is not so happy:

write-host "$setting1:$setting2"

This was valid 1.0 code, but it doesn’t run in 2.0 or above.

[Screenshot: the error message produced by the code above]

The problem stems from the addition of scope modifiers for variables in 2.0. To refer to scoped variables, you prefix the name of the variable with the scope modifier (local, global, script, private) followed by a colon. So the parser is seeing $setting1:$setting2 and thinking that “setting1” is a scope modifier.
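For reference, here is what the scope syntax looks like when it’s used on purpose (at the prompt, script scope behaves like global; this is just to show the syntax):

$global:setting1 = 'one'      # explicitly global scope
$script:setting2 = 'two'      # script scope
write-host "$global:setting1 and $script:setting2"    # scoped variables interpolate fine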

Easy workarounds for this are adding a space before the colon, escaping the colon with a backtick, or wrapping setting1 in a subexpression $(), as shown below.
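All three look like this (note that the first one prints the extra space):

write-host "$setting1 :$setting2"      # space before the colon
write-host "$setting1`:$setting2"      # backtick-escape the colon
write-host "$($setting1):$setting2"    # subexpression around setting1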

Have you run into this before? What other problems have you found in old code running in newer versions of PowerShell?

–Mike

Quick Tip – Avoid abbreviating parameter names

Looking at some of the solutions to the July scripting games problems (here), I noticed that several of them used abbreviations for parameter names. For instance:

gwmi win32_operatingsystem -co @(".")

I understand that this is a competition of sorts and that part of the challenge is to get a solution with the smallest number of characters, but I realized that I really, really don’t like abbreviated parameter names.

To be clear, these are fine on the command-line (as are aliases, for instance), but I really want to avoid using parameter name abbreviations in my code. For one thing, since PowerShell allows you to use as short an abbreviation as you want as long as it is unambiguous, there is not a single “short form” for a given parameter. In the code above, for example, -co could just as well have been -com, -comp, or -compu. That leads to inconsistent code and reduces readability, in my opinion.
Second, parameter abbreviations are not necessarily stable across PowerShell versions. It’s entirely possible, for instance, that a parameter starting with “co” could be added in the next version of PowerShell, which would make the abbreviation ambiguous. At that point, the code is invalid (as well as not very readable).
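For comparison, here is the same command written out in full (assuming -co was meant as -ComputerName; gwmi is the alias for Get-WmiObject):

Get-WmiObject -Class Win32_OperatingSystem -ComputerName @('.')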

I know this isn’t a huge deal, but wanted to get my thoughts out here.

Let me know what you think.

P.S. Somehow this got published without the ending. I just now noticed and updated it so it didn’t end in the middle of a sentence.

-Mike

Cleaning the Path – A PowerShell One-liner

I’m not super crazy about writing one-liners in PowerShell, but I ran across a fun problem which was quick to write as a one-liner. I’ll give that here with a little explanation and follow up in a couple of days with a more polished advanced function solution.

Anyway, the problem was that I was working on a computer and happened to take a look at the PATH environment variable and saw a lot of directories in the path that were no longer valid. Apparently efforts to clean up the machine (e.g. removing old Visual Studio and SQL Server installs) didn’t include fixing the path.
To see if you have this problem, you can easily see your PATH with the following line of PowerShell (and no, this isn’t the one-liner):

$env:Path

When I saw the output (which included 48 different folders) I knew I needed to fix it.
Since Test-Path is an easy way to see if a folder exists, I quickly wrote the following to see which entries were bad:

$env:path -split ';' | where {!(Test-Path $_  )}

That listed 11 that were bad, but I also got an error because apparently there were some adjacent semicolons, meaning that $_ was set to an empty string which Test-Path didn’t like.
A quick addition made it not complain:

$env:path -split ';' | where {$_ -and !(Test-Path $_  )}

This gave me the list of directories that I needed to eliminate. Reversing the logic a bit to get the directories I want to retain looked like this:

$env:path -split ';' | where {$_ -and (Test-Path $_  )}

Looking better, but now I notice that some directories are listed more than once ($PSHOME, for example, is listed 6 times).
Adding a quick uniqueness check:

$env:path -split ';' | where {$_ -and (Test-Path $_  )}| select-object -unique

That gives a much better list.
I then added -join to paste these back together.

($env:path -split ';' | where {$_ -and (Test-Path $_  )}| select-object -unique) -join ';'

And that looks like a good value for $env:Path.
If I just needed to set it for the current session, I could do this:

$env:path=($env:path -split ';' | where {$_ -and (Test-Path $_  )}| select-object -unique) -join ';'

But that only lasts for the current session, which isn’t particularly useful. Before I do anything permanent, though, I should save the current PATH somewhere. We can use the System.Environment class to set environment variables for the machine and copy the existing PATH to a new environment variable.

[System.Environment]::SetEnvironmentVariable('Path_SAVED',($env:path),'Machine')

The final one-liner is this (and it’s not pretty):

[System.Environment]::SetEnvironmentVariable('Path',(($env:path -split ';' | where {$_ -and (Test-Path $_)} | select-object -unique) -join ';'),'Machine')

Ok…that was a fun exercise. PowerShell continually impresses me with how easy it is to do things.

Got any good ideas for ways to improve this?

Let me know in the comments, and watch for the follow-up article in a couple of days about rewriting this as a real advanced function.

-Mike

PowerShell Summit 2015 North America Videos!!

If you, like me, aren’t fortunate enough to be at the PowerShell Summit going on right now in Charlotte, NC, you can at least watch/listen to the videos of the sessions.

I’ve watched a couple already and even though it’s not as good as being there, it’s still really good.

The quality of the information in the presentations so far has been awesome.

The videos can be found on the PowerShell.org channel on YouTube. They are slides and audio, so you don’t get to watch the presenters, but that doesn’t really diminish the value.

Here’s the link:
https://www.youtube.com/channel/UCqIw7UUwC5fUBFXYX68aMrQ

PowerShell will not fix all of your problems

I’m definitely guilty of using PowerShell in situations where it’s not the best answer. Some of that is curiosity (can I make it work) and some of it is stubbornness (I bet I can make it work). But I never want to give the impression that PowerShell is “fixing” my problems.

For instance, if you don’t have defined processes or clear requirements, trying to apply automation is going to end up an exercise in frustration. You’ll be asking “why did it do that?” when the answer is clearly that the script is written to do things that way.

So if you’re in over your head and know that you need automation to give you some leverage to get out of your bad situation, the first step is almost never to throw PowerShell into the mix. The first step should always be to make sure that you have a well-defined process. If that means that you continue manually for a bit so you can get everyone on board with the process, that’s fine. Once the process is defined, scripting it with PowerShell (or whatever your automation tool of choice is) will be much easier and the results more predictable.

Will PowerShell solve all of your problems? No.

Can PowerShell automate the solutions to problems that you have a process to handle? Definitely.

Perhaps you’re so busy you can’t get a handle on things enough to specify a full solution. That definitely happens, and I don’t want to give the impression that you have to have 100% of things under control to apply automation to the mix. What you can do, though, is find a small subset of the problems you’re dealing with that are simple. Maybe that’s only 10% of your work, and it doesn’t seem like it would be worth automating. If you automated that 10%, though, you’d get almost an hour back each day (10% of an eight-hour day is 48 minutes) to focus on the things that are really eating up your time. And since the 10% is “simple”, it shouldn’t be difficult to automate, at least compared to the rest of your work.

Something else that I’ve found is that once you have automated the simple cases, more and more things begin to fall into that classification. Once you’ve got a solution that’s proven, it’s easy to build on it to start pulling in some of the more complex tasks. Pretty soon you will find that you have some free time on your hands.

The point is that you can use automation to gain traction when it doesn’t seem like you’re making any headway. Once you get traction, you can accomplish a lot on your own. With PowerShell, you can accomplish a lot in a repeatable way, accurately, and in many cases without human intervention.

What do you think?

Mike

My PowerShell goals for 2015

I’m not much on New Year’s resolutions but I’ve seen a few people post their PowerShell-related goals and thought I’d jump on that bandwagon.

Here are a few things I want to get accomplished this year:

1. 50 blog posts
2. New release of SQLPSX
3. Separate release of ADOLIB
4. Second book (maybe in a different format, like Pluralsight?)
(if you missed it, my first book was released late last year here)
5. Teach 10 PowerShell classes at work
6. Work through the IIS and AD month of lunches books
7. Build a virtualization lab at home and practice Hyper-V and VMware
8. Do something cloudy (no idea what)

That sounds like a full plate for me. If you have any suggestions for posts (or series of posts 🙂 ) that would be awesome!

Mike

Packt’s $5 eBook Bonanza and what I’ve been doing all year

Early this year I was contacted by Packt Publishing to see if I had any interest in writing a PowerShell book. After I got up off the floor and thought about it a bit, I decided that it was something I wanted to do. I have spent the majority of the year struggling with my undisciplined, procrastinating nature and finally have hardcopies of my book in hand.  It has been a fun, rewarding process and I might just be hooked.  More on that to come.  🙂

The book is called “PowerShell Troubleshooting Guide”, and its focus is on understanding the PowerShell language and engine in order to give you more “traction” when coding, allowing you to spend less time debugging.

Here’s the great part. Just like last year, Packt is having their $5 eBook Bonanza, where all eBooks and videos are only $5. The sale is going until January 6, 2015, so you have some time.

I’m looking forward to hearing your thoughts on the content I have chosen.

–Mike