I will skip the basics, as this Microsoft blog did a good job explaining how to Run PowerShell scripts in SSIS.
OK, now you have a PowerShell script running inside SSIS, and soon enough you will need to deal with errors. By default you don't get much from SSIS, as the process popup window specified by WindowStyle normally gives you only a glimpse of the error messages milliseconds before it terminates.
The first reaction I usually see is people trying to make the popup window stay/wait/hold. Instead of going down that fruitless path, here is what you can do:
1. Set up variables for StandardOutputVariable and StandardErrorVariable
2. Set a breakpoint on the condition "Break when the container receives the OnPostExecute event"
3. Start Debugging (Note: Don’t right-click and Execute Task) and wait for the breakpoint to be hit.
4. Inspect the variables in the Locals window
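The capture mechanism behind steps 1-4 can be sketched outside SSIS. This is a rough analogy, not SSIS itself: the snippet uses Python's subprocess module to run a child process and keep its stdout and stderr, which is roughly what SSIS does for you when StandardOutputVariable and StandardErrorVariable are set.

```python
import subprocess
import sys

# Run a child process and capture both output streams, analogous to
# what SSIS stores in StandardOutputVariable and StandardErrorVariable.
# The child command is illustrative: it prints to both streams.
result = subprocess.run(
    [sys.executable, "-c",
     "import sys; print('42 rows processed'); print('boom', file=sys.stderr)"],
    capture_output=True,
    text=True,
)

print("StandardOutputVariable:", result.stdout.strip())
print("StandardErrorVariable: ", result.stderr.strip())
```

Once the streams are captured in variables, there is nothing left to read from a vanishing popup window.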
I seriously can’t remember the number of times I have had to look up date conversion logic in SSIS, so I decided to create a cheatsheet for it.
| Date in Flat File | SQL Data Type | Data Conversion Transformation | Derived Column Transformation |
| --- | --- | --- | --- |
| yyyymmdd | date | DT_DBDATE (FastParse = True) | |
| dd/mm/yyyy | date | DT_DBDATE (locale-dependent) | |
| mm/dd/yyyy | date | | (DT_DBDATE)(SUBSTRING([Col],7,4) + "-" + SUBSTRING([Col],1,2) + "-" + SUBSTRING([Col],4,2)) |
| yyyy-mm-dd hh:mm:ss | datetime | DT_DBTIMESTAMP (optional) | |
| yyyy-mm-dd hh:mm:ss.nnnnnnn | datetime2 | DT_DBTIMESTAMP2 (scale = count(n)) (optional) | |
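As a rough cross-check of the cheatsheet rows, here are the equivalent parses in Python. The format strings are Python's, not the SSIS data types, and note that Python's %f only goes to microseconds, so the seven-digit datetime2 fraction is truncated for this illustration.

```python
from datetime import datetime

# yyyymmdd -> date
d1 = datetime.strptime("20150724", "%Y%m%d").date()

# mm/dd/yyyy -> date (US order)
d2 = datetime.strptime("07/24/2015", "%m/%d/%Y").date()

# yyyy-mm-dd hh:mm:ss -> datetime
t1 = datetime.strptime("2015-07-24 13:45:59", "%Y-%m-%d %H:%M:%S")

# yyyy-mm-dd hh:mm:ss.nnnnnnn -> datetime2; %f caps at 6 digits,
# so the seventh fractional digit is dropped here.
raw = "2015-07-24 13:45:59.1234567"
t2 = datetime.strptime(raw[:26], "%Y-%m-%d %H:%M:%S.%f")

print(d1, d2, t1, t2, sep="\n")
```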
Most solutions on Stack Overflow suggest using a derived column transformation to insert dashes, forming a yyyy-mm-dd string, and then casting it to DT_DBDATE. A much simpler way is to turn on a custom property called FastParse. (Thanks to my co-worker Lynn for passing on this tip.)
FastParse can be found in Show Advanced Editor -> Input and Output Properties -> Data Conversion Output -> Output Columns
This is the simplest format to handle. A data conversion transformation will do, or it can be implicitly loaded into a date column as is.
This format is locale-specific, so, given that we are in Australia, a data conversion transformation can easily handle it, and it can even be implicitly loaded.
I have to say this is the most hated date format in the database world (in this part of the world, at least). Since we are in Australia and this is a US date format, it cannot be converted directly using a data conversion transformation. Instead, we need to rely on a derived column transformation to pull the date components apart and reconstruct them into the yyyy-mm-dd format.
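The pull-apart-and-reconstruct step can be sketched in Python (us_to_iso is a hypothetical helper name; note that SSIS's SUBSTRING(start, length) is 1-based, while Python slices are 0-based):

```python
# Mirrors the derived column expression:
# (DT_DBDATE)(SUBSTRING([Col],7,4) + "-" + SUBSTRING([Col],1,2) + "-" + SUBSTRING([Col],4,2))
def us_to_iso(col: str) -> str:
    # SSIS SUBSTRING takes a 1-based start and a length.
    year = col[6:10]   # SUBSTRING([Col],7,4)
    month = col[0:2]   # SUBSTRING([Col],1,2)
    day = col[3:5]     # SUBSTRING([Col],4,2)
    return year + "-" + month + "-" + day

print(us_to_iso("07/24/2015"))  # 2015-07-24
```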
EDIT: I later found that SSIS can handle both the dd/mm/yyyy and mm/dd/yyyy formats as long as the day is larger than 12, even for dates with mixed formats within the same column. I will write about this in a future post.
This is a perfect date time format that can be converted using a data conversion transformation. Just remember to use the DT_DBTIMESTAMP data type. Or simply load it into a datetime column as is.
This is the format for a SQL datetime2 data type. For a direct data conversion, the scale needs to match the number of digits in the fractional seconds: no fractional seconds = no scale, 1 digit = scale of 1, 2 digits = scale of 2, and so on up to 7. Again, you can simply load it into a datetime2 column as is.
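Picking the DT_DBTIMESTAMP2 scale is just a digit count, which can be sketched as follows (fractional_scale is a hypothetical helper, not an SSIS function):

```python
def fractional_scale(value: str) -> int:
    # The scale is simply the number of digits after the decimal point;
    # a value with no decimal point yields an empty fraction (scale 0).
    _, _, frac = value.partition(".")
    return len(frac)

print(fractional_scale("2015-07-24 13:45:59.1234567"))  # scale = 7
print(fractional_scale("2015-07-24 13:45:59"))          # scale = 0
```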
I encountered and resolved an SSIS issue today where the package executed successfully without errors, but close inspection of the data flow pipeline showed no row count. I think it is a good idea to write a post on this topic so other people can benefit from my observations.
I am going to reproduce the issue with a quick demo.
1. Create a new SSIS package with a data flow task. Use OLE DB as the source and Flat File as the destination.
2. Configure the source using a SQL command.
3. Before we go any further, let’s test the SQL command by
a) Clicking on “Parse Query”
b) Clicking on “Preview…”
c) Looking at “Columns”
All seems to be okay at this stage, right? Wait and see.
4. Configure the destination. Since this is not an important aspect of the demo, I will skip the details. I am using a Flat File destination, but I think any other destination type would be fine.
5. Execute the package and you should see something like this:
Notice there is no row count, and the output shows the pipeline wrote 0 rows.
6. I have found two ways of working around the issue:
a) Remove the USE statement at the top of the SQL command.
b) Insert a SET NOCOUNT ON statement at the start of the SQL command.
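As a sketch, the two workarounds look like this. The database, table, and column names below are placeholders, not the actual query from the demo:

```sql
-- Problematic shape: a USE statement ahead of the query.
-- USE SomeDatabase;
-- SELECT Id, Name FROM dbo.SomeTable;

-- Workaround (a): remove USE (e.g. fully qualify the object instead).
SELECT Id, Name FROM SomeDatabase.dbo.SomeTable;

-- Workaround (b): keep USE but start with SET NOCOUNT ON,
-- which suppresses the rows-affected messages.
SET NOCOUNT ON;
USE SomeDatabase;
SELECT Id, Name FROM dbo.SomeTable;
```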
7. Now execute the package again and you should see it working properly.