tSQLt.FakeTable DOES NOT remove the original table

tSQLt is a great tool for simplifying database testing, but it does come with a few gotchas. I got stuck on a failed test today whose root cause took me a while to identify. Let me take you through the journey. (Object names have been modified for this post.)

The error I encountered was:

[test].[test book] failed: (Error) The INSERT statement conflicted with the FOREIGN KEY constraint "FK__book__publisher__31E413FD". The conflict occurred in database "anz_summix", table "dbo.tSQLt_tempobject_da8838ee37ed44daa755d58f38d078e3", column 'name'.[16,0]{test book,8}

In hindsight the temp table name was a dead giveaway, but at the time I dismissed it: it's just a temp table made by tSQLt, and who cares about temp tables, right? I went and looked at the tables involved:

book (foreign key table)
|
|FK__book__publisher__31E413FD
|
publisher (primary key table)

Both tables were empty. The foreign key was there. Everything looked normal. So then I took a look at the test proc (simplified for the post):

create proc test.[test book]
as

exec tSQLt.FakeTable 'publisher'

insert dbo.publisher (name)
values ('Microsoft')

insert book (name, publisher)
values ('SQL Server For Dummies', 'Microsoft')

return 0
go

I observed that the publisher table was faked, which means it didn't have any constraints on it. Data was inserted in the right order: primary key table first, then the foreign key table. Everything still looked OK so far.

What I failed to realise at the time was that when tSQLt fakes a table, it doesn't remove the original table; it simply renames it, as the source code shows:

EXEC tSQLt.Private_RenameObjectToUniqueName @SchemaName, @TableName, @NewNameOfOriginalTable OUTPUT;

This fact was not made clear in the official description:

FakeTable creates an empty version of the table without the constraints in place of the specified table.

That's what the temp table in the error message was: the original table in disguise. During the execution of the test, the relationship actually looked like the diagram below:

book (foreign key table)
|
|FK__book__publisher__31E413FD
|
tSQLt_tempobject_da8838ee37ed44daa755d58f38d078e3 (primary key table)

Once I realised what was going on, the fix was quite easy. In my case I just needed to remove the FakeTable line as it was not required after all. Another option would have been to remove the foreign key constraint, which would also work.
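
For completeness, here is the simplified test proc with the fix applied. It is just the proc from above with the FakeTable call removed; everything else stays the same:

create proc test.[test book]
as

insert dbo.publisher (name)
values ('Microsoft')

insert book (name, publisher)
values ('SQL Server For Dummies', 'Microsoft')

return 0
go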

One more note: you should always run tests at least once in an empty database environment. By "empty" I mean a freshly deployed database with no production data. The test I had trouble with was actually passing in our CI environment because the primary key table already had all the necessary data in it. It only showed up on my radar when I executed it against my local environment.

Git aliases I can’t live without

This is a dump of my most used git aliases for easy future reference.

[alias]
 # log with pretty formatting
 l = log \"--pretty=format:%C(yellow bold)%h %C(red bold)%ad %C(white bold)%an %C(reset)%s%C(cyan bold)%d\" --all --decorate --date=short

 # Show commits from me (Louie) 
 la = log --author="louie" \"--pretty=format:%C(yellow bold)%h %C(red bold)%ad %C(white bold)%an %C(reset)%s%C(cyan bold)%d\" --all --decorate --date=short

 # Show history (commits) of a file
 h = log --follow \"--pretty=format:%C(yellow bold)%h %C(red bold)%ad %C(white bold)%an %C(reset)%s%C(cyan bold)%d\" --all --decorate --date=short

 # Show files in a commit
 f = show --name-status \"--format=%C(yellow bold)%h %C(red bold)%ad %C(white bold)%an %C(reset)%s%C(cyan bold)%d\" --date=short

 s = status
 a = add -A
 b = branch
 dif = difftool --cached --dir-diff
 cm = checkout master
 cd = checkout develop
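
For reference, the aliases are invoked like any other git command. A quick usage sketch (the file path and commit hash are placeholders):

git l                       # pretty one-line log across all branches
git la                      # same log, filtered to my commits
git h path/to/file.sql      # history of a single file
git f <commit-hash>         # files changed in a commit
git dif                     # review staged changes in the configured difftool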

SSIS date conversion cheatsheet

I seriously can't remember the number of times I have had to look up date conversion logic in SSIS, so I decided to create a cheatsheet for it.

Date in Flat File           | SQL Data Type | Data Conversion Transformation       | Derived Column Transformation
yyyymmdd                    | date          | DT_DBDATE (FastParse = True)         |
yyyy-mm-dd                  | date          | DT_DBDATE                            |
dd/mm/yyyy                  | date          | DT_DBDATE                            |
mm/dd/yyyy                  | date          |                                      | (DT_DBDATE)(SUBSTRING([Col],7,4) + "-" + SUBSTRING([Col],1,2) + "-" + SUBSTRING([Col],4,2))
yyyy-mm-dd hh:mm:ss         | datetime      | DT_DBTIMESTAMP                       |
yyyy-mm-dd hh:mm:ss.nnnnnnn | datetime2     | DT_DBTIMESTAMP2 (scale = count of n) |

yyyymmdd

Most solutions on Stack Overflow suggest using a derived column transformation to insert dashes to form a yyyy-mm-dd string and then casting it to DT_DBDATE. A much simpler way is to turn on a custom property called FastParse on the data conversion output column. (Thanks to my co-worker Lynn for passing on this tip.)

FastParse can be found in Show Advanced Editor -> Input and Output Properties -> Data Conversion Output -> Output Columns

yyyy-mm-dd

This is the simplest format to handle. A data conversion transformation will do.

dd/mm/yyyy

This is locale-specific so a data conversion transformation can easily handle it given that we are in Australia.

mm/dd/yyyy

I have to say this is the most hated date format in the database world. Since we are in Australia and this is a US date format, it cannot be converted directly using a data conversion transformation. Instead we need to rely on a derived column transformation to pull the date components apart and reconstruct them into the yyyy-mm-dd format.

yyyy-mm-dd hh:mm:ss

This is a perfect date time format that can be converted using a data conversion transformation. Just need to remember to use the DT_DBTIMESTAMP data type.

yyyy-mm-dd hh:mm:ss.nnnnnnn

This is the format for a SQL datetime2 data type. In order to do a direct data conversion, the scale needs to match the digits in the fractional seconds. Basically, no fractional seconds = no scale, 1 digit  = scale of 1, 2 digits = scale of 2, and so on up to 7.

How to install Visual Studio Code extensions behind proxy

I recently had the pleasure of downloading Visual Studio Code and its extensions in a new corporate environment. As usual, it was a struggle. It took me a while, but eventually I got it done.

Downloading and installing VS Code was fine as I had internet access and local admin rights. You can pretty much stop reading if your corporate environment is so locked down that you can't even install software; there's no point going any further, just go find another job.

Once I had installed VS Code, I looked for the PowerShell extension. To my surprise (well, not really), I was presented with an error message:

getaddrinfo ENOTFOUND marketplace.visualstudio.com marketplace.visualstudio.com:443

A Google search returned two very useful links. Here they are to save you the trouble:

https://github.com/Microsoft/vscode/issues/69
http://blog.majcica.com/2016/04/07/visual-studio-code-behind-a-proxy/

In a nutshell, the solution involved two steps:

  1. Find the proxy settings.
  2. Overwrite VS Code default settings.

Find the proxy settings

  1. Go to Internet Explorer -> Internet options -> Connections -> LAN settings.
  2. If the proxy server field is populated, take the settings from there. If not, read on.
  3. In my environment, there was an automatic configuration script BUT it was greyed out, which meant I could neither read the whole string nor copy it out.
  4. In order to read the full URL, I needed to go to the registry: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings
  5. Copy the value from AutoConfigURL and open it in IE. I was prompted to download a .pac file.
  6. You should then be able to find the proxy settings within the file.

Overwrite VS Code default settings

  1. In VS Code, go to File -> Preferences -> User Settings.
  2. Paste the proxy settings into settings.json on the right-hand side:
    // Place your settings in this file to overwrite the default settings
    {
     "http.proxy": "http://xxx.xxx.xxx:80",
     "https.proxy": "https://xxx.xxx.xxx:80",
     "http.proxyStrictSSL": false
    }

  3. Restart VS Code.
  4. This time you should be able to see the extensions available in the marketplace.

Now I can go back and do some real work.

How to break up a large data insert script so it is runnable in SQL Server Management Studio

I needed to move a reasonably large amount of data from one table to another table on a different server. It was a one-off activity, so I didn't want to bother with tools such as the Import and Export Wizard or SSIS. I simply wanted to generate a script with the insert statements and run it in SSMS.

OK, when I tried that the first time, SSMS wasn't too happy about running the 160,000-line script:

Msg 8623, Level 16, State 1, Line 1
The query processor ran out of internal resources and could not 
produce a query plan. This is a rare event and only expected for 
extremely complex queries or queries that reference a very large 
number of tables or partitions. Please simplify the query. If you 
believe you have received this message in error, contact Customer 
Support Services for more information.

The error was somewhat expected given the size of the script. I worked around the issue by keeping the insert statements in the same script but breaking the "union all select" list up into separate inserts. Here is the full example for you to devour:

use tempdb

if object_id('dbo.student') is not null
 drop table dbo.student

create table dbo.student
(
 id int not null identity(1, 1)
 , name sysname not null
 , residential_address nvarchar(max) not null
)

if object_id('dbo.merge_output') is not null
 drop table dbo.merge_output

create table dbo.merge_output
(
 id int not null
 , name sysname not null
 , residential_address nvarchar(max) not null 
 , merge_action sysname not null
)
 
;with cte_source(name, residential_address)
as
(
 select 'Louie1', '1 King St'
 union all select 'Louie2', '2 King St'
 union all select 'Louie3', '3 King St'
 union all select 'Louie4', '4 King St'
 union all select 'Louie5', '5 King St'
 union all select 'Louie6', '6 King St'
 union all select 'Louie7', '7 King St'
 union all select 'Louie8', '8 King St'
 union all select 'Louie9', '9 King St'
 union all select 'Louie10', '10 King St'
 union all select 'Louie11', '11 King St'
 union all select 'Louie12', '12 King St'
 union all select 'Louie13', '13 King St'
 union all select 'Louie14', '14 King St'
 union all select 'Louie15', '15 King St'
 union all select 'Louie16', '16 King St'
 union all select 'Louie17', '17 King St'
 union all select 'Louie18', '18 King St'
 union all select 'Louie19', '19 King St'
 union all select 'Louie20', '20 King St'
)
merge Student t
using cte_source s on t.name = s.name
when not matched then
 insert (name, residential_address) 
 values (s.name, s.residential_address)
when matched then
 update 
 set residential_address = s.residential_address
output inserted.id
, inserted.name
, inserted.residential_address
, $action
into merge_output;

if object_id('dbo.insert_statement') is not null
 drop table dbo.insert_statement

create table dbo.insert_statement
(
 seq int not null identity(1, 1)
 , cmd nvarchar(max) not null 
)

insert dbo.insert_statement
select concat('union all select ''id:', id
, ', name:', name
, ', residential_address: ', residential_address
, ', event:', merge_action, '''')
from merge_output

update dbo.insert_statement
set cmd = replace(cmd, 'union all ', 'go' + char(13) + char(10) 
+ 'insert dbo.event (message)' + char(13) + char(10))
where seq % 5 = 1

print '
**** Run the command below to export data:
sqlcmd -S ' + @@servername + ' -d tempdb -M -y0 -Q "set nocount on;
select cmd from dbo.insert_statement order by seq" -o "events.sql"'

Let me walk you through the script.

use tempdb

if object_id('dbo.student') is not null
 drop table dbo.student

create table dbo.student
(
 id int not null identity(1, 1)
 , name sysname not null
 , residential_address nvarchar(max) not null
)

dbo.student is the table where we insert/update student details. As a result of the insert/update, we want to capture the actions as events and insert them into an event table.

if object_id('dbo.merge_output') is not null
 drop table dbo.merge_output

create table dbo.merge_output
(
 id int not null
 , name sysname not null
 , residential_address nvarchar(max) not null 
 , merge_action sysname not null
)
 
;with cte_source(name, residential_address)
as
(
 select 'Louie1', '1 King St'
 union all select 'Louie2', '2 King St'
 union all select 'Louie3', '3 King St'
 union all select 'Louie4', '4 King St'
 union all select 'Louie5', '5 King St'
 union all select 'Louie6', '6 King St'
 union all select 'Louie7', '7 King St'
 union all select 'Louie8', '8 King St'
 union all select 'Louie9', '9 King St'
 union all select 'Louie10', '10 King St'
 union all select 'Louie11', '11 King St'
 union all select 'Louie12', '12 King St'
 union all select 'Louie13', '13 King St'
 union all select 'Louie14', '14 King St'
 union all select 'Louie15', '15 King St'
 union all select 'Louie16', '16 King St'
 union all select 'Louie17', '17 King St'
 union all select 'Louie18', '18 King St'
 union all select 'Louie19', '19 King St'
 union all select 'Louie20', '20 King St'
)
merge Student t
using cte_source s on t.name = s.name
when not matched then
 insert (name, residential_address) 
 values (s.name, s.residential_address)
when matched then
 update 
 set residential_address = s.residential_address
output inserted.id
, inserted.name
, inserted.residential_address
, $action
into merge_output;

We are using a MERGE statement to get the data into the dbo.student table. The changes are captured by the OUTPUT clause into the dbo.merge_output table.

if object_id('dbo.insert_statement') is not null
 drop table dbo.insert_statement

create table dbo.insert_statement
(
 seq int not null identity(1, 1)
 , cmd nvarchar(max) not null 
)

insert dbo.insert_statement
select concat('union all select ''id:', id
, ', name:', name
, ', residential_address: ', residential_address
, ', event:', merge_action, '''')
from merge_output

Now we are taking data out of the dbo.merge_output table and inserting it into a transient table called dbo.insert_statement. If you select everything from dbo.insert_statement at this point, you'll see:

seq cmd
1 union all select 'id:1, name:Louie1, residential_address:1 King St, event:INSERT'
2 union all select 'id:2, name:Louie2, residential_address:2 King St, event:INSERT'
3 union all select 'id:3, name:Louie3, residential_address:3 King St, event:INSERT'
4 union all select 'id:4, name:Louie4, residential_address:4 King St, event:INSERT'
5 union all select 'id:5, name:Louie5, residential_address:5 King St, event:INSERT'
6 union all select 'id:6, name:Louie6, residential_address:6 King St, event:INSERT'
7 union all select 'id:7, name:Louie7, residential_address:7 King St, event:INSERT'
8 union all select 'id:8, name:Louie8, residential_address:8 King St, event:INSERT'
9 union all select 'id:9, name:Louie9, residential_address:9 King St, event:INSERT'
10 union all select 'id:10, name:Louie10, residential_address:10 King St, event:INSERT'
11 union all select 'id:11, name:Louie11, residential_address:11 King St, event:INSERT'
12 union all select 'id:12, name:Louie12, residential_address:12 King St, event:INSERT'
13 union all select 'id:13, name:Louie13, residential_address:13 King St, event:INSERT'
14 union all select 'id:14, name:Louie14, residential_address:14 King St, event:INSERT'
15 union all select 'id:15, name:Louie15, residential_address:15 King St, event:INSERT'
16 union all select 'id:16, name:Louie16, residential_address:16 King St, event:INSERT'
17 union all select 'id:17, name:Louie17, residential_address:17 King St, event:INSERT'
18 union all select 'id:18, name:Louie18, residential_address:18 King St, event:INSERT'
19 union all select 'id:19, name:Louie19, residential_address:19 King St, event:INSERT'
20 union all select 'id:20, name:Louie20, residential_address:20 King St, event:INSERT'

I have 20 rows in the example, but the real data set I had to deal with, as I mentioned earlier, was 160,000 rows. Initially, I only added a single "insert dbo.event (message)" as the first line, with all of the "union all select" lines following it. (Of course, the first "union all" had to be removed, but I won't show that separately because the solution below covers it.)

update dbo.insert_statement
set cmd = replace(cmd, 'union all ', 'go' + char(13) + char(10) 
+ 'insert dbo.event (message)' + char(13) + char(10))
where seq % 5 = 1

This whole post is really about the statement above. In this UPDATE statement, we are trying to achieve three objectives:

  1. Add the “insert dbo.event (message)” header necessary for the insert statement.
  2. Remove “union all ” from the first line of the SELECT statements.
  3. Break up the script into chunks of five rows each, delimited by GO (see the note on chunk size right after this list).
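
For the real 160,000-row data set, chunks of five would generate far more batches than necessary, so the modulus is worth tuning. Here is a sketch of the same UPDATE with an assumed batch size of 1,000 rows (pick whatever keeps each generated INSERT a comfortable size):

-- hypothetical variation: one INSERT per 1,000 rows instead of 5
update dbo.insert_statement
set cmd = replace(cmd, 'union all ', 'go' + char(13) + char(10) 
+ 'insert dbo.event (message)' + char(13) + char(10))
where seq % 1000 = 1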

Once exported, the events.sql file looks like:

go
insert dbo.event (message)
select 'id:1, name:Louie1, residential_address:1 King St, event:INSERT'
union all select 'id:2, name:Louie2, residential_address:2 King St, event:INSERT'
union all select 'id:3, name:Louie3, residential_address:3 King St, event:INSERT'
union all select 'id:4, name:Louie4, residential_address:4 King St, event:INSERT'
union all select 'id:5, name:Louie5, residential_address:5 King St, event:INSERT'
go
insert dbo.event (message)
select 'id:6, name:Louie6, residential_address:6 King St, event:INSERT'
union all select 'id:7, name:Louie7, residential_address:7 King St, event:INSERT'
union all select 'id:8, name:Louie8, residential_address:8 King St, event:INSERT'
union all select 'id:9, name:Louie9, residential_address:9 King St, event:INSERT'
union all select 'id:10, name:Louie10, residential_address:10 King St, event:INSERT'
go
insert dbo.event (message)
select 'id:11, name:Louie11, residential_address:11 King St, event:INSERT'
union all select 'id:12, name:Louie12, residential_address:12 King St, event:INSERT'
union all select 'id:13, name:Louie13, residential_address:13 King St, event:INSERT'
union all select 'id:14, name:Louie14, residential_address:14 King St, event:INSERT'
union all select 'id:15, name:Louie15, residential_address:15 King St, event:INSERT'
go
insert dbo.event (message)
select 'id:16, name:Louie16, residential_address:16 King St, event:INSERT'
union all select 'id:17, name:Louie17, residential_address:17 King St, event:INSERT'
union all select 'id:18, name:Louie18, residential_address:18 King St, event:INSERT'
union all select 'id:19, name:Louie19, residential_address:19 King St, event:INSERT'
union all select 'id:20, name:Louie20, residential_address:20 King St, event:INSERT'

Bonus Tip – Exporting string data out of a table without truncation

print '
**** Run the command below to export data:
sqlcmd -S ' + @@servername + ' -d tempdb -M -y0 -Q "set nocount on;
select cmd from dbo.insert_statement order by seq" -o "events.sql"'

If there is a string column longer than 256 characters, it can be fiddly trying to export it out of SSMS. Instead of messing around with query settings, I simply issued a sqlcmd command with the -y0 switch:

-y variable_length_type_display_width
If display_width is 0, the output is truncated at 1 MB.

It works most of the time but if your string is larger than 1MB, then you need a more sophisticated solution.

My job is done for the day.

Regardless of WAIT_AT_LOW_PRIORITY, index operations can still break cursor operations

Today I had the pleasure of reading about the new index rebuild option WAIT_AT_LOW_PRIORITY. It's a great addition to the DBA arsenal, and you can read all about it here: Testing ALTER INDEX REBUILD with WAIT_AT_LOW_PRIORITY in SQL Server 2014

I have written before about how index operations can interrupt cursor operations, so naturally I wondered whether the new index option makes any difference to that scenario. Let's work through an example.

1. Create table and index

create table temp
(
    id int not null identity(1, 1) primary key
    , data varchar(10) not null default('test')
)
go

insert temp default values
go 50

create index idx_temp on temp (data)
go

2. Start a cursor loop

declare cur cursor
for
    select id from temp

open cur

declare @id int

fetch cur into @id

while @@fetch_status = 0
begin
    print @id
    waitfor delay '00:00:01'
    
    fetch cur into @id
end

close cur
deallocate cur

3. Rebuild index with the new option

alter index idx_temp 
on temp 
rebuild with 
(
    online = on 
    (
        wait_at_low_priority
        (
             max_duration = 1 minutes
             , abort_after_wait = self
        )
    )
)

We still get the error:

1
2
3
4
5
6
Msg 16943, Level 16, State 4, Line 18
Could not complete cursor operation because the table schema changed 
after the cursor was declared.

I guess the lesson for today is that no matter how seemingly harmless/efficient the index operation is, you still need to test it thoroughly before applying it to prod.

PowerShell's Write-Host is sometimes better than Write-Output

Despite what's been said about the dangers of Write-Host, I have found a scenario where it actually is the right cmdlet to use instead of Write-Output.

Consider the code snippet below:

function Get-Array
{
    "I am not supposed to be in the array."

    $a = @("I am the only item in the array.")
    return $a
}

$array = Get-Array

cls

"`$array.count: [$($array.count)]"
$array

When I first encountered this piece of code (the simplified version shown above), I was quite frustrated with the output:

$array.count: [2]
I am not supposed to be in the array.
I am the only item in the array.

Why the hell was my print statement showing up in the array? Well, after hours of debugging and googling, I now know it has everything to do with the behaviour of Write-Output:

The Write-Output cmdlet sends the specified object down the pipeline to the next command. If the command is the last command in the pipeline, the object is displayed in the console.
Write-Output sends objects down the primary pipeline, also known as the “output stream” or the “success pipeline.” To send error objects down the error pipeline, use Write-Error.
This cmdlet is typically used in scripts to display strings and other objects on the console. However, because the default behavior is to display the objects at the end of a pipeline, it is generally not necessary to use the cmdlet. For instance, Get-Process | Write-Output is equivalent to Get-Process.

The Get-Array function was explicitly returning an array, but the bare string at the top of the function was also implicitly written to the output stream (an implicit Write-Output), so the assignment captured everything that went down the primary pipeline. Gotcha!
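
A quick way to confirm the gotcha with the original Get-Array (a small sketch; the comments show what I saw on my machine):

# Both strings went down the output stream, so the assignment captured two items
$array = Get-Array
$array.GetType().Name   # Object[]
$array.Count            # 2
$array[0]               # I am not supposed to be in the array.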

A simple solution to this gotcha is to replace the implicit Write-Output with an explicit Write-Host:

function Get-Array
{
    Write-Host "I am not supposed to be in the array."

    $a = @("I am the only item in the array.")
    return $a
}

$array = Get-Array

cls

"`$array.count: [$($array.count)]"
$array

As expected, you’d get the output below:

$array.count: [1]
I am the only item in the array.

This works because Write-Host behaves quite differently to Write-Output:

Write-Host sends the objects to the host. It does not return any objects. However, the host might display the objects that Write-Host sends to it.

Understanding the differences between these seemingly simple cmdlets can save loads of debugging time, especially in a scripting world.

How to create a new local git branch that tracks a remote branch

I've had to look this up on Stack Overflow twice in the last week, so I thought it'd be a good idea to jot down some notes so I remember what to do next time.

1. Add the remote repo
git remote add louie [email protected]:louiebao/dbarsenal.git

2. Get all the branches for the new remote repo
git fetch louie

3. Create a new branch locally that tracks the remote branch
git checkout -b demo louie/demo

4. Check
git branch -v -v
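
As a side note (a shorthand I didn't use here, but worth knowing), git can create and check out the tracking branch in one step:

git checkout --track louie/demo   # creates a local 'demo' branch tracking louie/demo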

Handy SSMS Query Shortcuts

In SQL Server Management Studio, you can define custom query shortcuts to help with common tasks. I have been using them for quite a few years now and realised I haven't written about them yet. So here we go.

The query shortcuts can be found under Tools -> Options... -> Environment -> Keyboard -> Query Shortcuts, and the default settings don't look very exciting.

Before I go any further, let me explain how query shortcuts work. The "shortcut" is the key combo you press, and it does two things: it (1) prepends the "statement" to the text highlighted in the editor and (2) executes the result. Although SSMS labels the statement column "Stored Procedure", it is not limited to procs; it works with any valid T-SQL statement. In other words:

shortcut = statement [highlighted text]

Let me show you with an example. Let's say you have a query that looks like the one below and you would like to run sp_help sys.columns to see the table details:

select *
from sys.objects a
join sys.columns b on a.object_id = b.object_id

Without a shortcut, you'd need to go to a new line and type the command. With the help of the shortcut, you can simply select the text sys.columns and press Alt+F1 to achieve the same result. How cool is that? I hear you want more. OK, I will reveal all the query shortcuts I find particularly useful.

sp_helptext – How many times have you looked at a view or proc and wondered about the implementation underneath? Well, there's no need to search for that object in Object Explorer anymore; simply set up a shortcut. My personal preference for this one is the Ctrl+F1 slot.

select * from sys.objects where name like – This is quite handy when you want to quickly check whether an object exists in the database. I normally set this to Ctrl+5.

select – Don’t let the simplicity of this shortcut fool you. Not having to type those six characters can save a lot of time when you want to run a system function embedded in a query, such as suser_sname(), newid(), getutcdate() etc. I normally set this to Ctrl+6

select top 10 * from – The reason for having a separate top 10 query shortcut from the select all is that it provides a quick and easy way of retrieving the column names with some sample data. I normally set this to Ctrl+9

select * from – I use Ctrl+0 for this.

The shortcuts listed above are the common ones I use almost everywhere whereas the ones below are less common but still handy to have around:

select concat('''', newid(), '''') as [newid()] – If you have a need to generate GUIDs all the time, then surely it deserves a shortcut slot.

set statistics io, time on – This one is for folks looking at performance.

select * from dbo.dbversion order by id desc – This is useful if you have a version table in the database. Setting up a shortcut can make it easier for checking versions.

Watch out for square brackets when using Select-String to search for SQL objects

PowerShell's Select-String cmdlet is powerful. Sometimes it's too powerful and you want to dumb it down a notch. That's what happened to me today.

Here is the scenario. I have the name of a column in the form [schema_name].[table_name].[column_name] and I want to see if the [schema_name].[table_name] part of the name is found amongst a list of tables. I have set up two test scenarios below for demonstration purposes:

Table List
[sys].[objects]
[sys].[columns]
[sys].[types]

Column
1. For [sys].[objects].[type_desc], Select-String should return something as the table name [sys].[objects] forms part of the column name.
2. For [sys].[tables].[type_desc], Select-String should return nothing as no table in the list matches [sys].[tables].

Expressed in PowerShell:

$table_array = "[sys].[columns]", "[sys].[types]", "[sys].[objects]"
$sys_objects_name = "[sys].[objects].[type_desc]"
$sys_tables_name = "[sys].[tables].[type_desc]"

"`nLooking for a table that is part of $sys_objects_name..."
Select-String -InputObject $sys_objects_name -Pattern $table_array

"`nLooking for a table that is part of $sys_tables_name..."
Select-String -InputObject $sys_tables_name -Pattern $table_array

In this instance, -Pattern takes an array, and each of the array items is matched against -InputObject. In other words, we are trying to find a -Pattern item that is a substring of -InputObject.

Running the above script yields the output below:

Looking for a table that is part of [sys].[objects].[type_desc]...
[sys].[objects].[type_desc]

Looking for a table that is part of [sys].[tables].[type_desc]...
[sys].[tables].[type_desc]

Notice how both Select-String calls returned something? That's wrong, as we don't expect the second one to return anything. After debugging a few different scenarios, it dawned on me that Select-String uses regular expression matching by default, so a bracketed name like [sys] is treated as a character class rather than literal text. It was one of those "OF COURSE!" moments in hindsight. Most of the time that's fine if you are just doing simple string matching, but when working with SQL (square bracket) quoted names, it can wreak havoc.

The solution is to turn regular expression matching off, and to do that you just need to specify the -SimpleMatch switch:

$table_array = "[sys].[columns]", "[sys].[types]", "[sys].[objects]"
$sys_objects_name = "[sys].[objects].[type_desc]"
$sys_tables_name = "[sys].[tables].[type_desc]"

"`nLooking for a table that is part of $sys_objects_name..."
Select-String -InputObject $sys_objects_name -Pattern $table_array -SimpleMatch

"`nLooking for a table that is part of $sys_tables_name..."
Select-String -InputObject $sys_tables_name -Pattern $table_array -SimpleMatch

Running the script again now yields the correct output:

Looking for a table that is part of [sys].[objects].[type_desc]...
[sys].[objects].[type_desc]

Looking for a table that is part of [sys].[tables].[type_desc]...
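
If you ever do need regular expression features alongside bracketed SQL names, an alternative (not part of the original troubleshooting, just a sketch) is to escape the names before using them as patterns:

# Hypothetical variation: keep regex matching but escape the bracketed names first
$escaped = $table_array | ForEach-Object { [regex]::Escape($_) }
Select-String -InputObject $sys_tables_name -Pattern $escaped   # returns nothing, as expected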

–oo00oo–
For more info, please refer to the links below:
Select-String