Resolved: Could not complete cursor operation because the table schema changed after the cursor was declared.

I have written about how a reindex job can interrupt cursor operations:

Could not complete cursor operation because the table schema changed after the cursor was declared.

Regardless of WAIT_AT_LOW_PRIORITY, index operations can still break cursor operations

Today I got the chance to have another go at this issue. In the previous posts, the cursor was declared as-is, without specifying any attributes.

After going through all the attributes, I have decided to test a few and see if they would make any difference. I will spare you the trouble and list the findings below:

FAST_FORWARD

Could not complete cursor operation because the table schema changed after the cursor was declared.

FORWARD_ONLY

Could not complete cursor operation because the table schema changed after the cursor was declared.

READONLY

Could not complete cursor operation because the table schema changed after the cursor was declared.

STATIC

It worked!

In hindsight, it was quite obvious why it worked, according to the trusty Books Online:

STATIC

Defines a cursor that makes a temporary copy of the data to be used by the cursor. All requests to the cursor are answered from this temporary table in tempdb; therefore, modifications made to base tables are not reflected in the data returned by fetches made to this cursor, and this cursor does not allow modifications.
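
For completeness, here is a minimal sketch of the working declaration, reusing the cursor from the repro in the earlier posts (only the STATIC keyword is new):

declare cur cursor static
for
    select id from temp

With STATIC in place, the fetch loop survived the index rebuild.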

There you go, problem solved.

How to install Visual Studio Code extensions behind proxy

I recently had the pleasure of downloading Visual Studio Code and its extensions in a new corporate environment. As usual, it was a struggle. It took me a while, but eventually I got it done.

Downloading and installing VS Code was fine as I had internet access and local admin rights. You can pretty much stop reading if your corporate environment is so locked down that you can’t even install software. No point going any further; just go find another job.

Once I had installed VS Code, I looked for the PowerShell extension. To my surprise (well, not really), I was presented with an error message:

getaddrinfo ENOTFOUND marketplace.visualstudio.com marketplace.visualstudio.com:443

A Google search returned two very useful links. Here they are to save you the trouble:

https://github.com/Microsoft/vscode/issues/69
http://blog.majcica.com/2016/04/07/visual-studio-code-behind-a-proxy/

In a nutshell, the solution involved two steps:

  1. Find the proxy settings.
  2. Overwrite VS Code default settings.

Find the proxy settings

  1. Go to Internet Explorer -> Internet options -> Connections -> LAN settings
  2. If the proxy server field is populated, take the settings from there. If not, read on.
  3. In my environment, there was an automatic configuration script BUT it was grayed out, meaning I could neither read the whole string nor copy it out.
  4. To read the full URL, I had to go into the registry: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings
  5. Copy the value from AutoConfigURL and open it in IE. I was prompted to download a .pac file.
  6. You should then be able to find the proxy settings within the file (see the excerpt below).
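
A .pac file is just a JavaScript function that tells the browser which proxy to use. The hostname and port below are made up, but the line you are hunting for looks something like this:

function FindProxyForURL(url, host) {
    // ...various routing rules...
    return "PROXY proxy.mycorp.local:8080"; // this host:port is what goes into VS Code
}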

Overwrite VS Code default settings

  1. In VS Code, go to File -> Preferences -> User Settings
  2. Paste the proxy settings into settings.json on the right-hand side:
    // Place your settings in this file to overwrite the default settings
    {
     "http.proxy": "http://xxx.xxx.xxx:80",
     "https.proxy": "https://xxx.xxx.xxx:80",
     "http.proxyStrictSSL": false
    }


  3. Restart VS Code.
  4. This time you should be able to see the extensions available in the marketplace.

Now I can go back and do some real work.

How to break up a large data insert script so it is runnable in SQL Server Management Studio

I needed to move a reasonably large amount of data from one table to another table on a different server. It was a one-off activity, so I didn’t want to bother with tools such as the Import and Export Wizard or SSIS. I simply wanted to generate a script with the insert statements and run it in SSMS.

OK, when I tried that the first time, SSMS wasn’t too happy running the 160,000-line script:

Msg 8623, Level 16, State 1, Line 1
The query processor ran out of internal resources and could not 
produce a query plan. This is a rare event and only expected for 
extremely complex queries or queries that reference a very large 
number of tables or partitions. Please simplify the query. If you 
believe you have received this message in error, contact Customer 
Support Services for more information.

The error was somewhat anticipated given the size of the script. I worked around the issue by keeping the insert statements in the same script but breaking up the “union all select” into separate inserts. Here is the full example for you to devour:

use tempdb

if object_id('dbo.student') is not null
 drop table dbo.student

create table dbo.student
(
 id int not null identity(1, 1)
 , name sysname not null
 , residential_address nvarchar(max) not null
)

if object_id('dbo.merge_output') is not null
 drop table dbo.merge_output

create table dbo.merge_output
(
 id int not null
 , name sysname not null
 , residential_address nvarchar(max) not null 
 , merge_action sysname not null
)
 
;with cte_source(name, residential_address)
as
(
 select 'Louie1', '1 King St'
 union all select 'Louie2', '2 King St'
 union all select 'Louie3', '3 King St'
 union all select 'Louie4', '4 King St'
 union all select 'Louie5', '5 King St'
 union all select 'Louie6', '6 King St'
 union all select 'Louie7', '7 King St'
 union all select 'Louie8', '8 King St'
 union all select 'Louie9', '9 King St'
 union all select 'Louie10', '10 King St'
 union all select 'Louie11', '11 King St'
 union all select 'Louie12', '12 King St'
 union all select 'Louie13', '13 King St'
 union all select 'Louie14', '14 King St'
 union all select 'Louie15', '15 King St'
 union all select 'Louie16', '16 King St'
 union all select 'Louie17', '17 King St'
 union all select 'Louie18', '18 King St'
 union all select 'Louie19', '19 King St'
 union all select 'Louie20', '20 King St'
)
merge dbo.student t
using cte_source s on t.name = s.name
when not matched then
 insert (name, residential_address) 
 values (s.name, s.residential_address)
when matched then
 update 
 set residential_address = s.residential_address
output inserted.id
, inserted.name
, inserted.residential_address
, $action
into merge_output;

if object_id('dbo.insert_statement') is not null
 drop table dbo.insert_statement

create table dbo.insert_statement
(
 seq int not null identity(1, 1)
 , cmd nvarchar(max) not null 
)

insert dbo.insert_statement
select concat('union all select ''id:', id
, ', name:', name
, ', residential_address: ', residential_address
, ', event:', merge_action, '''')
from merge_output

update dbo.insert_statement
set cmd = replace(cmd, 'union all ', 'go' + char(13) + char(10) 
+ 'insert dbo.event (message)' + char(13) + char(10))
where seq % 5 = 1

print '
**** Run the command below to export data:
sqlcmd -S ' + @@servername + ' -d tempdb -M -y0 -Q "set nocount on;
select cmd from dbo.insert_statement order by seq" -o "events.sql"'

Let me walk you through the script.

use tempdb

if object_id('dbo.student') is not null
 drop table dbo.student

create table dbo.student
(
 id int not null identity(1, 1)
 , name sysname not null
 , residential_address nvarchar(max) not null
)

dbo.student is the table where we insert/update student details. As a result of the insert/update, we want to capture the actions as events and insert them into an event table.
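
The dbo.event table itself isn’t created here because it lives on the destination server. If you want to run the generated script end to end, a hypothetical minimal definition might look like:

create table dbo.event
(
 id int not null identity(1, 1) primary key
 , message nvarchar(max) not null
)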

if object_id('dbo.merge_output') is not null
 drop table dbo.merge_output

create table dbo.merge_output
(
 id int not null
 , name sysname not null
 , residential_address nvarchar(max) not null 
 , merge_action sysname not null
)
 
;with cte_source(name, residential_address)
as
(
 select 'Louie1', '1 King St'
 union all select 'Louie2', '2 King St'
 union all select 'Louie3', '3 King St'
 union all select 'Louie4', '4 King St'
 union all select 'Louie5', '5 King St'
 union all select 'Louie6', '6 King St'
 union all select 'Louie7', '7 King St'
 union all select 'Louie8', '8 King St'
 union all select 'Louie9', '9 King St'
 union all select 'Louie10', '10 King St'
 union all select 'Louie11', '11 King St'
 union all select 'Louie12', '12 King St'
 union all select 'Louie13', '13 King St'
 union all select 'Louie14', '14 King St'
 union all select 'Louie15', '15 King St'
 union all select 'Louie16', '16 King St'
 union all select 'Louie17', '17 King St'
 union all select 'Louie18', '18 King St'
 union all select 'Louie19', '19 King St'
 union all select 'Louie20', '20 King St'
)
merge dbo.student t
using cte_source s on t.name = s.name
when not matched then
 insert (name, residential_address) 
 values (s.name, s.residential_address)
when matched then
 update 
 set residential_address = s.residential_address
output inserted.id
, inserted.name
, inserted.residential_address
, $action
into merge_output;

We are using a MERGE statement to get the data into the dbo.student table. The changes are captured by the OUTPUT clause into the dbo.merge_output table.

if object_id('dbo.insert_statement') is not null
 drop table dbo.insert_statement

create table dbo.insert_statement
(
 seq int not null identity(1, 1)
 , cmd nvarchar(max) not null 
)

insert dbo.insert_statement
select concat('union all select ''id:', id
, ', name:', name
, ', residential_address: ', residential_address
, ', event:', merge_action, '''')
from merge_output

Now, we are taking data out of the dbo.merge_output table and inserting it into a transient table called dbo.insert_statement. If you select everything from dbo.insert_statement now, you’ll see:

seq cmd
1 union all select 'id:1, name:Louie1, residential_address:1 King St, event:INSERT'
2 union all select 'id:2, name:Louie2, residential_address:2 King St, event:INSERT'
3 union all select 'id:3, name:Louie3, residential_address:3 King St, event:INSERT'
4 union all select 'id:4, name:Louie4, residential_address:4 King St, event:INSERT'
5 union all select 'id:5, name:Louie5, residential_address:5 King St, event:INSERT'
6 union all select 'id:6, name:Louie6, residential_address:6 King St, event:INSERT'
7 union all select 'id:7, name:Louie7, residential_address:7 King St, event:INSERT'
8 union all select 'id:8, name:Louie8, residential_address:8 King St, event:INSERT'
9 union all select 'id:9, name:Louie9, residential_address:9 King St, event:INSERT'
10 union all select 'id:10, name:Louie10, residential_address:10 King St, event:INSERT'
11 union all select 'id:11, name:Louie11, residential_address:11 King St, event:INSERT'
12 union all select 'id:12, name:Louie12, residential_address:12 King St, event:INSERT'
13 union all select 'id:13, name:Louie13, residential_address:13 King St, event:INSERT'
14 union all select 'id:14, name:Louie14, residential_address:14 King St, event:INSERT'
15 union all select 'id:15, name:Louie15, residential_address:15 King St, event:INSERT'
16 union all select 'id:16, name:Louie16, residential_address:16 King St, event:INSERT'
17 union all select 'id:17, name:Louie17, residential_address:17 King St, event:INSERT'
18 union all select 'id:18, name:Louie18, residential_address:18 King St, event:INSERT'
19 union all select 'id:19, name:Louie19, residential_address:19 King St, event:INSERT'
20 union all select 'id:20, name:Louie20, residential_address:20 King St, event:INSERT'

There are 20 rows in the example, but as I mentioned earlier, the real data set I had to deal with was 160,000 rows. Initially, I only added one “insert dbo.event (message)” as the first line, with all the “union all select” lines following it. (Of course, the first “union all” had to be removed, but I won’t show that separately because the solution below covers it.)

update dbo.insert_statement
set cmd = replace(cmd, 'union all ', 'go' + char(13) + char(10) 
+ 'insert dbo.event (message)' + char(13) + char(10))
where seq % 5 = 1

This whole post is really about the statement above. In this UPDATE statement, we are trying to achieve three objectives:

  1. Add the “insert dbo.event (message)” header necessary for the insert statement.
  2. Remove “union all ” from the first line of the SELECT statements.
  3. Break up the script into chunks.

Once exported, the events.sql file looks like:

go
insert dbo.event (message)
select 'id:1, name:Louie1, residential_address:1 King St, event:INSERT'
union all select 'id:2, name:Louie2, residential_address:2 King St, event:INSERT'
union all select 'id:3, name:Louie3, residential_address:3 King St, event:INSERT'
union all select 'id:4, name:Louie4, residential_address:4 King St, event:INSERT'
union all select 'id:5, name:Louie5, residential_address:5 King St, event:INSERT'
go
insert dbo.event (message)
select 'id:6, name:Louie6, residential_address:6 King St, event:INSERT'
union all select 'id:7, name:Louie7, residential_address:7 King St, event:INSERT'
union all select 'id:8, name:Louie8, residential_address:8 King St, event:INSERT'
union all select 'id:9, name:Louie9, residential_address:9 King St, event:INSERT'
union all select 'id:10, name:Louie10, residential_address:10 King St, event:INSERT'
go
insert dbo.event (message)
select 'id:11, name:Louie11, residential_address:11 King St, event:INSERT'
union all select 'id:12, name:Louie12, residential_address:12 King St, event:INSERT'
union all select 'id:13, name:Louie13, residential_address:13 King St, event:INSERT'
union all select 'id:14, name:Louie14, residential_address:14 King St, event:INSERT'
union all select 'id:15, name:Louie15, residential_address:15 King St, event:INSERT'
go
insert dbo.event (message)
select 'id:16, name:Louie16, residential_address:16 King St, event:INSERT'
union all select 'id:17, name:Louie17, residential_address:17 King St, event:INSERT'
union all select 'id:18, name:Louie18, residential_address:18 King St, event:INSERT'
union all select 'id:19, name:Louie19, residential_address:19 King St, event:INSERT'
union all select 'id:20, name:Louie20, residential_address:20 King St, event:INSERT'

Bonus Tip – Exporting string data out of a table without truncation

print '
**** Run the command below to export data:
sqlcmd -S ' + @@servername + ' -d tempdb -M -y0 -Q "set nocount on;
select cmd from dbo.insert_statement order by seq" -o "events.sql"'

If there is a string column longer than 256 characters, it can be fiddly trying to export it out of SSMS. Instead of messing around with query settings, I simply issued a sqlcmd command with the -y0 switch:

-y variable_length_type_display_width
If display_width is 0, the output is truncated at 1 MB.

It works most of the time, but if your string is larger than 1 MB, you need a more sophisticated solution.
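
One such option, sketched below and untested here, is bcp in character mode; as a bulk export tool it isn’t subject to the 1 MB display cap (the server name is a placeholder):

bcp "select cmd from tempdb.dbo.insert_statement order by seq" queryout events.sql -S yourserver -T -c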

My job is done for the day.

Regardless of WAIT_AT_LOW_PRIORITY, index operations can still break cursor operations

Today I had the pleasure of reading about the new index rebuild option WAIT_AT_LOW_PRIORITY. It’s a great addition to the DBA arsenal and you can read all about it here: Testing ALTER INDEX REBUILD with WAIT_AT_LOW_PRIORITY in SQL Server 2014

I have written about how index operations can interrupt cursor operations. Now naturally I wondered whether the new index option makes any difference to this scenario. Let’s work through an example.

1. Create table and index

create table temp
(
    id int not null identity(1, 1) primary key
    , data varchar(10) not null default('test')
)
go

insert temp default values
go 50

create index idx_temp on temp (data)
go

2. Start a cursor loop

declare cur cursor
for
    select id from temp

open cur

declare @id int

fetch cur into @id

while @@fetch_status = 0
begin
    print @id
    waitfor delay '00:00:01'
    
    fetch cur into @id
end

close cur
deallocate cur

3. Rebuild index with the new option

alter index idx_temp 
on temp 
rebuild with 
(
    online = on 
    (
        wait_at_low_priority
        (
             max_duration = 1 minutes
             , abort_after_wait = self
        )
    )
)

We still get the error:

1
2
3
4
5
6
Msg 16943, Level 16, State 4, Line 18
Could not complete cursor operation because the table schema changed 
after the cursor was declared.

I guess the lesson for today is that no matter how seemingly harmless/efficient the index operation is, you still need to test it thoroughly before applying it to prod.

Powershell’s Write-Host sometimes is better than Write-Output

Despite what’s been said about the dangers of Write-Host, I have found a scenario where it is actually the desired cmdlet to use over Write-Output.

Consider the code snippet below:

function Get-Array
{
    "I am not supposed to be in the array."

    $a = @("I am the only item in the array.")
    return $a
}

$array = Get-Array

cls

"`$array.count: [$($array.count)]"
$array

When I first encountered this piece of code (the simplified version shown above), I was quite frustrated with the output:

$array.count: [2]
I am not supposed to be in the array.
I am the only item in the array.

Why the hell was my print statement showing up in the array? Well, after hours of debugging and googling, I now know it has everything to do with the behaviour of Write-Output:

The Write-Output cmdlet sends the specified object down the pipeline to the next command. If the command is the last command in the pipeline, the object is displayed in the console.
Write-Output sends objects down the primary pipeline, also known as the “output stream” or the “success pipeline.” To send error objects down the error pipeline, use Write-Error.
This cmdlet is typically used in scripts to display strings and other objects on the console. However, because the default behavior is to display the objects at the end of a pipeline, it is generally not necessary to use the cmdlet. For instance, Get-Process | Write-Output is equivalent to Get-Process.

The Get-Array function was explicitly returning an array object, but the bare string at the top of the function had also been sent down the primary pipeline, so the assignment captured both. Gotcha!

A simple solution to this gotcha is to replace the implicit Write-Output with Write-Host:

function Get-Array
{
    Write-Host "I am not supposed to be in the array."

    $a = @("I am the only item in the array.")
    return $a
}

$array = Get-Array

cls

"`$array.count: [$($array.count)]"
$array

As expected, you’d get the output below:

$array.count: [1]
I am the only item in the array.

This works because Write-Host behaves quite differently to Write-Output:

Write-Host sends the objects to the host. It does not return any objects. However, the host might display the objects that Write-Host sends to it.
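
Write-Host is not the only option, by the way. If the stray string was never meant to be displayed at all, you can simply discard it instead; a couple of equivalent sketches of the same function:

function Get-Array
{
    # assign to $null to keep the string out of the pipeline
    $null = "I am not supposed to be in the array."
    # or: "I am not supposed to be in the array." | Out-Null

    $a = @("I am the only item in the array.")
    return $a
}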

Understanding the differences between these seemingly simple commands can save loads of debugging time, especially in the scripting world.

How to add a custom domain name to a Windows Azure website

In my last blog post, I talked about migrating from a web host to Windows Azure. One of the remaining cleanup tasks was to point the domain name from something like louiebao-blog.azurewebsites.net to louiebao.net. The actual steps may differ slightly depending on who your domain registrar is, but the general idea should remain the same. I am using DNSimple, so I will be showing you screenshots from them.

Add a service for Windows Azure

After logging in to DNSimple, click on the desired domain name, go to “Add or edit services”, find Windows Azure and click on Add. Once presented with the “Setup Windows Azure Service” screen, enter the subdomain for your Azure site. This is the part before .azurewebsites.net, in my case louiebao-blog.


After adding the service, you will see a few new DNS records under the Windows Azure service. Note the A record isn’t there, as DNSimple doesn’t know the IP address at this point; you will create it in a later step.


Add the custom domain in Windows Azure

In Windows Azure’s management portal, go to the “Manage custom domains” dialog and enter the custom domain name, in my case louiebao.net. At this point, the awverify.louiebao.net CNAME record created in the previous step should be in action. Once Azure verifies that you own the domain, you will be able to add the domain name to the list.

Once this step is done, you can optionally remove the awverify CNAME record, as it has no further use.


Add an A record in DNSimple

Take the IP address from the “Manage custom domains” dialog and create a DNS A record with it in DNSimple.
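
Put together, the end state of the relevant records looks roughly like the zone snippet below. The A record’s IP is a placeholder from the documentation range (use the real one from the Azure dialog), and the awverify target shown is my recollection of what DNSimple’s Windows Azure service creates:

awverify.louiebao.net.  CNAME  awverify.louiebao-blog.azurewebsites.net.
louiebao.net.           A      203.0.113.10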


Remove DNS records from the old host

Go back to your old host and remove the now obsolete DNS records.

Wait for an hour or two, or even longer

I am not joking; with DNS changes, certain things just take a long time to refresh. I am not an expert on DNS, but as long as your configuration is correct, it will work eventually. The impatient ones can go straight to the next step and keep checking on the status.

Check

I used http://www.digwebinterface.com/ to make sure the domain name can be successfully resolved to the desired IP address.


Now my blog can be accessed from louiebao.net as well as louiebao-blog.azurewebsites.net. In case you are wondering, the default Azure domain name louiebao-blog.azurewebsites.net cannot be removed as far as I know.

One more thing

I quickly realised after publishing this post that I had missed a step. The WordPress settings were still pointing to louiebao-blog.azurewebsites.net and needed to be updated as well.


How to migrate a WordPress blog to Windows Azure

Just recently, I migrated this WordPress blog to Windows Azure; I am actually writing this post on the new platform. I learnt a few lessons along the way, and it is always a good idea to document them while everything is still fresh in my head.

Like doing anything for the first time, I started by searching for How To articles on the web. One frustrating thing I found was that there was a lot of out-of-date info around, and after following a few step-by-step guides without success, I eventually gave up and decided to go it alone.

After a few failed attempts, I stopped following other people’s instructions and started thinking. What is the most important content I care about? The answer was obvious: the blog posts. Do I care about themes, plugins, comments, etc.? Yes, but they are not critical. At a bare minimum, I would be content if only the blog posts were migrated across. In that case, I just needed to back up and restore the WordPress database. Once I realised what I needed to achieve, the migration path became much clearer. What I found later was that a straight backup and restore would not have worked, as the new environment was not empty; there was already data in the database.

Now I will cut to the chase and describe what I did step by step.

1. Install a plugin to work with WordPress’s MySQL database

The plugin I used was called Adminer (formerly phpMinAdmin). I simply went to Plugins -> Add New -> Searched for Adminer -> Install -> Activate.

2. Decide on tables to export

Go to Tools -> Adminer -> Start Adminer inside

Once in Adminer, I was presented with 11 tables; the trick was to work out which ones I needed to export.

wp_options
I didn’t need to migrate this table because the new WordPress installation on Windows Azure contained all my options already and I could go back and match the settings manually anyway.

wp_users, wp_usermeta
Since this is my personal blog, I am the only user I care about, so there was no need to migrate these tables either, because I already existed as a user in the new environment. My user ID was the same in both the old and the new databases, which can be very handy during a migration.

wp_posts, wp_postmeta
These tables looked important; I definitely wanted to migrate them.

wp_terms, wp_term_relationships, wp_term_taxonomy
Yep, I also wanted to retain the categories.

wp_comments, wp_commentmeta
I decided not to bother with comments.

wp_links
The table was empty (I later found out it was deprecated).

3. Export

I clicked on Export in the panel on the left and selected the following options:

Output: save
Format: SQL
Database: <BLANK>
Tables: <BLANK>
Data: INSERT

Before exporting, I made sure I ticked the wp_postmeta, wp_posts, wp_terms, wp_term_relationships and wp_term_taxonomy tables.

4. Edit the exported SQL file

Once I had exported the tables, I started to inspect the SQL file. There were links to the old site littered throughout the file. I simply did a search and replace. (For example, from www.oldsite.com/blog to www.newsite.com)
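
Alternatively, if you’d rather not hand-edit a large export, the same replace can be done in SQL after the import. A rough sketch using the placeholder URLs above; note a blind replace like this is only safe on plain-text columns, not on serialized values such as those in wp_options (which I didn’t migrate anyway):

update wp_posts
set post_content = replace(post_content, 'www.oldsite.com/blog', 'www.newsite.com')
, guid = replace(guid, 'www.oldsite.com/blog', 'www.newsite.com');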

5. Import into the new database

I then went into Adminer on the Windows Azure side and imported the SQL file. No errors. All good.

6. Check

Opened up the new site and it was a relief seeing all the old blog posts.

7. Publish a new blog post

If you are reading this, that means I have successfully migrated the blog from my old host to Azure. At least the main part.

8. What’s next?

There is still a bit of cleaning up to do as all those images and files from the old site also need to be moved to the new site. But that’s for another day. Stay tuned.

Powershell file is not digitally signed

A PowerShell profile that I had happily used for over a year suddenly turned on me today. Normally, opening the command prompt kicks off PowerShell, which then loads posh-git via a PowerShell profile. But instead of the super nice git prompt, I received the following error:

File cannot be loaded. The file \\somenetworkserver\myhomefolder\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 is not digitally signed.

The file was created by me in the first place, so digital signatures should have no relevance here. I tried to resolve the issue by tinkering with Set-ExecutionPolicy Unrestricted and Set-ExecutionPolicy Bypass; needless to say, they didn’t work. If it were that simple, I wouldn’t be writing this post.

Luckily I found this blog article which pointed me in the right direction: http://blogs.technet.com/b/heyscriptingguy/archive/2012/10/29/how-to-run-powershell-scripts-from-a-shared-directory.aspx

By default, computers running Windows Server include UNC paths in the Internet security zone. Windows PowerShell is actually responding to the security zone when it throws the error for the UNC paths.

Alright, we are getting somewhere. Since the PowerShell profile sits on a network share, this must be it.

Fix Attempt No. 1
OK, let’s see if I can add the UNC to the intranet zone. Bad news! The setting was locked down by Group Policy. Basically, any fix that needs to manipulate Group Policy is out of the question.


OK, don’t panic, think. So I cannot keep the profile file on the network share; what if I move it to a local drive? Bingo.

Fix Attempt No. 2
After wasting a good 10 minutes on Stack Overflow, I conceded that it is not possible to move the profile files, as they can only be created in their default locations. Wait a minute, they? Files? How many profile files are there? I then fired up get-help about_profiles and, like a compass, it led me to the solution.

Fix Attempt No. 3
Apparently $profile.AllUsersAllHosts is located in $PsHome and thank god it is on the local drive: $PsHome = C:\Windows\System32\WindowsPowerShell\v1.0

At this point, I had reached enlightenment. I joyfully carried out the following steps, and my profile was once again happily loading posh-git on startup:

  1. Create a new profile: new-item -type file -path $profile.AllUsersAllHosts -force
     (A small hiccup here: I was looking for Microsoft.PowerShell_profile.ps1 after running the command, but the file was actually named profile.ps1.)
  2. Move the content from Microsoft.PowerShell_profile.ps1 to profile.ps1.
  3. Remove Microsoft.PowerShell_profile.ps1 from the network share. (All three steps are sketched below.)
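
A minimal sketch of those three steps in one go (run from an elevated prompt, since $PsHome lives under C:\Windows):

# create the AllUsersAllHosts profile on the local drive
New-Item -Type File -Path $profile.AllUsersAllHosts -Force

# copy the old profile's content across, then remove it from the share
Get-Content '\\somenetworkserver\myhomefolder\WindowsPowerShell\Microsoft.PowerShell_profile.ps1' |
    Set-Content $profile.AllUsersAllHosts
Remove-Item '\\somenetworkserver\myhomefolder\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'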

Using computed columns to solve performance problems

Sometimes when you are working with third-party vendor code, or maintaining legacy code, things can get tricky, especially when a query doesn’t perform. Modifying the query by adding hints is one way to address the issue, but more often than not you will find your hands are tied, unable to change a single line of code. I had to deal with something like that once and was fortunate enough to have computed columns at my disposal.

A computed column, although easy to understand, is probably not the first tool people associate with performance tuning. Yet it can be a very powerful weapon in your performance tuning arsenal. I will demonstrate with an example.

Say you are working with a table and a view like the ones below.

use tempdb
go

if object_id('trade_id_lookup') is not null
    drop table trade_id_lookup
go

create table trade_id_lookup
(
    trade_id bigint not null identity(1, 1) primary key
    , system_id int not null
    , portfolio sysname not null
    , product_id int not null
)
go

if object_id('v_trade_id_lookup') is not null
    drop view v_trade_id_lookup
go

create view v_trade_id_lookup
as
select cast(trade_id as nvarchar) as 'trade_id'
    , system_id
    , portfolio
    , product_id
from trade_id_lookup
go

insert trade_id_lookup (system_id, portfolio, product_id)
values (1, 'portfolio', 9)
go 10000

The view is then used throughout the system for looking up trade attributes given a trade_id.

select system_id, portfolio, product_id
from dbo.v_trade_id_lookup
where trade_id = '999'

It all looks quite innocent until you start to interrogate the execution plan.

A clustered index scan; why? Due to the CAST expression embedded in the view, the optimizer is not able to perform an index seek; it has to convert every single row instead. If you hover over the index scan icon, it will show something like:

(CONVERT(nvarchar(30),[tempdb].[dbo].[trade_id_lookup].[trade_id],0)=CONVERT_IMPLICIT(nvarchar(4000),[@1],0))

Bear in mind that I intentionally stripped away all the non-essential code for the demo. Imagine looking at a large execution plan filled with joins and index scans; it is not an easy task to spot something like this without looking at the code inside the view. This is one of the reasons I hate deeply nested views: they are a product of the object-oriented paradigm and do not belong in the database world. Or maybe it’s just my inability to understand INCEPTION.

Now that we know what’s causing the performance issue, what can we do about it? One quick answer would be to remove the CAST expression inside the view. I wish it were that simple; remember, we are working with code whose intent we don’t necessarily understand. Modifying queries, albeit a seemingly simple line, is fraught with danger and should be your last resort.

So we cannot change the queries or the view. Then what? Fortunately, I could modify the underlying table, provided that no business logic changed. This is where the computed column comes in handy.

alter table trade_id_lookup
add trade_id_string as cast(trade_id as nvarchar)
go 
create index idx_louie on trade_id_lookup (trade_id_string)
go

By copying the CAST logic from the view into a computed column, we achieved two things:

  1. An index can now be created on the converted trade_id.
  2. No code has to be modified and it all just magically works.

Executing the SELECT query again yields the execution plan below; see how the index seek is now in use?


The key lookup is a by-product of the demo; you can optimize the query further by adding the referenced columns to the INCLUDE list of the index.
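
Something along these lines (a sketch; idx_louie_covering is a made-up name):

create index idx_louie_covering on trade_id_lookup (trade_id_string)
    include (system_id, portfolio, product_id)
go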

–oo00oo–

To understand a bit more about how the optimizer utilises indexes on computed columns, please refer to Paul White’s answer to the question Index on Persisted Computed column needs key lookup to get columns in the computed expression

Got “Cannot connect to WMI provider” when trying to open SQL Server Configuration Manager

I tried to open SQL Server Configuration Manager one day and got a sad face error dialog:


A quick search on Google returned a few positive results. One of them looked especially promising: http://methmal132.wordpress.com/2013/10/04/how-to-resolve-the-sql-error-cannot-connect-to-wmi-provider

The blog post is a replica of the knowledge base article http://support.microsoft.com/kb/956013, but with pretty pictures.
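
For the record, the workaround in the KB article essentially boils down to recompiling the SQL Server WMI provider’s MOF file with mofcomp. The version folder (90, 100, 110 and so on) depends on your SQL Server version, so treat the path below as indicative only:

mofcomp "C:\Program Files (x86)\Microsoft SQL Server\100\Shared\sqlmgmproviderxpsp2up.mof"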

If you are here because of the same error and the above workaround resolved it, you can stop reading. Unfortunately for me, I had compounded issues: it turned out my WMI repository was corrupt and I had to fix that first.


Off to Google again, and luckily I found this post: http://katyscode.wordpress.com/2007/02/03/tutorial-how-to-fix-wmi-corruption

Impressed with the level of technical detail in the post, I religiously followed the instructions to rebuild the WMI repository word for word.
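
From memory, the gist of the rebuild is stopping the WMI service, renaming the repository folder so Windows recreates it, then starting the service again. Roughly (a sketch from recollection; follow the linked tutorial for the real procedure):

net stop winmgmt
ren %windir%\System32\wbem\Repository Repository.old
net start winmgmt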


Once the WMI woes were sorted, I returned to the KB article for the workaround and that fixed my issue eventually!
