# Wednesday, July 22, 2009

Sometimes SQL Server gets it wrong, and the handling of NULL in UNIQUE constraints is one of those places. People call it a bug and request a fix, Microsoft calls the requests common and offers workarounds, and overall the current SQL Server behavior does not adhere to the standard (ISO 9075).

To see the bug reports (and vote): 126533, 299229, 311223 (closed), 387273. They are basically duplicates of each other; of those, request 299229 has the most attention and the best discussion.

Available workarounds;

  • Filtered index (starting with SQL Server 2008)
  • Indexed view (starting with SQL Server 2000)
  • Triggers (not recommended, as the check only happens after the modification has completed, so an invalid change has to be rolled back)

... and don't say Microsoft didn't get it right because it's Microsoft... Access implements UNIQUE constraints correctly.
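To illustrate the first workaround: a filtered unique index (SQL Server 2008 and later) gives you standard-conforming behavior, allowing multiple NULLs while enforcing uniqueness over the real values. Table and column names below are made up for the example.

```sql
CREATE TABLE dbo.Customer
( CustomerID int IDENTITY(1,1) PRIMARY KEY
, TaxNumber varchar(20) NULL -- should be unique when present, NULL allowed more than once
);

-- Unique over the non-NULL values only:
CREATE UNIQUE NONCLUSTERED INDEX ix_Customer_TaxNumber
ON dbo.Customer (TaxNumber)
WHERE TaxNumber IS NOT NULL;
```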

Wednesday, July 22, 2009 3:00:57 PM (W. Europe Daylight Time, UTC+02:00)
# Wednesday, June 17, 2009

I've been holding this one off for quite some time, but finally bit the bullet... migrate the server. Coming from Windows Small Business Server 2003 (32-bit), going to Windows Server 2008 with Exchange 2007 (both x64), including transferring the web site. There were a few nasty pieces in the process and I didn't have/take the time/resources to test it all thoroughly before jumping in... so I got a disruption of about one hour for the blog and inbound SMTP.

First issue I encountered was the installation of the server, which I ordered from Dell without an operating system. I am a Microsoft Action Pack subscriber, so I slammed the 64-bit Windows Server from the Action Pack on the system, entered the Product Key and then went on to phone-based activation. After putting in the last confirmation code and pushing the next button... my activation wasn't accepted!!! ... to cut a long story short, after dozens of voice menu selections and four people, I found the guy who could help me out: Philip. Still, it took Philip and me about 20 minutes to find the actual problem... I wasn't an Action Pack subscriber anymore!!! Technical story here, but it happened when Action Pack was integrated more tightly with the Partner Program. Somehow I didn't fully complete the Action Pack renewal transaction in February, but the partner site didn't show that (it only showed I was still good for the Partner Program).

A couple of days later, I could go ahead with the migration process. It turned out to be easier than I expected after I found a very useful document about Upgrading Small Business Server 2003 to Exchange 2007. Some pointers on using the document though;

  • Transferring the other FSMO roles (only the Schema Master is described); see Determining FSMO Role Holders
  • With Windows Server 2008 you NEED the Exchange 2007 SP1 DVD, Exchange 2007 without SP1 will not pass installation checks.
  • The section Migrate mail send does not cover migrating mail receive. This caused me a bit of downtime... The issue was
    SMTP error from remote mail server after MAIL FROM:<abcd@efg.hi> SIZE=2703:
    host abcdef.ghi []: 530 5.7.1 Client was not authenticated

    And can be solved in the following manner in Exchange Management Console:
    • Go to the Server Configuration, Hub Transport. Select your server and add Anonymous to the Default SERVERNAME Receive Connector. While you're there, you can increase receiving message size here too.

And there were parts I did differently;

  • I continue to rely on Vamsoft's ORF for the anti-spam. Version 4.3 is fully compatible with Windows Server 2008 and Exchange 2007 and an ease to configure.
  • Since I will continue using the same DNS name for OWA and Windows Mobile ActiveSync, I transferred the self-signed certificate from the SBS 2003 machine (find it in IIS, web site, Directory Security tab) to the new server.

My other bit of downtime was the blog... simply copying from the old server to the new one didn't cut it. So I downloaded the latest release of dasBlog from CodePlex and copied it into the virtual directory on the new server. Next I had to;

  • Copy my own theme
  • Compare (and adjust) the \SiteConfig files
  • Transfer \Content files and \Logs files
  • Make sure the account running the Application Pool has read on all dasBlog folders and change on \SiteConfig, \Content and \Logs.
  • The Application Pool is running in Classic Managed Pipeline Mode

Guess I can say I didn't plan for the blog to be upgraded... that just had to happen.

Wednesday, June 17, 2009 9:31:05 PM (W. Europe Daylight Time, UTC+02:00)
# Friday, May 15, 2009

Microsoft launched a new preview site for SQL Server 2008 R2. You can sign up for notifications about the releases of CTPs.

From an administrative viewpoint, the focus seems to be on tools to manage, utilize and consolidate instances and machines.

Towards usability (mainly business intelligence), enhancements to integrate with Office 2010 (mainly Excel and SharePoint) are featured. Also further integration with Virtual Earth is shown.

Finally there is mention about something called Master Data Services and the Stewardship Portal. In concept a very powerful way to create and maintain a data warehouse bus.

Friday, May 15, 2009 1:30:57 PM (W. Europe Daylight Time, UTC+02:00)
# Tuesday, March 24, 2009

Noticed this one on Born to Learn... Microsoft has two sessions today/tomorrow on preparing for the exam 70-432

Tuesday, March 24  7:30 A.M. Pacific Time   (What time is this in my region? 2009-03-24 15:30 CET)

Tuesday, March 24  5:30 P.M. Pacific Time   (What time is this in my region? 2009-03-25 01:30 CET)

It slipped past me too, but since I can manage to squeeze the first session in my schedule, I'll attend. Follow up and link to the recording to be posted later.

My impression of this session is that Rob Boek takes you through the main areas of interest for the exam; at least it aligned quite well with what I can remember from my beta test. Good stuff if you need an overview of the most important topics of the exam to prepare for.

Updated 2009-04-02

Tuesday, March 24, 2009 9:46:13 AM (W. Europe Standard Time, UTC+01:00)
# Monday, March 2, 2009

Updated 2010-07-24; added timeout to the receive command in the stored procedure. For more info, see Service Broker can make your transaction log big.

Event Notifications are based on Service Broker. This brings an advantage and a disadvantage: enhanced scalability at the cost of increased complexity (and the lack of a user interface, so we have to take it to T-SQL code). Let's take it there step by step for SQL Server 2005 and 2008;

  1. Some code to cause deadlocks.
  2. See what the deadlocks look like using the Trace Flags.
  3. See what the deadlocks look like in Profiler.
  4. Use the Event Notification on the deadlocks.
  5. Sending out the mail.

The code shown as part of this article is also available at this link. The code to create the database for the deadlocks is available at this link. The code works with SQL Server 2005 SP2 and SQL Server 2008.

1. Some code to cause deadlocks.

To get the code to cause the deadlocks, click this link. Run the code from SQL Server Management Studio (SSMS) to create the DeadLockSample database. Next create two additional connections (New Query) to the DeadLockSample database, execute the procedure procLockedTran1 on one connection and procLockedTran2 on the other connection. There is a 5 second delay in the two procedures between locking the first and second resource. So if you execute the two procedures within 5 seconds, you get your deadlock. You could also change the delay in the procedures if you think 5 seconds is too short/long.

Normally a process is terminated when it is chosen as the deadlock victim. For the procedures created by the script, the error is handled, so the process that is the deadlock victim just reports that it is the victim and the connection is not terminated.

If you want to create a deadlock in any of the following scenarios, just run procLockedTran1 and procLockedTran2 on two separate connections simultaneously.
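For reference, the essence of the two linked procedures can be sketched like this (table and column names are made up; the actual downloadable script differs in detail):

```sql
-- Connection 1 (the procLockedTran1 pattern); connection 2 runs the same
-- with TableA and TableB swapped, so the resources are locked in opposite order.
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.TableA SET Col1 = Col1;  -- lock the first resource
    WAITFOR DELAY '00:00:05';           -- give the other connection time to lock its first resource
    UPDATE dbo.TableB SET Col1 = Col1;  -- request the second resource -> deadlock
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1205
        SELECT 'This connection was chosen as the deadlock victim.';
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;
END CATCH
```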

2. See what the deadlocks look like using the Trace Flags.


To get the deadlocks recorded in the SQL Server Error Log, either set the trace flag as startup parameter in SQL Server Configuration Manager (see picture) which will cause the trace flag to be active each time you start your SQL Server (until you remove the trace flag again), or issue a DBCC TRACEON (1222,-1) command which sets the trace flag until you stop SQL Server or issue a DBCC TRACEOFF (1222,-1).

With the trace flag active, you can now see the deadlock appear in your SQL Server Error Log.
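A quick way to verify the flag is indeed set (the -1 asks for the globally active flags):

```sql
-- Shows whether trace flag 1222 is active globally:
DBCC TRACESTATUS (1222, -1);
```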

3. See what the deadlocks look like in Profiler.

... or in SSMS for that matter. With the SQL Server tools it is possible to display deadlock graphs; XML documents (with .xdl extension) describing the deadlock information of one or more deadlocks. When the deadlock graph event (EventClass 148) with the TextData column is included in the trace definition, the XML to create the deadlock graph is captured (both through the Profiler graphical tool and the sp_trace procedures) and the graph will automatically be displayed when opened in Profiler. The graphical Profiler also has the option to write deadlock graphs directly to a file. Note that the graphical presentation of the deadlock is limited compared to the information in the .xdl document. The information in the .xdl document is the same information as that written to the SQL Server Error Log with trace flag 1222. One extra comment: if you filter your trace on database, you won't get the deadlock graph, as noticed by Kalen Delaney.

4. Use the Event Notification on the deadlocks.

Event Notification lets you capture events that can occur at server, database or broker queue level; for a list of events, query sys.event_notification_event_types. For our purpose, the event type 1148, DEADLOCK_GRAPH, is of interest. Also note that DEADLOCK_GRAPH is a trace event, which can only be captured at server level.
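For example, to confirm the event type mentioned above:

```sql
SELECT type, type_name
FROM sys.event_notification_event_types
WHERE type_name = 'DEADLOCK_GRAPH'; -- returns type 1148
```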

The second thing needed is a broker SERVICE to send the event to. Since we'll be handling the events at server level, the SERVICE (and QUEUE) will be created in the msdb database. There is no need to create the MESSAGE TYPEs or CONTRACTs, as they already exist for Event Notification.

USE msdb;
GO
-- Create a queue to receive messages.
CREATE QUEUE queDeadLock_Graph;
-- Create a service on the queue that references
-- the event notifications contract.
CREATE SERVICE svcDeadLock_Graph
ON QUEUE queDeadLock_Graph
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO
-- Now query sys.databases for the service_broker_guid of the msdb database.
SELECT service_broker_guid FROM sys.databases WHERE name = 'msdb';

The result of the query for the Service Broker GUID of the msdb database is needed to point the event notification to the right database for the service.

-- Create the event notification.
CREATE EVENT NOTIFICATION entDeadLock_Graph
ON SERVER
FOR DEADLOCK_GRAPH
TO SERVICE 'svcDeadLock_Graph'
, '2A7C5300-7C64-43F5-AB10-5B079D07678E'; -- the GUID for msdb goes here.

You can look at the content of the queue through SELECT * FROM msdb.dbo.queDeadLock_Graph, as a queue in fact is a table. Next step will be processing the messages that get into the queue and more specifically get the deadlock details out of the message_body. The message body looks binary, but if you query it like this;

SELECT CAST(message_body AS xml) FROM msdb.dbo.queDeadLock_Graph

you'll notice that it is actually XML, where the textdata node holds the xdl structure of the deadlock graph.

5. Sending out the mail

Okay, we're in Service Broker land by now. Event Notification acts as the Initiator Service and svcDeadLock_Graph is the Target Service. In this scenario, the only thing left to be done is specify how the queue should be processed. The processing shall be done by a stored procedure that will be tied to the queue as the activation program. Once something comes into the queue, this stored procedure will be invoked... kind of like an insert trigger on a normal table. The specification for the stored procedure will be:

  1. Extract the deadlock graph from the message_body and store it in a variable.
  2. Extract the datetime of the deadlock graph notification and store it in a variable.
  3. Store the datetime and deadlock graph in a table.
  4. Send the datetime as part of the message and deadlock graph as attachment in an email. This does require database mail to be set up, if you haven't, an error will be logged in the table.

The code to create the table that holds the deadlock information (and the reason why no mail was sent):

USE msdb;
GO
CREATE TABLE dbo.tblDeadLock_Log
( DeadLock_ID int IDENTITY(1,1) CONSTRAINT pk_tblDeadLock_Log PRIMARY KEY
, DeadLock_Detected datetime
, DeadLock_Graph xml
, NoMailReason nvarchar(2048) NULL
);

The stored procedure to be used as the activation program on the queue; this is where the real work is done. Alter the @profile_name and @recipients at the sp_send_dbmail call to match your database mail setup.

CREATE PROCEDURE dbo.procReceiveDeadLock_Graph
AS
BEGIN
SET NOCOUNT ON;
DECLARE @conversation_handle uniqueidentifier
, @message_body xml
, @message_type_name nvarchar(128)
, @deadlock_graph xml
, @event_datetime datetime
, @deadlock_id int;
BEGIN TRANSACTION;
WAITFOR
( RECEIVE TOP(1) @conversation_handle = conversation_handle
, @message_body = CAST(message_body AS xml)
, @message_type_name = message_type_name
FROM dbo.queDeadLock_Graph)
, TIMEOUT 10000;
-- Line added 2010-07-24;
-- /ssb/2010/07/24/ServiceBrokerCanMakeYourTransactionLogBig.aspx

-- Validate message
IF (@message_type_name = 'http://schemas.microsoft.com/SQL/Notifications/EventNotification'
AND @message_body.exist('(/EVENT_INSTANCE/TextData/deadlock-list)') = 1)
BEGIN
-- Extract the info from the message
SELECT @deadlock_graph = @message_body.query('(/EVENT_INSTANCE/TextData/deadlock-list)')
, @event_datetime = @message_body.value('(/EVENT_INSTANCE/PostTime)[1]','datetime');
-- Put the info in the table
INSERT dbo.tblDeadLock_Log (DeadLock_Detected, DeadLock_Graph)
VALUES (@event_datetime, @deadlock_graph);
SET @deadlock_id = SCOPE_IDENTITY();
-- Send deadlock alert mail.
-- Requires configured database mail, will log an error if not (or anything else goes wrong).
BEGIN TRY
DECLARE @subj nvarchar(255), @bdy nvarchar(max), @qry nvarchar(max), @attfn nvarchar(255);
SELECT @subj = 'A deadlock occurred on ' + @@SERVERNAME
, @bdy = 'A deadlock occurred at ' + CONVERT(varchar(50), @event_datetime, 120) + ' on SQL Server: ' + @@SERVERNAME +
'. See attached xdl-file for deadlock details.'
, @qry = 'SET NOCOUNT ON; SELECT DeadLock_Graph FROM msdb.dbo.tblDeadLock_Log WITH (READUNCOMMITTED) WHERE DeadLock_ID = ' + CAST(@deadlock_id AS varchar(10))
-- Locking hint is to prevent this dynamic query from being blocked by the lock held by the insert. The dynamic SQL will not run from inside this transaction.
, @attfn = @@SERVERNAME + '_' + CAST(@deadlock_id AS varchar(10)) + '.xdl';
EXEC sp_send_dbmail @profile_name = 'YourMailProfile' -- change to match your dbmail setup
, @recipients = 'dba@yourdomain.tld' -- change to match your dbmail setup
, @subject = @subj
, @body = @bdy
, @query = @qry
, @attach_query_result_as_file = 1
, @query_attachment_filename = @attfn
-- http://support.microsoft.com/kb/924345
, @query_result_header = 0
, @query_result_width = 32767
, @query_no_truncate = 1;
END TRY
BEGIN CATCH
UPDATE dbo.tblDeadLock_Log
SET NoMailReason = ERROR_MESSAGE()
WHERE DeadLock_ID = @deadlock_id;
END CATCH
END
ELSE IF @conversation_handle IS NOT NULL -- Not an event notification with deadlock-list
END CONVERSATION @conversation_handle;
COMMIT TRANSACTION;
END

Finally, tie the stored procedure to the queue so incoming notifications will be processed:

ALTER QUEUE dbo.queDeadLock_Graph
WITH ACTIVATION
( STATUS = ON
, PROCEDURE_NAME = msdb.dbo.procReceiveDeadLock_Graph
, MAX_QUEUE_READERS = 1
, EXECUTE AS OWNER );

Now try the procedures procLockedTran1 and procLockedTran2 and see the results in the msdb.dbo.tblDeadLock_Log and in your mail (if configured).

Monday, March 2, 2009 12:30:01 PM (W. Europe Standard Time, UTC+01:00)
# Friday, February 27, 2009

Okay, here's the deal; deadlocks sometimes occur and you want to be notified as they happen.

So you define a SQL Agent Alert on error 1205 to send you an e-mail notification... but it won't work, as error 1205 isn't logged.

How to get the error logged? Either set the severity level to 19 or above when the message is created (does not apply to 1205), or flag the message to be logged to the event log, which can be done using sp_altermessage... at least, this was the way to do it with SQL Server 2000, but one of the changes for SQL Server 2005 says:

sp_altermessage cannot be used to change the logging behavior of system messages (messages with Message ID < 50000). To audit system messages, use SQL Trace and the User Error Message Event Class. For more information, see Introducing SQL Trace.

For SQL Server 2005 this used to be a dead end, but as of Service Pack 3 it is again possible to alter sys.messages entries with ID < 50000 so they are written to the Windows NT Application Event Log. Specifically for deadlocks you'd run:

EXEC sp_altermessage 1205, 'WITH_LOG', 'true'

but it would apply to other system events you want logged too.
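To confirm the change took effect, check the is_event_logged flag in sys.messages (1033 is the English language row):

```sql
SELECT message_id, language_id, is_event_logged
FROM sys.messages
WHERE message_id = 1205 AND language_id = 1033; -- is_event_logged = 1 after the change
```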

Update 2009-04-08; this issue is fixed in Service Pack 1 for SQL Server 2008. For SQL Server 2008 (up until Cumulative Update 3) it was still impossible to alter sys.messages entries with ID < 50000. Though it was mentioned (2008-07-28) on Connect in FeedbackID 294122 that this issue would be fixed for SQL Server 2008 too, so far (2009-02-27) it isn't.

Besides the fact that the alert option was impossible with SQL Server 2008 before SP1, it can only tell you a deadlock occurred. Should you need more information on the deadlock, your next option would be trace flags 1204 or 1222, the difference between the two being the amount of detail. Trace flag 1222 will be used in the next post as it provides the most detail. Setting the trace flags writes the deadlock information to the SQL Server Error Log, but not to the Windows NT Application Event Log... so there are two (and a half) options remaining:

Process the Error Log with the undocumented sp_readerrorlog as part of the alert response... but that won't work for SQL Server 2008 (yet) and you have a better option for SQL Server 2005. For more info see MSSQLTips.

SQL Server 2005 & 2008: Event Notification

SQL Server 2005 & 2008: WMI Alert, see MSDN
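As a sketch of that last option (the alert name and operator are made up; the WMI namespace shown is for a default instance, a named instance uses its instance name instead of MSSQLSERVER, and the operator must already exist):

```sql
EXEC msdb.dbo.sp_add_alert
  @name = N'Deadlock graph alert'
, @wmi_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER'
, @wmi_query = N'SELECT * FROM DEADLOCK_GRAPH';

EXEC msdb.dbo.sp_add_notification
  @alert_name = N'Deadlock graph alert'
, @operator_name = N'DBA Team' -- hypothetical operator
, @notification_method = 1;    -- 1 = e-mail
```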

So next post will be on getting your deadlock info through Event Notification.

Friday, February 27, 2009 10:47:32 PM (W. Europe Standard Time, UTC+01:00)
# Thursday, February 26, 2009

As I blogged before, relog is quite useful. The syntax examples on the TechNet page however are pretty much useless if you want to go to SQL Server. So let's look at a few scenarios and the syntax to make them work, but before that, let's get the DSN and the counter data file.

Configure the Data Source Name to the SQL Server as System DSN based on the SQL Server driver (SQLSRV32.DLL)... Native Client does NOT work. The name of the DSN in the syntax samples will be PerfDB.

Next is the file with performance data. Relog will detect the format from its internal structure (if it is a valid counter log file), so you do not have to specify if your file is comma separated (.csv), tab separated (.tsv) or binary (.blg). Since binary is the most practical format for large amounts of data, the file for the syntax examples will be c:\my perflogs\p_log01.blg (and consecutive numbers for any next file).

One final comment before going to the scenarios: relog creates three tables in the SQL Server database targeted by the DSN (if they do not already exist). These tables are;

  • dbo.CounterData (holds the actual values of the counter)
  • dbo.CounterDetails (holds the machine, object, counter and, if applicable, the instance)
  • dbo.DisplayToID (holds information on the perfmon data that was loaded)

Scenario 1: Load all counters into the database

relog "c:\my perflogs\p_log01.blg" -f SQL -o SQL:PerfDB!1stRun

All clear, except for the blue 1stRun (and the exclamation mark preceding it). The reason is that it is possible to write multiple logs to the same database. Each time log information is written to the database, a new DisplayToID record is created. The name given in blue is the DisplayString. If the DisplayString was used before, the data will be added under the same GUID.

Scenario 2: Load all counters between begin and end datetime into the database

relog "c:\my perflogs\p_log02.blg" -f SQL -o SQL:PerfDB!2ndRun -b 26-02-2009 10:00:00 -e 26-02-2009 10:30:00

TechNet says the format for relog datetimes should be M/d/yyyy hh:mm:ss, which is the internal format of the log files (M/d/yyyy hh:mm:ss.sss) minus the milliseconds. In reality, relog looks at the regional settings, including any customizations you made to the regional settings! The string in the sample is valid for the regional settings Dutch (Netherlands), so dd-mm-yyyy HH:mm:ss. The best way to find out what format relog expects is to run relog /?.

Together with the previous issue of the char(24) storage of a binary converted datetime string, this regional settings dependency makes for horrible datetime handling. For globalization support it would be great if relog were given an extra switch to indicate that datetime strings are in ISO 8601 or ODBC format, independent of the regional settings.

Scenario 3: Load a limited set of counters into the database

relog "c:\my perflogs\p_log03.blg" -q -o c:\counters.txt

Edit the c:\counters.txt file to only include the counters to be written to the database.

relog "c:\my perflogs\p_log03.blg" -cf c:\counters.txt -f SQL -o SQL:PerfDB!"select set of counters from the 3rd run"

It is possible to combine scenarios 2 and 3 to load a limited set of counters between two datetimes. Also, if you want spaces in the DisplayString, you can use double quotes as shown in this example.
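A combined run could look like this (datetime format per the regional settings discussion in scenario 2, so Dutch settings here):

```shell
relog "c:\my perflogs\p_log03.blg" -cf c:\counters.txt -f SQL -o SQL:PerfDB!"3rd run, selected counters" -b 26-02-2009 10:00:00 -e 26-02-2009 10:30:00
```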

Thursday, February 26, 2009 3:42:13 PM (W. Europe Standard Time, UTC+01:00)
# Saturday, January 31, 2009

A common question for students during/after attending a Microsoft training: "Where do we get the virtual PC images we used during course?"

The answer is: you don't... Microsoft provides these images for classroom use only by Certified Partners for Learning Solutions and Microsoft Certified Trainers. Quite understandable, as these images contain a lot of software. However, the student's question is valid too, for practice and exam preparation. And you can get about the same experience you had in class, based on Microsoft's "Run IT on a Virtual Hard Disk" program, which allows you to download and use a fully installed evaluation version of a product. So here is how you can build your own VHD for the 2779 or 2780 courses.

  1. Your PC; I recommend you use a PC with at least 1.5 GB of RAM and Windows XP.
  2. Virtual PC; download and install Virtual PC 2007 (if you want, you can use Virtual Server 2005 R2 instead). For the download and more information see the Microsoft Virtual PC site.
  3. SQL Server VHD; download the 4 image files for the SQL Server 2005 Evaluation VHD and unpack the VHD. SQL Server 2005 is currently not listed on the Run IT on a Virtual Hard Disk site.
  4. SQL Server installation media; download the Evaluation Edition of SQL Server 2005 (180-day version), requires Windows Live ID. Some labs/practices/demonstrations require multiple instances; these are installed on the 2779 and 2780 images, but not on the Evaluation VHD. So you may need to install the instances SQLINSTANCE2 and SQLINSTANCE3.
  5. SQL Service Pack; the VHD for SQL Server Evaluation has no Service Pack for SQL Server applied, whereas the MOC courses 2779 and 2780 are based on SQL Server 2005 SP1. Links to the SQL Server 2005 Service Packs:
    1. SQL Server 2005 Service Pack 1
    2. SQL Server 2005 Service Pack 2
    3. SQL Server 2005 Service Pack 3
  6. Create a Virtual machine in Virtual PC based on the downloaded VHD. Set the amount of memory to at least 1024 MB, also enable undo disks.
  7. Start and log in to the Virtual PC guest; you will need the administrator password Evaluation1. You will notice that the Windows Server 2003 operating system is not activated, therefore you only have a limited period for evaluation.
  8. From the student CD that came with your courseware, run Allfiles.exe. This will extract all files required by the practices, labs and demonstrations. Note that the setup will be different from what you were used to during the course. The files in the course were on a separate VHD for each module, which was mounted as D:\. After unpacking Allfiles.exe the whole course is in one folder tree (typically C:\Program Files\Microsoft Learning\27xx\). Note that you may have to compensate for paths and server names; so when you are in 2780 module 4, a path D:\democode\SalesCert.cer should be changed to C:\Program Files\Microsoft Learning\2780\Mod04\SalesCert.cer. Likewise the name of the server is different too, so MIAMI should be changed to WIN2K3R2EE.

Tips about downloading and file interaction between the Virtual PC host, the Guest and the Internet.

  1. When you have a ISO-file on your host, you can mount this ISO as CD/DVD in the guest. You can also instruct Virtual PC guest to use the CD/DVD drive from the host.
  2. When you have normal files on your host, you can use the Virtual PC Shared Folders feature; this exposes a folder on the host as a network drive on the guest.
  3. You can use the Networking feature of Virtual PC to use Shared networking (NAT) or your hosts Network adapter to allow access to the network and to Internet, so you can download files directly into your Virtual PC guest.

Main differences between the MOC and Eval VHDs

|  | MOC | Eval |
| --- | --- | --- |
| Server name | MIAMI | Win2k3R2EE |
| SQL Server Edition | Developer SP1 | Enterprise no SP |
| Instances | [default], SQLINSTANCE2, SQLINSTANCE3 | [default] |
| SQL Service Account | MIAMI\SQLServer | [LocalSystem] |
| Password | Pa$$w0rd | Evaluation1 |
| Course files | One VHD per module | All files in a folder tree, paths have to be checked/changed |
Saturday, January 31, 2009 6:53:43 PM (W. Europe Standard Time, UTC+01:00)
# Monday, November 24, 2008

For those of you who are not familiar with the tool relog; it is part of Windows and allows you to reprocess System Monitor logs. This is quite useful, as logging counters to the binary format (.blg) is very space efficient. However, to analyze those counters, you may want to use SQL Server. Relog allows you to rewrite the log and also creates the tables for you on SQL Server, if not yet present (syntax samples). The unfortunate part is the CounterDateTime column of the CounterData table; despite the column name, the data type is char(24). Trying to convert the char(24) to a datetime throws the error;

Msg 241, Level 16, State 1, Line 2
Conversion failed when converting datetime from character string.

I was getting a bit annoyed by this error, as a conversion from a char to a datetime should succeed if the string is valid... and say 2008-09-30 12:10:15.322 looks valid. I even checked Tibor's ultimate datetime guide to see if I missed something, because I was unsure of the impact of the last byte (a valid string yyyy-mm-dd HH:mi:ss.sss actually is 23 characters). Also, when inserting a string with a valid datetime into a variable, the conversion succeeds. Next I turned my attention to how the data was stored in the CounterDateTime column, by inserting a valid datetime string and retrieving one that was inserted by relog (both returning the string (char(24)) and its binary representation (varbinary(24))).

| Source | Character representation | Binary representation |
| --- | --- | --- |
| Relog insert | 2008-09-30 12:10:15.322 | 0x323030382D30392D33302031323A31303A31352E33323200 |
| Manual insert | 2008-09-30 12:10:15.322 | 0x323030382D30392D33302031323A31303A31352E33323220 |

The difference is in the last byte of the binary representation; the datetime string written by relog isn't padded with spaces, as one would expect for the unused positions in a char. Instead it is zeroed, as one would expect for unused bytes in a binary string. To get a datetime representation of the CounterDateTime, a two-step conversion is needed;

CAST(SUBSTRING([CounterDateTime],1,23) AS datetime)

Now for the "nasty" in Relogs database schema:

  • Changing the CounterDateTime to datetime is a no-go. Relog does not want it.
  • Adding a computed column with the conversions above is a no-go. Relog does not have all columns bound.

What remains is creating a view on top of the CounterData table with this conversion included.
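Such a view could look like the sketch below; the column list is an assumption based on the tables relog creates, so check your CounterData table for the exact columns.

```sql
CREATE VIEW dbo.vwCounterData
AS
SELECT [GUID]
, CounterID
, RecordIndex
, CAST(SUBSTRING(CounterDateTime, 1, 23) AS datetime) AS CounterDateTime -- strip the zeroed 24th byte
, CounterValue
FROM dbo.CounterData;
```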

Another issue that may save you some time: the DSN you create to access the SQL Server should be based on the "SQL Server" provider, not the "SQL Native Client".

Monday, November 24, 2008 8:48:37 PM (W. Europe Standard Time, UTC+01:00)
# Wednesday, October 8, 2008

A while back, I wrote about using sp_ procedures in master to create a toolbox, which works fine for me... until recently, when I hit a strange problem. I developed a procedure that would read the definition of a table and create a change_log table and a trigger to populate that table. On my dev environment (Windows Vista x64, SQL Server 2005 Dev Ed x64 SP2) this worked like I expected. But when I tested the procedure on a different server, it didn't!?! A very small sample that touches the essence of the problem I witnessed;

use master
go
if object_id('sp_test','p') is not null
    drop procedure sp_test
go
create procedure sp_test
as
select db_name() + '.' + table_schema + '.' + table_name
from information_schema.tables
order by table_schema, table_name
go
exec sp_test -- execute in master
go
use AdventureWorks
go
exec sp_test -- execute in AdventureWorks
go

Now on my dev machine, this once listed all tables in master and once all tables in AdventureWorks, as intended. But on other servers, it returned the tables from master on both executions of sp_test (the db_name function was executed correctly in both cases; once master, once AdventureWorks). For some reason, when referencing objects, the stored procedure stayed in master. I was puzzled, but before crying BUG (out loud) I tried the newsgroups, and a prompt reply from SQL Server MVP Roy Harvey pointed me to the undocumented stored procedure sp_MS_marksystemobject. Simply execute;

exec sp_MS_marksystemobject sp_test

and you won't suffer from the inconsistent behavior I witnessed.
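To check whether the procedure is indeed flagged as a system object afterwards:

```sql
SELECT OBJECTPROPERTY(OBJECT_ID('sp_test'), 'IsMSShipped') AS IsMSShipped; -- 1 after sp_MS_marksystemobject
```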

Wednesday, October 8, 2008 5:51:28 PM (W. Europe Daylight Time, UTC+02:00)