# Tuesday, October 25, 2011

Character-based data types, especially the variable-length types varchar and nvarchar, allow you to declare a really big length and use just a tiny bit of it, and in SQL Server that doesn’t even hurt. In the application it may, though. Today I was working with a customer on the implementation of some third-party software that would be doing analysis on the data. It would load the heavily over-dimensioned (n)varchar types into something like an (n)char… you get the picture: huge amounts of wasted space. So from the database side, a bit of restructuring was desirable. To see how much we could actually save, I created a query to report on the character usage;

CREATE TABLE ##char_columns (
    [schema]            VARCHAR(128) NULL,
    [table]             VARCHAR(128) NULL,
    [column]            VARCHAR(128) NULL,
    [data_type]         VARCHAR(128) NULL,
    [max_length]        INT NULL,
    [is_replicated]     BIT NULL,
    [actual_max_length] INT NULL,
    [row_count]         BIGINT NULL,
    [not_null_count]    BIGINT NULL)

INSERT ##char_columns
SELECT s.name AS 'schema'
    , t.name AS 'table'
    , c.name AS 'column'
    , y.name AS 'data_type'
    , c.max_length
    , c.is_replicated
    , NULL
    , NULL
    , NULL
FROM sys.tables t
    INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
    INNER JOIN sys.columns c ON t.object_id = c.object_id
    INNER JOIN sys.types y ON c.system_type_id = y.system_type_id
WHERE t.type = 'U'
    AND y.name IN ('varchar','char','nchar','nvarchar')

WHILE (SELECT COUNT(*) FROM ##char_columns WHERE actual_max_length IS NULL) > 0
BEGIN
    BEGIN TRY
        DECLARE @sql NVARCHAR(4000)
            , @actual_max_length INT
            , @column VARCHAR(128)
            , @schema VARCHAR(128)
            , @table VARCHAR(128)

        SELECT TOP 1 @schema = [schema], @table = [table], @column = [column]
        FROM ##char_columns
        WHERE actual_max_length IS NULL

        -- Set the actual_max_length to -1 if a column has only null values or no rows.
        SELECT @sql = 'UPDATE ##char_columns
            SET [actual_max_length] = (SELECT ISNULL(MAX(LEN([' + @column + '])),-1) FROM [' + @schema + '].[' + @table + '])
              , [row_count] = (SELECT COUNT_BIG(*) FROM [' + @schema + '].[' + @table + '])
              , [not_null_count] = (SELECT COUNT_BIG([' + @column + ']) FROM [' + @schema + '].[' + @table + '])
            WHERE [schema] = ''' + @schema + '''
              AND [table] = ''' + @table + '''
              AND [column] = ''' + @column + ''''

        EXEC (@sql)
    END TRY
    BEGIN CATCH
        DECLARE @err_num NVARCHAR(20), @err_msg NVARCHAR(2048), @full_msg NVARCHAR(2048)

        SELECT @err_num = ERROR_NUMBER(), @err_msg = ERROR_MESSAGE()
        SELECT @full_msg = 'ERROR: ' + @err_num + ' DESCR: ' + @err_msg + ' QUERY: ' + @sql

        RAISERROR (@full_msg, 16, 0)
        BREAK
    END CATCH
END

SELECT * FROM ##char_columns

DROP TABLE ##char_columns

Be aware that varchar(max) and nvarchar(max) will give a max_length of -1 and that actual_max_length is -1 if no rows exist in the table or the column only contains NULLs (refer to the row_count and not_null_count to determine the situation).
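
Instead of the plain SELECT * at the end of the script (i.e. before the temp table is dropped), something like the query below gives a rough per-column estimate of the over-dimensioning. It is a sketch, not part of the original script; note that max_length from sys.columns is reported in bytes, so it is halved for the n-types, and the (max) and data-less columns are skipped.

-- Rough estimate of unused declared length per column.
SELECT [schema], [table], [column], [data_type]
    , CASE WHEN [data_type] IN ('nchar','nvarchar') THEN [max_length] / 2 ELSE [max_length] END AS declared_chars
    , [actual_max_length] AS actual_chars
    , CASE WHEN [data_type] IN ('nchar','nvarchar') THEN [max_length] / 2 ELSE [max_length] END
        - [actual_max_length] AS unused_chars
FROM ##char_columns
WHERE [max_length] > 0              -- skip (n)varchar(max), which reports -1
    AND [actual_max_length] >= 0    -- skip columns with no rows or only NULLs
ORDER BY unused_chars DESC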

Tuesday, October 25, 2011 10:37:16 PM (W. Europe Daylight Time, UTC+02:00)
# Friday, April 15, 2011

In a clustered environment, it may not always be obvious which node SQL Server has been running on. Most logging is done under the network name of the resource group or the instance name of the virtual server. Still, it is logged once in both the Windows Application Event Log and the SQL Server Error Log, though the “memory” of those sources is likely limited by configuration: the size and overwrite settings of the Application log, and the number of error logs (maximum 99) for the SQL Server Error Log.

In the Windows Application Event Log, EventID 17664 is of interest (verify the source is a SQL Server instance). Event Log filtering is relatively straightforward and turns up an event like this.


Figure: Event Properties; 1 = Instance name, 2 = Network name of the resource group, 3 = Computer name of the cluster node, 4 = Binary code containing network name\instance name

In the case of the SQL Server Error Logs, the message of interest is:

The NETBIOS name of the local node that is running the server is 'computername'. This is an informational message only; no user action is required.

Parsing the files is probably easiest from SQL Server itself with xp_readerrorlog. Unfortunately, error handling around extended stored procedures is limited. Using the script below, the last result set shows the node a clustered SQL Server was started on, for each error log still on disk.


SET NOCOUNT ON

DECLARE @iteration int
DECLARE @log_events table ([LogDate] datetime, [ProcessInfo] nvarchar(20), [Text] nvarchar(max), [Iteration] int)

SELECT @iteration = 0 -- Set to current error log
WHILE @iteration < 100
    BEGIN
    BEGIN TRY
        -- EXECUTE the XP without INSERT to trip the error.
        -- XP error is handled poorly, but it is even worse
        -- when it occurs on the insert from executing an XP.
        EXEC xp_readerrorlog @p1 = @iteration, @p2 = 1, @p3 = 'NETBIOS name';
        INSERT @log_events ([LogDate], [ProcessInfo], [Text])
            EXEC xp_readerrorlog @p1 = @iteration, @p2 = 1, @p3 = 'NETBIOS name';
        UPDATE @log_events
            SET [Iteration] = @iteration
            WHERE [Iteration] IS NULL;
        SELECT @iteration = @iteration + 1;
    END TRY
    BEGIN CATCH
        SELECT @iteration = @iteration + 1;
    END CATCH
    END

SELECT [Iteration], [LogDate]
    , CAST(SUBSTRING(
            SUBSTRING([Text]
                , LEN('The NETBIOS name of the local node that is running the server is ''') + 1
                , 16 -- Max length NETBIOS name is 16 char (15 actually).
            )
        , 1
        , CHARINDEX('''', SUBSTRING([Text]
                , LEN('The NETBIOS name of the local node that is running the server is ''') + 1
                , 16 -- Max length NETBIOS name is 16 char (15 actually).
            )) - 1
        ) AS varchar(16)) AS 'NodeName'
FROM @log_events
WHERE [Text] LIKE 'The NETBIOS name of the local node that is running the server is %'
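
The script above answers the historical question. For the node a clustered instance is running on right now, there is a shortcut that does not involve the error logs at all:

-- Current node only; on a non-clustered instance this simply returns the computer name.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS [CurrentNode]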

Friday, April 15, 2011 12:56:11 PM (W. Europe Daylight Time, UTC+02:00)
# Friday, December 3, 2010

Service Pack 2 for SQL Server 2008 annoyed me on one of my machines (development system, SQL Server doesn’t autostart). It came via Windows Update, which started out offering the service pack while .NET Framework 4 wasn’t even installed… and even after .NET Framework 4 was installed, it kept failing. So f..k Windows Update: download SQL2k8SP2 and install it manually.

A couple of days later I needed SQL Server on that machine, so I did the net start mssqlserver trick and tried to connect with Management Studio right after that. Management Studio threw me the following:

Login failed for user 'domain\user'. Reason: Server is in script upgrade mode. Only administrator can connect at this time. (Microsoft SQL Server, Error: 18401)

The very “well” documented error about the fact that SQL Server is running some upgrade scripts and would rather have no one mess around (unless you’re on a DAC). The strange thing is that SQL Server seemed to have stopped afterwards, something that isn’t supposed to happen. I restarted the service and tried again… same result, so it was Google time; best hit: Weird SQL Server Error: Unable to Connect to Server. Reason: Server is in script upgrade mode. With that being the best answer, the behavior I witnessed wasn’t supposed to happen, so I turned to the SQL Server Error Logs for more info;

2010-12-03 21:02:04.28 spid7s      Performing replication job security meta-data upgrades...
2010-12-03 21:02:07.34 spid7s      Error: 824, Severity: 24, State: 2.
2010-12-03 21:02:07.34 spid7s      SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:2064; actual 0:0). It occurred during a read of page (1:2064) in database ID 21 at offset 0x00000001020000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\some_database.mdf'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
2010-12-03 21:02:07.34 spid7s      Error: 912, Severity: 21, State: 2.
2010-12-03 21:02:07.34 spid7s      Script level upgrade for database 'master' failed because upgrade step 'sqlagent100_msdb_upgrade.sql' encountered error 824, state 2, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
2010-12-03 21:02:07.35 spid7s      Error: 3417, Severity: 21, State: 3.
2010-12-03 21:02:07.35 spid7s      Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.
2010-12-03 21:02:07.86 spid7s      SQL Trace was stopped due to server shutdown. Trace ID = '1'. This is an informational message only; no user action is required.
2010-12-03 21:02:07.99 Server      The SQL Server Network Interface library successfully deregistered the Service Principal Name (SPN) [ MSSQLSvc/host.domain.local ] for the SQL Server service.

I had been working on a corrupted database called some_database (name changed for obvious reasons) for a customer a few weeks ago, but hadn’t removed that database. After removing the offending database files, SQL Server was no longer tripped by the exception while opening the database and the upgrade scripts could finish.

To me, this does smell like a bug (though I would not recommend running corrupt databases on a production system), so I filed it on Connect: Finishing service pack installation prohibited by corrupt database.

Friday, December 3, 2010 10:04:20 PM (W. Europe Standard Time, UTC+01:00)
# Tuesday, November 30, 2010

There is nothing fancy about this post… just a little frustration. I wanted to compress files after they were processed with SSIS 2005. And with compression, I mean NTFS-compression, not creating a .zip or .gz (because that is what I mostly found through the search engines).

My first path of research was to do it in a Script Task, but it turns out that setting the compression attribute of files on NTFS requires C# and can’t be done with VB.NET, which is all the SSIS 2005 Script Task supports. So it’s a no-go in SSIS 2005.

During the search, the compact.exe program in Windows was suggested somewhere as an alternative. Very useful from an SSIS perspective: it means the Execute Process Task can do the job. To make it all happen, three things are needed;

  • A Foreach Loop container
  • A string variable (either in the scope of the Foreach Loop Container or in the Package-scope), named filename in this example.
  • An Execute Process Task in the Foreach Loop container


On the Collection page, configure the Foreach Loop Container with the Foreach File enumerator, point it to the folder where the files reside, and retrieve the file names as Fully qualified. Depending on your requirements, you may also specify an additional mask and include subfolders.


Next visit the Variable Mappings tab and point to the previously created variable listed as User::filename.

Next is the Execute Process Task. But first a little on compact.exe: to compress a file, it should be called with the /C flag and the name of the file to be compressed, something like C:\WINDOWS\System32\compact.exe /C "D:\ftp\archive\dump_001208.csv"

In terms of what the Execute Process Task expects, the Executable should be set to compact.exe. The arguments should be /C "D:\ftp\archive\dump_001208.csv" and since the filename is supplied by the variable, an expression should be used. On the Expressions tab, create an expression for Arguments. With the expression designer, create the following expression

"/C \"" + @[User::filename] + "\""

The important part to recognize is that expressions are largely built on C-style syntax (download the PragmaticWorks white paper  SSIS Accelerator Series: Working with SSIS Expressions to learn more on SSIS expressions). The arguments expression should return a string, hence the opening and closing double-quotes. The filename, which is supplied by the @[User::filename] variable (drag and drop the variable in the expression designer) should be enclosed within double-quotes too. For SSIS to recognize that these double-quotes are part of the string it should return, the double-quotes need to be escaped (using the backslash). To confirm the expression is correct, use the Evaluate Expression button. The evaluated value should return /C "" (slash + C + space + double-quote + double-quote), once the filename variable is filled, it will appear between the two double-quotes.


Enclosing the filename in double quotes prevents files with spaces in the name from causing errors. There could be other errors though, like a file being locked. With the current configuration, the package would stop and throw an error on the first occurrence of such a condition. To prevent the package from failing in that event, set the FailTaskIfReturnCodeIsNotSuccessValue property of the Execute Process Task to false.

Tuesday, November 30, 2010 9:49:56 PM (W. Europe Standard Time, UTC+01:00)
# Tuesday, October 19, 2010

Some time ago I was working with performance data collected by a customer to look for I/O patterns and make recommendations for the I/O subsystem of a new SQL Server 2008 machine that would replace a couple of SQL Server 2000 servers. Unfortunately I ran into issues with the values for Avg. Disk sec/Read and Avg. Disk sec/Write, which were ridiculous, with average values around 100 million seconds per read or write. Those values occurred on two virtualized Windows 2000 Servers.


Today, I was working for another customer. Physical Windows Server 2003 boxes (both nodes in the same cluster). I imported the .blg files into a database and was looking at very high values (up to 100 million) for Avg. Disk sec/Read and Avg. Disk sec/Write. I consulted the staff-DBA, who referred me to one of the Windows administrators. So the Windows administrator logged in, opened Performance Monitor and… got reasonable values for those counters (hovering at about 0.006, I’d even say those values are good). To prove I wasn’t making a fool of myself, we walked over to my desk and I showed the results I had… same servers, same counters, very different values. Next we tried using the account of the staff-DBA, which showed the “normal” values.


Eliminating the possibilities, the Windows admin concluded the only difference was the account, so he went and checked whether the account I used had full administrator permissions. It had. Next he turned his attention to the user profile of that account (which happened to be quite big). The profile was deleted, I logged in again, and this time I was shown the correct values. RESOLVED!!!


Unfortunately the collected data still was useless. I started the counter log again, let it run for about 10 minutes and examined it… sky high values!!! In disbelief I switched to current activity, to be shown normal values!!! Next I created a new counter log definition, let it run for a couple of minutes and this time I got normal values.


Bottom line, to resolve insanely high values for Avg. Disk sec/Read and Avg. Disk sec/Write (it might apply for other counters of the type PERF_AVERAGE_TIMER too), use a fresh user profile and a fresh counter log definition.


Thanks Ruud.

Tuesday, October 19, 2010 9:44:49 PM (W. Europe Daylight Time, UTC+02:00)
# Thursday, August 19, 2010

I am working on an MS Access database that needs to be integrated into a CRM system. Since the backend of the CRM is SQL Server (why else would I be involved), I migrated the Access database to SQL Server using the SQL Server Migration Assistant for Access v4.2 (SSMA for Access). Next I turned my attention to translating attributes between the systems; unfortunately, a free-text notes field in the Access database is used for a lot of things that need to become attributes. Some things I know how to translate, but others have to be determined by the administrators of the CRM system and the Access application. So I built a query in SQL on the tables formerly stored in the Access database and used Excel (with MS Query) to create a report for the administrators, so they can sort out how customer statuses should be translated.

Extremely simplified, the query looked like this;

SELECT tlbCustomer.CUST_CODE
  , tlbCustomer.PAY_REF
  , COALESCE(UPPER(REPLACE(tblNotes.NOTE_TXT,'.','')),'') AS NOTE_TXT
FROM tlbCustomer LEFT OUTER JOIN tblNotes
      ON tlbCustomer.CUST_ID = tblNotes.CUST_ID
GROUP BY tlbCustomer.CUST_CODE
  , tlbCustomer.PAY_REF
  , UPPER(REPLACE(tblNotes.NOTE_TXT,'.',''))
ORDER BY COUNT(*) DESC

The query worked great on SQL, but did not return the text (NOTE_TXT) to Excel.

To cut a long story short, the original tblNotes.NOTE_TXT field in the Access database was of type MEMO. SSMA 2008 for Access converted this to nvarchar(max), and somehow MS Query (in Office 2007) neither returns large data types (varchar(max), nvarchar(max)) nor throws an error on them. A quick test showed that (max) was not really needed;

SELECT MAX(LEN(tblNotes.NOTE_TXT)) FROM tblNotes

So modifying the query to return NOTE_TXT as nvarchar(256) does the trick;

SELECT tlbCustomer.CUST_CODE
  , tlbCustomer.PAY_REF
  , CAST(
      COALESCE(UPPER(REPLACE(tblNotes.NOTE_TXT,'.','')),'')
    AS nvarchar(256)) AS NOTE_TXT
FROM tlbCustomer LEFT OUTER JOIN tblNotes
      ON tlbCustomer.CUST_ID = tblNotes.CUST_ID
GROUP BY tlbCustomer.CUST_CODE
  , tlbCustomer.PAY_REF
  , UPPER(REPLACE(tblNotes.NOTE_TXT,'.',''))
ORDER BY COUNT(*) DESC
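
Other columns that SSMA turned into (n)varchar(max) are candidates for the same treatment. A quick way to list them in the migrated database (a sketch, using the fact that CHARACTER_MAXIMUM_LENGTH reports -1 for the (max) types):

SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('varchar','nvarchar')
  AND CHARACTER_MAXIMUM_LENGTH = -1
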
Thursday, August 19, 2010 12:26:16 PM (W. Europe Daylight Time, UTC+02:00)
# Saturday, July 24, 2010

Sometimes one is confronted with forgetting an option... It all started with a database that had a single data file of 1.8 GB and a transaction log more than 6 times that size. Usually that would be a database in full recovery model with no transaction log backups. This database was in simple recovery model though, and according to the Disk Usage report, 99.99% of the transaction log was free. So a DBCC SHRINKFILE should solve the problem; however, DBCC SHRINKFILE solved nothing and the log remained at about 11 GB.
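
For reference, the shrink attempt looks roughly like this (database name and logical log file name are hypothetical):

USE SomeDatabase;
DBCC SHRINKFILE (N'SomeDatabase_log', 100);  -- target size in MB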

A query on sys.databases showed that this particular database had ACTIVE_TRANSACTION as its log_reuse_wait_desc. A peek in Activity Monitor then showed there was an open transaction in the database: a DELETE task with a huge wait time and the BROKER_RECEIVE_WAITFOR wait type.
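
That check is a one-liner; a sketch with a hypothetical database name:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'SomeDatabase';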

Then it occurred to me that this is the database that used to have a deadlock problem. After implementing some procedure fixes, there haven’t been any deadlocks for over half a year, and the server (or service) hasn’t been shut down in between either.

The stored procedure on the queue does a RECEIVE, but has no timeout associated with it. Effectively this means the RECEIVE will wait inside the transaction until a message comes into the queue. That hasn’t happened in the last six months. The resolution is simple: add the TIMEOUT option to the RECEIVE and the procedure will stop when there hasn’t been any work to do for the specified amount of time (in milliseconds). For the code, see the post “Deadlock Alerts Through Event Notification”, which has been updated with the TIMEOUT option.
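
In outline the change amounts to the following; a minimal sketch with a hypothetical queue name (the real procedure wraps this in a transaction and processes the message, see the referenced post):

DECLARE @message_type sysname, @message_body XML;

WAITFOR (
    RECEIVE TOP (1)
          @message_type = message_type_name
        , @message_body = CAST(message_body AS XML)
    FROM dbo.DeadlockNotificationQueue
), TIMEOUT 5000;  -- wait at most 5000 ms for a message

IF @@ROWCOUNT = 0
    RETURN;  -- nothing arrived within the timeout, let the activation procedure end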

After altering the procedure, kill the spid. Service Broker will restart the activation procedure, now with the TIMEOUT option in place, and it will stop after the specified amount of time has elapsed without messages arriving in the queue.

Saturday, July 24, 2010 6:48:00 PM (W. Europe Daylight Time, UTC+02:00)
# Friday, February 26, 2010

When restoring a backup of a database, SQL Server recreates the files with the same sizes as listed in the backup, or at least tries to do so. The only option you have is specifying the path for each file. Generally not a problem, but every now and then I encounter a database with a transaction log about 10 to 50 times the size of the data file(s)… yep, full recovery model and just daily full backups. I am the first to acknowledge that transaction log backups are key to preventing this situation, but the next best thing still is that recent valid database backup… and then you can’t restore it, because the 200 GB needed to write that big, empty file called the transaction log cannot be allocated.

Today was one of those days;

Msg 3257, Level 16, State 1, Line 1
There is insufficient free space on disk volume 'D:\' to create the database. The database requires 101197938688 additional free bytes, while only 19386400768 bytes are available.
Msg 3119, Level 16, State 4, Line 1
Problems were identified while planning for the RESTORE statement. Previous messages provide details.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
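
To see up front how much space a restore is going to claim per file, the backup can be inspected first; a sketch with a hypothetical backup path:

-- Lists the logical and physical file names and the Size each file will be recreated with.
RESTORE FILELISTONLY
FROM DISK = N'D:\backup\some_database.bak';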

Surely I wouldn’t be the only one having problems with this, and I found two related items on Connect.

I voted (as important) for both items, though the second one is already closed. Yet the comments from the MS people triggered me to ask for a specific solution for the transaction log;

RESTORE DATABASE ... WITH STUB_LOG

One of the habits of RESTORE DATABASE is that it wants to allocate the log size (and files) as found in the backup. Strange, as the size of those files is the result of past activity (not contained in the backup being restored). The only space needed in the transaction log, is that of the activity during the backup. Additional space requirements may exist for subsequent restores of differential backups and log backups (in norecovery or standby) or future activity (after recovery), but that isn’t an issue as the log files would grow again during these restores.

The only real issue I can think of is having a database with fixed size transaction log file(s), which means no auto grow. So for a WITH STUB_LOG option to work properly in these scenarios, it may need to imply that the log file(s) are always set to unrestricted auto grow (possibly only honoring the original increment).

It would at least prevent me from having to allocate 100 GB for an empty transaction log when only 20 GB is available.

Friday, February 26, 2010 9:26:02 PM (W. Europe Standard Time, UTC+01:00)
# Tuesday, February 9, 2010

If you're interested in expanding your skills in SQL Server, check out these events;

Happy learning.

Tuesday, February 9, 2010 10:43:22 AM (W. Europe Standard Time, UTC+01:00)
# Friday, December 18, 2009

Microsoft recently launched a Virtual Business Card site for Microsoft Certified Professionals. So if you ever passed one or more of those Microsoft exams in the NT 4.0 or later eras, check out the www.mcpvirtualbusinesscard.com site (sign in with a Windows Live ID to connect to your records in Microsoft's certification database).

I set my profile page up to show off the credentials.

Also had a look at including the transcript, but that would be a 13-page PDF; a bit of overkill if you ask me.

Friday, December 18, 2009 11:47:33 AM (W. Europe Standard Time, UTC+01:00)