# Friday, December 3, 2010

Service Pack 2 for SQL Server 2008 annoyed me on one of my machines (a development system, where SQL Server doesn’t autostart). It came via Windows Update, which started by offering the service pack while .NET Framework 4 wasn’t installed… and even after .NET Framework 4 was installed, it kept failing. So f..k Windows Update: download SQL2k8SP2 yourself and install it.

A couple of days later I needed SQL Server on that machine, so I did the net start mssqlserver trick and tried to connect with Management Studio right after. Management Studio then threw me the following:

Login failed for user 'domain\user'. Reason: Server is in script upgrade mode. Only administrator can connect at this time. (Microsoft SQL Server, Error: 18401)

The very “well” documented error, meaning SQL Server is running some scripts and would rather have no-one messing around (unless you’re on the DAC). The strange thing is that SQL Server seemed to have stopped afterwards, something that isn’t supposed to happen. I restarted the service and tried again… same result, so it’s Google-time; best hit: Weird SQL Server Error: Unable to Connect to Server. Reason: Server is in script upgrade mode. With that being the best answer, the behavior I witnessed wasn’t supposed to happen, so I turned to the SQL Server error logs for more information;

2010-12-03 21:02:04.28 spid7s      Performing replication job security meta-data upgrades...
2010-12-03 21:02:07.34 spid7s      Error: 824, Severity: 24, State: 2.
2010-12-03 21:02:07.34 spid7s      SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:2064; actual 0:0). It occurred during a read of page (1:2064) in database ID 21 at offset 0x00000001020000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\some_database.mdf'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
2010-12-03 21:02:07.34 spid7s      Error: 912, Severity: 21, State: 2.
2010-12-03 21:02:07.34 spid7s      Script level upgrade for database 'master' failed because upgrade step 'sqlagent100_msdb_upgrade.sql' encountered error 824, state 2, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
2010-12-03 21:02:07.35 spid7s      Error: 3417, Severity: 21, State: 3.
2010-12-03 21:02:07.35 spid7s      Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.
2010-12-03 21:02:07.86 spid7s      SQL Trace was stopped due to server shutdown. Trace ID = '1'. This is an informational message only; no user action is required.
2010-12-03 21:02:07.99 Server      The SQL Server Network Interface library successfully deregistered the Service Principal Name (SPN) [ MSSQLSvc/host.domain.local ] for the SQL Server service.

A few weeks ago I had been working on a corrupted database called some_database (name changed for obvious reasons) for a customer, but I hadn’t removed that database. After removing the offending database files, SQL Server no longer tripped over the exception while opening the database.
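For completeness: whenever the instance does stay up, the log’s own advice for error 824 applies and a consistency check on the suspect database is the first step. A minimal example (database name as in the log);

DBCC CHECKDB (N'some_database') WITH NO_INFOMSGS, ALL_ERRORMSGS;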

To me, this does smell like a bug (though I would not recommend keeping corrupt databases on a production system), so I filed it on Connect: Finishing service pack installation prohibited by corrupt database.

Friday, December 3, 2010 10:04:20 PM (W. Europe Standard Time, UTC+01:00)
# Tuesday, November 30, 2010

There is nothing fancy about this post… just a little frustration. I wanted to compress files after they were processed with SSIS 2005. And with compression, I mean NTFS-compression, not creating a .zip or .gz (because that is what I mostly found through the search engines).

My first path of research was to do it in a Script Task, but it turns out that setting the compression attribute of files on NTFS requires C#, and a Script Task can’t do C# in SSIS 2005 (VB.NET only). So that’s a no-go.

During the search, the compact.exe program in Windows was suggested somewhere as an alternative. Very useful from an SSIS perspective, though it means the Execute Process Task is required. To make it all happen, three things are needed;

  • A Foreach Loop container
  • A string variable (either in the scope of the Foreach Loop Container or in the Package-scope), named filename in this example.
  • An Execute Process Task in the Foreach Loop container


Configure the Foreach Loop Container (Collection page) with the Foreach File enumerator, point it to the folder where the files reside and retrieve the file names as Fully qualified. Depending on your requirements, you may also specify a file mask and include subfolders.


Next, visit the Variable Mappings tab and point to the previously created variable, listed as User::filename.

Next up is the Execute Process Task. But first a little on compact.exe: to compress a file, compact.exe should be called with the /C flag and the name of the file to be compressed. Something like C:\WINDOWS\System32\compact.exe /C "D:\ftp\archive\dump_001208.csv"
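(For the record, compact.exe can also work on a whole folder with the /S switch, something like the line below; the Foreach Loop is used here because it gives per-file control.)

C:\WINDOWS\System32\compact.exe /C /S:"D:\ftp\archive"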

In terms of what the Execute Process Task expects, the Executable should be set to compact.exe. The arguments should be /C "D:\ftp\archive\dump_001208.csv", and since the filename is supplied by the variable, an expression should be used. On the Expressions tab, create an expression for Arguments. With the expression designer, create the following expression:

"/C \"" + @[User::filename] + "\""

The important thing to recognize is that SSIS expressions are largely built on C-style syntax (download the PragmaticWorks white paper SSIS Accelerator Series: Working with SSIS Expressions to learn more about SSIS expressions). The Arguments expression should return a string, hence the opening and closing double-quotes. The filename, which is supplied by the @[User::filename] variable (drag and drop the variable in the expression designer), should be enclosed within double-quotes too. For SSIS to recognize that these double-quotes are part of the string it should return, they need to be escaped (using the backslash). To confirm the expression is correct, use the Evaluate Expression button. The evaluated value should return /C "" (slash + C + space + double-quote + double-quote); once the filename variable is filled, it will appear between the two double-quotes.


Enclosing the filename in double-quotes prevents files with spaces in the name from causing errors. There could be other errors though, like a file being locked. With the current configuration, the package would stop and throw an error on the first occurrence of such a condition. To prevent the package from failing in that event, set the FailTaskIfReturnCodeIsNotSuccessValue property of the Execute Process Task to false.

Tuesday, November 30, 2010 9:49:56 PM (W. Europe Standard Time, UTC+01:00)
# Tuesday, October 19, 2010

Some time ago I was working with performance data collected by a customer, looking for I/O patterns to make recommendations on the I/O subsystem of a new SQL Server 2008 machine that would replace a couple of SQL Server 2000 servers. Unfortunately I ran into issues with the values for Avg. Disk sec/Read and Avg. Disk sec/Write, which were ridiculous, with average values around 100 million seconds per read or write. Those values occurred on two virtualized Windows 2000 Servers.


Today I was working for another customer: physical Windows Server 2003 boxes (both nodes in the same cluster). I imported the .blg files into a database and again was looking at very high values (up to 100 million) for Avg. Disk sec/Read and Avg. Disk sec/Write. I consulted the staff DBA, who referred me to one of the Windows administrators. So the Windows administrator logged in, opened Performance Monitor and… got reasonable values for those counters (hovering at about 0.006; I’d even say those values are good). To prove I wasn’t making a fool of myself, we walked over to my desk and I showed the results I had… same servers, same counters, very different values. Next we tried using the account of the staff DBA, which showed the “normal” values.


Simply eliminating options, the Windows admin said the only difference was the account, so he went and checked whether the account I used had full administrator permissions. It had. Next he turned his attention to the user profile for the account I used (which happened to be quite big). The user profile belonging to that account was deleted and I logged in again, this time to be shown the correct values. RESOLVED!!!


Unfortunately the collected data was still useless. I started the counter log again, let it run for about 10 minutes and examined it… sky-high values!!! In disbelief I switched to current activity, to be shown normal values!!! Next I created a new counter log definition, let it run for a couple of minutes and this time I got normal values.


Bottom line: to resolve insanely high values for Avg. Disk sec/Read and Avg. Disk sec/Write (it might apply to other counters of type PERF_AVERAGE_TIMER too), use a fresh user profile and a fresh counter log definition.
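For context, counters of type PERF_AVERAGE_TIMER are calculated from two consecutive samples as

((N1 - N0) / F) / (D1 - D0)

where N is the raw timer value, F the timer frequency (ticks per second) and D the base counter holding the number of operations. A mangled sample or base counter that puts the denominator at (nearly) zero makes the calculated average explode, which would fit the insane values described above.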


Thanks Ruud.

Tuesday, October 19, 2010 9:44:49 PM (W. Europe Daylight Time, UTC+02:00)
# Thursday, August 19, 2010

I am working on an MS Access database that needs to be integrated into a CRM system. Since the backend of the CRM is SQL Server (why else would I be involved), I migrated the Access database to SQL Server using the SQL Server Migration Assistant for Access v4.2 (SSMA for Access). Next I turned my attention to translating attributes between the systems; unfortunately, a free-text notes field in the Access database is used for a lot of things that need to be translated to attributes. Some things I know how to translate, but others have to be determined by the administrators of the CRM system and the Access application. So I built a query in SQL on the tables formerly stored in the Access database and used Excel (with MS Query) to create a report for the administrators, so they can sort out how customer statuses should be translated.

Extremely simplified, the query looked like this;

SELECT tlbCustomer.CUST_CODE
  , tlbCustomer.PAY_REF
  , COALESCE(UPPER(REPLACE(tblNotes.NOTE_TXT, '.', '')), '') AS NOTE_TXT
FROM tlbCustomer
  LEFT OUTER JOIN tblNotes ON tlbCustomer.CUST_ID = tblNotes.CUST_ID
GROUP BY tlbCustomer.CUST_CODE
  , tlbCustomer.PAY_REF
  , UPPER(REPLACE(tblNotes.NOTE_TXT, '.', ''))
ORDER BY COUNT(*) DESC

The query worked great on SQL Server, but did not return the text (NOTE_TXT) to Excel.

To cut a long story short: the original tblNotes.NOTE_TXT field in the Access database was of type MEMO. SSMA 2008 for Access converted this to nvarchar(max), and somehow MS Query (in Office 2007) does not return, nor throw an error on, large data types (varchar(max), nvarchar(max)). A quick test showed that MAX was not really needed;

SELECT MAX(LEN(tblNotes.NOTE_TXT)) FROM tblNotes

So modifying the query to return NOTE_TXT as nvarchar(256) does the trick;

SELECT tlbCustomer.CUST_CODE
  , tlbCustomer.PAY_REF
  , CAST(COALESCE(UPPER(REPLACE(tblNotes.NOTE_TXT, '.', '')), '') AS nvarchar(256)) AS NOTE_TXT
FROM tlbCustomer
  LEFT OUTER JOIN tblNotes ON tlbCustomer.CUST_ID = tblNotes.CUST_ID
GROUP BY tlbCustomer.CUST_CODE
  , tlbCustomer.PAY_REF
  , UPPER(REPLACE(tblNotes.NOTE_TXT, '.', ''))
ORDER BY COUNT(*) DESC

Thursday, August 19, 2010 12:26:16 PM (W. Europe Daylight Time, UTC+02:00)
# Saturday, July 24, 2010

Sometimes one is confronted with forgetting an option... It all started with a database that had a single data file of 1.8 GB and a transaction log more than six times that size. Usually that would be a database in the full recovery model with no transaction log backups. This database was in the simple recovery model though, and according to the Disk Usage report, 99.99% of the transaction log was free. So a DBCC SHRINKFILE should have solved the problem; however, DBCC SHRINKFILE solved nothing and the log remained at about 11 GB.

A query on sys.databases showed that this particular database had ACTIVE_TRANSACTION as its log_reuse_wait_desc. A peek in Activity Monitor then showed that there was an open transaction for the database: a DELETE task with a huge wait time and the BROKER_RECEIVE_WAITFOR wait type.
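The check comes down to something like this (database name hypothetical);

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'some_database';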

Then it occurred to me that this was the database that used to have a deadlock problem. After implementing some procedure fixes there hadn’t been any deadlocks for over half a year, and neither had the server (or service) been shut down in between.

The stored procedure on the queue does a RECEIVE, but with no timeout associated with it. Effectively this means the RECEIVE will wait inside the transaction until a message comes into the queue, which hadn’t happened in the last six months. The resolution is simple: add the TIMEOUT option to the RECEIVE and the procedure will stop when there hasn’t been any work to do for the specified amount of time (in milliseconds). For the code, see the post “Deadlock Alerts Through Event Notification”, which has been updated with the TIMEOUT option; the sketch below shows the shape of the change.
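A minimal sketch of a RECEIVE with the TIMEOUT option (queue and variable names are hypothetical; see the linked post for the actual code);

DECLARE @handle uniqueidentifier, @message_type sysname, @body varbinary(max);
WAITFOR (
    RECEIVE TOP (1)
        @handle = conversation_handle,
        @message_type = message_type_name,
        @body = message_body
    FROM dbo.DeadlockNotificationQueue
), TIMEOUT 5000; -- give up after 5 seconds without messages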

After altering the procedure, kill the spid. Service Broker will restart the activated procedure, now with the TIMEOUT option in place, and it will stop after the specified amount of time has elapsed without messages arriving in the queue.

Saturday, July 24, 2010 6:48:00 PM (W. Europe Daylight Time, UTC+02:00)
# Friday, February 26, 2010

When restoring a backup of a database, SQL Server recreates the files with the same sizes as listed in the backup, or at least tries to do so. The only option you have is specifying the path for each file. Generally not a problem, but every now and then I encounter a database with a transaction log about 10 to 50 times the size of the data file(s) . . . yep, full recovery model and just daily full backups. I am the first to acknowledge that transaction log backups are key to preventing this situation, but the next best thing still is that recent valid database backup. . . and then you can’t restore it, because the 200 GB needed to write that big empty file called the transaction log cannot be allocated.
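For the record, specifying the paths is done with the MOVE option; a minimal sketch with hypothetical names (note there is no comparable option to shrink the log at restore time);

RESTORE DATABASE SomeDB
FROM DISK = N'D:\Backup\SomeDB.bak'
WITH MOVE N'SomeDB' TO N'D:\Data\SomeDB.mdf',
     MOVE N'SomeDB_log' TO N'D:\Data\SomeDB_log.ldf',
     RECOVERY;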

Today was one of those days;

Msg 3257, Level 16, State 1, Line 1
There is insufficient free space on disk volume 'D:\' to create the database. The database requires 101197938688 additional free bytes, while only 19386400768 bytes are available.
Msg 3119, Level 16, State 4, Line 1
Problems were identified while planning for the RESTORE statement. Previous messages provide details.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

Surely I wouldn’t be the only one having problems with this, and indeed I found two related items on Connect;

I voted (as important) for both items, though the second one is already closed. Still, the comments from the MS people triggered me to ask for a specific solution regarding the transaction log;

RESTORE DATABASE ... WITH STUB_LOG

One of the habits of RESTORE DATABASE is that it wants to allocate the log size (and files) as found in the backup. Strange, as the size of those files is the result of past activity (not contained in the backup being restored). The only space needed in the transaction log is that of the activity during the backup. Additional space requirements may exist for subsequent restores of differential backups and log backups (in norecovery or standby) or for future activity (after recovery), but that isn’t an issue, as the log files would simply grow again during those restores.

The only real issue I can think of is a database with fixed-size transaction log file(s), meaning no autogrow. So for a WITH STUB_LOG option to work properly in that scenario, it may need to imply that the log file(s) are always set to unrestricted autogrow (possibly honoring only the original increment).

It would at least prevent me from having to allocate 100 GB for an empty transaction log with only 20 GB available.

Friday, February 26, 2010 9:26:02 PM (W. Europe Standard Time, UTC+01:00)
# Tuesday, February 9, 2010

If you're interested in expanding your skills in SQL Server, check out these events;

Happy learning.

Tuesday, February 9, 2010 10:43:22 AM (W. Europe Standard Time, UTC+01:00)
# Friday, December 18, 2009

Microsoft recently launched a Virtual Business Card site for Microsoft Certified Professionals. So if you ever passed one or more of those Microsoft exams in the NT 4.0 or later eras, check out the www.mcpvirtualbusinesscard.com site (sign in with your Windows Live ID to connect to your records in Microsoft's certification database).

I set my profile page up to show off the credentials…

I also had a look at including the transcript, but that would be a 13-page PDF; a bit of overkill if you ask me.

Friday, December 18, 2009 11:47:33 AM (W. Europe Standard Time, UTC+01:00)
# Friday, December 4, 2009

Yesterday I visited SQL Server Day 2009 in Mechelen, organized by the Belgian SQL Server User Group SQLUG.BE. Congratulations on the event, guys!

After the keynote by Microsoft Belgium (I wish they had talked a little more about SQL Azure), I visited the session by Henk van der Valk on world-record ETL... now Henk has control over some top-notch iron, but that doesn't mean his tips don't apply to a modest setup. Henk also mentioned he recently joined the blogosphere at www.henkvandervalk.com.

Next I sat through (sorry, I have to say so) a horrible sponsored session by Quest... and this has nothing to do with the Foglight product. On another occasion (an afternoon session by Quest Netherlands) I witnessed the possibilities of Foglight (for .NET, SQL Server and VMware) and I must say it's a good-looking product. However, we got 30 minutes of boring listing of challenges and day-to-day problems (as if we weren't aware of them already) and in the end some screenshots, which were completely out of context. I would have been completely lost in the presentation if I hadn't been to the earlier session by Quest NL.

After that, I meant to attend the session "Reporting Services a DBA's tool? YES!!!", but since the agenda card was a little confusing (or better said, I didn't pay enough attention) I walked into the session by Nico Jacobs on "Handling XML in SQL Server". Funny, as there was nothing new for me in the session, but I still really enjoyed it... mostly because, as a trainer, you rarely get an opportunity to see one of your colleagues at work on a familiar subject. Thanks, Nico.

The other session I attended was on "Policy Based Management", again by Nico Jacobs. I had hoped it would go deeper into the possibilities of implementing it for SQL Server 2005 and SQL Server 2000. Unfortunately that was not the case, so I'll have to dive into the Enterprise Policy Management Framework and PowerShell without a quick start. But again, it was a joy listening to Nico.

The final session and closing keynote was by Chris Webb on PowerPivot (a.k.a. Gemini). It wasn't my first glance at Gemini, but it definitely was the kind of quick start I was looking for. Attending a session like that saves a day of looking for stuff.

All-in-all, a day well spent.

Friday, December 4, 2009 2:35:00 PM (W. Europe Standard Time, UTC+01:00)
# Saturday, November 21, 2009

Creating linked servers between two instances of SQL Server may not always be as straightforward as hoped. To make it more complicated, interference may be caused by something beyond the control of the database administrator: Active Directory. After creating a linked server, being able to use it successfully may be the real challenge; it may even work for one user and fail for another.

Service Account

To communicate successfully in a domain environment, the service account for your database engine is the first thing to check/configure. Possible choices for the service account are;

  • Domain User (recommended)
  • Local User
  • LocalSystem
  • LocalService
  • NetworkService (recommended)

From those options, a local user or LocalService as the service account will eventually stop you. In both cases SQL Server has the permission set of a user on the machine where it operates, with no identity on the network. When SQL Server has to connect to any remote resource, it will do so as an anonymous user (null session). If you want to be absolutely sure that SQL Server never does anything beyond anonymous on the network, you can go this route... but hey, you're reading about linked servers, so just drop the thought of using a local user or LocalService.

Valid options to enable SQL Server to operate on the network are LocalSystem, NetworkService and a domain user. All have the potential to work well, but LocalSystem is not recommended from a security perspective. LocalSystem has the permission set of an administrator on the system where SQL Server operates, so anyone obtaining control over your SQL Server has full access to the underlying operating system (think about the damage that could be done when xp_cmdshell is enabled). From a network perspective, LocalSystem and NetworkService are identical; when SQL Server operates over the network, it does so under the credentials of the Windows machine. With a domain user as service account, SQL Server has user permissions locally (as is the case with NetworkService, LocalService and a local user); on the network, SQL Server will present the credentials of the domain user.

So two good options remain: NetworkService and a domain user. Which one you choose mainly depends on the advantages of one over the other;

  • NetworkService:
    • No account or password management in Active Directory
    • Service Principal Names are automatically generated
  • Domain User
    • Multiple instances on different machines can have the same identity, which is required for clustering.
    • Transparent security for Database Mirroring.
    • Usually SQL Agent will run under the same account, which makes things easier with Log Shipping and Replication.
    • Improved access to the Service Master Key (in case you don't have a backup of the key itself).

Personally, I favor using a domain user.

Should you find that you need to change the service account, use the SQL Server Configuration Manager.

Configuring and using a SQL Server linked server for Windows Authentication

There are two ways to go about the configuration: graphically and via stored procedures. The graphical approach brings up a dialog where just a little configuration is needed.

On the General page, select the server type to be SQL Server (choosing the SQL Server Native Client or OLE DB providers will work too, but requires extra configuration) and type the name of the target instance: the flat name or FQDN, followed by a backslash and the instance name in case of a named instance.

On the Security page, select the option to "Be made using the login's current security context", which effectively tells SQL Server to impersonate or delegate the login requesting access through the linked server.

On the Server Options page, verify that Data Access is set to True. To see the stored procedures involved, either use the Script button on top of the dialog instead of pressing OK, or right-click a configured linked server and choose "Script Linked Server as >".
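For reference, such a script comes down to something like this (a sketch, using the server name from the query example below);

-- create the linked server (the SQL Server type needs no provider string)
EXEC sp_addlinkedserver @server = N'HOST.NET.LOCAL\INSTANCE', @srvproduct = N'SQL Server';
-- "Be made using the login's current security context"
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'HOST.NET.LOCAL\INSTANCE', @useself = N'True';
-- make sure Data Access is enabled
EXEC sp_serveroption @server = N'HOST.NET.LOCAL\INSTANCE', @optname = 'data access', @optvalue = 'true';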

To avoid running into problems (explained later) when creating linked servers, create the linked server from a session (interactive or Remote Desktop) on the Windows machine where the linked server is being configured.

To use the linked server, issue the query

SELECT TOP 50 * FROM [HOST.NET.LOCAL\INSTANCE].AdventureWorks.Sales.SalesOrderDetail

Note that the linked server HOST.NET.LOCAL\INSTANCE is enclosed in square brackets, which is required when the server name is fully qualified and/or when it is a named instance. If you don't want to expose host and/or instance names, consider creating an alias in SQL Server Configuration Manager under the client configuration and point the linked server to the alias.

Impersonation and delegation

During the configuration (Security page), impersonation and delegation were mentioned. Also, the advice to configure a linked server on the server where it is to be created was given to avoid delegation problems. Below is a summary of impersonation and delegation (with links to more complete overviews). Windows accounts will be impersonated if necessary; Windows accounts will not be delegated unless both the account and the delegating system are configured to allow it.

Impersonation happens when a process connects to resources pretending to have the identity of its caller. For example, user Bobbie is connected to SQLINST1 and issues a query for a table via a linked server on SQLINST2; depending on the configuration of the linked server, SQLINST1 will pretend to be Bobbie when connecting to SQLINST2 on Bobbie's behalf. For a more detailed overview of impersonation, check the Pluralsight wiki on impersonation. Note that impersonation is limited to the system where the process making the call is active.

Delegation is much like impersonation, with the additional feature that it is not limited to the system of the process making the call. Because the process now has to forward the credentials to another system and prove it operates on behalf of someone other than the system itself, the Kerberos protocol is required (that, or knowledge of the password of the delegated user). For a more detailed overview of delegation, check the Pluralsight wiki on delegation.

From the perspective of the Windows systems, Bobbie (DOMAIN\Bobbie) is logged in on a typical client system (Windows XP, Vista or 7) and uses an application that connects to SERVER1\SQLINST1. In SQLINST1 two linked servers are configured: SERVER1 (being the default instance on SERVER1) and SERVER2\SQLINST1. The context of the application is DOMAIN\Bobbie, so an access token for DOMAIN\Bobbie is handed to SERVER1 (for logging in on the SQL Server service SERVER1\SQLINST1). When the application issues a query for an object on the default instance of SERVER1, one local process on SERVER1 has to impersonate DOMAIN\Bobbie to the other local process on SERVER1. This will succeed, and works with both NTLM and Kerberos, because SERVER1 trusts itself and the domain; DOMAIN\Bobbie was authenticated by the domain and is only used by SERVER1 locally.

Next the application calls for SERVER2 and now the challenge is on! Bobbie gets a

Msg 18456, Level 14, State 1, Line 1
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

SERVER2 does not trust SERVER1 and will therefore not accept the claim of SERVER1 that it acts on Bobbie's behalf; SERVER2 wants to see PROOF! One form of proof would be SERVER1 having Bobbie's password, indicating that Bobbie fully trusts SERVER1... but lesson number 1: never give anyone your password. Lesson number 2: don't write it on a Post-It... Lesson number 3: don't trust computers that violate lessons 1 or 2. The other option is to turn to a mutually trusted third party, the domain, but now the configuration needs to be right for Kerberos.

For Kerberos to do its trick, the user (DOMAIN\Bobbie) and the machine (SERVER1) delegating the user have to be trusted for delegation. This is sufficient if the SQL Server services on SERVER1\SQLINST1 and SERVER2 run as NetworkService.

In case SERVER1\SQLINST1 is running under a domain user account, trust that user account for delegation, not the server.
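To check whether a session actually authenticated with Kerberos (a requirement for the double hop), a quick query helps;

SELECT auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@SPID;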

Service Principal Name

Kerberos distinguishes three roles;

  • Key Distribution Center (KDC), in Active Directory this role is held by the Domain Controllers.
  • Client, these are the users.
  • Server, these are the computers in your network.

With this basic division of roles, things work well in a domain, until a service account like that of the SQL Server database engine is a domain user. In that case, that particular domain user should be made a server too. To mark an account as a service, a Service Principal Name (SPN) must be registered. With NetworkService this was done by the server itself (because servers can write SPNs on their own behalf); for any other account, by default only Domain Administrators can write an SPN. Use the SetSPN tool to register a Service Principal Name on an account; SetSPN can be downloaded, is part of the Windows Server 2003 Support Tools and is installed on Windows Server 2008 with the AD DS role.

To register an SPN for a service account, use the following syntax:

SetSPN -a MSSQLSvc/server1.domain.local:1433 domain\user

SetSPN -a MSSQLSvc/server1:1433 domain\user

Substitute the server name (FQDN and NetBIOS), the port where SQL Server listens (this could be different from 1433, especially when registering an SPN for additional instances) and the domain user account used as the SQL Server service account.
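To verify the result, SetSPN can also list the SPNs registered on an account;

SetSPN -l domain\user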

When running multiple SQL Server instances under the same domain account, it may be useful to check the approach listed in Step 3 of How to use Kerberos authentication in SQL Server, so the AD people have to be called upon only once for the service account, not for every instance installation.

Reference Environment

As a reference environment (which may double as the layout for a test network), the following machines, users and services are used in a single-domain configuration;

  • Windows Server 2003 or 2008: DC1 (domain controller), SERVER1 (member), SERVER2 (member)
  • Windows XP, Vista or 7: CLIENT (member)
  • SQL Server 2005 or 2008: SERVER1 (default instance) tcp 1433 static, SERVER1\SQLINST1 tcp dynamic (named instance), SERVER2\SQLINST1 (named instance) tcp 1433 static

To make it work, user DOMAIN\Bobbie has to be trusted for delegation in all four scenarios. Depending on the service accounts for SERVER1\SQLINST1 and SERVER2\SQLINST1, use the following settings;

| Service account for SERVER1\SQLINST1 | Service account for SERVER2\SQLINST1 | Users and/or computers to be trusted for delegation | SPNs to create |
| --- | --- | --- | --- |
| NetworkService | NetworkService | SERVER1, DOMAIN\Bobbie | <none> |
| NetworkService | DOMAIN\SRV_SQL2 | SERVER1, DOMAIN\Bobbie | MSSQLSvc/server2.domain.local:1433 DOMAIN\SRV_SQL2 |
| DOMAIN\SRV_SQL1 | NetworkService | DOMAIN\SRV_SQL1, DOMAIN\Bobbie | <none> |
| DOMAIN\SRV_SQL1 | DOMAIN\SRV_SQL2 | DOMAIN\SRV_SQL1, DOMAIN\Bobbie | MSSQLSvc/server2.domain.local:1433 DOMAIN\SRV_SQL2 |

Things to consider

When working with Kerberos, tickets are granted for a longer period of time (typically 10 hours), so configuration changes usually take a long time to become effective. Use the KERBTRAY and/or KLIST resource kit utilities to view and purge tickets in order to speed things up (this avoids rebooting systems or logging users off and on). Alternatively, consider shortening the ticket lifetimes to the minimum: 10 minutes for services and 1 hour for users.
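For example, purging the Kerberos tickets of the current session with KLIST comes down to the line below (exact syntax may differ per version);

klist purge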

When configuring the linked servers, it was best to do so from the server where the linked server is being configured. Depending on your role you may be a domain admin, so should domain admins be trusted for delegation in order to make setting up linked servers easier? Rather not! Imagine someone installed a malicious program or script on a system that is trusted for delegation and somehow a domain administrator executes that program or script; it could hurt any system or user in your network.

Saturday, November 21, 2009 8:56:50 PM (W. Europe Standard Time, UTC+01:00)