Friday, 21 August 2015

Auditing Failed Logins in SQL Server


Problem
An auditor has taken a look at our SQL Servers and has told us that we need to audit login failures to the SQL Servers themselves. How do we do this?
Solution
SQL Server permits the auditing of both login successes and failures, depending on your need. You turn this auditing on through SQL Server Management Studio: connect to the SQL Server in Object Explorer, right-click the server, and choose the Properties option from the pop-up menu. You should see the server properties like so:
Server Properties
You'll want to click on the Security page (highlighted in the previous image) which will bring you to where you can set the login auditing:
Security Page
There are four options available:
  • None - Neither successful nor failed logins will be audited.
  • Failed logins only - Failed logins will be audited, but successful logins will be ignored.
  • Successful logins only - Successful logins will be audited, but failed logins will be ignored.
  • Both failed and successful logins - Logins will be audited regardless of success or failure.
When you've made your choice, click the OK button. The setting for login auditing is actually a registry entry which differs based on SQL Server version and SQL Server instance name. SQL Server only reads this setting when the SQL Server service starts up. Therefore, for this setting to take effect, you'll need to restart the SQL Server service. Because it is a registry setting, the only easy way to set how to audit is through the GUI. There isn't a way using T-SQL without using one of the extended stored procedures that hits the registry directly.
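If you do want to script the change, the community-known workaround is the undocumented xp_instance_regwrite extended stored procedure, which writes to the instance-specific registry path for you. A minimal sketch follows; the AuditLevel values (0 = none, 1 = successful only, 2 = failed only, 3 = both) mirror the four GUI options, but verify the registry path and values on your version before relying on this:

-- set login auditing to failed logins only (AuditLevel = 2)
-- undocumented; verify the registry path for your version and instance first
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'AuditLevel',
    REG_DWORD,
    2;

As with the GUI, the change only takes effect after the SQL Server service is restarted.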
A word about the various settings. At the very least, you should be auditing failed logins on production systems. There may be cases where it is necessary to audit successful logins as well, but realize that doing so will likely generate a lot of events to sift through. On extremely sensitive systems that level of control is important; on less critical systems, you may be better served auditing only failed logins so you don't get overwhelmed by the volume. If you have a log management solution, by all means audit both.

Once you've turned on auditing, the next step is to be able to find the results of said auditing. The first place where the audit information is recorded is the SQL Server log. For instance, an audit success by the "sa" account and an audit failure are shown in the following excerpt:
SQL Server log - auditing
If you want to view the contents of the SQL Server log by using T-SQL, there's an undocumented (but well known) extended stored procedure called xp_readerrorlog. You can use it to dump the contents of the error log to a recordset:
EXEC xp_readerrorlog;
To read further back, you can give this extended stored procedure an integer parameter which corresponds to the order of the SQL Server logs: 0 represents the current log, with each number after that referring to the next one back. So to see the contents of the third log, counting the current one as the first, you would pass a parameter of 2 and execute:
EXEC xp_readerrorlog 2;
To search the current error log and return only failed logins, you can use the following command (sp_readerrorlog is a wrapper around xp_readerrorlog). The first parameter specifies the error log (0 = current), the second specifies the type of log (1 = SQL error log), and the third specifies the string to search for:
EXEC sp_readerrorlog 0, 1, 'Login failed';
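sp_readerrorlog also accepts a fourth parameter, a second search string; returned rows must match both. For example, to narrow the results to a particular login (the account name here is just an illustration):

EXEC sp_readerrorlog 0, 1, 'Login failed', 'sa';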

Another good place to look, especially if you have log management software collecting Windows events, is the Application event log for the operating system. You can view this by using the Computer Management tool. If you expand System Tools and then Event Viewer, you'll see the Application event log like so:
Computer Management - App Event Log
If you look in this event log, you'll be looking for events with a source of MSSQLSERVER or MSSQL$<Instance Name>. For instance, the following shows the audit success and failure events we looked at before in the SQL Server log:
App Event Log - SQL Server Login Events
Successful logins for SQL Server 2005 and 2008 will have an event ID of 18454 and failed logins will have an event ID of 18456. SQL Server 2000 uses the same event ID for both, making it impossible to determine whether the event signifies a success or failure without looking at the event details. As a result, I would recommend auditing only failures there to eliminate the confusion. Once you have identified the correct events, you can look at the event log details to see what login was attempted. For instance, here is the failed login attempt:
Event - SQL Server Failed Login 
and here is the successful login attempt:
Event - SQL Server Login Success
The details of what account was attempted are stored within the description of the event log entry. Therefore, if you need to automate the retrieval of audit successes or failures, make sure the application can retrieve the details and parse the text accordingly.
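If you'd rather do that parsing on the SQL Server side, one approach is to capture the error log output into a temporary table and filter it there. A minimal sketch, assuming the three-column result set (LogDate, ProcessInfo, Text) that sp_readerrorlog returns:

-- capture failed-login messages into a temp table for further parsing
CREATE TABLE #ErrorLog (
    LogDate     datetime,
    ProcessInfo nvarchar(50),
    LogText     nvarchar(4000)
);
INSERT INTO #ErrorLog
EXEC sp_readerrorlog 0, 1, 'Login failed';

-- most recent failures first; parse the account name out of LogText as needed
SELECT LogDate, LogText
FROM #ErrorLog
ORDER BY LogDate DESC;

DROP TABLE #ErrorLog;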

Friday, 14 August 2015

Top 10 Index T-SQL Statements That SQL Server DBAs Should Know




SQL Server DBAs know that indexes in databases are very similar to indexes in libraries. An index in a database is a structure associated with a table or view that speeds retrieval of rows from the table or view.
This article lists the top index-related T-SQL statements that are useful for SQL Server database administrators. The statements are classified into three categories: index definition (create), querying index-related metadata, and maintenance.

Definition - Create Index

1. Clustered Index

Clustered indexes store the data rows in sorted order in the table based on their key values. Only one clustered index can be created per table, because the data rows themselves can only be sorted in one order.
A clustered index can be created when adding a constraint, such as a primary key, to an existing table. Example:


ALTER TABLE [MyAddress]
ADD CONSTRAINT [PK_Address_AddressID] PRIMARY KEY CLUSTERED
(
    [AddressID] ASC
) ON [PRIMARY]
GO
A clustered index can also be created directly on a column, with no constraint-related clause. Example:


CREATE CLUSTERED INDEX [MyAddress_id_CIX] ON [MyAddress1]
(
    [ID] ASC
) ON [PRIMARY]
GO

2. Non Clustered Index

Generally, nonclustered indexes are created to improve the performance of frequently used queries not covered by the clustered index. In a nonclustered index, the logical order of the index does not match the physical stored order of the rows on disk.
A nonclustered index can be created on an existing table, covering columns not covered by the clustered index. Example:


CREATE UNIQUE NONCLUSTERED INDEX [NIX_col5_col2_col3_col4_col6]
ON [MyAddress]
(
    [AddressLine1] ASC,
    [AddressLine2] ASC,
    [City] ASC,
    [StateProvinceID] ASC,
    [PostalCode] ASC
) ON [PRIMARY]
GO
A nonclustered index is also created behind the scenes when adding a UNIQUE constraint to an existing table. Example (the [Name] column is illustrative):


ALTER TABLE [MyAddressType]
-- a UNIQUE constraint is enforced by a nonclustered index under the covers
ADD CONSTRAINT [UQ_MyAddressType_Name] UNIQUE NONCLUSTERED
(
    [Name] ASC
)
GO

3. XML Index

An XML index can be created on an XML column; the table must have a clustered index on its primary key. An XML index can be either primary or secondary.
A primary XML index can be created as shown below:


CREATE PRIMARY XML INDEX idx_xCol_MyTable on MyTable (xCol)
A secondary XML index requires the primary XML index to exist first, and can be created as shown below:


CREATE TABLE MyTable (Col1 INT PRIMARY KEY, XmlCol XML)
GO

-- Create the primary XML index.
CREATE PRIMARY XML INDEX PIdx_MyTable_XmlCol
ON MyTable(XmlCol)
GO

-- Create secondary XML indexes (PATH, VALUE, PROPERTY).
CREATE XML INDEX PIdx_MyTable_XmlCol_PATH ON MyTable(XmlCol)
USING XML INDEX PIdx_MyTable_XmlCol
FOR PATH
GO

CREATE XML INDEX PIdx_MyTable_XmlCol_VALUE ON MyTable(XmlCol)
USING XML INDEX PIdx_MyTable_XmlCol
FOR VALUE
GO

4. Spatial Index

SQL Server 2008 introduced a special type of column called a spatial column: a table column that contains data of a spatial data type, such as geometry or geography.
A spatial index can be created using the following syntax:


CREATE TABLE MySpatialTable(id int primary key, geometry_col geometry);

CREATE SPATIAL INDEX SIndx_MySpatialTable_geometry_col1
    ON MySpatialTable(geometry_col)
    WITH ( BOUNDING_BOX = ( 0, 0, 500, 200 ) );

Query Index-Related Metadata

5. Find all Indexes

The following query can be used to query all the tables, columns and indexes on the current database:


SELECT OBJECT_SCHEMA_NAME(BaseT.[object_id], DB_ID()) AS [Schema],
    BaseT.[name] AS [table_name], I.[name] AS [index_name], AC.[name] AS [column_name],
    I.[type_desc]
FROM sys.[tables] AS BaseT
    INNER JOIN sys.[indexes] I ON BaseT.[object_id] = I.[object_id]
    -- join on index_id as well, so each index only picks up its own columns
    INNER JOIN sys.[index_columns] IC ON I.[object_id] = IC.[object_id] AND I.[index_id] = IC.[index_id]
    INNER JOIN sys.[all_columns] AC ON BaseT.[object_id] = AC.[object_id] AND IC.[column_id] = AC.[column_id]
WHERE BaseT.[is_ms_shipped] = 0 AND I.[type_desc] <> 'HEAP'
ORDER BY BaseT.[name], I.[index_id], IC.[key_ordinal]

6. Fragmentation

The following query can be used to find the index fragmentation on all the tables in the current database:


SELECT OBJECT_NAME(IPS.object_id) AS [TableName],
    SI.name AS [IndexName],
    IPS.index_type_desc,
    IPS.avg_fragmentation_in_percent,
    IPS.avg_fragment_size_in_pages,
    IPS.avg_page_space_used_in_percent,
    IPS.record_count,
    IPS.ghost_record_count,
    IPS.fragment_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') IPS
    JOIN sys.tables ST WITH (NOLOCK) ON IPS.object_id = ST.object_id
    JOIN sys.indexes SI WITH (NOLOCK) ON IPS.object_id = SI.object_id AND IPS.index_id = SI.index_id
WHERE ST.is_ms_shipped = 0
ORDER BY IPS.avg_fragment_size_in_pages DESC
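Once you know the fragmentation levels, you can act on them. A minimal sketch that generates maintenance statements following the commonly cited rule of thumb (REORGANIZE between roughly 5% and 30% fragmentation, REBUILD above 30%); the thresholds are guidance, not hard rules, so adjust them for your environment:

-- generate ALTER INDEX statements based on fragmentation thresholds
SELECT 'ALTER INDEX ' + QUOTENAME(SI.name)
    + ' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(IPS.object_id)) + '.' + QUOTENAME(OBJECT_NAME(IPS.object_id))
    + CASE WHEN IPS.avg_fragmentation_in_percent > 30
           THEN ' REBUILD;'       -- heavy fragmentation: rebuild
           ELSE ' REORGANIZE;'    -- light fragmentation: reorganize
      END AS maintenance_statement
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') IPS
    JOIN sys.indexes SI ON IPS.object_id = SI.object_id AND IPS.index_id = SI.index_id
WHERE IPS.avg_fragmentation_in_percent > 5
    AND SI.name IS NOT NULL;  -- exclude heaps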

7. Missing index

SQL Server keeps track of the indexes that it thinks you should create to improve query performance. The following query lists all missing indexes and generates a CREATE INDEX statement for each:


SELECT sys.objects.name
    , (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) AS Impact
    , 'CREATE NONCLUSTERED INDEX ix_IndexName ON ' + sys.objects.name COLLATE DATABASE_DEFAULT
        + ' ( ' + ISNULL(mid.equality_columns, '')
        + CASE WHEN mid.inequality_columns IS NULL THEN ''
               ELSE CASE WHEN mid.equality_columns IS NULL THEN '' ELSE ',' END
                    + mid.inequality_columns END
        + ' ) '
        + CASE WHEN mid.included_columns IS NULL THEN ''
               ELSE 'INCLUDE (' + mid.included_columns + ')' END
        + ';' AS CreateIndexStatement
    , mid.equality_columns
    , mid.inequality_columns
    , mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
    INNER JOIN sys.dm_db_missing_index_groups AS mig ON migs.group_handle = mig.index_group_handle
    INNER JOIN sys.dm_db_missing_index_details AS mid ON mig.index_handle = mid.index_handle AND mid.database_id = DB_ID()
    INNER JOIN sys.objects WITH (NOLOCK) ON mid.object_id = sys.objects.object_id
WHERE migs.group_handle IN
    (
        SELECT TOP (500) group_handle
        FROM sys.dm_db_missing_index_group_stats WITH (NOLOCK)
        ORDER BY (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) DESC
    )
    AND OBJECTPROPERTY(sys.objects.object_id, 'isusertable') = 1
ORDER BY 2 DESC, 3 DESC

8. Unused index

The following statement lists nonclustered indexes that are written to but rarely read (note that sys.dm_db_index_usage_stats is reset when the SQL Server service restarts, so judge with care). It also generates a DROP INDEX statement for each, which can come in handy when deleting the indexes.


SELECT o.name, indexname = i.name, i.index_id
    , reads = user_seeks + user_scans + user_lookups
    , writes = user_updates
    , rows = (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id)
    , CASE
          WHEN s.user_updates < 1 THEN 100
          ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
      END AS reads_per_write
    , 'DROP INDEX ' + QUOTENAME(i.name)
        + ' ON ' + QUOTENAME(c.name) + '.' + QUOTENAME(OBJECT_NAME(s.object_id)) AS 'drop statement'
FROM sys.dm_db_index_usage_stats s
    INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
    INNER JOIN sys.objects o ON s.object_id = o.object_id
    INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    AND s.database_id = DB_ID()
    AND i.type_desc = 'nonclustered'
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0
    AND (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
ORDER BY reads

Index Maintenance

9. Rebuild index

When an index gets fragmented, it requires defragmentation. One option is the REBUILD clause of the ALTER INDEX statement, which is equivalent to DBCC DBREINDEX in SQL Server versions prior to 2005. The command to rebuild an index is as follows:


USE AdventureWorks2008R2;
GO
ALTER INDEX PK_Employee_BusinessEntityID ON HumanResources.Employee
REBUILD;
GO
Naming a single index rebuilds only that index; to rebuild every index on the table, including the nonclustered ones, specify ALL instead of an index name, as sketched below.
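A minimal sketch of rebuilding all indexes on the same table:

USE AdventureWorks2008R2;
GO
-- rebuild the clustered index and all nonclustered indexes on the table
ALTER INDEX ALL ON HumanResources.Employee
REBUILD;
GO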

10. REORGANIZE index

The REORGANIZE clause specifies that the index leaf level will be reorganized. A REORGANIZE is always performed online, and the command is equivalent to DBCC INDEXDEFRAG in SQL Server versions prior to 2005.


USE AdventureWorks2008R2;
GO
ALTER INDEX PK_ProductPhoto_ProductPhotoID ON Production.ProductPhoto
REORGANIZE;
GO

Conclusion

I hope you've found this list of the top index-related T-SQL statements useful.


SQL SERVER Backup Timeline and Understanding of Database Restore Process in Full Recovery Model



I assume you all know that there are three recovery models, so we will not discuss that commonly known topic today. Instead, we will just talk about how to restore a database that is in the full recovery model.
In general, backups of a database in the full recovery model are taken as three different kinds of backup files:
  1. Full Database Backup
  2. Differential Database Backup
  3. Log Backup
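For reference, here is a minimal sketch of taking each kind of backup (the database name and file paths are hypothetical):

-- 1. full database backup
BACKUP DATABASE MyDB TO DISK = N'C:\Backup\MyDB_Full.bak';
-- 2. differential database backup (changes since the last full backup)
BACKUP DATABASE MyDB TO DISK = N'C:\Backup\MyDB_Diff.bak' WITH DIFFERENTIAL;
-- 3. transaction log backup
BACKUP LOG MyDB TO DISK = N'C:\Backup\MyDB_Log.trn';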
What really perplexes most people is the differential database backup.
Let me narrate a real story here. A DBA in a local city once called me up and laughingly said that he had just found something not so smart about the SQL Server product team in terms of database backup. I just could not believe this negative remark against the SQL Server product team, so I asked him to explain. He told me that, in his opinion, the differential database backup is an extra step. I asked him how he restores his database. He replied that he starts with the first full database backup and then sequentially restores all the log backups and differential database backups. He continued that he had figured out something interesting: if he does not restore the differential database backups and only restores the log backups, it works just fine. According to him, this is an unnecessary step.
Well, I am extremely happy to say he is wrong. He has totally failed to understand the concept of the differential database backup. I called up another friend in his company, told him this story, and he found it funny too! He suggested that he would explain to my friend that he needs to restore all the differential backups first and then all the log backups taken after the last differential backup. I was again amazed and didn't know what to do. He was wrong too!
After interacting with many DBAs I have realized that many people find it quite confusing how a differential database backup is useful, and many are not aware of the correct method to restore a database in the full recovery model. Before I start explaining, please understand the following diagram, where I have drawn the timeline showing when each backup was taken.
Let us remember the golden rule for restore first.

‘After restoring the full database backup, restore the latest differential database backup and all the transaction log backups taken after that to bring the database to the current state.’
From the above rule, it is very clear that there is no need to restore all the differential database backups when restoring a database. You only need to restore the latest differential database backup. A differential database backup contains all the changes made to the database since the last full backup; it is cumulative. That is why each differential backup is larger than the previous one: every differential backup contains all the data of the previous differential backups. You just have to restore the latest differential database backup, and right after that restore all the transaction log backups taken after it, to bring the database to the current state.
If you do not take differential database backups and rely only on transaction log backups, you will have to restore every transaction log backup, which is very time consuming and not recommended in a disaster situation where getting the server back online is the priority. Differential database backups are thus very useful for saving time and are very convenient for restoring a database, compared to restoring a long chain of transaction log backups, which must be done very carefully without missing a single backup in between.
In our example, there are multiple paths to get the server to the current state (a T-SQL sketch of the fastest path follows the list):
Path 1 (SLOWEST): Full Database Restore >> Log Backup 0_1 >> Log Backup 1_1 to all remaining logs.
Path 2: Full Database Restore >> Differential Database Backup 1 >> Log Backup 1_1 to all remaining logs.
Path 3 (FASTEST): Full Database Restore >> Differential Database Backup 2 >> Log Backup 2_1 to all remaining logs.
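A minimal T-SQL sketch of Path 3, assuming hypothetical file names and one remaining log backup:

-- restore the full backup, leaving the database in the restoring state
RESTORE DATABASE MyDB FROM DISK = N'C:\Backup\MyDB_Full.bak' WITH NORECOVERY;
-- restore only the latest differential backup
RESTORE DATABASE MyDB FROM DISK = N'C:\Backup\MyDB_Diff2.bak' WITH NORECOVERY;
-- restore every log backup taken after that differential, in sequence
RESTORE LOG MyDB FROM DISK = N'C:\Backup\MyDB_Log2_1.trn' WITH NORECOVERY;
-- ...restore any remaining log backups the same way...
-- bring the database online
RESTORE DATABASE MyDB WITH RECOVERY;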