Tuesday, 27 October 2015

Dynamic Quorum in Windows Server 2012 Cluster

The dynamic quorum model is the default quorum configuration when you create a cluster with Windows Server 2012. Nodes can also be manually removed from voting. To keep a cluster up and running, quorum must be maintained: each node in the cluster has one vote, and more than half of the votes must remain online for the cluster to keep going. That means a cluster with 7 nodes needs a minimum of 4 [(7+1)/2] nodes up and running to stay online.

As per the example above, in Windows Server 2008 R2 we need to make sure that at least 4 nodes stay running to keep the cluster up; if only 3 nodes are running, the cluster service terminates immediately and the cluster has to be manually forced to start and brought back online. In a Windows Server 2012 failover cluster, however, when a node goes down the number of votes needed to maintain the cluster is recalculated dynamically. So, continuing the earlier example, if one node goes down the total vote count becomes 6, the required majority is reduced accordingly, and the cluster keeps running. Although dynamic quorum solved the problem of the entire cluster shutting down for lack of a quorum of votes, it did not eliminate the need to manually reconfigure the witness resource when the node count changes.

Windows Server 2012 R2 introduced the concept of a dynamic witness, which dynamically gives the witness resource a vote, or takes it away, depending on the number of voting nodes in the cluster. If the number of node votes is even, the witness is needed and is given a vote; if it is odd, the witness is dynamically removed from the voting mechanism. When creating a failover cluster with Windows Server 2012 R2 we therefore always configure a witness resource, and the failover clustering feature determines when the witness should have a vote.
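
If the cluster hosts SQL Server, the current vote assignment can also be viewed from inside the database engine through the sys.dm_hadr_cluster_members DMV (a quick sketch; it only returns rows when the instance is part of a Windows Server Failover Cluster, for example for AlwaysOn Availability Groups):

-- Quorum votes per cluster member, as seen from a SQL Server instance in the WSFC
SELECT member_name,
       member_type_desc,
       member_state_desc,
       number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;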

Sunday, 23 November 2014

Internal and External Fragmentation in SQL Server


Fragmentation is the non-contiguous storage of data. Fragmentation adversely affects the performance of SQL Server. There are two main types of fragmentation:

1.       Internal Fragmentation

Internal fragmentation occurs when space is wasted inside a page, i.e. there is unused space between the records on a page. Insert, update and delete operations fill pages unevenly, and when a full page has to accept more data it undergoes a "page split": the page divides, with roughly half of its rows moved to a newly allocated page, leaving both pages partially empty. Internal fragmentation decreases page density, so the server has to read more pages (more I/O and more buffer pool memory) to return the same amount of data.
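
Page density can be checked with the avg_page_space_used_in_percent column of sys.dm_db_index_physical_stats (a sketch; the table name is just an example, and this column is only populated in SAMPLED or DETAILED mode):

-- A low avg_page_space_used_in_percent means pages are mostly empty (internal fragmentation)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.avg_page_space_used_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Employees'), NULL, NULL, 'SAMPLED') AS ips;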

 
 


2.       External Fragmentation (Logical Fragmentation)
 
External fragmentation occurs when the extents (and pages) of a table or index are not physically stored in their logical order, i.e. the next-page pointer does not point to the next physical page. It can be identified with the avg_fragmentation_in_percent column of sys.dm_db_index_physical_stats. The value is not very relevant if the index has only a small number of pages, or if all of its pages are already in the buffer pool, because logical I/Os are then not necessarily physical I/Os.
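
A sketch of how this is usually checked, together with the commonly quoted 5%/30% maintenance guideline (the table name is again just an example):

-- avg_fragmentation_in_percent measures logical (external) fragmentation
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Employees'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
-- Common guideline: REORGANIZE between roughly 5% and 30%, REBUILD above 30%,
-- and ignore the value for indexes with only a handful of pages.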

Thursday, 16 October 2014

Isolation Levels in SQL Server


Each isolation level builds on the guarantees of the one before it. I am not explaining the first two isolation levels (Read Committed and Read Uncommitted) as everybody is familiar with those. The others can be briefly explained as below:

Repeatable Read:- Repeatable Read guarantees that records read by an earlier SELECT in the transaction will not be changed or deleted by other transactions. It does not stop new records from being inserted, so phantom reads (subsequent reads of the same data within the transaction returning additional rows) are still possible at this isolation level.

Serializable:- This isolation level takes Repeatable Read and adds the guarantee that no new rows will be inserted into the ranges it has read, eliminating the chance of phantom reads. It does this by placing range locks on the queried data, which causes any other transaction trying to modify or insert data covered by those ranges to wait until this transaction has finished.

Snapshot:- This provides read consistency similar to Serializable, with one key difference: it doesn't block other queries from inserting or updating the data touched by the snapshot transaction. Instead, row versioning is used; when data is changed the old version is kept in tempdb, so existing transactions see the version without the change. When all transactions that started before the change have completed, the previous row version is removed from tempdb. This means that even if another transaction has made changes, you will always get the same results inside your transaction as you did the first time.
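
For reference, the isolation level is set per session, and Snapshot additionally has to be enabled at the database level before it can be used (a sketch; the database and table names are hypothetical):

-- Allow snapshot isolation on the database (required once, before any snapshot transaction)
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
GO

USE SalesDB;
GO

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;      -- or SERIALIZABLE / REPEATABLE READ
BEGIN TRANSACTION;
    -- Repeated reads inside this transaction return the same row versions
    SELECT OrderID, Status FROM dbo.Orders WHERE CustomerID = 42;
COMMIT TRANSACTION;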

 
The table below summarises the behaviour of each isolation level:

Isolation Level      Dirty Reads   Non-Repeatable Reads   Phantom Reads
Read Uncommitted     Y             Y                      Y
Read Committed       N             Y                      Y
Repeatable Read      N             N                      Y
Serializable         N             N                      N

(Weakest isolation at the top, strongest at the bottom.)

Thursday, 30 May 2013

FORCESEEK HINT is extended in Denali

When SQL Server chooses a seek over a scan to retrieve records, it compares the index key values with the search input and, based on the result of that comparison, the query engine moves to the appropriate page. Starting with SQL Server 2012, when an index has multiple key columns and we don't want all of them to be considered, the FORCESEEK hint can specify which index, and which of its key columns, the seek should use:

SELECT EmployeeID, EmployeeName, Account
FROM Employees WITH (FORCESEEK (Idx_Employee (EmployeeID)))   -- seek on Idx_Employee using only the EmployeeID key column
WHERE EmployeeID = 1024;

Online Index Rebuild in Denali

Previous versions of SQL Server did not allow index operations (CREATE, REBUILD, DROP) to run in ONLINE mode if the index included a large value type column (VARCHAR(MAX), NVARCHAR(MAX), VARBINARY(MAX), etc.).
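
For illustration, a sketch of such an operation (the table, column and index names here are hypothetical):

-- Hypothetical table whose clustered index carries a large value type column
CREATE TABLE dbo.Documents
(
    DocumentID INT           NOT NULL PRIMARY KEY CLUSTERED,
    Title      NVARCHAR(200) NOT NULL,
    Body       NVARCHAR(MAX) NULL          -- LOB column stored in the clustered index leaf level
);
GO

-- Online rebuild of an index that includes the NVARCHAR(MAX) column
ALTER INDEX ALL ON dbo.Documents
REBUILD WITH (ONLINE = ON);                -- ONLINE = ON requires Enterprise edition
GO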

This limitation has been removed in SQL Server 2012 (Denali); a rebuild like the one above runs without error on a SQL Server 2012 instance.

 

Saturday, 14 July 2012

Restrictions on TEMPDB

The following operations cannot be performed on the tempdb database:
  • Adding filegroups.
  • Backing up or restoring the database.
  • Changing collation. The default collation is the server collation.
  • Changing the database owner. tempdb is owned by dbo.
  • Creating a database snapshot.
  • Dropping the database.
  • Dropping the guest user from the database.
  • Enabling change data capture.
  • Participating in database mirroring.
  • Removing the primary filegroup, primary data file, or log file.
  • Renaming the database or primary filegroup.
  • Running DBCC CHECKALLOC.
  • Running DBCC CHECKCATALOG.
  • Setting the database to OFFLINE.
  • Setting the database or primary filegroup to READ_ONLY.
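
To illustrate a couple of these restrictions, both of the statements below are simply rejected with an error (the backup path is just a placeholder):

-- Backing up tempdb is not allowed
BACKUP DATABASE tempdb
TO DISK = N'C:\Backup\tempdb.bak';

-- Taking tempdb offline is not allowed either
ALTER DATABASE tempdb SET OFFLINE;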