SQL Transactions

Transactions in SQL Server are very useful, yet I have found that not many people use them. I am not sure why that is, so I thought I would share how transactions work and the different transaction modes.

A transaction is a single unit of work. If a transaction is successful, all of the data modifications made during the transaction are committed and become a permanent part of the database. If a transaction encounters errors and must be canceled or rolled back, then all of the data modifications are erased.

SQL Server operates in the following transaction modes:
Autocommit transactions
Each individual statement is a transaction.

Explicit transactions
Each transaction is explicitly started with the BEGIN TRANSACTION statement and explicitly ended with a COMMIT or ROLLBACK statement.

Implicit transactions
A new transaction is implicitly started when the prior transaction completes, but each transaction is explicitly completed with a COMMIT or ROLLBACK statement.

Batch-scoped transactions
Applicable only to multiple active result sets (MARS), a Transact-SQL explicit or implicit transaction that starts under a MARS session becomes a batch-scoped transaction. A batch-scoped transaction that is not committed or rolled back when a batch completes is automatically rolled back by SQL Server.

From my experience, explicit transactions are the most commonly used of all the modes.
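
To make the first three modes concrete, here is a minimal sketch using the AdventureWorks table from the examples later in this post (the JobCandidateID values are arbitrary):

-- Autocommit (the default): each statement is its own transaction
DELETE FROM HumanResources.JobCandidate WHERE JobCandidateID = 13;  -- commits immediately

-- Explicit: you start and end the transaction yourself
BEGIN TRANSACTION;
DELETE FROM HumanResources.JobCandidate WHERE JobCandidateID = 14;
COMMIT TRANSACTION;

-- Implicit: SQL Server starts the transaction for you, but you must end it
SET IMPLICIT_TRANSACTIONS ON;
DELETE FROM HumanResources.JobCandidate WHERE JobCandidateID = 15;  -- a transaction opens here
COMMIT;                                                             -- and must be closed here
SET IMPLICIT_TRANSACTIONS OFF;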

Distributed Transactions:
You can use distributed transactions, where BEGIN DISTRIBUTED TRANSACTION specifies the start of a Transact-SQL distributed transaction managed by Microsoft Distributed Transaction Coordinator (MS DTC). This is only applicable to SQL Server, not Azure. Transaction-level snapshot isolation does not support distributed transactions.

Arguments:
transaction_name
Is a user-defined transaction name used to track the distributed transaction within MS DTC utilities. transaction_name must conform to the rules for identifiers and must be <= 32 characters.

@tran_name_variable
Is the name of a user-defined variable containing a transaction name used to track the distributed transaction within MS DTC utilities. The variable must be declared with a char, varchar, nchar, or nvarchar data type.

The instance of the SQL Server Database Engine executing the BEGIN DISTRIBUTED TRANSACTION statement is the transaction originator and controls the completion of the transaction. When a subsequent COMMIT TRANSACTION or ROLLBACK TRANSACTION statement is issued for the session, the controlling instance requests that MS DTC manage the completion of the distributed transaction across all of the instances involved.

For example, suppose BEGIN DISTRIBUTED TRANSACTION is issued on ServerA, the session calls a stored procedure on ServerB and another stored procedure on ServerC, and the stored procedure on ServerC executes a distributed query against ServerD. Then all four computers are involved in the distributed transaction, and the instance of the Database Engine on ServerA is the originating controlling instance for the transaction.

USE AdventureWorks2012;  
GO  
BEGIN DISTRIBUTED TRANSACTION;  
-- Delete candidate from local instance.  
DELETE AdventureWorks2012.HumanResources.JobCandidate  
    WHERE JobCandidateID = 13;  
-- Delete candidate from remote instance.  
DELETE RemoteServer.AdventureWorks2012.HumanResources.JobCandidate  
    WHERE JobCandidateID = 13;  
COMMIT TRANSACTION;  
GO  

Begin Transaction
Marks the starting point of an explicit, local transaction. Explicit transactions start with the BEGIN TRANSACTION statement and end with the COMMIT or ROLLBACK statement. BEGIN TRANSACTION increments @@TRANCOUNT by 1.

BEGIN TRANSACTION represents a point at which the data referenced by a connection is logically and physically consistent. If errors are encountered, all data modifications made after the BEGIN TRANSACTION can be rolled back to return the data to this known state of consistency. Each transaction lasts until either it completes without errors and COMMIT TRANSACTION is issued to make the modifications a permanent part of the database, or errors are encountered and all modifications are erased with a ROLLBACK TRANSACTION statement.

Although BEGIN TRANSACTION starts a local transaction, it is not recorded in the transaction log until the application subsequently performs an action that must be recorded in the log, such as executing an INSERT, UPDATE, or DELETE statement.

Naming multiple transactions in a series of nested transactions with a transaction name has little effect on the transaction. Only the first (outermost) transaction name is registered with the system. A rollback to any other name (other than a valid save point name) generates an error. None of the statements executed before the rollback is, in fact, rolled back at the time this error occurs. The statements are rolled back only when the outer transaction is rolled back.
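
A small sketch of that behaviour, assuming nothing beyond the rules above:

-- Only the outermost transaction name is registered with the system
BEGIN TRANSACTION OuterTran;         -- @@TRANCOUNT = 1
BEGIN TRANSACTION InnerTran;         -- @@TRANCOUNT = 2
SELECT @@TRANCOUNT AS TranCount;     -- returns 2
COMMIT TRANSACTION;                  -- decrements @@TRANCOUNT to 1; nothing is committed yet
-- ROLLBACK TRANSACTION InnerTran;   -- would raise an error: that name was never registered
ROLLBACK TRANSACTION OuterTran;      -- rolls back all of the work, @@TRANCOUNT = 0

A more typical pattern wraps the whole transaction in TRY...CATCH: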

BEGIN TRY
	BEGIN TRANSACTION;

	DELETE FROM HumanResources.JobCandidate
	WHERE JobCandidateID = 13;

	COMMIT TRANSACTION;
END TRY
BEGIN CATCH
	-- Only roll back if a transaction is actually open,
	-- otherwise the ROLLBACK itself raises an error
	IF @@TRANCOUNT > 0
		ROLLBACK TRANSACTION;
END CATCH

Hope this helps you in future when using transactions. If you are doing INSERTs, UPDATEs and DELETEs, it is advisable to use a transaction.

Setting an Alert on a Stored Procedure

Recently I came across a requirement to send out an alert whenever a report is populated with data. Unfortunately, SSRS only supports subscriptions, and I have not found a way to do that kind of alerting on an SSRS report.

A workaround I found for this was to set an alert on the stored procedure the report is using (hoping you are using stored procedures for your SSRS reports and not a large query in the report dataset). You can do this by wrapping the Database Mail procedure in an IF statement. See below:

CREATE PROCEDURE dbo.DepartmentAlert
AS
SELECT *
FROM HumanResources.EmployeeDepartmentHistory

-- If one or more records are returned, fire the email.
IF
(
	SELECT	COUNT(*)
	FROM HumanResources.EmployeeDepartmentHistory
) >= 1
BEGIN
	EXEC msdb.dbo.sp_send_dbmail @profile_name = 'Alerting'
								,@recipients = 'jeanne.c17@gmail.com'
								,@body = 'Department history available'
								,@subject = 'Department History Availability'
								,@body_format = 'HTML'
	;
END
;

Next I created an SSIS package that uses an Execute SQL Task to execute my stored procedure, then scheduled it to run every few hours. Whenever the IF condition is met, the mail is sent out to alert whoever is in the @recipients list.
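
If you would rather not create an SSIS package just for the scheduling, a SQL Server Agent job can do the same thing. This is only a sketch; the job name, schedule and database are assumptions you should adapt:

USE msdb;
GO
-- Hypothetical job that runs the alert proc every 4 hours
EXEC dbo.sp_add_job         @job_name = N'DepartmentAlertJob';
EXEC dbo.sp_add_jobstep     @job_name = N'DepartmentAlertJob',
                            @step_name = N'Run alert proc',
                            @subsystem = N'TSQL',
                            @command = N'EXEC dbo.DepartmentAlert;',
                            @database_name = N'AdventureWorks2012';  -- assumption: where the proc lives
EXEC dbo.sp_add_jobschedule @job_name = N'DepartmentAlertJob',
                            @name = N'Every4Hours',
                            @freq_type = 4,            -- daily
                            @freq_interval = 1,
                            @freq_subday_type = 8,     -- repeat in units of hours
                            @freq_subday_interval = 4;
EXEC dbo.sp_add_jobserver   @job_name = N'DepartmentAlertJob';
GO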

Hope this helps you as much as it helped me.

SSRS Cancel Report Still Running In The Background

Some people cancel an SSRS report that is taking too long by simply closing Visual Studio or their browser, then reopening it and trying to view the report again. In the meantime, the query behind the report they first tried to run is still executing on the database. This takes up quite a bit of resources, and people think their query has been cancelled, but this is not the case.

This is a common problem with users accessing reports. There is a way to stop the query from running in the background by using the KILL command, however you will need a DBA to do this for you unless you have permission to do it yourself.
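
To find the runaway session in the first place, you (or your DBA) can query the execution DMVs. A sketch, assuming you have VIEW SERVER STATE permission:

-- List active user requests, longest running first
SELECT r.session_id, s.login_name, r.status, r.command,
       r.total_elapsed_time / 1000 AS elapsed_seconds,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.is_user_process = 1
ORDER BY r.total_elapsed_time DESC;

-- Once you have identified the session (say it is 57):
-- KILL 57;   -- requires ALTER ANY CONNECTION; the rollback itself can take a while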

Hope this helps others who are struggling with the same issue 🙂

MDS “Service Unavailable” Error

After restarting a server that has MDS on, I could no longer access MDS via the browser. This seemed very strange as I have restarted the server multiple times and it had never done this before.

Turns out the Application Pool had stopped for MDS. So I started it up and then it would stop again, frustrating me further.

Then it occurred to me that the Application Pool was using my domain details and my password had changed. So I went into the advanced settings of my MDS Application Pool and updated my details under Identity.

And there we go, it works and I can access MDS again via the browser!

Hope this helps someone else with this error.

Windows Update KB2919355 Not Applicable

When trying to install SQL Server, Windows Update KB2919355 is a prerequisite. On a new server it is not always installed, and the SQL install stops until this is rectified.

Occasionally when you try to install the update you get the error that KB2919355 is not applicable, yet the update is not installed. After searching the internet high and low, a solution was found.

If you install Windows Update KB2975061 first and restart the machine and then install KB2919355 and restart the machine again, the SQL install prerequisites will be successful and you can continue with your install.

Hope this solution helps you and you get less grey hair and frustration trying to figure it out.

Setting Up SSIS in SQL Server Management Studio

Before you can deploy an SSIS project you need to set up SSIS in SQL Server Management Studio. When installing SQL Server you can select to install SSIS; however, this does not set up the SSISDB or configure any settings for SSIS.

Here are the steps for setting up SSIS in SQL Server Management Studio.

Step 1:
Right-click on Integration Services Catalogs and select Create Catalog.

Step 2:
Check the box next to Enable CLR Integration, set a password for your SSISDB, and click OK.

You will then see your SSISDB listed under Databases and under Integration Services Catalogs.
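
If you want to confirm the setup from T-SQL, a quick sanity check (assuming the catalog was created with the default name SSISDB):

-- Catalog-level settings live in the SSISDB catalog itself
SELECT property_name, property_value
FROM SSISDB.catalog.catalog_properties;

-- CLR integration must be enabled for the catalog to work
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'clr enabled';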

Now you will be able to deploy your SSIS project to your server where you have set up SSIS.

The next post will be on deploying an SSIS project.

MDS Backup Strategies

Recently a colleague and I had a discussion about which method is the best to use for backing up MDS (Master Data Services) for disaster recovery. We came to the conclusion it all depended on the SQL environment version you were recovering to. Below are the 2 different methods for backing up an MDS database.

Model Backup
Pros:

  • Fast to create; can back up specific versions
  • Will always be able to deploy a model to an MDS install
  • Can be automated via SSIS
  • Can be restored to a server version that is one higher (+1), the same (0) or one lower (-1) than the source

Cons:

  • If you want to save backups of different models, then the models will have to be backed up separately.
  • Security is not retained; it will have to be redone. Can take about 2 minutes per user. On a small model this should not be a problem, however, when scaled out it could be very time consuming.
  • Saving more than one backup of a model would require a script/maintenance task to remove backups older than X amount of time.

When deploying the model, the option DeployClone must be used, not DeployNew. DeployNew will remove the MUID and therefore will not allow any changes to be deployed to that model at a later stage. DeployClone keeps it all intact and will allow for future updates.
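
The model backup and deploy can be scripted with the MDSModelDeploy command-line tool that ships with MDS. A sketch only; the model, version, service and package names here are assumptions:

MDSModelDeploy createpackage -model "Customer" -version "VERSION_1" -service "MDS1" -package "C:\Backups\Customer.pkg" -includedata

MDSModelDeploy deployclone -package "C:\Backups\Customer.pkg" -service "MDS1"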

Database Backup
Pros:

  • Security is retained
  • Simple to set-up and maintain
  • Easy to restore
  • Maintenance tasks to clean up old backups

Cons:

  • Cannot be restored to a server version less than the current version.

The database backup for MDS can be scheduled just like other backups. You can use the maintenance wizard to set up the backup and clean up task on a schedule.
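
For example, a minimal backup statement, assuming the MDS database is simply named MDS:

BACKUP DATABASE [MDS]
TO DISK = N'D:\Backups\MDS.bak'
WITH COMPRESSION, CHECKSUM, INIT;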

If you know what version the disaster recovery environment will be on, for example the same version as the current MDS server version or higher, then it is simpler to use the Database Backup method. If you are uncertain of what version the disaster recovery version will be then you would want to use the Model Backup method in case you have to restore to a version less than the current MDS server version.

Hope this helps you to choose what is best suited for your environment!

Special thanks to Janlo Du Toit for the discussion on these methods 🙂

Contained Database

A contained database is a database that is isolated from other databases and from the instance of SQL Server that hosts the database. Basically it includes all database settings and the metadata within itself thereby resulting in no configuration dependencies on the instance of the SQL Server Database Engine where the database is actually installed.

This feature is available from SQL Server 2012 and it is available on Azure SQL Database. This feature is not available on Azure SQL Data Warehouse or Parallel Data Warehouse.

One of the benefits of contained databases is that you will no longer struggle with orphaned users when moving databases or working with high availability. SQL Server 2012 introduces the notion of boundaries, which are borders that define where a database and its metadata and settings end and where server-level features, metadata, and dependencies begin. In addition to implicitly allowing increased portability, boundaries are also used to help explicitly isolate data, settings, features, and capabilities.

There are 3 types of containment levels:

  • NONE. This default containment mode is what exists prior to SQL Server 2012. There are no boundaries defined or visible.
  • PARTIAL. With partially contained databases, you have the ability to define clearer boundaries between databases and the server, making it easier for metadata to be hosted within the databases. This, in turn, makes SQL Server databases more portable and less dependent on underlying hosts.
  • FULL. Full containment would enable greater database portability and allow for strict enforcement of containment boundaries as a means of fully isolating databases from each other and from the underlying host. (Note that shipped versions of SQL Server only support NONE and PARTIAL.)
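
Creating a partially contained database is straightforward; a sketch, with hypothetical database names:

-- Contained database authentication must be enabled at the instance level first
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Create a new partially contained database
CREATE DATABASE ContainedDemo
CONTAINMENT = PARTIAL;

-- Or convert an existing database
ALTER DATABASE ExistingDb
SET CONTAINMENT = PARTIAL;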

In SQL Server 2016 there are 4 ways in which SQL Server helps users to isolate the database from the instance.

  • The metadata that describes the database is maintained in the database and not in the master database.
  • All metadata are defined using the same collation.
  • User authentication can be performed by the database, reducing the database's dependency on the logins of the instance of SQL Server.
  • The SQL Server environment (DMVs, XEvents, etc.) reports and can act upon containment information.

There are two types of users for contained databases:

  • Contained database user with password. These are authenticated by the database.
  • Windows principals. Authorized windows users and members of authorized Windows groups can connect directly to the database and do not need logins in the master database. The database trusts the authentication by Windows.
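
Creating these users looks like this (the names and password are hypothetical, and ContainedDemo is the database from the sketch above):

USE ContainedDemo;

-- Contained database user with password: authenticated by the database itself
CREATE USER ContainedUser WITH PASSWORD = 'Str0ng!Passw0rd1';

-- Windows principal: no login in master is required
CREATE USER [MYDOMAIN\SomeUser];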

Benefits of using Partially Contained Databases
Database Movement
When moving a database, problems occur with some important information being unavailable after the move to another instance. Login information is stored within an instance and not within a database, so moving the database to another instance loses this information. A partially contained database can store this important information in the database itself, so the database still has it after it is moved.

Benefit of Contained Database Users with Always On
By reducing the ties to the instance of SQL Server, partially contained databases can be useful during failover when you use Always On Availability Groups. Contained users are a very significant feature in an Always On solution: if the users are contained users, then in the event of a failover people can connect to the failover node without logins being created on the instance hosting the failover node.

Initial Database Development
When creating a database, the developer may not always know where the database will be deployed or all the environmental variables that could impact the database. With a non-contained model, the developer would need to consider all the environmental variables and the impacts they could have on the database and develop accordingly. By using partially contained databases, the developer can identify instance-level impacts on the database.

Database Administration
Database administration becomes easier with maintaining database settings, instead of having the settings in the master database you can now maintain them in the database. This allows each database owner to have more control over their database without giving the database owner sysadmin permission.

Limitations with Partially Contained Databases
As with almost any feature, there are certain limitations. The following features are not allowed in a partially contained database:

  • Replication, change data capture, or change tracking
  • Numbered procedures
  • Schema-bound objects that depend on built-in functions with collation changes
  • Binding changes resulting from collation changes, including references to objects, columns, symbols, or types

Identifying Database Containment
There are two tools to identify the containment status of a database: the sys.dm_db_uncontained_entities view and the database_uncontained_usage extended event. sys.dm_db_uncontained_entities is a view that shows all potentially uncontained entities in the database; however, if any entities use dynamic SQL, it cannot be determined until runtime whether the entity is contained or not. The database_uncontained_usage XEvent fires whenever an uncontained entity is identified at run time; therefore, any uncontained user entities you have not run will not be identified by this XEvent.
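
For example, the view can be queried directly from inside the database you are checking:

-- Lists potentially uncontained entities and the feature that makes them uncontained
SELECT ue.class_desc,
       OBJECT_NAME(ue.major_id) AS object_name,  -- meaningful when the entity is an object
       ue.feature_name,
       ue.feature_type_name
FROM sys.dm_db_uncontained_entities AS ue;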

This could be an awesome feature to use for database developers from SQL Server 2012, hope this post was helpful for you!


Temp Table VS Table Variable

There have been many discussions on whether a temp table is better to use or a table variable. It depends on what you would like to do. They both have their pros, cons and limitations.

Table Variable Limitations

  • Table variable changes do not get rolled back during a transaction ROLLBACK whereas temp table changes will.
  • Table variables cannot have non-clustered indexes (prior to the inline index syntax added in SQL Server 2014)
  • Constraints cannot be added to a table variable after declaration; only inline PRIMARY KEY, UNIQUE, CHECK and DEFAULT definitions are allowed, and they cannot be named
  • Statistics cannot be created against table variables
  • Table variables cannot have a parallel plan, whereas temp tables are not restricted in this manner
  • Table variables cannot be selected into, altered, truncated or be the target of DBCC commands such as DBCC CHECKIDENT, or of SET IDENTITY_INSERT
  • Query hints are not supported
  • CHECK constraints on table variables are not considered by the optimizer for simplification, implied predicates or contradiction detection.

Similarities between Temp Tables and Table Variables

  • Clustered indexes can be created on both
  • Both are logged in the transaction log
  • Both are instantiated in tempdb
  • You can perform DML queries against the tables

Logical Location
Table variables behave more as if they were part of the current database. If not specified explicitly, table variable column collations will be that of the current database, whereas temp tables use the default collation of tempdb. User-defined data types and XML collections must be in tempdb for you to use them in temp tables; table variables, however, can use them from the current database.

  • In a contained database, temporary table data is collated in the collation of the contained database. All metadata associated with the temporary tables will be in the catalog collation.
  • Named constraints may not be used in temporary tables.
  • Temporary tables may not refer to user-defined functions, user-defined types or XML collections.

Visibility to different scopes
Table variables are limited in such that they can only be accessed in the current batch and scope in which they are declared. Temp tables are accessible within child batches (nested triggers, procedures, exec calls). Temp tables created at the outer scope (@@NESTLEVEL=0) can span batches too as they persist until the session ends.
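
A quick way to see the scoping difference:

DECLARE @t TABLE (i int);
EXEC ('SELECT i FROM @t');   -- fails: the table variable is not visible in the child batch

CREATE TABLE #t (i int);
EXEC ('SELECT i FROM #t');   -- works: the temp table is visible to child batches
DROP TABLE #t;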

Lifetime
Table variables are created implicitly when a batch containing a DECLARE @MyTableVariable TABLE statement is executed, before any user code in that batch runs, and they are dropped implicitly at the end. Temp tables are created explicitly when the CREATE TABLE #MyTemp statement is encountered and can be dropped explicitly with DROP TABLE, or they will be dropped when the session ends. If created in a child batch with @@NESTLEVEL > 0, a temp table is dropped implicitly when the batch ends.

Within stored routines both types can be cached rather than repeatedly creating and dropping new tables. There are restrictions on when this caching can occur; it is possible to violate them with temp tables, while the restrictions on table variables prevent any violations. The maintenance overhead on cached temp tables is slightly greater than for table variables. For more info on this, take a look here.

Transactions
Temp table operations are carried out as part of the user transaction itself, whereas table variable operations are carried out as system transactions, independent of any outer user transaction. A ROLLBACK command will therefore affect a temp table but leave a table variable untouched.
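
This is easy to demonstrate:

-- ROLLBACK affects the temp table but leaves the table variable untouched
DECLARE @tv TABLE (id int);
CREATE TABLE #tt (id int);

BEGIN TRANSACTION;
INSERT INTO @tv VALUES (1);
INSERT INTO #tt VALUES (1);
ROLLBACK TRANSACTION;

SELECT COUNT(*) AS tv_rows FROM @tv;  -- 1: the table variable kept its row
SELECT COUNT(*) AS tt_rows FROM #tt;  -- 0: the temp table insert was rolled back

DROP TABLE #tt;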

Logging
Both generate log records in the tempdb transaction log. A common misconception is that this is not the case for table variables, but the quantity of logging is very similar. A very important difference is that log records related to temp tables cannot be cleared out until any containing user transaction finishes, so a long-running transaction that at some point writes to a temp table will prevent log truncation in tempdb, whereas the autonomous transactions spawned for table variables do not.

Indexes
For versions prior to SQL Server 2014, indexes can only be created implicitly on table variables as a side effect of adding a unique constraint or primary key. This means that only unique indexes are supported. Table variables do not support included columns, filtered indexes (until 2016) or partitioning, whereas temp tables do.

In SQL Server 2014, non-unique indexes can be declared inline in the table variable definition, and from SQL Server 2016 filtered indexes can be applied to table variables.
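
For example, the inline syntax introduced in 2014 (the table and index names are hypothetical):

DECLARE @orders TABLE
(
    order_id    int PRIMARY KEY,        -- unique index via constraint (all versions)
    customer_id int INDEX IX_customer   -- inline non-unique index (2014+)
);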

Hope the above gives you a bit more clarity on the benefits of each one in helping you decide whether to use a temp table or table variable in your scenarios.


Limit Memory Used By SQL

Ever noticed SQL Server using most of the memory (RAM) on the server while all your other applications start slowing down? This could be because SQL Server is using the maximum amount of memory it can get hold of.

When SQL Server is installed, the default maximum amount of memory it is allowed to use is set to 2147483647 MB. You can check this by right-clicking on the server at the top of Object Explorer, selecting Properties, and then selecting the Memory page.
To decide what to set SQL Server's max memory to, you can use this simple rule: leave 4GB or 10% of total memory free, whichever is larger on your instance, as a starting point, and adjust as needed. The free memory will be consumed by the OS and other applications on the server.

To calculate this, remember 1GB is 1024MB. If your server has 32GB of RAM, 10% is only 3.2GB, so you leave the larger value of 4GB free, giving SQL Server 28GB: 28 x 1024 = 28672MB. You would then enter this value as the max server memory.
NB! This does not require restarting your SQL Server instance, but it will cause execution plans to recompile.
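
If you prefer T-SQL to the GUI, the same setting can be changed with sp_configure:

-- max server memory is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 28672 MB = 28GB, leaving 4GB free on a 32GB server
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;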