Visual Studio 2013 Database Project: Working the Project LocalDbs

Overview

Visual Studio (VS) has evolved much to what it is capable of today for database development and deployment.  With the addition of a much better table create facility Visual Studio is the development tool of choice for the database developer.  SQL Server Management Studio still has its place as a light free tool, but for serious database development with Team Foundation Integration, Visual Studio is the tool of choice.

Using Local Databases

Visual Studio creates local databases when a database project is created, viz.

  1. MSSQLLocalDB — the SQL Server 2014 LocalDB instance
  2. ProjectsV12 — created by SQL Server Data Tools (SSDT)
  3. V11.0 — the SQL Server 2012 LocalDB instance

VS SQL Server Object Explorer

These local database instances are stored on the local drive of the computer that runs Visual Studio, at: C:\Users\hendriekusr\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances.

The server name, consequently, is (localdb)\ProjectsV12.  Details about these installed instances can be obtained by running SqlLocalDB.exe (located in C:\Program Files\Microsoft SQL Server\110\Tools\Binn\) against the name of the instance, e.g. SqlLocalDB.exe info ProjectsV12.
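For reference, a short sketch of common SqlLocalDB.exe commands; the AcmeDev instance name is made up purely for illustration.

```shell
REM List all LocalDB instances on this machine
SqlLocalDB.exe info

REM Show version, state, and the instance pipe name for one instance
SqlLocalDB.exe info ProjectsV12

REM Create a new instance (hypothetical name) and start it immediately
SqlLocalDB.exe create AcmeDev -s

REM Stop and delete it again when done
SqlLocalDB.exe stop AcmeDev
SqlLocalDB.exe delete AcmeDev
```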

VS Local Database Information

Use the Instance pipe name (from above execution) to connect to a local database from SQL Server Management Studio (SSMS), as depicted below.
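As a sketch, the same instance pipe name can also be used from the command line with sqlcmd; the pipe name below is purely illustrative, since LocalDB generates a new one each time the instance starts.

```shell
REM Connect to the LocalDB instance over its named pipe and run a quick check
sqlcmd -S np:\\.\pipe\LOCALDB#SH04E8A1\tsql\query -Q "SELECT @@VERSION;"
```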

SSMS Local Db Connection Dialog

A connection (to view database objects) is possible from both VS and SSMS, the latter using the pipe connection string.  The SSMS screen (below) illustrates how the connection is made to the LocalDB, as well as a Stored Procedure created from an SSMS template that was added to the database.  The next article [Create a Stored Procedure Item Template] illustrates how this same SSMS Stored Procedure template is created for Visual Studio.

SSMS Connection to LocalDB with Stored Procedure from Template

Previous Article (2) [Create Database Project Folders]. . . Next Article (4) [Create a Stored Procedure Item Template]

Visual Studio 2013 Database Project: Create a Stored Procedure Item Template

Overview

Using Visual Studio for database development requires that database objects like stored procedures be created, amongst other items such as tables, views, triggers, etc.  It is advantageous if, when creating an item, the proposed script is a template that already conforms to the company’s standards, providing a boiler-plate script that speeds up the developer’s work.  Even better would be if these templates were available to all developers as they check items in and out of Team Foundation Server.  Bear in mind that one organisation might have a single development office or location, while others may be geographically dispersed.  It would be expedient if all of them were singing from the same hymnal.

This article explores the basics of Visual Studio Templates for database development.  Using this phrase as a search term, an Internet search engine turns up a few articles in support.  One of these is called An Overview of Templates for Database and Server Objects (Microsoft).  It suggests that existing templates can be customised to affect all templates used by the team.  Customisable project item stubs accelerate development by providing the developer with a head-start.  Templates comprise project templates and item templates.  Every time a user opens a menu item in Visual Studio to create something, a template is invoked, e.g. New Project, or Add New Item.

Templates exist in the user community (How to Add Items Using Online Item Templates) or as a VSIX project with SideWaffle (Code4Fun), but these aren’t discussed here.  To illustrate the concept, a template for a Stored Procedure is created and added as an Item Template on the local developer machine.  This is absorbed into Visual Studio and provides the developer with a new item in the Add New Item option.

Creating the Template

Creating a Stored Procedure (via Add New Item) in Visual Studio (VS), and for that matter in Microsoft SQL Server Management Studio (SSMS), is pretty uneventful.  Should a developer create quite a few items, however, the work required to customise each of them to the company standard absorbs unnecessary time and effort.  A template should make this much easier.

Either use VS or SSMS to create a stored procedure that conforms to all general company standards.  This one is to be used in Visual Studio; note that the usage between SSMS and VS differs from a parameterisation perspective.

In SSMS the code-block (below) illustrates how parameters are coded and then, on invocation, filled in via Ctrl+Shift+M.  In Visual Studio this is done through an XML configuration, and only what the respective dialog requests on New Item is user-provided, e.g. the project or object name.

CREATE PROCEDURE	[<Schema_Name, sysname, Schema_Name>].[<Procedure_Name, SYSNAME, ProcedureName>] 
					<@Param1, SYSNAME, @p1> <Datatype_For_Param1, , INTEGER> = <Default_Value_For_Param1, , 0>
				,	<@Param2, SYSNAME, @p2> <Datatype_For_Param2, , INTEGER> = <Default_Value_For_Param2, , 0>
				,	<@Param3, SYSNAME, @p3> <Datatype_For_Param3, , INTEGER OUTPUT>
WITH EXECUTE AS [CALLER	| SELF | 'user_name'] -- Default: CALLER				
AS

The following code-block represents the Visual Studio equivalent of the above code for the same purpose.  The $fileinputname$ is the file name provided by the developer when creating a new stored procedure.  All the other parameters are configured in the Item Template’s XML file.  Therefore, the XML configuration should prescribe the purpose of the stored procedure, i.e. suit exactly what it is being created for.

CREATE PROCEDURE	[$SchemaName$].[$fileinputname$] 
					$ProcParam1$	$ProcParam1DataType$	=	$ProcParam1Value$
				,	$ProcParam2$	$ProcParam2DataType$	=	$ProcParam2Value$
				,	$ProcParam3$	$ProcParam3DataType$		$ProcParam3Type$
WITH EXECUTE AS [CALLER	| SELF | 'user_name'] -- Default: CALLER <-- make a selection in the executing principal context.
AS

A Visual Studio Item Template for a Custom Stored Procedure

The provided template fast-tracks a developer and provides a consistent standard for every stored procedure created based on this template.  This particular standard comprises a few sections, viz.

  • Create with execution principal context
  • Flower box containing the abstract and owner details
  • Initial proc-wide directives, e.g. NOCOUNT, XACT_ABORT, and TRANSACTION ISOLATION
  • Error-handling variables (for RAISERROR)
  • Local variable declaration & initialisation
  • Main query execution block wrapped in TRY. . . CATCH and transaction
  • CATCH block with detailed error handling with XACT_STATE
  • Return value for application event handling
  • Directive switch to system defaults
  • Metadata section for object details (metadata) when it’s built in the database
CREATE PROCEDURE	[$SchemaName$].[$fileinputname$] 
					$ProcParam1$	$ProcParam1DataType$	=	$ProcParam1Value$
				,	$ProcParam2$	$ProcParam2DataType$	=	$ProcParam2Value$
				,	$ProcParam3$	$ProcParam3DataType$		$ProcParam3Type$
WITH EXECUTE AS [CALLER	| SELF | 'user_name'] -- Default: CALLER <-- make a selection in the executing principal context.
AS
/* ==================================================================================================================================
	ABSTRACT:	This procedure executes for $projectname$ and. . . 
	VERSION:	$ItemVersion$ (MAJOR.MINOR.PATCH)

	OWNER:		$ItemOwner$
	TEMPLATE:	$TemplateVersion$ (MAJOR.MINOR.PATCH)

	<NOTE: to developer, complete the metadata at the end of this procedure script.>

											<< See bottom of item for details >>
=================================================================================================================================== */
BEGIN
	SET NOCOUNT ON;
	--SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
	--				READ COMMITTED
	--				REPEATABLE READ
	--				SNAPSHOT
	--				SERIALIZABLE;
	
	-- --------------------------------------------------------------------------------------------------
	--	Renders entire txn uncommittable if constraint violation occurs. 
	--	ms-help://MS.SQLCC.v10/MS.SQLSVR.v10.en/s10de_6tsql/html/cbcaa433-58f2-4dc3-a077-27273bef65b5.htm
	-- --------------------------------------------------------------------------------------------------
	SET XACT_ABORT ON;
	
	-- ------------------------------------------------------------------------------------
	--	VALIDATE INCOMING PARAMETERS
	-- ------------------------------------------------------------------------------------
	-- None.
		
	-- -------------------------------------------------------------------------------------
	--	DECLARE LOCAL VARIABLES
	-- -------------------------------------------------------------------------------------
	-- Error Handling Local Variables
	-- Delete this for SQL 2014 unless required for custom error logging
	DECLARE			@vErrorNumber		INTEGER	=	0
				,	@vErrorMessage		NVARCHAR(2048) = N''
				,	@vErrorSeverity		INTEGER = 0
				,	@vErrorState		TINYINT = 0
				,	@vErrorLine			INTEGER = 0
				,	@vErrorProcedure	NVARCHAR(256) = NULL;
	
	-- Processing Local Variables
	DECLARE			@LocalVariable1Name	INTEGER
				, 	@LocalVariable2Name	DATETIME;
			
	-- -------------------------------------------------------------------------------------
	--	INITIALISE LOCAL VARIABLES
	-- -------------------------------------------------------------------------------------
	
	-- Initialise processing variables
	SET			@LocalVariable1Name	=	0;
	SET			@LocalVariable2Name	=	'2011-01-01';

	/* =========================================================================================
		MAIN CODE EXECUTION
	============================================================================================ */
	BEGIN TRY
		BEGIN TRANSACTION;
		-- { sql_statement | statement_block }
		COMMIT TRANSACTION;
		RETURN(@vErrorNumber);
	END TRY
	BEGIN CATCH
		/* ------------------------------------------------------------------------------------------
			ERROR HANDLING
			Severity 0-10 (information) are not caught by TRY. . . CATCH
			Severity 11-16 (user-correctable)
			Severity 17 (Insufficient Resources)
			Severity 18 (non-fatal error)
			Severity 19 (SQL resource error)
			Severity 20-25 (fatal system error) execution termination with SQL close
			https://msdn.microsoft.com/en-us/library/ms164086.aspx

			Any sys.messages valid messages can be utilised but must be passed to RAISERROR or THROW.
		-------------------------------------------------------------------------------------------- */

		-- Grab error indicators.
		-- Delete this for SQL 2014 unless required for custom error logging
		SELECT			@vErrorNumber		=	ERROR_NUMBER() 
					,	@vErrorLine			=	ERROR_LINE()
					,	@vErrorSeverity		=	ERROR_SEVERITY()
					,	@vErrorState		=	ERROR_STATE()
					,	@vErrorProcedure	=	ERROR_PROCEDURE()
					,	@vErrorMessage		=	ERROR_MESSAGE()+ CHAR(10)
												+ N'Executing code item name or type ('+ISNULL(@vErrorProcedure,N'AdHoc SQL')
												+ N'), error number ('+ CONVERT(NVARCHAR(20), @vErrorNumber)
												+ N'), severity ('+CONVERT(NVARCHAR(20), @vErrorSeverity)
												+ N'), code error line ('+CONVERT(NVARCHAR(20),@vErrorLine)										
												+ N').'+CHAR(10) +CHAR(13);
		-- -------------------------------------------------------------------------
		--	END TRANSACTION CORRECTLY.		
		--	XACT_STATE =  1 ~ Transaction is committable.
		--	XACT_STATE = -1 ~ Transaction is uncommittable - roll back 
		--	XACT_STATE = 0 ~ No transaction - commit or rollback  results in error.
		--	ms-help://MS.SQLCC.v10/MS.SQLSVR.v10.en/s10de_6tsql/html/248df62a-7334-4bca-8262-235a28f4b07f.htm
		--	https://msdn.microsoft.com/en-us/library/ms175976.aspx
		--  Any custom error logging should be implemented here
		-- -------------------------------------------------------------------------
		IF (XACT_STATE() = 1) 
		BEGIN
			COMMIT TRANSACTION;
			PRINT N'Transaction('+ISNULL(ERROR_PROCEDURE(),N'AdHoc SQL')+N'): COMMITTED';
			RETURN(@vErrorNumber);
		END ELSE
		BEGIN
			IF (XACT_STATE() != 0) ROLLBACK TRANSACTION;
			SET @vErrorMessage	= @vErrorMessage + N'Transaction: ROLLED BACK';

			-- SQL 2000+
			RAISERROR	(	@vErrorMessage		-- { msg_id | msg_str | @local_variable }
						,	@vErrorSeverity
						,	@vErrorState		-- [ ,argument [ ,...n ] ] )  [ WITH option [ ,...n ] ]
						)
			-- SQL Server 2012+
			--THROW; 
			RETURN(@vErrorNumber);
		END
	END CATCH;

	-- Cleanup/ Housekeeping
	SET TRANSACTION ISOLATION LEVEL READ COMMITTED; -- Database Default
	SET XACT_ABORT OFF;
	SET NOCOUNT OFF;
END

/* ==================================================================================================================================
	TEMPLATE AUTHOR: $TemplateAuthor$, COMPANY: $OwnerCompany$

	OBJECT VERSION HISTORY
	AUTHOR:	<Name.Last Name>, ORGANISATION: <Company Name>, AUTHORED DATE <YYYY-MM-DD>

	VERSION		DATE		NAME SURNAME		CHANGE DETAILS
	---------	-----------	-----------------	---------------------------------------------------------------------------------------
	$ItemVersion$		YYYY-MM-DD	Name.Surname		Change details are.....

												<< See bottom of item for details >>
=================================================================================================================================== */
GO
/* ---------------------------------------------------------------------------------------------
	OBJECT METADATA
	NOTE:	This section has to be edited and run for all attributes after each edit to keep the
			database metadata up to date.
	 Add extended properties to the procedure (in database objects)
		1. Author name
		2. Procedure naming specifics
		3. Procedure version
		4. Editor name
		5. Edit date
		6. Template version
 ----------------------------------------------------------------------------------------------- */

-- Author details
EXECUTE sys.sp_addextendedproperty	@name = N'Author Name'
				,	@value = N'$ItemAuthor$' --<-- Change this only!
				,	@level0type = N'SCHEMA'
				,	@level0name = N'$SchemaName$'
				,	@level1type = N'PROCEDURE'
				,	@level1name = N'$fileinputname$'	
				,	@level2type = NULL
				,	@level2name = NULL;
GO				

-- Object description
EXECUTE sys.sp_addextendedproperty	@name = N'Functional Description'
				,	@value = N'This procedure executes for $projectname$ and. . .' --<-- Change this only!
				,	@level0type = N'SCHEMA'
				,	@level0name = N'$SchemaName$'
				,	@level1type = N'PROCEDURE'
				,	@level1name = N'$fileinputname$'	
				,	@level2type = NULL
				,	@level2name = NULL;
GO

-- Procedure version
EXECUTE sys.sp_addextendedproperty	@name = N'Version'
				,	@value = N'$ItemVersion$' --<-- Change this only!
				,	@level0type = N'SCHEMA'
				,	@level0name = N'$SchemaName$'
				,	@level1type = N'PROCEDURE'
				,	@level1name = N'$fileinputname$'	
				,	@level2type = NULL
				,	@level2name = NULL;
GO

-- Editor details
EXECUTE sys.sp_addextendedproperty		@name = N'Editor Name'
				,	@value = N'$EditorName$' --<-- Change this only
				,	@level0type = N'SCHEMA'
				,	@level0name = N'$SchemaName$'
				,	@level1type = N'PROCEDURE'
				,	@level1name = N'$fileinputname$'
				,	@level2type = NULL
				,	@level2name = NULL;
GO					
-- Edit date
EXECUTE sys.sp_addextendedproperty		@name = N'Edit Date'
				,	@value = N'$EditDate$' --<-- Change this only!
				,	@level0type = N'SCHEMA'
				,	@level0name = N'$SchemaName$'
				,	@level1type = N'PROCEDURE'
				,	@level1name = N'$fileinputname$'
				,	@level2type = NULL
				,	@level2name = NULL;	
GO
-- Template version
EXECUTE sys.sp_addextendedproperty	@name = N'Template version'
				,	@value = N'$TemplateVersion$' --<-- Change this only!
				,	@level0type = N'SCHEMA'
				,	@level0name = N'$SchemaName$'
				,	@level1type = N'PROCEDURE'
				,	@level1name = N'$fileinputname$'
				,	@level2type = NULL
				,	@level2name = NULL;
GO
-- End --------------------------------------------------------------------------------------

Configuring the Stored Procedure Template in Visual Studio

The template can be created in any SQL editor, viz. SSMS or VS.  Once it is created, open Visual Studio and add a Stored Procedure to an existing Project, using the default VS functionality.

VS Add new Item to Project

Follow these steps:

  1. Select Stored Procedure and create it, giving it a name that suits the template Stored Procedure.
  2. Take the newly created template SQL code for the Stored Procedure (listed in the code-block above) and paste it over the text of the newly created item.
  3. Save it and then open: File >> Export Template
  4. Select the Item Template radio-button in the [Export Template Wizard] and the appropriate project into which the template Stored Procedure has been saved
  5. Click the [Next] button
  6. Select the name of the Stored Procedure just created and saved in the project
  7. There should be no Item References to select, so click the [Next] button
  8. Provide a Template Name, Description, Icon, and Preview Image; deselect Automatically import the template into Visual Studio, and keep Display an explorer window on the output files folder selected
  9. Click the [Finish] button, and the output files folder is presented

A zip-file is created that contains three files:

  1. __TemplateIcon.ico
  2. MyTemplate.vstemplate
  3. <Stored Procedure>.sql (the template SQL)

The MyTemplate.vstemplate file is the one used to configure how the stored procedure is created, i.e. provide the name, description, input parameters, and other functions.

<VSTemplate Version="3.0.0" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005" Type="Item">
  <TemplateData>
    <DefaultName>ACreateStoredProcedure.sql</DefaultName>
    <Name>ACreateStoredProcedure</Name>
    <Description>Default CreateStoredProcedure for Acme Corporation</Description>
    <ProjectType>SQLDB</ProjectType>
    <SortOrder>10</SortOrder>
    <Icon>__TemplateIcon.ico</Icon>
  </TemplateData>
  <TemplateContent>
    <References />
    <ProjectItem ReplaceParameters="true" TargetFileName="$fileinputname$.sql" >Z Create Stored Procedure.sql</ProjectItem>
    <CustomParameters>
      <CustomParameter Name="$projectname$" Value="API"/>
      <CustomParameter Name="$SchemaName$" Value="dbo"/>
      <CustomParameter Name="$ProcParam1$" Value="@InputParameter1"/>
      <CustomParameter Name="$ProcParam1DataType$" Value="INTEGER"/>
      <CustomParameter Name="$ProcParam1Value$" Value="0"/>
      <CustomParameter Name="$ProcParam2$" Value="@InputParameter2"/>
      <CustomParameter Name="$ProcParam2DataType$" Value="INTEGER"/>
      <CustomParameter Name="$ProcParam2Value$" Value="0"/>
      <CustomParameter Name="$ProcParam3$" Value="@OutputParameter"/>
      <CustomParameter Name="$ProcParam3DataType$" Value="INTEGER"/>
      <CustomParameter Name="$ProcParam3Type$" Value="OUTPUT"/>
      <CustomParameter Name="$ItemVersion$" Value="0.0.1"/>
      <CustomParameter Name="$TemplateVersion$" Value="0.0.1"/>
      <CustomParameter Name="$ItemOwner$" Value="c 2015 Acme (Pty) Ltd."/>
      <CustomParameter Name="$OwnerCompany$" Value="Acme (Pty) Ltd."/>
      <CustomParameter Name="$TemplateAuthor$" Value="Hendriekus J.Roux"/>
      <CustomParameter Name="$ItemAuthor$" Value="Hendriekus J.Roux"/>
      <CustomParameter Name="$EditorName$" Value="Hendriekus J.Roux"/>
      <CustomParameter Name="$EditDate$" Value="2015-03-09"/>
    </CustomParameters>
  </TemplateContent>
</VSTemplate>

The listing of the MyTemplate.vstemplate (above) illustrates two kinds of parameters, viz. system parameters and custom parameters.  These are placed in the template SQL code as place-holders.  System parameters are filled out from a name the developer provides in response to an action in Visual Studio, like New >> Project or New >> Item.  Custom parameters are filled out from the values provided in the MyTemplate.vstemplate file.  The latter should be considered in affinity to the functional intention of the Item, i.e. the Stored Procedure in this case.

Making the Template Available in Visual Studio

Visual Studio presents a dialog when a new Stored Procedure is created.

VS Create New Item Dialog Option List

The template should be presented here in a section appropriate to the company and project.  To create a new section, the template is added to a folder location on the developer’s machine.  The Visual Studio custom templates are located here: <drive>:\Users\<user name>\Documents\Visual Studio 2013\Templates\ItemTemplates.

VS Custom Templates Folder

By adding templates to these locations, or by creating a new folder, say Acme Items, any new custom templates placed there become available from the New Item dialog.

In this example, create a folder here called ItemTemplates\Acme Corporation\Programmability, into which the zip-file (from <drive>:\Users\<user>\Documents\Visual Studio 2013\My Exported Templates) is copied.

To use the new template, restart Visual Studio and add a new Stored Procedure to the project using the new template.

VS Custom Template New Item Dialog

Unfortunately, at this point, none of the parameters (except the procedure name) is substituted in the actual SQL code.  These custom parameters must still be added to the MyTemplate.vstemplate file.

Adding Custom Parameters to a Template

Follow these steps to add these parameters and their values:

  1. Extract the template’s zip file from its location into the [My Exported Templates] location and open the MyTemplate.vstemplate file in Notepad
  2. Take the parameters between the <CustomParameters> tags from the code (above) and add them to the current template’s file in the appropriate location
  3. Save the file, drag the saved version from the explorer window over the zip file in the template location, and choose the overwrite option to update the template
  4. Restart Visual Studio and create a new Stored Procedure from the template to see all parameters being populated.

Previous Article (3) [Create a Database Project with Folder Structure]. . . Next Article (4) [Coming Soon]

Visual Studio 2012 Database Project (SSDT): Working with a Database Project

Introduction

In the past, in the days of Visual Studio and “data-dude” database projects, it was not common for database developers to use Visual Studio to create tables and write database queries.  The data-dude database projects were very clunky, and more often than not, database developers were found using SQL Server Management Studio (SSMS) to develop and then copy the code into the Visual Studio project for source control.

With the arrival of SQL Server Data Tools (SSDT), first as a plug-in to Visual Studio 2010 and then in the subsequent release of Visual Studio 2012, things changed.  Database developers can now actually use Visual Studio to develop in, where as many, and even better, facilities are available for development.  The build and deploy functions have also been made much easier.  No longer do we need to fiddle-faff with the project’s properties, setting up different deployment environments and connecting them to environment variables etc., before we can deploy to even two places.

Presently, Visual Studio 2012 provides the ability to create your own project folder structures [1] (none is forced on the project), define deployment (publish) destinations, and co-locate CLR and native TSQL objects in the same project.  These are also deployed together, without having to first whistle a tune through your bottom while sucking syrup through a metre-long straw.

It is pleasant to use Visual Studio 2012 for database development, and the features presented are so great that you would wish for a 52″ LED display to get a better view of the IDE.

Creating Database Objects

Tables

Visual Studio provides all the facilities to easily define a database table.

The table can be tackled from two ends, viz. the designer and the code division of the window.  If SQL code can be typed out faster than filling in the columns of the designer, simply write the code, and the designer updates itself immediately.

Create a Table in Visual Studio Designer

From the screen-grab (above), it is quite clear that all the design features are present in the one screen. To the right of the columnar table design-grid, is a list of table attributes, viz. Keys, Check Constraints, Indexes, Foreign Keys, and Triggers.  These options can be right-clicked and the attributes created. Or, they might be added to the T-SQL design tab, below, and the grid updates itself accordingly.

Table Attributes

Because of the folder structure flexibility [1], it becomes a bit of a question whether the table attributes should be added to the table’s create script, flying through the design features of the screen, or whether they should be added as per the previous Visual Studio file structures, where keys were put within their own folder and given a particular name.  The reason to retain keys, such as the primary key, in the table’s create script would be that the designer is used.  If the primary key is scripted to a Key folder, the designer won’t display the primary key on the grid view, which could be counter-intuitive from a visual design perspective.  Table constraints are another attribute that should be kept as part of the table’s create script.  But, again, this depends on how complicated the project is.  Foreign keys may be scripted to a Key folder, because they are inter-table attributes, only visible on a designer view where tables are listed with their referential attributes.
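As a sketch of that split (table, schema, and constraint names here are made up): the primary key stays in the table’s create script so the designer grid shows it, while the foreign key lives in its own file under a Keys folder.

```sql
-- Table create script: primary key kept inline so the designer grid displays it
CREATE TABLE [dbo].[Order]
(
    [OrderId]    INT      NOT NULL,
    [CustomerId] INT      NOT NULL,
    [OrderDate]  DATETIME NOT NULL,
    CONSTRAINT [PK_Order] PRIMARY KEY CLUSTERED ([OrderId])
);
GO

-- Separate script in a Keys folder: the foreign key as an inter-table attribute
ALTER TABLE [dbo].[Order]
    ADD CONSTRAINT [FK_Order_Customer]
    FOREIGN KEY ([CustomerId]) REFERENCES [dbo].[Customer] ([CustomerId]);
GO
```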

Extended Properties

Extended Properties are another neat feature of the designer.  While designing a table, these are added automatically to the create script.  When creating a table constraint, such as a default value, it is done in the Properties pane of the window, the same place where Extended Properties are defined.  A Default Value or Binding is specified as a constraint (in the table properties tab), right above another property, i.e. the Description, which is an Extended Property (table metadata) that is easily updated at the same time.  Completing this field adds table metadata to the designer’s T-SQL tab, through [sp_addextendedproperty].  Because the developer is already active in this tab, the Extended Property does not consume any significant additional time, and provides such an extended benefit to the database that it should be part of the table’s design requirement.
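For example, the same MS_Description extended property the designer writes for a column Description can be added by hand; the schema, table, and column names below are hypothetical.

```sql
-- Add a Description (MS_Description) extended property to a single table column
EXECUTE sys.sp_addextendedproperty
        @name       = N'MS_Description'
    ,   @value      = N'Surrogate key for the Order table.'
    ,   @level0type = N'SCHEMA', @level0name = N'dbo'
    ,   @level1type = N'TABLE',  @level1name = N'Order'
    ,   @level2type = N'COLUMN', @level2name = N'OrderId';
GO
```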

NOTE: In replication, extended properties are only written to the replicated database on the initial replication, not after that [2].

Through using a table’s attributes and extended properties, a Data Dictionary (documentation) [3][4][5], with the ability to be exported to Word, Excel, and HTML, can be created quite easily.  This provides an immense informational view of the database, in addition to the actual stored data, hence the defining term: Database Metadata.

Below is a listing of TSQL, illustrating a simple way to obtain table metadata: (listed by permission from Dattatrey Sindol: SQL Server – Capturing Table and Column metadata and description – Building Data Dictionary Part 2) [6].

SELECT          [sCOL].[name]                                  AS  [ColumnName]
            ,   CASE
                   WHEN  [sTYP].[name] IN ('char','varchar','nchar','nvarchar','binary','varbinary')
                   THEN  [sTYP].[name] + '(' + CAST([sCOL].[max_length] AS VARCHAR(10)) + ')'
                   WHEN  [sTYP].[name] IN ('float','decimal','numeric','real')
                   THEN  [sTYP].[name] + '(' + CAST([sCOL].[precision] AS VARCHAR(10)) + ',' + CAST([sCOL].[scale] AS VARCHAR(10)) + ')'
                   ELSE  [sTYP].[name]
                END                                            AS  [DataType]
           ,  CASE    [sCOL].[is_nullable]
                   WHEN  0x1 
                   THEN  'Yes'
                   ELSE  'No'
               END                                             AS  [IsNullable]
           ,  CASE
                    WHEN  [IdxDtls].[column_id] IS NOT NULL
                    THEN  'Yes'
                    ELSE  'No'
          END                                                  AS  [IsPK]
          ,  CASE
                    WHEN  [sFKC].[parent_column_id] IS NOT NULL
                    THEN  'Yes'
                    ELSE  'No'
             END                                               AS  [IsFK]
         ,       [sEXP].[value]                                AS  [ColumnDescription]
FROM             [sys].[objects]                               AS  [sOBJ]
INNER JOIN       [sys].[columns]                               AS  [sCOL]
                 ON  [sOBJ].[object_id] = [sCOL].[object_id]
LEFT OUTER JOIN  [sys].[types]                                 AS  [sTYP]
                 ON [sCOL].[user_type_id] = [sTYP].[user_type_id]
LEFT OUTER JOIN  (  SELECT        [sIDX].[object_id]
                               ,  [sIXC].[column_id]
                    FROM          [sys].[indexes]               AS  [sIDX]
                    INNER JOIN    [sys].[index_columns]         AS  [sIXC]
                                  ON  [sIDX].[object_id] = [sIXC].[object_id]
                                   AND [sIDX].[index_id] = [sIXC].[index_id]
                     WHERE        [sIDX].[is_primary_key] = 0x1
                 )                                              AS  [IdxDtls]
               ON  [sCOL].[object_id] = [IdxDtls].[object_id]
                  AND  [sCOL].[column_id] = [IdxDtls].[column_id]
LEFT OUTER JOIN   [sys].[foreign_key_columns]                  AS [sFKC]
                  ON  [sCOL].[object_id] = [sFKC].[parent_object_id]
                    AND  [sCOL].[column_id] = [sFKC].[parent_column_id]
LEFT OUTER JOIN   [sys].[extended_properties]                  AS [sEXP]
                  ON  [sOBJ].[object_id] = [sEXP].[major_id]
                    AND  [sCOL].[column_id] = [sEXP].[minor_id]
                    AND  [sEXP].[class] = 1
                    AND  [sEXP].[minor_id] > 0
                    AND  [sEXP].[name] = N'MS_Description'
WHERE             [sOBJ].[type] = 'U'
                    AND  SCHEMA_NAME([sOBJ].[schema_id]) = N'dbo'
                    AND  [sOBJ].[name] = N'STA_StagingEAP'
ORDER BY           [ColumnName];
GO

Programmable TSQL Objects

Any SQL object containing TSQL code, e.g. a stored procedure, is created in the same way as in any previous Visual Studio release.  Depending on the file structure of the project, the specific hierarchy of the folder-tree may differ, but going by the general rule of thumb, i.e. to follow the SSMS folder structure, the created object would be put into a folder with a heading corresponding to the object’s type, e.g. Stored Procedures.

An important thing to consider, when writing TSQL code, is to adopt a coding standard [10].  When writing C#, the editor takes care of some of the formatting automatically; TSQL is not as rigid, which means that divergent styles can erupt faster than zits on a teenager’s face.  Nobody wants to linger in the zit phase any longer than they have to, and neither should the company’s SQL code.

The SQL Server Object Explorer snap-in window is used just as it would be in SSMS: to navigate the database folder-tree, and to inspect the tables and views used in the programmable object’s code.

Quirks

SQL Server Object Explorer snap-in window

There are a few quirks with the SQL Server Object Explorer snap-in window:

  1. A new query, relative to a database, can be created by right-clicking
    1. the server node of the tree: New Query…
    2. the database node of the folder tree: New Query…
  2. The right-click menu, on the database node of the folder tree, provides an option: Create new Project…
    1. It launches an Import Database window that is captioned Create new Project
    2. It imports an existing database, and adds a new project to the Solution Explorer snap-in window
    3. Take note that it may not put the project under source control, so the folder may end up in another location, e.g. the Projects folder for Visual Studio 2012
  3. Dragging an object (table) name, from the SQL Server Object Explorer snap-in window, across to the TSQL query window, seems to indicate that the name is being dragged (cursor with plus sign), but when the mouse-button is released, the name isn’t there.  This works in the SSMS editor of SQL Server 2012.  Alternatively, the F2 (edit) key can be used to copy the object’s name, or the IntelliSense pop-up
  4. When writing a simple query, Visual Studio displays a tool bar with a drop-down of databases (including the one currently connected to), as an indicator of what the script window is connected to.
    1. When writing a stored procedure, this tool bar is missing.  That makes it hard to get to the right database, so that IntelliSense picks up the objects to work with, when a cross-database query is being designed.  The right-click connect option logs into the project’s database, which makes sense in the context of the project.  Without the ability to drag and drop object names from other databases, and with IntelliSense pointing at the project’s database, a manual “get” (F2) must be done for object names outside of the project’s database
  5. On some occasions, when right-clicking a project, or other folder, to either Build or Add a new item, the menu option is not present.  Clicking off the project or folder (to highlight another one) and then right-clicking the original, i.e. the one no longer highlighted, makes the option appear in the menu list.

Cross Database Queries

When performing a cross-database query, using three-part (fully qualified) names, the source database’s name has to be substituted by a SQLCMD variable in the code.  The warning that gets thrown is SQL71562, i.e. “Procedure <storedprocedure> has an unresolved reference to object <source database object’s name>“.  This warning has to be suppressed by entering the value 71562 in the Suppress Transact-SQL warnings: field, on the Build tab of the project’s properties window.  After that the project builds without interruption.
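The substitution described above can be sketched as follows.  This is illustrative only: the $(SourceDb) variable, procedure, and table names are hypothetical, and the variable would be defined on the SQLCMD Variables tab of the project’s properties (typically alongside a database reference to the source project):

```sql
-- Illustrative sketch: $(SourceDb) is a hypothetical SQLCMD variable; the
-- procedure and table names are placeholders, not from a real project.
CREATE PROCEDURE [dbo].[STA_GetSourceTrades]
AS
BEGIN
    -- Three-part name with the database part replaced by the SQLCMD variable
    SELECT  [src].[TradeDate]
    FROM    [$(SourceDb)].[dbo].[Trade] AS [src];
END
```

At publish time the variable is resolved to the actual source database name for the targeted environment.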

Common Language Runtime Scripts

In Visual Studio 2012, Common Language Runtime (CLR) scripts are created in the same folders as their TSQL counterparts.  The project is built and published using the same process.  There is a bit of gymnastics to be done when deploying CLR objects that require external data access, where the database is not marked TRUSTWORTHY but the assembly is signed [11].

Global Classes

In order to keep globally set values, or classes, alive in the SQL CLR as common objects, a class can be added to the project via: Add New Item >> SQL CLR C# >> Class.  It is not a database object, but a script file that is visible to all CLR objects.  Variables can be declared in that class, and made available to other CLR objects, using getter-setter accessors.  NOTE: To create get-set accessors, from a private variable, do the following:

  • right-click the variable name >> Refactor >> Encapsulate Field, or
  • type prop, followed by Tab twice, or
  • Ctrl+K, Ctrl+X >> Visual C# >> prop
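A minimal sketch of such a non-database class is shown below.  The class name and field are illustrative, not from the article; note that the writable static field is exactly the construct that, as discussed next, prevents the assembly from being marked SAFE:

```csharp
// Sketch of a "global" class added via Add New Item >> SQL CLR C# >> Class.
// It compiles into the same assembly as the CLR database objects, which can
// all read and write BatchSize.  The names here are hypothetical.
public static class GlobalSettings
{
    private static int _batchSize = 1000;

    // get-set accessors, as generated by Refactor >> Encapsulate Field
    public static int BatchSize
    {
        get { return _batchSize; }
        set { _batchSize = value; }
    }
}
```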

These globally scoped code files are part of the assembly and, when published, provide managed-code objects that are usable by any other CLR-created database object.

However, for the assembly to be SAFE, it must not have any writeable static variables.  Also know that CLR object classes are static.  When variables in a non-database object class are set by a static member of another class, changing or setting the variable’s value is not deemed a safe operation.  For static classes, such variables should be readonly, which defeats the idea of having global variables that are changeable.

To put SQL Server at ease, it has to be able to deem the setting of these variables a SAFE operation.  The options are to create the assembly with an UNSAFE Permission level (in the project’s properties: SQLCLR), making sure the assembly owner, e.g. dbo, has UNSAFE ASSEMBLY permission and the database is set to be TRUSTWORTHY; or to sign the assembly with an asymmetric key, tied to a login with UNSAFE ASSEMBLY permission [11].  NOTE: Much of the latter information is displayed by the SQL72014 error message, on an attempt to deploy the assembly as UNSAFE with an assembly owner that lacks the appropriate permission.

Remember to sign the assembly via the Project Settings window, using the [Signing…] button, and set the Permission level: to UNSAFE.  This way the database does not have to be set to TRUSTWORTHY.
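The signed-assembly route can be sketched in TSQL as follows.  The key, login, and file path names are placeholders; the pattern is the documented one of creating a server-level asymmetric key from the signed DLL, a login from that key, and granting the login UNSAFE ASSEMBLY:

```sql
-- Illustrative sketch: key, login, and path names are placeholders.
-- Run in master, before publishing the signed assembly.
USE [master];
GO
CREATE ASYMMETRIC KEY [StatisticsClrKey]
    FROM EXECUTABLE FILE = 'C:\Deploy\Statistics.dll';
GO
CREATE LOGIN [StatisticsClrLogin]
    FROM ASYMMETRIC KEY [StatisticsClrKey];
GO
GRANT UNSAFE ASSEMBLY TO [StatisticsClrLogin];
GO
```

With this in place, the publish of the UNSAFE assembly succeeds without the database being marked TRUSTWORTHY.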

To determine which login, on the current database, has UNSAFE ASSEMBLY permissions, use the following TSQL query:

SELECT      [SP].[name]                  AS  [Login]
FROM        [sys].[server_principals]    AS  [SP]
INNER JOIN  [sys].[server_permissions]   AS  [SPE]
            ON [SP].[principal_id] = [SPE].[grantee_principal_id]
WHERE       [SPE].[permission_name] = 'UNSAFE ASSEMBLY';
GO

SqlFunctions

Writing functions in the CLR is a bit more elaborate than stored procedures.  Both object types allow for attributes in the decoration of the object’s method, e.g. name, data access, etc.  The SqlFunction attribute takes many different properties [12]:

  • IsDeterministic – idempotent if true, i.e. returns the same results when called with the same parameters
  • DataAccess – DataAccessKind.None/Read; Read if reading data through the context SQL connection
  • SystemDataAccess – SystemDataAccessKind.None/Read; Read if reading system tables
  • IsPrecise – indicates whether the function avoids imprecise floating-point operations; true means it does not use FLOAT
  • FillRowMethodName – the method in the CLR code used to fill each row of a table-valued function’s result
  • TableDefinition – the table structure to be returned by the TVF
  • Name – the name of the SQL function, i.e. the one published to the database

The attributes define the actual function that is built to the database and used by queries.  Most of the attributes map to things a normal TSQL function would use, but the real magic of the SqlFunction lies in FillRowMethodName.  The method specified against this attribute is implicitly called by the SQL-CLR every time MoveNext() is called on the returned object, i.e. the one implementing the IEnumerable interface.  To a database developer, this might be a little strange.  No matter: just accept that a magic weaver has sewn a thread between the interface and the function’s method (FillRowMethodName) that loads the data rows into a data table, i.e. the one specified in the TableDefinition attribute.

CLR Function Example

For illustrative purposes, an arbitrary CLR function, called CalendarTradingDateParameterFormatterFunction, is used; it performs a trivial routine that formats input parameters for calling a TSQL function.  It receives a range of trading dates that are valid business, or office, trading dates.  A TSQL function, which does the actual database interrogation, is the recipient of the CLR function’s formatted output of these parameters.  The parameters define the input for the TSQL function’s three return types.

The formatting performed by the CLR function CalendarTradingDateParameterFormatterFunction, to suit the TSQL function is as follows:

  • Parameter: (@pRelativeDay > 1) => business, or trading, date range, i.e. all dates between (and inclusive of) FromDate and ToDate;
  • Parameter: (@pRelativeDay <= 0) => business or trading date adjacent to the datum (smaller than, or left of it), or the actual input date (equal to it, i.e. the datum), i.e. is the current date a business date;
  • Parameter: (@pRelativeDay = 1) => business or trading date adjacent to the datum, on its right

The output dates ensure that the system calendar is interrogated correctly for finding the business date(s):

  • where a range of business dates is requested
    • the start and end dates are both populated if one, or both, is null
  • where an adjacent date (relative to a datum) is sought, at least seven days are provided on the side of the adjacent date in question, or the datum (per the @pRelativeDay parameter) is evaluated as a trading date
SQL CLR Function Illustration


The illustration (above) depicts the example’s architecture.  When a requestor asks the database to return a list, or date (relative to an input parameter range), from the Business Calendar, the TSQL function queries the database, based upon the set of formatted parameters returned by the CLR function.

If either of the two dates is not supplied, it is set from the parameter that does contain a date.  Today’s date is used for both parameters if both are NULL.  An adjacent call, with cognisance of the above rules, uses the ToDate as the datum, or relative trading date.  The CLR function only returns a FROM and TO date, formatted in anticipation of the functionality, which is determined by the @pRelativeDay parameter’s value.

The code snippet, below, illustrates the CLR C# code for the formatter function.  The method CalendarTradingDateParameterFormatter receives an input parameter (_functionalIndicator) whose value determines the function, and bases the route of code to execute on it.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.Collections;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    #region CalendarTradingDateParameterFormatterFunction
    /// <summary>
    /// This CLR function ensures that calendar query parameters are correctly formatted for the three types of calls:
    ///     1. Date range validation against the trading calendar (2)
    ///     2. Current date trading calendar validation (0), i.e. is the date a trading date?
    ///     3. Adjacent trading date validation, relative to a datum: (-1) previous trading day, (1) next trading day
    ///     
    /// The parameters are used in a set-based query, against the trading calendar, to return valid trading dates.
    /// </summary>
    /// <param name="_fromdate">first date in the range</param>
    /// <param name="_toDate">last date in the range</param>
    /// <param name="_functionalIndicator">[-1|0|1|2] (adjacent  previous|current|adjacent  next|range]</param>
    /// <returns></returns>
    [SqlFunction(   Name = "STA_CalendarTradingDateParameterFormatter"
                ,   DataAccess = DataAccessKind.None
                ,   IsPrecise = false
                ,   SystemDataAccess = SystemDataAccessKind.None
                ,   TableDefinition = "FromDate DATE, ToDate DATE"
                ,   FillRowMethodName = "FillRow"
                )
    ]
    public static IEnumerable CalendarTradingDateParameterFormatter(    SqlDateTime _fromdate
                                                                    ,   SqlDateTime _toDate
                                                                    ,   SqlInt32    _functionalIndicator
                                                                    )
    {
        // Where both input param. dates are NULL, assume default values, i.e. today
        DateTime fromDate = DateTime.Today;
        DateTime toDate = fromDate;

        ArrayList _resultTable = new ArrayList();
        ParameterStruct _resultRow = new ParameterStruct();

        /* -------------------------------------------------------------------------------------
        // Ensure that any parameter combination, is returned as an intelligible set
        // First ensure that the from and to dates have values.  If either value is NULL
        // assume the other value for both
         -------------------------------------------------------------------------------------- */
        if (_fromdate.IsNull || _toDate.IsNull)
        {
            if (!(_fromdate.IsNull && _toDate.IsNull))
            {
                if (_fromdate.IsNull)
                {
                    fromDate = (DateTime)_toDate;
                    toDate = fromDate;
                }
                else
                {
                    toDate = (DateTime)_fromdate;
                    fromDate = toDate;
                }
            }
        }
        else
        {
            // Both dates have been supplied by caller
            fromDate = (DateTime)_fromdate;
            toDate = (DateTime)_toDate;
        }

        /* -----------------------------------------------------------------------------------------
           Input parameters are all initialised with meaningful values.
         * The next step is to allow for previous and successive (adjacent) trade date selections,
         * relative to a datum, i.e. the ToDate.
         ------------------------------------------------------------------------------------------ */
        if (_functionalIndicator < 2)
        {
            DateTime _datum = new DateTime();

            // This condition tests the current, or previous (adjacent), day for a trading date
            if (_functionalIndicator <= 0)
            {
                _datum = toDate;
                fromDate = _datum.Subtract(new TimeSpan(7, 0, 0, 0));
            }
            else
            {
                _datum = fromDate;
                toDate = _datum.Add(new TimeSpan(7, 0, 0, 0));
            }
        }

        _resultRow.Fromdate = fromDate;
        _resultRow.Todate = toDate;
        _resultTable.Add(_resultRow);
        return _resultTable;
    }

    /// <summary>
    /// This method is implicitly called by the SQL/CLR framework each time the MoveNext() method, 
    /// on the returned IEnumerable object, or type that implements the IEnumerable interface, is called
    /// </summary>
    /// <param name="arrayObject">ArrayList used for storing the struct(s)</param>
    /// <param name="fromDate">value of the struct's equivalent item</param>
    /// <param name="toDate">value of the struct's equivalent item</param>
    public static void FillRow(object arrayObject, out DateTime fromDate, out DateTime toDate)
    {
        ParameterStruct pStruct = (ParameterStruct)arrayObject;
        fromDate = pStruct.Fromdate;
        toDate = pStruct.Todate;
    }
    #endregion
}

#region ExtraCode
/// <summary>
/// This struct stores the conformed parameters of the calendar function call
/// </summary>
struct ParameterStruct
{
    private DateTime _todate;
    private DateTime _fromdate;

    public ParameterStruct(DateTime _fDate, DateTime _tDate)
    {
        _fromdate = _fDate;
        _todate = _tDate;
    }

    #region Accessors
    /// <summary>
    /// Accessor for the FromDate
    /// </summary>
    public DateTime Fromdate
    {
        get { return _fromdate; }
        set { _fromdate = value; }
    }
    /// <summary>
    /// Accessor for the ToDate
    /// </summary>
    public DateTime Todate
    {
        get { return _todate; }
        set { _todate = value; }
    }
    #endregion
}
#endregion

As previously stated, the magic of this code lies in the FillRowMethodName.  In this example, the parameters (values to return to the calling function) are wrapped in a struct that goes into an ArrayList.  Relating what you see here to database terminology: consider the struct a row in a table, and the ArrayList the actual table, together being the package that holds the data.  For each element of the returned collection, the FillRow method emits a row conforming to the table schema defined in the TableDefinition attribute of the CalendarTradingDateParameterFormatter method, which holds the returned data-set.

The TSQL function’s functionality is not important for this example, save to know that the CLR function is called and the returned values are used as input parameters to query the business calendar in the database.  The TSQL statement, in the calling function, is used to initialise internal variables that are used in the actual query’s predicates.

  -- Format the input parameters, given the function, i.e. the value of @pRelativeDay parameter.
  SELECT      @vFromDate = FromDate
           ,  @vToDate = ToDate
  FROM      [dbo].[STA_CalendarTradingDateParameterFormatter](@pFromDate, @pToDate, @pRelativeDay);

The function’s main query sends these values to the database, to obtain valid business trading date(s).
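Invoking the published TVF directly can be sketched as follows.  The dates are arbitrary; the @pRelativeDay value of 2 requests the range behaviour described earlier:

```sql
-- Hypothetical call: format a trading-date range for November 2012
SELECT  [FromDate], [ToDate]
FROM    [dbo].[STA_CalendarTradingDateParameterFormatter]('2012-11-01', '2012-11-30', 2);
```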

Deploy/ Publish a Database

In Visual Studio 2012, deploying a database into a desired environment is a much reduced function, compared to previous versions.  Publish Profiles are created, and customised per environment.  As many Publish Profiles as required can be created, each corresponding to an environment.  Each profile is saved to the project and can be run from within the project’s environment, against the targeted platform.  Publish attributes are also configurable under the [Advanced] button on the Publish Database <environment>.publish.xml window.  These govern what is done during the publish, such as dropping the database before publishing, or stopping the publish where data loss might occur.

Creating a Publish Profile

A Visual Studio 2012 database project solution can have multiple database projects, each corresponding to a physical database.  Every project requires a Publish Profile.  These are created by right-clicking the project and selecting Publish.  A project build is a prerequisite to a publish, both for creating the profile and for executing it.

Create a Publish Profile for a Database


On a successful build of the project, the Publish Database window is displayed.  All the required configuration is provided here, e.g. database connection, name, and other Advanced properties.  The profile is saved for the purpose of re-execution, or republishing.  Environment variables, just like in previous versions of Visual Studio, can still be added or changed for the project.

The Database Publish Profile


The Publish Profile is purposed to target an environment, i.e. a database hosted on a particular server or environment.  The Target database connection field (in the familiar connection window) is configured to connect to the required database.  NOTE: If the database does not yet exist, i.e. this is the first publish, simply provide the desired name of the database, in the Database name field, and leave the rest up to the publish.  For a non-existent database, the [Test Connection] button, on the connection configuration window, fails, but Visual Studio will obediently create the database during the publish execution.

The [Advanced] button provides access to the database’s particular settings, controlling what actions to perform when publishing.  The settings are reasonably self-explanatory.

The [Save Profile As…] button allows the Publish Profile to be saved to a specific folder location.  It can subsequently be added to the project (placed at the root) by adding it as an existing item, like any other existing database artefact.

Publish Output Report


The [Publish] button publishes the database to the designated server or environment.  The published results are displayed in the Data Tools Operations pane, one of the tabs at the bottom of the Visual Studio 2012 Database Project IDE.

The illustration (left) displays the Data Tools Operations output for the successful Publish Profile of a database called Statistics, created on the localhost.

Published Database in SQL Server Management Studio


The published database can be inspected by either opening an instance of SSMS, or using the SQL Server Object Explorer window in Visual Studio: locate the correct server node and open the Databases folder, to reveal the newly added database.

Publish Profile Added to Project


The Publish Profile, added to Visual Studio’s database project for the respective database in question, is an XML file, available in the root (or wherever it has been saved), from which subsequent deployments can be conducted.  It is as simple as double-clicking the file to pop up the Publish Database window, from whence to start the publish action.

This action, once again, builds the project, and on a successful build, pauses for the developer to click the [Publish] button.

NOTE:  Using SQLCMD variables requires that the Publish Profile and the Visual Studio Configuration (Debug/Release) be synchronised.  That is, when releasing to a specific environment (in this case, using Visual Studio), the Publish Profile and Configuration must match.

When publishing a database, the Publish Database window presents all environment variables in the SQLCMD variables grid.  These have to be completed for the profile, the profile being a function of the Configuration (Debug/Release) and the Publish Profile, i.e. all the variables must correspond to the particular environment being released to.  Where the Publish Profile hasn’t been saved with the environment variables, they are populated with the set-up default values (defined in the project’s properties) by clicking the [Load Values] button on the Publish window.

The next section discusses post-deployment script execution for loading data into new tables.  The script uses a SQLCMD environment variable to determine whether or not it should load the data, depending on the environment, which illustrates how the Publish Profile and the environment variables work together.

Post-deployment Script: Populate new Tables with Data

It is often required, especially where a new database is being deployed, or where the Advanced configuration is set to drop the database on each deployment, to load data into tables.  For this purpose, a post-deployment script can be used.

Adding a post- (or pre-) deployment script to the database project is just like adding any other script file.  Right-click the project, select Add new item, and from the Add New Item window, select User Scripts >> Post-Deployment Script.

If data is sourced from an existing database, and no data-masking (obfuscation of client details) is necessary, it is probably best to use a SQL Server Integration Services (SSIS) task (right-click the source database, and select Tasks >> Export Data) to put the required data into a delimited flat-file.  Once the file(s) are available, they are referenced in the post-deployment script, and uploaded through a bulk copy script [7].  NOTE: Be careful with BULK INSERT when choosing the delimiter: commas frequently occur in the text columns of the table for which data is being imported, and a comma present in the data is interpreted by the loading statement as a new column.

SQLCMD

For some reason it is an arduous plough in dry earth to find a decent article on how to write code for a post-deployment script.  Firstly, in order to execute any script that has sqlcmd [8] instructions, it must be executed in SQLCMD Mode.

Post Deployment SQLCMD Syntax: not in SQLCMD Mode

Notice the red squiggly line under the [:r] (in the above illustration) when the script is not explicitly set to SQLCMD Mode.  To get the script into SQLCMD Mode, follow the path (on the main toolbar): SQL >> Transact-SQL Editor >> Execution Settings >> SQLCMD Mode.  The squiggle disappears.

SQLCMD Mode Enabled on Script


When the script is in SQLCMD Mode, the sqlcmd-specific instructions are highlighted, as indicated in the snippet on the left.

There is a strong affinity between pre- and post-deployment scripts, sqlcmd, and environment, or local, variables.  These variables are used to substitute globally scoped information that is reused by scripts, e.g. database name, file paths, and the like, as environment variables.

Because a post-deployment script can be used to load data into tables, but this may not be required on the test or production servers, some kind of switch has to be set, through which to determine the action corresponding to a particular environment or need.  Visual Studio is able to configure the type of build, in a configuration setting on the project’s properties tab.  Any configuration set-up is possible, with Debug and Release as the defaults.  The Configuration Manager, a very familiar Visual Studio concept, is used to configure any additional configuration types.

Visual Studio Configuration Manager


The Configuration and the Publish Profile must be synchronised to use the appropriate SQLCMD variables, corresponding to the environment to which the project should be deployed.  So, when creating the Publish Profile, take care to be cognisant of the Configuration (Debug/Release) of the environment, and that the variables are affiliated.  The SQLCMD variables are populated either by the Publish Profile, or manually at the point of release, using the [Load Values] button.

A deployment script can take advantage of these SQLCMD variables, by providing place-holders for them, on which action is taken.  In the above example, the $(BuildConfiguration) variable is matched to a Configuration, and the specific scripts to execute are consequently chosen, e.g. to load data into specific tables, or any other action.  MSDN has an article on this, “Use sqlcmd with Scripting” [9], providing a few guidelines on using sqlcmd in scripts, covering built-in variables and how to interact with scripts from the command line, using sqlcmd.
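The switch can be sketched as below.  This is an assumption-laden sketch: the included file name is illustrative, and it relies on the documented behaviour that :r splices the referenced file’s text into the batch at parse time, so the IF only governs the included statements when that file contains no GO batch separators:

```sql
-- Post-deployment sketch (requires SQLCMD Mode); file name is illustrative.
-- NOTE: :r is resolved at parse time, so the included file must not contain
-- GO separators for the IF to govern its execution.
IF '$(BuildConfiguration)' = 'Debug'
BEGIN
    :r .\Trade_dataInsert.sql
END
```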

Loading Data into Tables

Especially for development and testing, data is required in the data-model, so that it can be used in constructing queries and testing functionality.  Data can be hand-crafted, generated, or loaded from production systems, after applying a data-mask to obfuscate sensitive information.

This data is made available in text documents: comma, tab, or pipe delimited, or fixed length.  The fixed-length files come from the mainframe era, and there’s good reason for the fixed-length attribute.  If any of the delimiting characters appear in table data, i.e. in columns, the bulk upload fails, because delimiters indicate columns, not pauses in sentences.  So the trick is to use a delimiter that no application would allow in the data, of which a comma is probably the most frequently expected culprit, so why not fixed length?

The actual loading of the data is done in the post-deployment.  A script file, containing the BULK INSERT (example below), is referenced in the post-deployment script (:r .\Trade_dataInsert.sql).

BULK    INSERT [dbo].[Trade]
FROM    '$(DataFileDirectory)Trade.csv'
WITH    (    FIRSTROW = 2
        ,    FIELDTERMINATOR = ','
        ,    ROWTERMINATOR = '\n'
        );

Bulk inserts may also use format files, such as an XML document [13], generated from a command prompt: bcp <database>.dbo.<table> format nul -c -x -f <format file>.xml -t, -T.  After generation, it is retrieved from the folder that the command prompt is set to when running the bcp command.  The format file contains metadata that maps the flat data-file (containing the data) to the schema of the table into which the insert is to take place.

The code snippet (below) depicts a script file, using a format file.

BULK    INSERT [dbo].[CAL_CalendarEvent]
FROM    '$(DataFileDirectory)CAL_CalendarEvent.csv'

WITH    (    FORMATFILE = 'C:\<folder path>\CalendarEvent_format.xml'
        );

Last Thoughts

Visual Studio 2012 provides a great leap in functionality, and ease of use, to the database developer.  It should be the de facto environment for SQL Server database development.  There are a few niggly things that frustrate an effortless experience, but they seem to be of such a nature that they will be taken care of in the first Service Pack, so there is really no excuse for not using Visual Studio for database work.

Citations

No. Author Name URL Date Authored Date Cited
1.  Drikus Roux  Visual Studio 2012, SSDT: Create your own folder structure  http://datablazeit.wordpress.com/2012/11/12/visual-studio-2012-database-project-ssdt-project-folder-structure/  12 November 2012 13 November 2012
2.  MSDN  Using Extended Properties on Database Objects  http://msdn.microsoft.com/en-us/library/ms190243.aspx  2008 13 November 2012
3.  Mr. SQL Automatically Generating a data dictionary using extended properties http://mrsql.wordpress.com/2011/02/23/automatically-generating-a-data-dictionary-using-extended-properties/  23 February 2011 13 November 2012
4.  CodePlex  Data Dictionary Creator (software) http://weblogs.asp.net/jgalloway/archive/2006/11/15/Data-Dictionary-Creator-1.2-is-out-on-CodePlex.aspx  15 November 2006 13 November 2012
5.  William Brewer  Towards the self-documenting database: extended properties  http://www.codeproject.com/Articles/17077/Towards-the-self-documenting-database-extended-pro  07 January 2007 13 November 2012
6.  Dattatrey Sindol  Building a data dictionary  http://dattatreysindol.com/2012/04/16/sql-server-capturing-table-and-column-metadata-and-description-building-data-dictionary-part-2/#comment-1491  16 April 2012 13 November 2012
7.  Pinal Dave Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server  http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/  06 February 2008 13 November 2012
8.  MSDN SQLCMD Utility  http://msdn.microsoft.com/en-us/library/ms162773.aspx  2012 14 November 2012
9.  MSDN  Use sqlcmd with scripting  http://msdn.microsoft.com/en-us/library/ms188714.aspx  2012 14 November 2012
10.  Drikus Roux  SQL Server Database Coding Standard  http://datablazeit.wordpress.com/2012/10/22/sql-server-database-coding-standard/  22 October 2012 15 November 2012
11.  Drikus Roux  SQL Server Data Tools (SSDT) Database Project, Signed Assembly  http://datablazeit.wordpress.com/2012/10/31/sql-server-data-tools-ssdt-database-project-signed-assembly/ 31 October 2012 15 November 2012
12.  Valentine Boairkine, Mark Horninger, Herleson Pontes The Real MCTS SQL Server 2008 Exam 70-433 Prep Kit: Database Design, Table 4.2, p170  http://books.google.co.za/books?id=5HqNdnPfQckC&pg=PA166&lpg=PA166&dq=%22SqlFunction%22+%22list+of+attributes%22&source=bl&ots=7l3x9G0xS6&sig=JSvzfcYhVXhZq-bV6TY7ne4JP_M&hl=en&sa=X&ei=HjKRUJfmH8iShgfapYHgAg&ved=0CDEQ6AEwAQ#v=onepage&q=%22SqlFunction%22%20%22list%20of%20attributes%22&f=false 2008 16 November 2012
13.  MSDN  Create a Format File (SQL Server)  http://msdn.microsoft.com/en-us/library/ms191516.aspx  2012 16 November 2012

Last Updated: 20 November 2012: 09h17

Visual Studio 2012 Database Project (SSDT): Project Folder Structure

Introduction

Database projects in Visual Studio have been around for a while, but they were semi-clunkers, and people avoided Visual Studio, preferring SQL Server Management Studio for most of their database-related work.  With the advent of SQL Server Data Tools (SSDT), it became a much smoother ride: less rigidity, more flexibility.

The older Visual Studio database projects were known as Data Dude projects, named after Gert Drapers, the original data dude, according to Bob Beauchemin (SQL Skills) [1].  The Data Dude projects suffered some deficiencies, which were addressed by SSDT, the Visual Studio in-house replacement for SQL Server Management Studio [2], aimed at easing Visual Studio's difficult database deployment.

New Data Tools Data Project

New Data Tools Data Project

Visual Studio 2012 comes with a new template: SQL Server, which is the SQL Server Data Tools (SSDT) version of a database project.

Once a database project is created from the template, a minimalistic project structure presents itself.  It does not create a boilerplate folder structure at all.

To experiment with a structure, any existing database can be imported to see how Visual Studio 2012 organises it.

Import an Existing Database

Import an Existing Database

Getting Going

SQL Server 2012 Database Folder Structure

SQL Server 2012 Database Folder Structure

Choosing a folder structure for the SSDT database project in Visual Studio 2012 is important from a familiarity, readability, and maintainability point of view.  The structure has no bearing on deployment, and does not influence anything from a database point of view.

The folder structure, as seen from the [Object Explorer] tab in SQL Server Management Studio (SSMS), is still very much the same as always.  Because Visual Studio has no bearing on it, the two environments could be diametrically opposed in their structures.

It will probably make a lot of sense to mimic the SSMS structure, from a familiarity perspective, if the database design is not heavily schematic.

Choosing a folder structure from the Import Database wizard provides a template according to which Visual Studio arranges the database objects.

Obviously, where nothing (None) is chosen, no folder structure is created.  A blank project leaves it up to the project team to define the structure.  An import of an existing database where no structure is chosen or defined dumps all files in the root of the project, which is moderately messy, so the choice must be carefully considered.

Importing an Existing Database

The Import Database wizard, accessed by right-clicking the database project and following the path Import >> Database, is used to choose a folder structure.

Import an Existing Database

Import an Existing Database

The wizard provides four options for a project folder structure, viz. None, Schema, Object Type, and Schema Object Type.  Because None is the default of a new SSDT database project, it is not treated in the list below, but in the next section.
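The effect of each option can be pictured as a mapping from one database object to a project-relative file path.  A minimal sketch in Python, where the object Sales.Customer and the helper `project_path` are hypothetical illustrations; the folder names mirror those the wizard creates:

```python
# Sketch: where each Import Database folder-structure option places an
# object's .sql file within the project. The object Sales.Customer (a
# table) is a hypothetical example.

def project_path(option, schema, object_type, name):
    """Return the project-relative path for one database object."""
    if option == "None":
        return f"{name}.sql"                         # everything in the root
    if option == "Schema":
        return f"{schema}/{name}.sql"                # one folder per schema
    if option == "Object Type":
        return f"{object_type}/{name}.sql"           # mirrors the SSMS tree
    if option == "Schema Object Type":
        return f"{schema}/{object_type}/{name}.sql"  # schema, then type
    raise ValueError(f"unknown option: {option}")

for opt in ("None", "Schema", "Object Type", "Schema Object Type"):
    print(opt, "->", project_path(opt, "Sales", "Tables", "Customer"))
```

The last option yields `Sales/Tables/Customer.sql`, which is the deepest, and most discriminating, of the four layouts.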

Making a folder-structure choice really depends on the characteristics of the database project, more specifically what the visual presentation of database objects should conform to from an organisational perspective.  Within each option, the rationale for choosing it is discussed.

NOTE:  Where an imported database has, say, no Stored Procedures, the folder for them is not created either.  When such objects are subsequently added to the project, the folder first has to be created.  This could infringe upon the project standards for database development, where such standards exist.  If no database coding standard exists, there is little guidance, and projects could diverge greatly from a standardising and conforming perspective.

Schema

The Schema import option imports all objects into the folder structure <solution> >> <project/database> >> <schema|[dbo]>.  It stores all objects connected with a schema in a corresponding folder.

Schema Folder Structure

Schema Folder Structure

Schema separation for database objects should be considered where the database is heavily schematic, i.e. objects are split into schemas.

Interestingly, this option exercises some intelligence by grouping like-named objects into sub-folders, naming them, for example, Tables1 . . . n.

This seems a bit counter-intuitive, because the grouping of database objects is an Object Type function, whereas this option is Schema, or seems to indicate Schema only.  It could be that, where the import wizard identifies objects consistently starting with the same text (indicative of some kind of grouping), it automatically assumes a differentiation according to Object Type.

This might be irritating, depending on whether or not it is appreciated, from an organisational perspective.

Often, objects are named with a prefix denoting a grouping, which causes the objects in the folder to sort accordingly.  If Visual Studio provides an intuitive grouping through the Schema Object Type import option, it overstates the obvious by placing all like-named objects in sub-folders, something already achieved through the sorting of the names.
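A tiny illustration of that point: when names carry a grouping prefix, a plain alphabetical sort already clusters them without any sub-folders.  The object names below are hypothetical examples:

```python
# Prefix-named objects cluster by themselves under a plain sort, which
# is why an extra sub-folder per group can feel redundant. These
# object names are hypothetical.
names = ["StockLevel", "SalesInvoice", "StockItem", "SalesOrder"]
print(sorted(names))  # Sales* and Stock* objects end up adjacent
```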

It does, however, stay in tune with the dbo (schema) folder idea.  Where other schemas are created, these are all stored in their own folder structures, following the convention of storing all similarly named objects in corresponding sub-folder structures within their schemas.

It also seems that where objects cannot be categorised in accordance with the schema, type, or naming convention, they are dumped under the schema folder root.  These could be anything, ranging from tables and views to functions and stored procedures.  It is relatively easy to remedy this after the import (depending on the size of the project, i.e. the number of objects) by manually creating folders for the skipped objects and dragging the artefacts into those folders.
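For a larger project, that manual clean-up can also be scripted.  A hedged sketch in Python, assuming each dumped script contains a single CREATE statement; the statement-to-folder mapping and the function name are assumptions chosen to mirror the wizard's folder names, not part of Visual Studio:

```python
import os
import re
import shutil

# Sketch: move .sql files dumped in the project root into Object Type
# folders by inspecting their CREATE statement. The mapping below is
# an assumption mirroring the wizard's folder names.
FOLDER_FOR = {
    "TABLE": "Tables",
    "VIEW": "Views",
    "PROCEDURE": "Stored Procedures",
    "FUNCTION": "Functions",
}
CREATE_RE = re.compile(r"\bCREATE\s+(TABLE|VIEW|PROCEDURE|FUNCTION)\b",
                       re.IGNORECASE)

def sort_root_scripts(project_dir):
    """File each root-level .sql script under a folder for its object type."""
    for entry in os.listdir(project_dir):
        path = os.path.join(project_dir, entry)
        if not (os.path.isfile(path) and entry.lower().endswith(".sql")):
            continue
        with open(path, encoding="utf-8") as f:
            match = CREATE_RE.search(f.read())
        if match is None:
            continue                      # leave unrecognised scripts alone
        folder = os.path.join(project_dir, FOLDER_FOR[match.group(1).upper()])
        os.makedirs(folder, exist_ok=True)
        shutil.move(path, os.path.join(folder, entry))
```

Note that the moved files would still have to be re-included in the Visual Studio project afterwards; the script only tidies the file system.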

Object Type

The Object Type import option for the folder structure, imports all objects by Object Type, placing them in a folder structure.

Object Type Folder Structure

Object Type Folder Structure

This structure follows the SQL Server database folder structure as displayed through SQL Server Management Studio.  The choice between this option and the next (Schema Object Type) is probably the one through which a database project should be defined.

Where a database is not heavily schematic, i.e. not specialised into schemas, that level of separation is not required.  Looking at objects by type might then be the easiest way to sort and find them.

Where a database is very large and separated into various schemas, the additional level of separation makes finding artefacts easier, and this type of folder structure, i.e. Object Type, may not be as suitable.

The Object Type folder structure may thus be more suitable for databases with a singular schema definition.

Schema Object Type

Folder Structure Schema_Object Type Sub-Folders

Folder Structure Schema_Object Type Sub-Folders

The Schema Object Type import option imports all objects in similar fashion to the previous Data Dude folder structure for Visual Studio.

As the name suggests, this is a combination of Schema and Object Type.  For example, database objects are sorted into sub-folders in accordance with similar naming conventions.  NOTE: Any unidentified artefacts are dumped under the schema folder, as with the Schema option above.

This folder structure would be chosen for the database project where a project is large and specialised into a few schemas, and probably also organised according to Object Types, with objects split into sub-folders according to their systems by means of indicative naming conventions.

NOTE: This discussion does not explore how folders are created when importing another (previous) database project from an earlier Visual Studio version.  There is an MSDN social discussion about this, titled SSDT project structure after conversion from VS 2010 DB project [3].

Creating a Database from Scratch

Blank-slate for Database ACMECorporation

Blank-slate for Database ACMECorporation

Creating a database from scratch should incorporate the same thought processes (on the folder structure of the database project) as when importing an existing database, and for the same reasons.

Being presented with a blank slate, however, is no proposal to go ape.

It is intuitive that sticking with a folder-structure definition like the one seen through SSMS is probably the better option, barring the reasons explained in the respective folder-structure options above.

The reason that a blank project starts with no folder structure is probably that Microsoft does not know (and does not want to propose that it does) what kind of project (database) is being envisaged or imported.  Rather than casting a structure in stone, which is more often than not counter-intuitive, it is left up to the project team to define.

A problem (not a new one) is that the folder structure in SSMS is not conducive to multi-schema systems, in that it does not provide a Schema Object Type folder structure for ease of reference.  Visual Studio to SSMS alignment is only possible where the Object Type folder structure is chosen, which is probably the one most database professionals would choose.

Summary

Visual Studio 2012 (SSDT) provides options for creating a database project folder structure when importing an existing database or Visual Studio project.  However, when creating a project from scratch, it starts with no folder structure.

It seems intuitive to follow the structure proposed by SQL Server Management Studio for a database, keeping the project's structure as familiar as possible.  However, a heavily schematic database might benefit from a schema-specialised folder structure, and in that instance diverge from the normative SSMS folder structure.

Citations

No.  Author  Name  URL  Date Authored  Date Cited
 1.  Bob Beauchemin  The Rest of the Story: Co-existing VS2010 and SSDT Database Projects  http://www.sqlskills.com/blogs/bobb/post/The-Rest-of-the-Story-Co-existing-VS2010-and-SSDT-database-projects.aspx  17 April 2012  12 November 2012
 2.  David Ramel  SQL Server Data Tools: A Work in Progress  http://visualstudiomagazine.com/blogs/data-driver/2012/06/sql-server-data-tools-a-work-in-progress.aspx  05 June 2012  12 November 2012
 3.  DNETFX  SSDT project structure after conversion from VS 2010 DB project  http://social.msdn.microsoft.com/Forums/da-DK/ssdt/thread/cd57aa97-57e9-44bd-9040-79bbce818e04  13 June 2012  12 November 2012
Last Updated: 12 November 2012, 13h04