Setting up a BigQuery service account

I have several future posts in the pipeline that will require BigQuery service account keys. So rather than repeating the setup in each post, this article will act as the reference for what to do.

First, set up a Google Cloud account. Activating an account will require a payment card, and usage may incur charges, so keep an eye on costs so you don't get hit with an unexpected bill.

Service account

To create a service account, go to the IAM & Admin menu and open Service Accounts.

Click on the Create Service Account option.

Put a meaningful name into the Service account name field, along with a good description.

Click Create and Continue, and grant the least required level of permission.

Complete the creation by clicking Done; the service account will appear in the service account list.

In the service account options, click the menu item Manage Keys.

Add a key by selecting Create new key; the formats JSON or P12 can be chosen (I chose JSON). Click Create and a key file will be downloaded. This file needs to be kept in a secure place, as your project can be accessed with it.
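The downloaded key is a small JSON document, and it can be worth sanity-checking it before filing it away. A minimal Python sketch using a made-up key (the field values here are placeholders, but the field names match the standard service account key layout):

```python
import json

# A made-up example with the same shape as a downloaded service account key.
sample_key = """{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "abc123",
  "private_key": "-----BEGIN PRIVATE KEY-----\\n...\\n-----END PRIVATE KEY-----\\n",
  "client_email": "my-bot@my-project.iam.gserviceaccount.com",
  "token_uri": "https://oauth2.googleapis.com/token"
}"""

key = json.loads(sample_key)

# Sanity-check the fields client libraries rely on before filing the key away.
for field in ("type", "project_id", "private_key", "client_email"):
    assert field in key, f"missing field: {field}"

print(key["client_email"])
```

Client libraries such as google-cloud-bigquery typically accept this file directly (for example via a credentials file path), which is exactly why it must be kept out of source control.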

Posted in Uncategorized | Leave a comment

Searching for underscores with LIKE

I got caught out by a gotcha which has caught me out many times and hopefully won't again (but it will). I was looking for a specific piece of text within a SQL module in a database. Simple, right? In this case I was looking for a CDC text identifier in the format SCHEMA_TABLENAME.

For illustration purposes I'm going to use dbo_myDataTable. I was trying to find any stored procedure that contained this, so I used my go-to query:

SELECT definition
FROM sys.sql_modules
WHERE definition LIKE '%dbo_myDataTable%'

LIKE is a great method of filtering data. I'd expected only one, maybe two, records returned. However, I got loads of results. So I started looking for the text fragment. The first record did not have the text anywhere within the definition column. I thought this was strange and moved to the second; again, no references.

It was about this time I kicked myself: I'd been caught out again.

The underscore character is a wildcard. It matches one, and only one, character of any kind. This is different from the percent (%) symbol, which matches zero to many characters of any kind.

So my search was pulling any record containing dbo<any char>myDataTable within the definition column. This meant, for example, that dbo.myDataTable would be returned. I didn't want these records.

So more importantly how do you fix this?

If you use square brackets it is possible to match any single character listed within the brackets, i.e. Value LIKE 'a[bcd_]e' can return abe, ace, ade or a_e but not aee. This also allows you to use the underscore symbol as a literal underscore character. So I reran my query with a little tweaking.

SELECT definition
FROM sys.sql_modules
WHERE definition LIKE '%dbo[_]myDataTable%'

This returned the one record I was looking for.
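As an aside, LIKE also supports an ESCAPE clause, which is the portable way to treat the underscore as a literal. A quick sketch of both the gotcha and the escape fix using Python's sqlite3 (SQLite's LIKE shares the % and _ wildcard semantics; the table here is a stand-in, not sys.sql_modules):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE modules (definition TEXT)")
conn.executemany(
    "INSERT INTO modules VALUES (?)",
    [("... dbo_myDataTable ...",), ("... dbo.myDataTable ...",)],
)

# The underscore is a single-character wildcard, so both rows match.
broad = conn.execute(
    "SELECT COUNT(*) FROM modules WHERE definition LIKE '%dbo_myDataTable%'"
).fetchone()[0]

# ESCAPE nominates a character that turns the next wildcard into a literal.
exact = conn.execute(
    "SELECT COUNT(*) FROM modules WHERE definition LIKE '%dbo\\_myDataTable%' ESCAPE '\\'"
).fetchone()[0]

print(broad, exact)  # 2 1
```

The same ESCAPE syntax works in T-SQL, so it is an alternative to the square-bracket class if your query also needs to run on other database engines.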

Posted in SQL, XML

SQL Server and sending messages to Slack (or how I stopped worrying about emails) Part 3

In Parts 1 and 2 of this series I demonstrated how to create a Slack integration and then use Azure Data Studio to test it. Those parts will need to be completed before the following SQL Server implementation can be.

Preparing your SQL Server

As stated at the beginning of this article series, this uses SQL Server Machine Learning Services' Python implementation. As this has been documented very well previously, it is better to read the following if you have not set this up on your SQL Server:

Overview of SQL Server Machine Learning Services

Installation Information

Installing the SlackClient python package to an Instance

In the previous article we used the Azure Data Studio GUI to install slackclient locally. To install this Python package on the instance, we are going to use the sqlmlutils package in ADS to load it to the server instance using Python.

import sqlmlutils

# connect to the instance (this uses Windows Authentication)
connection = sqlmlutils.ConnectionInfo(server="<sql server>", database="master")

# install the slackclient package into the instance's Python environment
sqlmlutils.SQLPackageManager(connection).install("slackclient")

You will need to update the server name to your instance name. This code uses Windows Authentication; if you need SQL Authentication, a username and password need to be added to the ConnectionInfo call.

SQL Code Cutting

The token created in part one needs to be handled very securely. To this end I have put the token into a table. The final version uses an encrypted column in the table and a signed stored procedure to obscure this private information from users who wish to use the procedure.

USE master

CREATE TABLE SlackParams (
    Token_Name NVARCHAR(80),
    Information NVARCHAR(80)
)

INSERT INTO SlackParams (Token_Name, Information)
VALUES ('SLACK_BG_TOKEN', 'tokenstring from ')

My stored procedure searches the table for a token (multiple Slack bots could be run off the same server). If a threadID is passed into the procedure and the update flag is 1, the threadID is used to update that message; if @Update = 0, the threadID is used to post a new message underneath the original one.

Python is a very format-reliant language, and the Python scripts within the @PythonCode string have to be formatted accordingly. This means SQL indentation can cause problems, as there can be no preceding whitespace on the lines inside the variable.
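The problem is easy to reproduce: exec-ing a script whose top-level lines carry leading whitespace fails with a syntax error, which is exactly what happens to an indented @PythonCode string. A small sketch of the failure, and of textwrap.dedent as one way to strip the common indentation:

```python
import textwrap

indented = """
    import math
    x = math.sqrt(16)
"""

# Indented top-level statements are a syntax error, just as they are
# when sp_execute_external_script runs an indented @PythonCode string.
try:
    exec(indented)
    failed = False
except SyntaxError:  # IndentationError is a subclass of SyntaxError
    failed = True

# Stripping the common leading whitespace makes the same script valid.
exec(textwrap.dedent(indented))
print(failed)  # True
```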

CREATE PROCEDURE slackToChannel
    @Message NVARCHAR(2000),
    @Channel NVARCHAR(200) = '#dba' OUTPUT,
    @ThreadId NVARCHAR(200) = '' OUTPUT,
    @Update BIT = 0
AS
BEGIN
    DECLARE @ThreadInfo VARCHAR(500)
    DECLARE @CertId NVARCHAR(80)
    DECLARE @PythonCode NVARCHAR(MAX)

    SELECT @CertId = Information
    FROM master.dbo.SlackParams
    WHERE Token_Name = 'SLACK_BG_TOKEN'

    IF (@Update = 0)
    BEGIN
        --New Message
        IF @ThreadId IS NULL OR @ThreadId = ''
            SET @ThreadInfo = ''
        ELSE
            SET @ThreadInfo = ', thread_ts = "' + @ThreadId + '"'

        SET @PythonCode = N'import slack
# token is taken from the SlackParams table and entered in the python script below.
client = slack.WebClient(token="' + @CertId + '")

# post to the channel submitted to the proc (@Channel)
MessageReturn = client.chat_postMessage(channel="' + @Channel + '", text="' + @Message + '"' + @ThreadInfo + ')

thread_ts = MessageReturn["ts"]
channelid = MessageReturn["channel"]'

        EXECUTE sp_execute_external_script @language = N'Python',
                @script = @PythonCode,
                @params = N'@thread_ts VARCHAR(200) OUTPUT, @channelid VARCHAR(200) OUTPUT',
                @channelid = @Channel OUTPUT, @thread_ts = @ThreadId OUTPUT
    END
    ELSE
    BEGIN
        --Update a Message
        IF @ThreadId IS NULL OR @ThreadId = ''
            PRINT 'Thread Id required to update a message!!'
        ELSE
        BEGIN
            SET @ThreadInfo = ', ts = "' + @ThreadId + '"'

            SET @PythonCode = N'import slack
client = slack.WebClient(token="' + @CertId + '")

# update the message previously posted to the channel
MessageReturn = client.chat_update(channel="' + @Channel + '", text="' + @Message + '"' + @ThreadInfo + ')

thread_ts = MessageReturn["ts"]
channelid = MessageReturn["channel"]'

            EXECUTE sp_execute_external_script @language = N'Python',
                @script = @PythonCode,
                @params = N'@thread_ts VARCHAR(200) OUTPUT, @channelid VARCHAR(200) OUTPUT',
                @channelid = @Channel OUTPUT, @thread_ts = @ThreadId OUTPUT
        END
    END
END


In the sp_execute_external_script call I am using

@params = N'@thread_ts VARCHAR(200) OUTPUT, @channelid VARCHAR(200) OUTPUT',
                @channelid = @Channel OUTPUT, @thread_ts = @threadId OUTPUT

This means that the variables thread_ts and channelid inside the Python script can be output to the SQL side of the procedure. The output variables are mapped to the SQL variables @ThreadId and @Channel.

As the article formatting is iffy, I have put a notebook of the above information in Create Slack procedures for SQL Server.

To test the Stored Procedure

The stored procedure can be tested using the following code:

EXEC slackToChannel @Channel = '#bot-integration', @Message = 'Hello World'

Again, the errors I have come up against when testing are around the firewall rules of my system. I had to go into the Windows Firewall outgoing rules and allow SQL Server to send; TCP/IP enabling may be needed as well.

Once the stored procedure is created, it is possible to put it to use within Agent jobs and call it from within other procedures.

A more complex example has been written and is in this location: Example usage. It goes into more depth on posting under an existing message and updating messages. A GIF of the example is below.

This example creates a Slack post, then under the post sets a simple progress bar, and below this updates an imaginary process. When each step is completed, the progress bar and the status are updated. Finally, the original post is updated with a completed message and the final status is updated.

Posted in python, slack, SQL

SQL Server and sending messages to Slack (or how I stopped worrying about emails) Part 2

This is part 2 of a set of articles. Part 1, which details the setting up of the Slack integration, can be found at SQL Server and sending messages to Slack (or how I stopped worrying about emails) Part 1.

Preparing The Slack Integration with Azure Data Studio (ADS)

To build a simple Hello World application using ADS, the Python package slackclient needs to be installed into your Python environment. If you've not installed a package in ADS, it is quite simple: having chosen a Python 3 kernel, there should be an option at the top of the page to Manage Packages.

Clicking this brings up a menu that allows you to "Add New"; in the pip package search, enter slackclient.

Clicking Install starts a process in your Tasks window to install slackclient. If you are doing this for the first time, it is useful to also install sqlmlutils at this point. This method only loads the packages into your local environment; if you want to load slackclient onto your SQL Server to run the later SQL-based scripts, you need to run a Python script, which requires the sqlmlutils package locally.

Testing the Slack integration

The code to send a simple message to Slack is three lines of Python.

import slack

#initialize client with the token.
client = slack.WebClient(token = "<insert token here>" )
#Send the message to Slack
client.chat_postMessage(channel="#bot-integration", text= "Hello World")

The command import slack references the Python package we installed in the earlier step.

Creating the client object is done by passing in the Slack token (in this case redacted) generated during the Slack setup in part 1. This identifies which Slack integration to send to.

The chat_postMessage command in its simplest form requires a channel name to send to and a message string; in the previous article I set up the integration in the #bot-integration channel. It is possible to capture the return values from this function, which provide useful information such as the thread id of the message (this can be used to update the message later if required) and the channel ID. (Note: the channel name is a friendly identifier; the ChannelID is a unique id the integration uses, required for certain operations.)
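The return value behaves like a dictionary. The sketch below uses a hand-built sample in the same shape as a chat_postMessage response (the values are placeholders, not from a live call) to show the two fields used throughout these articles:

```python
# A made-up response in the shape the slackclient library returns;
# a real one would come back from client.chat_postMessage(...).
MessageReturn = {
    "ok": True,
    "channel": "C0123456789",
    "ts": "1621437398.000100",
    "message": {"text": "Hello World"},
}

thread_ts = MessageReturn["ts"]        # reference to the posted message
channel_id = MessageReturn["channel"]  # unique channel id, not the name

print(thread_ts, channel_id)
```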

This is where it either just works or some debugging is needed. Some of the issues I've faced at this point are typos (missing quotes, the channel name not being spelt correctly) or the firewall blocking the message being sent. I once had the slackclient package incorrectly loaded and had to uninstall and reinstall it. Hopefully this will just work for you.

Not what I promised you?

Okay, agreed: so far this is a three-line script running in Azure Data Studio. It should give some idea of what is possible, but it is not SQL Server; I promise that is coming up shortly. This is, however, a good place to expand on the properties of the chat_postMessage and chat_update functions.


chat_postMessage is a Python function to send a message to Slack. In its simplest form you only need a channel name and a message string, as shown above. When this function completes, it returns a response letting you know whether it succeeded, plus some additional useful data.

thread_ts: a reference to the message that was posted.

channel: the unique ChannelID (not the channel name).

If you pass the returned thread_ts back into the chat_postMessage function as the thread_ts argument, the new message is posted under the original one. This can be done multiple times.

import slack
# initialize client with the token.
client = slack.WebClient(token="<insert token here>")

MessageReturn = client.chat_postMessage(channel="#bot-integration", text="Hi")
thread_ts = MessageReturn["ts"]

# Send a message in a conversation under the original post.
client.chat_postMessage(channel="#bot-integration", text="goodbye", thread_ts=thread_ts)

# In Slack the messages will appear as
# Hi
#    goodbye

The channel return parameter is the id Slack uses to identify the channel. If you are just posting messages this is not required, but if you want to update a message you will need both the ChannelId and the thread_ts.


If you have captured the ts id and the ChannelID (channel in MessageReturn) you can update the message.

import slack
# initialize client with the token.
client = slack.WebClient(token="<insert token here>")

MessageReturn = client.chat_postMessage(channel="#bot-integration", text="Hi")
thread_ts = MessageReturn["ts"]
channel_id = MessageReturn["channel"]

client.chat_update(channel=channel_id, text="Hello", ts=thread_ts)

#Final output will be 
# Hello

The above code changes the original posted message of Hi to Hello.

The code above is a good place to stop and play. These two functions are the base of my SQL Server procedure calls, which will be shown in Part 3 of this article.

Posted in python, slack, SQL


SQL Server and sending messages to Slack (or how I stopped worrying about emails) Part 1

Several years ago I joined a company that used Slack. I could not fathom why: it was basically a messenger tool, and I had spent so many years being sent emails that I saw Slack as an extra thing to check, so it took me a long time to fully appreciate it. Eventually I came up with a list of pros and cons. The main pro was that, unlike email, all of the conversation is in one place. You don't end up entering halfway through a dialogue, potentially missing parts of the conversation, or finding out that a separate conversation had been carried on elsewhere because someone had used a different email list that you had not been added to.

The biggest issue still remained, though: all my daily reports on the health of my servers continued to be transmitted by email. All the failure messages of things needing attention, sales figures or something not completing still came via email. It was only when I moved to another company, one with a paid Slack subscription, that this issue got sorted. In the paid version of Slack, an integration for receiving emails could be installed on a specific channel in your Slack workspace, and you could see the emails in the channel feed. At this point I went from a DBA who had an email client open constantly through the day to checking it once or twice a day. My work tickets were arriving in Slack, the emails were arriving in Slack when needed, and my team communicated via Slack. There was minimal need for my email client in hour-to-hour working.

The email integration in Slack had solved a problem, but it is clunky. You could see the email subject and heading, but to get to the contents you had to expand it to see more than a couple of lines. It was not pretty, especially for something that could be summarised into a single message. The format wasn't great, but I was getting what I needed to keep the servers running, all from within Slack.

I'd noticed some people had built custom Slack bot integrations, but I am not a developer; my main hat is still a DBA one (it's a green bowler hat, very dashing). I can code rudimentary stuff, mainly in SQL, and I can get by in Python if it's not too complex. So I never really looked at creating bot integrations until SQL Server started allowing Python code in its ML component. That opened my comfort zone a little, and then I wondered if I could use SQL Server's Python to send messages to Slack. You can, and it's simple. The following three articles show how I implemented a SQL Server based Python script to send messages to one or more Slack channels.


To follow along with what I created you will need a SQL Server instance (I used 2019) with the Python ML component installed. You will also need a Slack workspace; I have a corporate account, but for testing I created a simple standalone workspace. I used Azure Data Studio as I find it a useful tool when switching between SQL and Python. SSMS can be used, but the Python scripts need a bit more packaging to run in the early testing.

Setting up the Bot

To create a new Slack bot, go to the Slack apps site and click the top right corner option Your Apps. You will need an account set up and linked to your workspace.

Click on the Create New App button; when the menu option comes up, select "From scratch".

The next page gives you the opportunity to choose the name and which workspace to install the app in. I would recommend developing this off your main production systems at first, then once established moving the code to a more professional environment. In this case I'm going to call my bot "Barbara Gordon", after the DC Comics heroine "Oracle", because it amuses me.

On the "Add Features and Functionality" page, click the Bots menu item, then go to "Review Scopes to Add".

Under the Scopes item, click Add an OAuth Scope.

In this article we will only be adding the chat:write scope. The other options add more features that you could use, for example to look up the users in your workspace, but these will not be covered in this series of articles.

Once the scope has been selected, click the Install to Workspace button at the top of the page, and then the Allow button once you've reviewed your choices. If you are part of an organization, there may be a Slack administrator who will have to approve the adding of this bot to your company's workspace. The chat:write option is going to allow anyone with the token that is created to spam your Slack system; however, this is a required option to allow you to post to your workspace. If you add additional features (and they can be added at a later time), be aware you may have to justify why the additional scopes have been added, in the same way an application really needs to justify to a DBA why it really needs sysadmin.

Once the integration has been created for your workspace you will be presented with the Bot User OAuth Token. This allows access to your bot, and it is a very private string: do not share it, and do not put it into GitHub or any other insecure source control. Someone I know once accidentally pushed a token, and 30 seconds later their integration was disabled by Slack Technologies and they (I) got an email alerting them to the breach.

Congratulations, you now have a Slack bot integration app.

Now it just needs adding to a channel so you can send messages. To do this, click on the channel name to get to the properties of the Slack channel and click the Integrations tab.

The Apps window allows you to Add an App.

From the list of apps, select the one you've just made. You have now created a Slack integration that can receive messages from Python. You can change the icon to something more fun if you want; the icon I eventually used for this integration was Dina Meyer, who played Barbara Gordon in the Birds of Prey TV series.

In the next article I will cover how to use the above token with Azure Data Studio to send a "hello world" message to your Slack workspace.

Posted by ph1lqu1nn

Subqueries for XML blobs without self-referencing

One of the issues I see time and again in SQL Server is people creating subqueries that reference the parent query so that an XML blob can be created.

In this example, the simple table query returns the table contents and creates an XML blob. To create this, it references the ID in the subquery (b) and links it to the ID in the main query (a). In cases where there is only one ID record, for example a primary key, the link to the subquery is not required. This query performs a lookup for each record, which creates a worktable with 7,216 scans, and the iCollections table is scanned twice. In this example the table is only 182 pages; the issue is that as the table grows, the size of the worktable increases proportionally.

Table 'Worktable'. Scan count 7216, logical reads 29578, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'iCollections'. Scan count 2, logical reads 364, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.

It is possible to build the XML subquery from the data already in the main query. The above query can be rewritten without the link to the parent query.

As you can see, the "FROM iCollections b" and the WHERE id = id join have been removed. As the ID in this case is a unique field on the dataset, it is possible to do this rewrite. If there were an expectation of using other data rows from the table, this would not be possible.

As there is only one table call, the IO statistics are now much simpler and the table is only scanned once. This has removed 29,760 logical page reads.

Table 'iCollections'. Scan count 1, logical reads 182, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.

The XML blob uses the primary table's data directly. In this example, that halved the time taken for the server to return the result, and fewer resources are consumed, so as the data scales the resource demand does not increase as rapidly.

Thanks to the Natural History Museum for providing the data with which these example patterns could be demonstrated. The Natural History Museum provides public access to some of its datasets.

Posted in Uncategorized

Auditing Your Database Changes

SQL Server can capture database-level trigger information using XML. I personally use this on my home PCs because:

  1. I’m poor and can’t afford one of those amazing DB Source control products that are out there on the market.
  2. I’m forgetful and always forget to check my changes into source control.
  3. I don’t trust anyone else to remember either 🙂

So in an attempt to track what I do on my databases, I have implemented the following.

I create a database called Logs (imaginative) that I place on all my instances of SQL Server. Within this database I create all those fun DBA-type tables that mean during meetings you can casually say "all our indexes are optimized" or "we have never changed that". This audit table is one of them.
CREATE TABLE [dbo].[SchemaEventLog](
    [ID] [bigint] IDENTITY(1,1) NOT NULL,
    [DatabaseName] [varchar](120) NOT NULL,
    [EventType] [varchar](120) NOT NULL,
    [SchemaName] [varchar](120) NOT NULL,
    [ObjectName] [varchar](120) NOT NULL,
    [ChangeDate] [datetime] NOT NULL DEFAULT (GETDATE()),
    [ChangeUser] [varchar](120) NOT NULL,
    [CommandText] [varchar](max) NULL
)


I then run in the model database several scripts all based on the following:

CREATE TRIGGER [change_Index] ON DATABASE
FOR CREATE_INDEX, ALTER_INDEX, DROP_INDEX
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @data xml
    SET @data = EVENTDATA()

    INSERT INTO Logs.dbo.SchemaEventLog (DatabaseName, EventType, SchemaName, ObjectName, ChangeUser, CommandText)
    VALUES (@data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'varchar(120)')
    , @data.value('(/EVENT_INSTANCE/EventType)[1]', 'varchar(120)')
    , @data.value('(/EVENT_INSTANCE/SchemaName)[1]', 'varchar(120)')
    , @data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(120)')
    , @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(120)')
    , @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)'))
END

These capture and record in the Logs DB all changes to indexes, tables, stored procedures and functions. When I create a new database, the triggers are placed in the new database automatically, due to them being in the model database.

The trigger makes a call to EVENTDATA(), which returns an XML blob describing the action.


The information that I'm interested in is taken via the .value XML method and placed in the audit table using a normal insert. You could take all of the information from the XML, but I like to limit it to a generic set.
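To see what those .value calls are pulling out, here is a cut-down blob in the shape EVENTDATA() returns, parsed in Python rather than T-SQL purely for illustration (the element values are made up; a real blob carries more elements):

```python
import xml.etree.ElementTree as ET

# Cut-down sample in the shape of an EVENTDATA() blob (values are made up).
event = ET.fromstring("""
<EVENT_INSTANCE>
  <EventType>CREATE_INDEX</EventType>
  <DatabaseName>Sales</DatabaseName>
  <SchemaName>dbo</SchemaName>
  <ObjectName>IX_Orders_Date</ObjectName>
  <LoginName>DOMAIN\\phil</LoginName>
  <TSQLCommand>CREATE INDEX IX_Orders_Date ON dbo.Orders(OrderDate)</TSQLCommand>
</EVENT_INSTANCE>
""")

# The same fields the trigger extracts with @data.value(...)
row = {tag: event.findtext(tag) for tag in
       ("DatabaseName", "EventType", "SchemaName", "ObjectName", "LoginName")}
print(row["EventType"], row["ObjectName"])
```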

It is possible to extend the functionality of the triggers to include other object modifications. I normally find the key ones are tables, indexes, stored procedures and functions, but this can be extended to almost any object.

There is a clean-up script that removes any entries older than one month (or three months, a year, etc., depending on the DB's use). This is a simple delete mechanism, run from a daily SQL Agent maintenance job using the date predicate on the table.

Posted in Uncategorized

Recursive XML branches

One of the main features of XML is that it handles hierarchical data structures. I agree that so does SQL, either using the hierarchyid type or a recursive CTE, but nothing compares to the simplicity of the XML format for showing the data. The following snippet of XML is from a geographical script I occasionally use. The highest level of the tree is the country data; branched off this is the region information, and below that the town/village information.
The position in the tree is pivotal to the area that it describes, and the data at each level is related to the branch above, i.e. Cardiff is in Wales.

<Areas AreaName="Wales">
  <Area AreaName="South Wales" GridRef="123994">
    <Area AreaName="Cardiff" GridRef="332233"/>
    <Area AreaName="Pontypool" GridRef="545676"/>
    <Area AreaName="Newport" GridRef="123322"/>
    <Area AreaName="Cwmbran" GridRef="123432"/>
  </Area>
</Areas>



Now this is only a small fragment of a much larger document (and made-up data), but the examples should still work.
If I were to search for a specific town/city (specific to its location in the XML document) with an AreaName of "Cardiff", I could use:

SELECT @XML.value('(/Areas/Area/Area[@AreaName="Cardiff"]/@GridRef)[1]', 'varchar(50)')

This would specifically find Cardiff at the second Area level. But if I didn't care what level the area was at, town, region or even country, I could change the query to…

SELECT @XML.value('(//Area[@AreaName="Cardiff"]/@GridRef)[1]', 'varchar(50)')

This query will return the grid reference for the first Cardiff element that it encounters, regardless of the level.

This is an expensive query and should be considered very carefully before being used in a production environment.
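The fixed-path versus any-depth distinction exists in most XPath implementations. A sketch of the same two searches in Python's ElementTree, against a cut-down copy of the fragment above:

```python
import xml.etree.ElementTree as ET

areas = ET.fromstring("""
<Areas AreaName="Wales">
  <Area AreaName="South Wales" GridRef="123994">
    <Area AreaName="Cardiff" GridRef="332233"/>
    <Area AreaName="Pontypool" GridRef="545676"/>
  </Area>
</Areas>
""")

# Fixed path: only matches Cardiff at the second Area level.
fixed = areas.find("./Area/Area[@AreaName='Cardiff']")

# Any-depth search, the ElementTree spelling of //Area[@AreaName="Cardiff"]
anywhere = areas.find(".//Area[@AreaName='Cardiff']")

print(fixed.get("GridRef"), anywhere.get("GridRef"))
```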

Posted in Uncategorized, XML

Enforcing XML Document Structure

By default the xml variable supports a CONTENT structure, which is very useful if you are constructing an XML blob. It works for typed and untyped XML, and it has its place.

The following code shows an xml query that would work if the data passed the schema checks.

[Image: content XML query]

This query should pass

However, the following query mandates a single root node and would fail.

[Image: document XML query]

This query will fail.

In the case where you are processing data laid out in a schema, you might want to constrain the incoming information to a single document. If two XML blobs are passed in to be processed and both meet the schema criteria, your data load may fail, or worse, it could pass without warning you of issues.

The code below would pass as it fulfils the schema matching requirement and also has a single root node.

This query will pass.

A very simple solution to what could be an awkward fix after the fact.
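The single-root rule is what most XML parsers call a well-formed document; the CONTENT facet accepts fragments that DOCUMENT rejects. The same distinction can be sketched in Python: ElementTree only accepts documents, so a multi-root fragment fails just as DOCUMENT does in SQL Server.

```python
import xml.etree.ElementTree as ET

single_root = "<Load><Row id='1'/><Row id='2'/></Load>"
two_roots = "<Load><Row id='1'/></Load><Load><Row id='2'/></Load>"

# A single root parses fine, like an xml(DOCUMENT ...) column accepting it.
ET.fromstring(single_root)

# Two root nodes make a fragment: valid as CONTENT, rejected as a document.
try:
    ET.fromstring(two_roots)
    rejected = False
except ET.ParseError:
    rejected = True

print(rejected)  # True
```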

Posted in XML, XML Schemas

Schema visibility using xml_schema_namespace

One of the issues I've faced is the constant referencing and re-referencing of XML schemas stored in a database. The built-in xml_schema_namespace function helps with this. It takes two parameters: the database schema that owns the XML schema collection, and the name of the XML schema collection (and as I have mentioned "schema" too many times, here is an example).


This returns the contents of the schema. Similar results can be returned by scripting CREATE AS… on the XML schema, but this command returns purely the XML schema without the CREATE XML SCHEMA COLLECTION … AS wrapper, and the query can be used in the snippet manager of your choice.

I extended this capability for my own purposes, using a cursor to retrieve each instance of an XML schema in the database (note: this is a database-specific command, not server-wide) and return it in one large XML blob. This is part of a longer-term project that I am working on, but I thought it might be useful to blog about this specific part of it.

--Author: Phil Quinn
--Date: 5th October 2012
--Contact: Twitter @SQLPippy
-- Email:
-- http:\\

DECLARE @AllSchemas varchar(max) = ''
DECLARE @SchemaNameVar varchar(50)
DECLARE @XMLSchemaNameVar varchar(100)

--get a list of all xml schemas, including the DB schema name
DECLARE schema_cursor CURSOR FOR
SELECT s.name AS SchemaName
, xs.name AS XMLSchemaName
FROM sys.xml_schema_collections XS
INNER JOIN sys.schemas S ON s.schema_id = XS.schema_id
WHERE s.name <> 'sys'

OPEN schema_cursor

FETCH NEXT FROM schema_cursor
INTO @SchemaNameVar, @XMLSchemaNameVar

WHILE @@FETCH_STATUS = 0
BEGIN
    --use @AllSchemas as a varchar as the xml type does not have a + operator
    --could use an insert variable but that will be version 2 :)
    SET @AllSchemas = @AllSchemas + CAST(xml_schema_namespace(@SchemaNameVar, @XMLSchemaNameVar) AS VARCHAR(max))

    -- Get the next schema.
    FETCH NEXT FROM schema_cursor
    INTO @SchemaNameVar, @XMLSchemaNameVar
END

--Close and deallocate the cursor
CLOSE schema_cursor;
DEALLOCATE schema_cursor;

--Transfer back from varchar to an XML variable
SELECT CAST(@AllSchemas AS XML)

Posted in SQL, XML, XML Schemas | Leave a comment