Convert String To DateTime For A Specific Time Zone

August 31, 2018

I recently had a project where we were receiving a file with a date stamp in the file name, in the format yyyyMMddHHmm, so the date plus the hour and minute. The file was produced on the West Coast in Pacific time (I’m in the Eastern time zone). We wanted to store that date/time in our database, where times are stored as UTC.
Because of daylight saving time (DST), we can’t just add a set number of hours to the Pacific time to get UTC, unless we can determine whether the input time is in standard time or DST. I’ve worked at places where we maintained a table with all of the start and end dates for DST, and would look up each time in order to convert it.
Luckily, SQL Server has built-in time zone support and can determine whether standard time or DST applies to a specific time.
Here is the SQL code I ended up using.

DECLARE @input char(12) = '201808311200';
DECLARE @output datetimeoffset;

DECLARE @year int = CAST(SUBSTRING(@input, 1, 4) AS int);
DECLARE @month tinyint = CAST(SUBSTRING(@input, 5, 2) AS tinyint);
DECLARE @day tinyint = CAST(SUBSTRING(@input, 7, 2) AS tinyint);
DECLARE @hour tinyint = CAST(SUBSTRING(@input, 9, 2) AS tinyint);
DECLARE @minute tinyint = CAST(SUBSTRING(@input, 11, 2) AS tinyint);

SET @output = (DATETIMEFROMPARTS(@year, @month, @day, @hour, @minute, 0, 0)) AT TIME ZONE 'Pacific Standard Time';

SET @output = @output AT TIME ZONE 'UTC';

SELECT @output;

First, I parsed the input string into its separate date/time parts. Alternatively, we could have reformatted the input string (like ‘2018-08-31 12:00:00’) and converted that directly to a datetime.
If we use the datetimeoffset data type, we can use the AT TIME ZONE clause to designate which time zone our time is in. In this case (Pacific time), we end up with an offset of -07:00. We use ‘Pacific Standard Time’ even though DST is in effect, since the point of the functionality is that we don’t have to make the DST determination ourselves.
You can query the sys.time_zone_info system view to see a list of the time zone names that can be used with this clause.
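For example, here’s a quick query against that view (it exposes name, current_utc_offset, and is_currently_dst columns) to check the Pacific entry:

SELECT name, current_utc_offset, is_currently_dst
FROM sys.time_zone_info
WHERE name = 'Pacific Standard Time';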
We’ll make one last conversion to UTC to end up with the datetime that we’ll store.
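For reference, with the sample input ‘201808311200’ (a date when DST is in effect), the value at each step should come out roughly like this:

-- After AT TIME ZONE 'Pacific Standard Time': 2018-08-31 12:00:00.0000000 -07:00
-- After AT TIME ZONE 'UTC':                   2018-08-31 19:00:00.0000000 +00:00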

While on the subject of time zones, here are a few notes on UTC. UTC stands for Coordinated Universal Time; the acronym is a compromise between the English and French phrases (Wikipedia).
People use UTC and GMT (Greenwich Mean Time) as synonyms, and the two are within fractions of a second of each other, but there is a difference. GMT is a time zone, based on astronomical observations. UTC is a time standard (not a time zone), based on atomic clocks.


Incorrect syntax was encountered while parsing GO

June 30, 2018

I have a PowerShell script I created to script out the views and stored procedures in a database. Usually I just use it to find all the references to a particular table or column, but in this case I wanted to run one of the scripted procedures on another database. When running that script, I ran into this error:
“A fatal scripting error occurred.
Incorrect syntax was encountered while parsing GO.”

The error came from the SQL in that script. (The issue is dependent on formatting; I couldn’t get the formatting to come through in the WordPress page.)

At first I was puzzled; everything looked fine to me. I did have an extra space at the front of the third line, so I deleted it and re-ran, still with the same error. I knew that GO needed to be on its own line (unless I added a number to indicate I wanted the batch to run that many times).
I came across this post that described the same error. I didn’t have anything on the same line as the GO, but I decided to check my PowerShell script. Instead of using a carriage return and line feed after the GO:
`r`n

I had inadvertently used a `v (a vertical tab) instead of a `n. Error on my part, but easily corrected.
Instead of regenerating my script, I could also save a copy of the script from SSMS.
In the Save dialog, select the arrow next to the Save button and select Save With Encoding.
Under Line Endings, change ‘Current Setting’ to ‘Windows (CR LF)’, then click OK and then Save.


Running Totals

May 28, 2018

Displaying a running total in SQL Server is very straightforward, thanks to window functions and the OVER clause.
We’ll set up an example with some records for a payment amount over several days.

drop table if exists #Test;

create table #Test (
PaymentDate date not null primary key,
Amount int not null
);

insert into #Test(PaymentDate, Amount) 
values ('2018-05-29', 25), ('2018-05-30',10), ('2018-05-31',20), 
	('2018-06-01',30), ('2018-06-02',40), ('2018-06-03', 15);

select * from #Test;

Adding a running total is just a matter of SUM with an OVER clause.

select PaymentDate, Amount,
	sum(Amount) over (order by PaymentDate) as RunningTotal
from #Test
order by PaymentDate;

The RunningTotal column will show the total of all payments made on that day and any earlier date.
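With the sample data above, the query should return something like this:

-- PaymentDate  Amount  RunningTotal
-- 2018-05-29   25      25
-- 2018-05-30   10      35
-- 2018-05-31   20      55
-- 2018-06-01   30      85
-- 2018-06-02   40      125
-- 2018-06-03   15      140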

We can also use PARTITION BY to display a running total within each month.

select PaymentDate, Amount, datename(month, PaymentDate) as MonthName,
	sum(Amount) over (partition by month(PaymentDate) order by PaymentDate) as MonthRunningTotal
from #Test
order by PaymentDate;

In the first example we used the default window frame for the OVER clause, which covers the current row plus all rows before it. (Strictly speaking, the default frame when an ORDER BY is present is RANGE UNBOUNDED PRECEDING; since PaymentDate is unique here, ROWS UNBOUNDED PRECEDING gives the same result.) Here is the first query with the frame given explicitly.

select PaymentDate, Amount,
	sum(Amount) over (order by PaymentDate rows unbounded preceding) as RunningTotal
from #Test
order by PaymentDate;

We can change this frame to return other combinations. In the final example, we’ll return the running total as the payment amount for the current row plus the amount from the previous row (the day before, in this data).


select PaymentDate, Amount,
	sum(Amount) over (order by PaymentDate rows 1 preceding) as RunningTotal
from #Test
order by PaymentDate;

Links:
Window Functions
SELECT – OVER Clause


T-SQL – SET vs SELECT

April 22, 2018

When assigning values to a variable, we can use SET or SELECT. SET is the ANSI standard method. If we’re assigning a fixed value to a variable, either method works just as well as the other. When we’re retrieving values from a table or view, though, there are some differences in behavior.
In order to run through some examples, we’ll set up a temp table with 1 column and 2 rows of data.

drop table if exists #Test;
create table #Test (RecordId int not null primary key);
insert into #Test(RecordId) values (1), (2);

When selecting by the primary key, we know we’ll get one value, so things are straightforward.

declare @VarSelect int;
declare @VarSet int;

select @VarSelect = RecordId from #Test where RecordId = 1;
set @VarSet = (select RecordId from #Test where RecordId = 1);

select @VarSelect as VarSelect, @VarSet as VarSet;

Let’s try an example of retrieving a value that doesn’t exist in the table. We’ll set 0 as the default value for the variables.

declare @VarSelect int = 0;
declare @VarSet int = 0;

select @VarSelect = RecordId from #Test where RecordId = 3;
set @VarSet = (select RecordId from #Test where RecordId = 3);

select @VarSelect as VarSelect, @VarSet as VarSet;

Using the SET method assigns NULL to the variable, whereas the SELECT leaves the existing value in place. If we’re using the variable one time, this behavior may not cause issues, but reusing the variable could cause confusion.
In the following example, the first select will return 1. The second select won’t find a row, so the variable keeps its value and the query will also return 1.

declare @VarSelect int = 0;

select @VarSelect = RecordId from #Test where RecordId = 1;
select @VarSelect as VarSelect;

select @VarSelect = RecordId from #Test where RecordId = 3;
select @VarSelect as VarSelect;

When returning multiple values, a SET will raise this error:
“Msg 512, Level 16, State 1, Line x
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.”
The SELECT will return one of the values from the result set.

declare @VarSelect2 int = 0;
declare @VarSet2 int = 0;

select @VarSelect2 = RecordId from #Test;
set @VarSet2 = (select RecordId from #Test);

select @VarSelect2 as VarSelect, @VarSet2 as VarSet;

In this case, I would rather have an error thrown than just get one arbitrary value from the result set.
Overall, it seems like SET is the better choice. We’ll get an error if multiple values are returned, and if no value is returned then we’ll get a NULL instead of a leftover value.


Converting Float Values To Strings

March 26, 2018

A coworker and I were looking at a request where we needed to write float values to a file, but have the values be a uniform length. (The float column was storing a monetary value, which isn’t a good idea, but that’s what we had to work with.) All of the values went out to 4 decimal places, so it was a matter of padding to the left of the decimal point to get everything to an equal length. The first thought was to convert the value to a string, pad the front with zeros, and take a string of the specified length from that.
To my surprise, it turns out that casting a float to a string data type (a varchar in this case) ends up rounding the value in some cases and converting to scientific notation in others. With a little Googling we found out that the STR function is the recommended way to make this conversion. STR has length and decimal parameters to control how many digits are output.
I had used the FORMAT function for dates before, but it handles numbers as well. This turned out to be the approach we used, since with FORMAT we can specify padding to a certain length.
Here’s SQL for some test data along with the different approaches.

create table dbo.FloatTest(
FloatValue float not null
);

insert into dbo.FloatTest values (123.45678);
insert into dbo.FloatTest values (2.66993256);
insert into dbo.FloatTest values (0.00001);
insert into dbo.FloatTest values (55555.84);
insert into dbo.FloatTest values (321.0987654);

select  FloatValue as OriginalValue, 
        cast(FloatValue as varchar(15)) as CastValue,
	str(FloatValue, 20, 10) as StrValue, 
	format(FloatValue, 'F9') as FormatValue,
	format(FloatValue, '0000000000.##########') as CustomFormatValue
from dbo.FloatTest;

go

With FORMAT, we can specify a standard numeric format with ‘F’ followed by the number of decimal digits, or we can create a custom format. Using a 0 for a required digit and a # for an optional digit gives us greater control over what is returned.
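As a quick illustration of the difference between 0 and # in a custom format, here’s a small sketch with the expected output shown in comments:

select format(5.1, '000.##') as RequiredDigits, -- '005.1'
       format(5.1, '###.##') as OptionalDigits; -- '5.1'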


SQL Server And JSON

February 11, 2018

SQL Server 2016 added support for working with JSON. Although there isn’t a JSON datatype, there is still the ability to output query results to JSON, and to break down JSON into rows and columns.
This post will run through some basic examples of working with JSON. I’m using SQL Server 2017 although 2016 can be used as well.

First I’ll create some test data.

drop table if exists dbo.JsonTest;

create table dbo.JsonTest(
FirstName varchar(20) null,
LastName varchar(20) not null,
StatusCode varchar(10) not null
);
insert into dbo.JsonTest(FirstName, LastName, StatusCode) values ('Mike', 'Smith', 'Inactive');
insert into dbo.JsonTest(FirstName, LastName, StatusCode) values ('Jane', 'Doe', 'Active');
insert into dbo.JsonTest(FirstName, LastName, StatusCode) values (NULL, 'Jones', 'Pending');

Next is returning relational data as JSON. Much like the FOR XML clause returns XML, the FOR JSON clause returns the selected data as a JSON string. AUTO returns a default structure based on the query, while PATH gives more control over the output, such as nesting values; options like ROOT (naming the root node) and INCLUDE_NULL_VALUES (returning NULL values instead of omitting them) are also available.

select FirstName, LastName, StatusCode FROM dbo.JsonTest FOR JSON AUTO;
select FirstName, LastName, StatusCode, 'Atlanta' as [Address.City], 'GA' as [Address.State] FROM dbo.JsonTest FOR JSON PATH, ROOT('People');
select FirstName, LastName, StatusCode FROM dbo.JsonTest FOR JSON PATH, INCLUDE_NULL_VALUES, WITHOUT_ARRAY_WRAPPER;

The second select also shows how to nest data, in this case in an Address node.
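For the first row, the nested output from the second select looks roughly like this (line breaks added for readability; the actual result is a single string):

{"People":[
  {"FirstName":"Mike","LastName":"Smith","StatusCode":"Inactive",
   "Address":{"City":"Atlanta","State":"GA"}},
  ...
]}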

When called without a schema, OPENJSON returns one row for each element of a JSON array, with key, value, and type columns.

declare @JSON as varchar(4000) = '[{"FirstName":"Mike","LastName":"Smith","StatusCode":"Inactive"},{"FirstName":"Jane","LastName":"Doe","StatusCode":"Active"},{"LastName":"Jones","StatusCode":"Pending"}]';

SELECT * FROM OPENJSON(@JSON);

With OPENJSON and a WITH clause, we can also parse JSON into relational rows and columns, provided that each column name matches the JSON attribute name. If the names don’t match, NULLs are returned.

declare @JSON as varchar(4000) = '[{"FirstName":"Mike","LastName":"Smith","StatusCode":"Inactive"},{"FirstName":"Jane","LastName":"Doe","StatusCode":"Active"},{"LastName":"Jones","StatusCode":"Pending"}]';

SELECT * FROM OPENJSON(@JSON)
WITH (
FirstName varchar(20),
LastName varchar(20),
StatusCode varchar(10)
);

It is also possible to map a JSON attribute to a different column name in the output; in that case we need to specify the JSON path to match on.


declare @JSON as varchar(4000) = '[{"FirstName":"Mike","LastName":"Smith","StatusCode":"Inactive"},{"FirstName":"Jane","LastName":"Doe","StatusCode":"Active"},{"LastName":"Jones","StatusCode":"Pending"}]';

SELECT * FROM OPENJSON(@JSON)
WITH (
GivenName varchar(20) '$.FirstName',
Surname varchar(20) '$.LastName',
StatusCode varchar(10)
);

There are also a few JSON Functions available.
ISJSON will determine if a string is valid JSON or not.
JSON_VALUE will extract a scalar value.
JSON_QUERY will return a JSON fragment or an array of values.
By default, the JSON functions use lax mode, which means an invalid operation won’t raise an error; a NULL value is returned instead. Strict mode can be specified, in which case an invalid operation will raise an error.


declare @JSON as varchar(4000) = '[{"FirstName":"Mike","LastName":"Smith","StatusCode":"Inactive", "Language":["English","Spanish"]},{"FirstName":"Jane","LastName":"Doe","StatusCode":"Active"}]';
declare @NotJson as varchar(4000) = 'Not a JSON string';

-- Return bit to determine if a string is valid JSON or not
SELECT ISJSON(@JSON);
SELECT ISJSON(@NotJson);

-- Get scalar value - 0 based array
SELECT JSON_VALUE(@JSON, '$[1].FirstName');

-- Return JSON fragment or array of values
SELECT JSON_QUERY(@JSON, '$[0].Language');

-- Default is lax mode - returns NULL on error
-- Strict will raise error

SELECT JSON_QUERY(@JSON, 'lax $[1].Language');
SELECT JSON_QUERY(@JSON, 'strict $[1].Language');

All of the SQL is also posted on GitHub.

Links:
Simple Talk has a good introduction to JSON functionality.
Microsoft Docs – OPENJSON
Microsoft Docs – JSON Functions


Running Totals

November 28, 2017

The guys from the SQL Server Radio podcast posted a blog post from their episode on calculating running totals in SQL Server. Their script (link in the post) covers several different ways to calculate a running total in a query (a total of the value in the current row added to the values from each row that came before it). The best method (in my view) is using the SUM function with the OVER clause. A lot of the other methods involve a cursor or some other messy approach.
I imagine most developers familiar with T-SQL have used the OVER clause with ROW_NUMBER or some other windowing function, but the OVER clause can also be used with aggregate functions, like COUNT or SUM or AVG.
An example:

create table [dbo].[RunningTotal](
	RecordId int identity(1,1) NOT NULL,
	GroupId tinyint NOT NULL,
	Amount decimal(6, 2) NOT NULL
);

insert into [dbo].[RunningTotal](GroupId, Amount) values (1, 12.34);
insert into [dbo].[RunningTotal](GroupId, Amount) values (1, 56.78);
insert into [dbo].[RunningTotal](GroupId, Amount) values (1, 55.66);
insert into [dbo].[RunningTotal](GroupId, Amount) values (2, 33.33);
insert into [dbo].[RunningTotal](GroupId, Amount) values (2, 22.10);
insert into [dbo].[RunningTotal](GroupId, Amount) values (2, 12.34);
insert into [dbo].[RunningTotal](GroupId, Amount) values (3, 98.76);

select GroupId, Amount, 
	sum(Amount) over(order by RecordId) as RunningSum,
	sum(Amount) over(partition by GroupId order by RecordId) as RunningSumByGroup
from dbo.RunningTotal;

go

This example gives a running total over the whole dataset, plus a running total by Group (GroupId).
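The same OVER pattern works with the other aggregates mentioned above. Here’s a quick sketch against the same table (the column aliases are just illustrative):

select GroupId, Amount,
	count(*) over(partition by GroupId order by RecordId) as RunningCountByGroup,
	avg(Amount) over(partition by GroupId order by RecordId) as RunningAvgByGroup
from dbo.RunningTotal;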