Multi-Model Databases

April 15, 2017

NoSQL databases offer alternatives for situations where the data being processed doesn’t fit neatly into a relational database, and different types of systems have evolved to meet different needs. Some teams have adopted polyglot persistence, using multiple database systems to meet specialized needs within one application. However, such an architecture greatly increases programming and maintenance complexity.
Some relational systems have added features to deal with alternate data structures. Postgres supports a JSON data type, which lets you replicate features of a document database, while SQL Server has supported XML for some time and added JSON support in SQL Server 2016.
One solution is a multi-model database, which combines several database types into one system. That gives you a single system to install and maintain; one query can access data from the various models, and one transaction can ensure consistency across those models.
Two systems that I’ve looked at are ArangoDB and OrientDB.
Both systems support the document, key/value, and graph database types in one engine. OrientDB supports SQL, while ArangoDB has a SQL-like query language (AQL), although for selects AQL resembles LINQ in that the collection is specified first (FOR c IN customers ... RETURN c).
Both can operate as a single node or as a cluster, and both have built-in replication and sharding.
The multi-model system is an interesting idea and seems quite useful. I’ll follow up with a post taking a deeper dive into one of these systems. It is interesting that neither system implements a column-family model; perhaps that falls more into an analytical system that would be separate from the transactional system anyway.

Links:
10 Reasons To Consider A Multi-model Database
Data Modeling With Multi-model Databases
Datastax Believes Multi-model Databases Are The Future


SQL Antipatterns by Bill Karwin

March 27, 2017

One of my favorite database books is SQL Antipatterns by Bill Karwin. The book catalogs many approaches to database design, querying, and development that may be suboptimal, and gives alternate approaches for solving each issue.
For each issue, the author states the objective, describes the antipattern and how to recognize it, lists legitimate use cases for the antipattern, and suggests an alternate way of achieving the objective.
Here is a list of the antipatterns described, along with the solution to each. You’ll want to read the book to get more detail on each item.

Logical Database Design

1. Storing a comma-separated list of values.
Solution: Use an intersection table for a many-to-many relationship (see the sketch after this list)

2. Hierarchy – storing a parent ID – Adjacency list
Solution: Use a path structure (1/5/7/)

3. Always having an incrementing integer as the PK
Solution: Consider natural and compound keys

4. Not using constraints, thinking that leads to a simpler design
Solution: Use constraints

5. Using the Entity-Attribute-Value design
Solution: Each entity has its own table

6. Using a “Dual-Purpose foreign key” – Storing an ID that could refer to values from multiple tables, plus a 2nd column to store the table name.
Solution: Set up a proper foreign key relationship

7. Repeating columns – Column1, Column2, Column3
Solution: Set up a one-to-many relationship

8. Splitting data in separate tables based on values – Sales2015, Sales2016, Sales2017
Solution: Use partitioning
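
As an illustration of the first item above, here’s a minimal sketch (the table and column names are hypothetical) of an intersection table replacing a comma-separated list of contact IDs:

CREATE TABLE dbo.ProductContact (
	ProductId int NOT NULL REFERENCES dbo.Product (ProductId),
	ContactId int NOT NULL REFERENCES dbo.Contact (ContactId),
	PRIMARY KEY (ProductId, ContactId)
);

-- One row per product/contact pair, instead of a 'ContactIds' column
-- holding '12,34,56'; both sides can now be indexed and constrained.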

Physical Database Design

1. Using the Float data type to store fractional values when exact values are important
Solution: Use the Numeric type

2. Specifying valid values for a column in a check constraint
Solution: Put valid values in a table and create a foreign key (see the sketch after this list)

3. Storing paths to external files – Stored outside the database where they can’t be managed by the database system.
Solution: Store data in a Blob, or use Filestream to manage the files

4. No indexes or creating too many indexes
Solution: Analyze usage data to understand where indexes are needed
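
To illustrate the second item above, a sketch of moving valid values out of a check constraint and into a lookup table (hypothetical names):

-- Instead of: CHECK (Status IN ('NEW', 'IN PROGRESS', 'DONE'))
CREATE TABLE dbo.BugStatus (
	Status varchar(20) NOT NULL PRIMARY KEY
);

ALTER TABLE dbo.Bug
	ADD CONSTRAINT FK_Bug_Status
	FOREIGN KEY (Status) REFERENCES dbo.BugStatus (Status);

-- Adding a new status is now an INSERT rather than an ALTER TABLE.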

Query

1. Not allowing NULLs and using a different value to represent missing data
Solution: Use NULL to represent missing data

2. Listing columns in a SELECT clause that aren’t either listed in a GROUP BY clause or contained in an aggregate function (Not valid in SQL Server, which will raise an error)
Solution: Follow the rules for GROUP BY

3. Trying to pick a record at random by sorting the dataset by a random number (like RAND or NEWID) and taking the first record.
Solution: Find other ways to pick a random row. In SQL Server, one option is TABLESAMPLE (1 ROWS), keeping in mind that it samples whole pages rather than individual rows.

4. Simulating a search engine by using LIKE '%SearchTerm%'.
Solution: In SQL Server, use full-text search

5. Trying to return complex datasets all in one query – this often has unintended results, like Cartesian products.
Solution: Break a complex problem into smaller steps

6. Depending on column order, with SELECT * or INSERT without explicitly listing columns.
Solution: Avoid wildcard queries and explicitly list columns for INSERTs (illustrated below)
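
A quick sketch against a hypothetical table:

-- Fragile: breaks (or silently misloads data) if columns are added or reordered
INSERT INTO dbo.Account VALUES (1, 'Alice');

-- Robust: immune to column order; new nullable columns don't break it
INSERT INTO dbo.Account (AccountId, AccountName) VALUES (1, 'Alice');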

Application Development

1. Storing passwords in plain text
Solution: Store a salted hash of the password

2. Adding unverified text to a SQL query – open to SQL injection.
Solution: Filter input or use parameterized queries (see the sketch after this list).

3. Backfilling any gaps in identity columns.
Solution: Treat keys as a unique identifier, not as a row number.

4. Ignoring return values from a database API in an effort to write simpler or shorter code.
Solution: Check return values, and log any dynamically generated SQL, to aid in troubleshooting.

5. Not treating SQL as code – No source control, tests, etc.
Solution: Treat SQL as any other application code.

6. MVC – Using the Model as an Active Record.
Solution: Decouple the model from the tables.
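
For the SQL injection item above, here’s a minimal sketch of a parameterized query in SQL Server using sp_executesql (the table and variable names are hypothetical):

DECLARE @SearchName nvarchar(50) = N'O''Brien'; -- imagine this came from user input

EXEC sp_executesql
	N'SELECT AccountId, AccountName FROM dbo.Account WHERE AccountName = @Name',
	N'@Name nvarchar(50)',
	@Name = @SearchName;

The user input is passed as a typed parameter, so it is never concatenated into the SQL text.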


Calendar Table And Marking Holidays

March 15, 2017

A lot of data marts and reporting databases include a Calendar or Dates table that lists all days within a certain range, along with data about those days (day of the week, quarter, etc.).
I’ve posted my script on GitHub that will generate these records.
At a previous job, we wanted to track days the office was closed so that we could measure how many business days certain functions took. Determining weekends was easy, but calculating holidays took a little more effort. You could just manually enter the holidays for each year, but it is possible to calculate the holidays for each year in one pass.
Some holidays are the same date every year, like Christmas.

UPDATE dbo.Calendar SET IsHoliday = 1 WHERE [Month] = 12 AND [Day] = 25;

Others are on a specific day of the week, like Labor Day being the first Monday in September (This script is marking US holidays).

UPDATE c
SET IsHoliday = 1
FROM dbo.Calendar AS c
JOIN (
	-- Number the Mondays ([WeekDay] = 2 under the default DATEFIRST setting)
	-- within September of each year
	SELECT DateKey,
		ROW_NUMBER() OVER (PARTITION BY [Year] ORDER BY DateKey) AS RowNumber
	FROM dbo.Calendar
	WHERE [Month] = 9
		AND [WeekDay] = 2
) AS z
ON z.DateKey = c.DateKey
	AND z.RowNumber = 1; -- keep only the first Monday: Labor Day

This statement will find the first Monday of each September for the range of dates in the table, and mark it as a holiday.
Another method I saw used was to look for a Monday in September where the day was between 1 and 7, since the first Monday of the month will always be in that range.
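
Assuming the same Calendar columns as above, that approach skips the ROW_NUMBER subquery entirely:

UPDATE dbo.Calendar
SET IsHoliday = 1
WHERE [Month] = 9
	AND [WeekDay] = 2          -- Monday, with the default DATEFIRST setting
	AND [Day] BETWEEN 1 AND 7; -- the first Monday always falls in days 1-7
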
Of course, the hardest holiday to calculate is Easter. Luckily, I found an algorithm on the US Naval Observatory site to determine the date, and I’ve included a function in the Calendar script that implements it.
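
For reference, here’s a minimal T-SQL sketch of that USNO algorithm (the function name is mine, and the version in my script may differ; every division below is integer division):

CREATE FUNCTION dbo.GetEasterDate (@Year int)
RETURNS date
AS
BEGIN
	DECLARE @c int = @Year / 100;
	DECLARE @n int = @Year % 19;
	DECLARE @k int = (@c - 17) / 25;
	DECLARE @i int = @c - @c / 4 - (@c - @k) / 3 + 19 * @n + 15;
	SET @i = @i % 30;
	SET @i = @i - (@i / 28) * (1 - (@i / 28) * (29 / (@i + 1)) * ((21 - @n) / 11));
	DECLARE @j int = (@Year + @Year / 4 + @i + 2 - @c + @c / 4) % 7;
	DECLARE @l int = @i - @j;
	DECLARE @m int = 3 + (@l + 40) / 44;      -- month: 3 = March, 4 = April
	DECLARE @d int = @l + 28 - 31 * (@m / 4); -- day of the month
	RETURN DATEFROMPARTS(@Year, @m, @d);      -- requires SQL Server 2012+
END;

For example, SELECT dbo.GetEasterDate(2017) returns 2017-04-16.
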
Another thing to consider (which I didn’t include in this script): if a holiday falls on a weekend, a business may choose to observe it on the nearest weekday.
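
A hedged sketch of that adjustment, assuming the Calendar table also carries a [Date] column and the default DATEFIRST setting (Sunday = 1, Saturday = 7):

UPDATE obs
SET IsHoliday = 1
FROM dbo.Calendar AS hol
JOIN dbo.Calendar AS obs
	ON obs.[Date] = CASE hol.[WeekDay]
		WHEN 7 THEN DATEADD(DAY, -1, hol.[Date]) -- Saturday: observe Friday
		WHEN 1 THEN DATEADD(DAY, 1, hol.[Date])  -- Sunday: observe Monday
	END
WHERE hol.IsHoliday = 1
	AND hol.[WeekDay] IN (1, 7);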


BETWEEN

March 6, 2017

While writing a query recently, I made use of the BETWEEN operator, which will match all values within a specified range. Without thinking, I put the greater value first:
WHERE RecordId BETWEEN 100 AND 90

I was somewhat surprised that no records were returned; I knew there should be matches. Reversing the order gave me the results I expected:
WHERE RecordId BETWEEN 90 AND 100

Running the query and checking the execution plan, I saw that the WHERE clause had been transformed to:
WHERE RecordId >= 90 AND RecordId <= 100

Seeing this, it becomes obvious why the order matters: BETWEEN assumes the lower bound is given first.
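
If the inputs might arrive in either order, one defensive option (hypothetical variables; SQL Server didn’t have LEAST/GREATEST at the time) is to normalize the bounds yourself:

WHERE RecordId BETWEEN CASE WHEN @Low <= @High THEN @Low ELSE @High END
	AND CASE WHEN @Low <= @High THEN @High ELSE @Low END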

MSDN – BETWEEN


SQL Server and R Integration – Setup

February 27, 2017

A new feature in SQL Server 2016 is the ability to run R scripts inside the database engine, called through T-SQL. For those not familiar with the R language, it is an open source language used to perform statistical computations, apply machine learning algorithms, and create data visualizations.
In this post, I’ll go through a quick explanation on installing and setting up the services.

Background:
The R language comes with a great deal of built-in functionality, but its real power comes from the thousands of packages the R community has created to extend it. R is an interpreted language, using vectors and data frames (similar to tables) as its primary data structures.
R can connect to SQL Server to retrieve data, but there are a few drawbacks. For one, R works with data in memory, so your dataset is limited by the memory available in your workspace. Also, R is single-threaded, so we can’t distribute the workload. Working with R within SQL Server lets us use larger datasets and take advantage of the server’s memory and processing power.

Installation:
The R components for SQL Server are installed from SQL Server setup. If the services weren’t installed with the initial installation of the database engine, you can go back and add the components.
One option is to install a standalone R Server, which runs as a separate instance for analyzing data with R. There is also an option to install in-database R services; this option also installs an additional service, the SQL Server Launchpad, which allows the integration with R. In my testing, I elected to work with the in-database component on my existing SQL Server instance.

Setup:
Once the components have been installed, the ‘External Scripts Enabled’ setting will need to be set to true.

EXEC sp_configure 'external scripts enabled', 1;
GO
RECONFIGURE;
GO

We’ll also need to restart the database instance for this setting to take effect, and make sure that the SQL Server Launchpad service is running.
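
Once the instance is back up, we can confirm the change took effect by checking the setting’s run value:

EXEC sp_configure 'external scripts enabled';
-- run_value should now be 1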

Test:
We’ll make a call to the sp_execute_external_script stored procedure to make sure that R is properly set up and that we can make a call.
I found this sample code on MSDN that lets us test the R setup without writing much R code or retrieving data from tables.

EXEC sp_execute_external_script
  @language = N'R',
  @script = N'OutputDataSet <- InputDataSet',
  @input_data_1 = N'SELECT 1 AS hello'
  WITH RESULT SETS (([hello] int NOT NULL));
GO

If everything is set up correctly, we should get a single value of 1 returned, with the column header ‘hello’.

Next Steps:
In the next post, I’ll put together some actual R code and look at some of the things we can do with SQL Server data and R.

Additional Information:
SQL Server Central: Introduction to Microsoft R Services in SQL Server 2016
Syncfusion E-Book: R Succinctly – Introduction to the R language – Free, login required
Coursera – Data Science: Series of courses in Data Science, several deal with learning R.


Integers vs Strings For Joins

January 31, 2017

One recurring argument in databases is how to choose a primary key for a table: should a natural key (which may be a string value) be used, or should an integer surrogate key be generated? In terms of query performance, the common wisdom is that integer keys will perform better. I’ve always been doubtful about that claim, though I can see how performance would differ if the values were different sizes.
I ran across this post that compared string vs integer key values for query performance.
I wanted to perform my own test to compare the performance. I’ve posted a script on GitHub to build out some test tables. We end up with a Customer table with a little over 2 million records. We’ll use two different queries to join to a State table: one joining on an integer StateId value, the second joining on a string StateCode. The StateId is a smallint and the StateCode a char(2), so both columns are 2 bytes. The Customer table has a clustered index on CustomerId, with nonclustered indexes on both the StateId and StateCode columns. The State table is a heap (it only has 62 records), since ordering it by either StateId or StateCode might give an advantage to that key type.
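
The two queries are along these lines (simplified; the column list in the actual script may differ):

-- Join on the smallint surrogate key
SELECT c.CustomerId, s.StateName
FROM dbo.Customer AS c
JOIN dbo.State AS s ON s.StateId = c.StateId;

-- Join on the char(2) natural key
SELECT c.CustomerId, s.StateName
FROM dbo.Customer AS c
JOIN dbo.State AS s ON s.StateCode = c.StateCode;
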
Running the two queries together, the StateId join accounted for 46% of the run time, and the StateCode string join for the other 54%, so the integer key was faster in this case. However, on examining the query plan I saw that Hash joins were used, so we can certainly get a more efficient plan.
I went back and added INCLUDE columns to the two Customer indexes, so that all of the returned attributes are in the index leaf level. Once I made this change, the query plan showed Merge joins for both queries, and the query cost was the same for both.
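
The covering index change was along these lines (the included columns here are placeholders for whatever attributes the test queries return):

CREATE NONCLUSTERED INDEX IX_Customer_StateId
	ON dbo.Customer (StateId)
	INCLUDE (FirstName, LastName); -- covers the query, enabling a Merge join on index order
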
So my conclusion is that there is no inherent gain in query performance from joining on integers rather than strings, assuming the values are the same size. I need to do more research on Hash joins, but it appears that a hash of a string value may be larger than a hash of an integer value, which would account for the difference in query performance.


Links – Database Theory

December 31, 2016

A few interesting database links I’ve run across:
Armstrong’s Axioms: Rules on functional dependencies.
Logic For Serious Database Folks: Chapters from a work in progress on Logic as it pertains to relational databases, no release date set for the complete work.
A Simple Guide to Five Normal Forms in Relational Database Theory: Covers all five normal forms; most resources don’t go past Third Normal Form.
Comparison of different SQL implementations: Very comprehensive list of how several relational database products adhere or stray from the ANSI SQL standard.