Azure Cosmos DB

July 17, 2017

Microsoft’s Azure Cosmos DB is a multi-model cloud database service, introduced in May 2017. DocumentDB previously served as Azure’s document database; it is now part of Cosmos DB.

Set Up:
To set up an instance, we can either connect to Azure or use a local emulator to develop against, to avoid using Azure credits. For this run-through, we’ll connect to Azure.
To create a Cosmos DB instance in the Azure dashboard, click New, Databases, then select Azure Cosmos DB. After specifying a unique ID, we have to pick the database model to use. We can choose between Gremlin (graph), MongoDB (document), SQL (DocumentDB) or Table (key/value). For this example, I’ll go with DocumentDB.

Data Manipulation:
Once the instance has been created, the Azure dashboard will bring up the Quick Start page. From here we can create a Collection (the DocumentDB term for a table); this creates a database called ‘ToDoList’ with a collection named ‘Items’. We can then download a sample app to connect to this Collection. The app comes preconfigured with the correct URL and an authorization key. Running the solution starts a web app for a simple to-do list.
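The sample app talks to the database through the DocumentDB SDK. As a rough sketch of what that connection looks like (the endpoint, key, and item shape here are placeholders, assuming the Microsoft.Azure.DocumentDB NuGet package):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

class ToDoSketch
{
    static async Task AddItemAsync()
    {
        // Placeholder endpoint and key - real values come from the 'Keys' page
        var client = new DocumentClient(
            new Uri("https://myaccount.documents.azure.com:443/"), "authKey");

        // 'ToDoList' database and 'Items' collection, as created by the Quick Start
        Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoList", "Items");

        // Documents are schema-free; an anonymous object is serialized to JSON
        await client.CreateDocumentAsync(collectionUri,
            new { id = "1", description = "Pick up groceries", isComplete = false });
    }
}
```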
The Quick Start also links to some sample code and additional documentation.
Within the Azure dashboard, we can also bring up the Data Explorer, which is a graphical tool to view, create, and edit documents. We can also create new collections here.

Settings:
On the Azure dashboard, there are several settings that are of interest.
With ‘Replicate Data Globally’, we can create a read-only version of our instance in a different data center. For example, we could have the primary read-write instance in the US East data center, but also have a read-only instance in US West, or perhaps in a different country. This allows us to distribute our data closer to our users, and also gives us a replicated data set in case we need to fail over to the backup instance.
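On the client side, the DocumentDB SDK can be pointed at the nearest replica by listing preferred regions. A minimal sketch, with placeholder endpoint and key:

```csharp
using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class RegionSketch
{
    static DocumentClient CreateClient()
    {
        // Reads are served from the first available region in this list;
        // writes still go to the primary read-write region
        var policy = new ConnectionPolicy();
        policy.PreferredLocations.Add(LocationNames.EastUS);
        policy.PreferredLocations.Add(LocationNames.WestUS);

        // Placeholder endpoint and key
        return new DocumentClient(
            new Uri("https://myaccount.documents.azure.com:443/"), "authKey", policy);
    }
}
```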
Under ‘Default Consistency’, we can set the consistency level for our data. The levels range from Strong consistency, where any update must be synchronously committed, to Eventual consistency, which gives the best performance but doesn’t guarantee that all replicas are up to date. The default setting is Session consistency, where a session sees strong consistency for its own updates but eventual consistency for updates from other sessions.
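The account-wide setting can also be relaxed per client: the SDK lets a client request a weaker (but not stronger) consistency level than the account default. A sketch with placeholder credentials:

```csharp
using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class ConsistencySketch
{
    static DocumentClient CreateEventualClient()
    {
        // This client trades consistency for read performance,
        // regardless of the (stronger) account default
        return new DocumentClient(
            new Uri("https://myaccount.documents.azure.com:443/"),
            "authKey",
            connectionPolicy: null,
            desiredConsistencyLevel: ConsistencyLevel.Eventual);
    }
}
```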
The ‘Keys’ page gives the connection strings our applications need to connect to the database, either as read-write or as read-only.

Additional Information:
Azure – Cosmos DB Introduction
Cosmos DB – Getting Started
Syncfusion E-Book – Cosmos DB


Microsoft Azure Notes

February 29, 2016

I haven’t checked out Azure in a while (it’s been about 2 1/2 years since my last Azure post), so I jumped back on to see what was different. These are more notes that I took than a complete overview of the service.
Most MSDN subscriptions include Azure credits, so if you have a subscription that’s a great way to check out the services at no risk.

Article on Azure SQL Database services vs virtual machines hosted on Azure

Azure SQL Database
DTU – Database Transaction Unit – A metric that combines I/O, CPU and memory to allow comparison between tiers.

Service Tiers – Basic, Standard, Premium – Sub-tiers within Standard and Premium
Max DB Size – Basic – 2 GB : Standard – 250 GB : Premium – 500 GB / 1 TB
DTUs – Basic – 5 : Standard – 10-100 : Premium – 125-1750

Elastic Database Pool – Collection of databases with varying workloads that share a pool of resources. Uses eDTUs. Allows for a predictable budget.

Backup:
Daily backups plus log backups every 5 minutes – Basic can go back 7 days for backups, Standard and Premium 35 days – Point in time restores also available.
Geo-Restore – Backups are stored in multiple locations, so a database can be restored in a different region if there is an issue with a particular location – All 3 tiers.
Geo-Replication – Makes a replica of a database – Standard and Active
Standard – One offline replica in a ‘paired’ region (Standard or Premium)
Active – Up to 4 secondaries – online and readable (Premium)
Manual failover

Resource Group:
When using Azure, you’ll create a Resource Group to group together items used by the same application. You’ll create a group even when provisioning only one item; the resource group is the default container.

Limitations:
TCP/IP only – No Windows authentication
No distributed transactions
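Since Windows authentication isn’t available, applications connect over TCP/IP with SQL authentication. A minimal sketch, with placeholder server, database, and login names:

```csharp
using System.Data.SqlClient;

class ConnectionSketch
{
    static SqlConnection Connect()
    {
        // Placeholder names - Azure SQL Database listens on TCP port 1433
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "tcp:myserver.database.windows.net,1433",
            InitialCatalog = "mydb",
            UserID = "sqladmin",   // a SQL login, not a Windows account
            Password = "password",
            Encrypt = true         // connections to Azure should be encrypted
        };
        var conn = new SqlConnection(builder.ConnectionString);
        conn.Open();
        return conn;
    }
}
```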

Virtual Machine:
Allows you to run other database systems (Oracle, DB2, MariaDB, etc.)
Traditional SQL Server licensing
Allows databases over 1 TB – more control over configuration

Azure Products:

NoSQL:
DocumentDB: JSON document database
Azure Table Storage: BLOB storage – Key/Value
Hbase: Column family database – Part of HDInsight
Redis: Key/Value cache

Relational:
SQL Database: Database as a service
SQL Data Warehouse

Other:
Data Lake: Hadoop File System – Store data in its native state
Machine Learning: Cloud-based GUI
Data Factory: Cloud-based data integration
Azure Search: Search as a service


Windows Azure – Blob Storage and Tables

October 1, 2013

Along with SQL Azure (the relational data store), Windows Azure provides several options for non-relational storage.
Blob Storage: For storing images, videos, documents, and other larger binary or text data.
Table Storage: Key-value storage for non-relational data
Queue: Messaging

Microsoft is currently offering several options to experiment with Azure. Most MSDN subscriptions come with a certain amount of free usage, and there is also a free one-month trial available:
Free Trial

I also downloaded an Azure Client from NuGet.

I used the free trial to investigate the Blob and Table storage. I found a good tutorial here.

Once you’re signed in, you can go to the ‘Storage’ tab to create a Storage account, which includes Blob, Queue and Table storage. You’ll select an account name, which is used to form three URLs, one endpoint for each type of storage. The account name and a key are used to access storage.

For Blob storage, blobs are organized into containers (similar to tables). Container names must be lower case.
For Table storage, the account is sub-divided into tables, which store entities. Entities have properties, which are key-value pairs. When creating an entity, you’ll need to specify values for the Row Key and the Partition Key; together these two values uniquely identify a record within a table. There is also a timestamp property. In code, the entity class should inherit from TableEntity.

Here is a short code example that creates a container, then uploads and retrieves an item, for both Blob and Table storage.

using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Table;
using System;
using System.IO;

namespace AzureStorage
{

    class Program
    {
        static void Main(string[] args)
        {
            // Credentials
            string accountName = "AccountName";
            string accountKey = "key";
            StorageCredentials credentials = new StorageCredentials(accountName, accountKey);

            BlobExample(credentials);
            TableExample(credentials);

            Console.WriteLine("Completed...");
            Console.Read();
        }

        private static void BlobExample(StorageCredentials credentials)
        {
            Uri uri = new Uri("https://accountname.blob.core.windows.net"); // use your storage account name
            CloudBlobClient blobClient = new CloudBlobClient(uri, credentials);

            // Blob Container - Name must be lower case
            Console.WriteLine("Create Blob Container");
            CloudBlobContainer blobContainer = blobClient.GetContainerReference("testcontainer");
            blobContainer.CreateIfNotExists();

            // Upload Blob
            Console.WriteLine("Upload Blob");
            CloudBlockBlob blockBlob = blobContainer.GetBlockBlobReference("Photo1");
            string photoPath = @"C:\Photo1.jpg";
            using (var fileStream = File.OpenRead(photoPath))
            {
                blockBlob.UploadFromStream(fileStream);
            }

            // Retrieve Blob
            Console.WriteLine("Retrieve Blob");
            foreach (IListBlobItem item in blobContainer.ListBlobs(null, false))
            {
                CloudBlockBlob blob = (CloudBlockBlob)item;
                Console.WriteLine("Blob: " + blob.Name);
            }
        }

        public class FootballPlayer : TableEntity
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Position { get; set; }

            public FootballPlayer(int jerseyNumber, string teamName)
            {
                this.PartitionKey = teamName;
                this.RowKey = jerseyNumber.ToString();
            }

            public FootballPlayer() { } // Parameterless constructor required for deserialization
        }

        private static void TableExample(StorageCredentials credentials)
        {
            Uri uri = new Uri("https://accountname.table.core.windows.net"); // use your storage account name
            CloudTableClient tableClient = new CloudTableClient(uri, credentials);

            // Create table
            Console.WriteLine("Create table");
            CloudTable table = tableClient.GetTableReference("footballplayer");
            table.CreateIfNotExists();

            // Create entity 
            Console.WriteLine("Create Entity");
            var entityMR = new FootballPlayer(2, "Falcons");
            entityMR.FirstName = "Matt";
            entityMR.LastName = "Ryan";
            entityMR.Position = "QB";

            // Insert Record
            Console.WriteLine("Insert Record");
            TableOperation insertOperation = TableOperation.Insert(entityMR);
            table.Execute(insertOperation);

            // Retrieve entities
            Console.WriteLine("Retrieve Record");
            TableOperation retrieveOperation = TableOperation.Retrieve<FootballPlayer>("Falcons", "2");
            FootballPlayer player = (FootballPlayer)table.Execute(retrieveOperation).Result;

            Console.WriteLine("#" + player.RowKey + " " + player.FirstName + " " + player.LastName);
        }
    }
}