
Archive for the ‘OLTP’ Category

Why “Date BETWEEN FromDate AND ToDate” is a dangerous join criteria

June 29, 2012 10 comments

I have been meaning to write this blog post for some time and the discussion about Data Vault finally prompted me to do it.

Sometimes, you find yourself in situations where you have to join a table that has a structure like this:

CREATE TABLE TemporalTracking (
  SomeKey INT
, FromDate DATETIME
, ToDate DATETIME
, <more columns>
)

The join criterion is typically expressed as:

FROM <OtherTable> OT
INNER JOIN TemporalTracking T
  ON OT.SomeTimeColumn BETWEEN T.FromDate AND T.ToDate
  AND OT.SomeKey = T.SomeKey


Or more commonly, this variant with a semi-open interval:

FROM <OtherTable> OT
INNER JOIN TemporalTracking T
  ON OT.SomeTimeColumn >= T.FromDate
  AND OT.SomeTimeColumn < T.ToDate
  AND OT.SomeKey = T.SomeKey

Data models that promote these types of joins are dangerous territory for relational optimizers, and you have to step carefully when executing queries that contain many of them. Let us have a look at why this is so.
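One common mitigation (a sketch of my own here, not necessarily the approach the full post discusses) is to replace the interval join with a “latest row at or before the probe time” lookup, which turns the range predicate into an equality seek plus a TOP (1):

```sql
-- Hypothetical rewrite: fetch the single tracking row in effect
-- at OT.SomeTimeColumn, instead of joining on an interval.
SELECT OT.SomeKey, T.FromDate, T.ToDate
FROM OtherTable AS OT
CROSS APPLY (
    SELECT TOP (1) TT.FromDate, TT.ToDate
    FROM TemporalTracking AS TT
    WHERE TT.SomeKey = OT.SomeKey
      AND TT.FromDate <= OT.SomeTimeColumn
    ORDER BY TT.FromDate DESC
) AS T;
```

This shape gives the optimizer a seek on (SomeKey, FromDate) with a known row goal of one, instead of forcing it to estimate how many tracking rows each interval predicate will match.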

Read more…

Implementing Message Queues in Relational Databases

May 25, 2012 18 comments

At the last SQL Bits X I held the FusionIO fireside chat during the launch party. During this presentation, I demonstrated how it is possible to build a table structure inside a relational engine that will act as a message queue and deliver nearly 100K messages/second.
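The details are in the full post; as a rough sketch of the general pattern (my illustration, with hypothetical names, not the exact structure from the talk), a relational message queue typically pairs an ever-increasing clustered key with a destructive read that skips locked rows:

```sql
-- Illustrative only: a minimal queue table and a dequeue pattern.
CREATE TABLE MyQueue (
    MsgId   BIGINT IDENTITY NOT NULL PRIMARY KEY CLUSTERED
  , Payload VARBINARY(256) NOT NULL
);

-- Dequeue one message. READPAST skips rows locked by other
-- consumers, so concurrent readers do not block each other.
DELETE TOP (1)
FROM MyQueue WITH (ROWLOCK, READPAST)
OUTPUT DELETED.MsgId, DELETED.Payload;
```

The DELETE with an OUTPUT clause makes the dequeue atomic: a message is removed and returned in one statement, so no message can be handed to two consumers.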

Read more…

Reading Material: Abstractions, Virtualisation and Cloud

May 1, 2012 9 comments

When speaking at conferences, I often get asked questions about virtualisation and how fast databases will run on it (and even whether they are “supported” on virtualised systems). This is a complex question to answer, because it requires a very deep understanding of CPU caches, memory and I/O systems to fully describe the tradeoffs.

Read more…

Don’t Become a One-trick Architect

December 8, 2011 12 comments

We are near the dawn of a new workload: BigData. Some people say that “it is always darkest just before the dawn”; I beg to differ: I think it is darkest just before it goes pitch black. Have a cup of wake-up coffee, get your eyes adjusted to the new light, and to flying blind a bit, because the next couple of years are going to be really interesting.

In this post, I will be sharing my views on where we have been and a bit about where we are heading in the enterprise architecture space. I will say in advance that my opinions on BigData are still crystallizing, and it is most likely that I will be adjusting them and changing my mind.

Read more…

Boosting INSERT Speed by Generating Scalable Keys

October 5, 2011 20 comments

Throughout history, similar ideas tend to surface at about the same time. Last week, at SQLBits 9, I did some “on stage” tuning of the Paul Randal INSERT challenge.

It turns out that at almost the same time, a lab run was being done that demonstrated, on a real world workload, a technique similar to the one I ended up using. You can find it at this excellent blog: Rick’s SQL Server Blog.

Now, to remind you of the Paul Randal challenge, it consists of doing as many INSERT statements as possible into a table of this format (the test does 160M inserts in total):

CREATE TABLE MyBigTable (
    c1 UNIQUEIDENTIFIER ROWGUIDCOL DEFAULT NEWID()
  , c2 DATETIME DEFAULT GETDATE()
  , c3 CHAR(111) DEFAULT 'a'
  , c4 INT DEFAULT 1
  , c5 INT DEFAULT 2
  , c6 BIGINT DEFAULT 42);

Last week, I was able to achieve 750K rows/sec (runtime: 213 seconds) on a SuperMicro, AMD 48 Core machine with 4 Fusion-io cards with this test fully tuned. I used 48 data files for best throughput, the subject of a future blog post.
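The tuning itself is in the full post; as a rough illustration of the general idea behind scalable keys (my sketch, with hypothetical names), the hot “last page” of an ever-increasing key can be avoided by prefixing the key with a small bucket value, so that concurrent sessions append to different parts of the B-tree:

```sql
-- Hypothetical sketch: a composite key that spreads concurrent
-- inserts across many B-tree ranges instead of one hot last page.
CREATE TABLE ScalableKeyed (
    BucketId SMALLINT NOT NULL            -- e.g. @@SPID % 48 per session
  , Id       BIGINT   NOT NULL IDENTITY
  , Payload  CHAR(111) NOT NULL DEFAULT 'a'
  , PRIMARY KEY CLUSTERED (BucketId, Id)
);

-- Each session inserts with its own bucket, so sessions append to
-- different leaf pages and do not fight over the same page latch.
INSERT INTO ScalableKeyed (BucketId) VALUES (@@SPID % 48);
```

The number of buckets is a tuning knob: too few and you still get last-page contention within a bucket, too many and you lose the sequential-insert benefits within each range.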

Read more…

DL980 Configuration Guidance published

June 10, 2011 Leave a comment

The Asian CAT team, together with Jimmy May (the Aspiring Geek himself), the MS PFE SAP team and HP, have published a very interesting blog post about DL980 configuration:

If you plan to run on that machine, I highly recommend you read up on the great information in the blog.

Whenever you play around with an HP server, I generally recommend you download and skim through the QuickSpec. It gives you the details of how the machine is laid out – the PCI card speeds and placement come in quite handy when you configure the machine with FusionIO.

Another good resource for large-machine tuning is to take a look at the full disclosure reports on TPC-E runs.

Diagnosing and fixing SOS_OBJECT_STORE spins for Singleton INSERTS

May 30, 2011 2 comments

Following up on my previous post, my next target for “optimization”, while I am waiting for an even faster I/O system, is the SOS_OBJECT_STORE spin.

Recall that I am seeing high waits for WRITELOG, but still 100% CPU, which indicates that spins may be our culprit. The big spin on the system is still LOGCACHE_ACCESS – but until we get hardware to work on that, we might as well stay greedy and learn a bit about SQL Server in the process. We just got rid of the OPT_IDX_STATS spin by running TF2330.

Unfortunately, the documentation available on our next spin, SOS_OBJECT_STORE, is rather sparse. It is one of the SQLOS internal data structures used in many places inside SQL Server. But there are ways, even for the public (which is why I can share it here), to get more information about what is going on. You can capture the call stacks of SQL Server when it hits this spin and use publicly available symbols to look up the function names inside the code.

One way to do this is to run an Xperf trace of sqlservr.exe; another is to use WinDbg. Anything that can collect and bucketize call stacks will help you. I will not go into more detail here, but follow the links in this paragraph to learn more. I also have an article on SQLCAT that should help you get started on setting public symbol paths.
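As a rough illustration (command lines from memory, so verify them against the linked material before relying on them), an Xperf capture with stack walking enabled and the public symbol server configured looks roughly like this:

```
rem Point the symbol path at the public Microsoft symbol server
set _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/download/symbols

rem Start CPU sampling with stack walking enabled
xperf -on PROC_THREAD+LOADER+PROFILE -stackwalk Profile

rem ... reproduce the spinning workload here ...

rem Stop tracing and merge the result into an ETL file for analysis
xperf -d trace.etl
```

The resulting trace can then be opened in the Windows Performance Analyzer, where the sampled stacks are aggregated per function once symbols resolve.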

Suffice it to say that I got hold of the sqlservr.pdb file (the publicly available symbols) and had a look at the call stacks that lead to SOS_OBJECT_STORE spins:

SpinlockBase::Sleep
LockManager::GetLockBlocks
lck_lockInternal
GetLock
PageContext::AcquireLock
IndexDataSetSession::AcquireLocksForInsert
IndexDataSetSession::InsertSmallRecord
IndexDataSetSession::InsertRowInternal
DatasetSession::InsertRow
RowsetNewSS::InsertRow

Aha! So this is related to the lock manager acquiring a lock on a page. Now, you may ask: how can we influence this? Surely we cannot control the way locks are acquired.

Well, as a matter of fact, we DO have a knob that gives us a tiny bit of influence. How about building the index like this:

CREATE CLUSTERED INDEX MyBigTable_cl
ON dbo.MyBigTable (c1)
WITH (ALLOW_PAGE_LOCKS = OFF)

That should get rid of one level of the lock hierarchy (ROW/PAGE/TABLE), restricting us to either table level locks or row locks. Since we are running an OLTP system here, who needs page locks anyway? A total leftover from old times… (I am only half kidding here)

Interestingly, this was just what was needed, the SOS_OBJECT_STORE spin is now gone:

[Screenshot: spinlock statistics]

But throughput has not changed at all. This is not surprising, given the much larger number of spins on LOGCACHE_ACCESS. But we learned something new: disabling PAGE level locks can save some CPU cycles by eliminating some of the code paths – we can speculate that this might lead to increased throughput once other bottlenecks are gone.

At this point, before moving to a faster transaction log, these are my waits:

[Screenshot: wait statistics]

Notice the high SOS_SCHEDULER_YIELD up there right after the WRITELOG? I have a feeling those spins are not yet done tormenting us…

Secret Scale Sauce : Heavy Singleton INSERT spinlocks

May 28, 2011 6 comments

Paul Randal recently ran a very interesting test on a simple INSERT workload. His results were promising and provide both good conclusions and more suggestions for research.

You should read Paul’s blog post first, because if you have not, this blog will not make much sense.

Read more…

Good keys, what are they like?

May 13, 2011 28 comments

A central value-add of data warehouses is their ability to restore the sanity that comes from using good keys. Taking a model-agnostic view, keys refer to “something” that is uniquely identifiable. Depending on what you are modeling, those “somethings” have different names, for example: entities, rows, tuples, cells, members, documents, attributes, object instances, and relations between any of the above. Because this article is about relational databases, and because I don’t want to move up to the “logical” level with you (as I promised before), I will use the term “row” for the “something” that a key refers to, and the term “column” for the data structure that implements the key.

Moving on, the definition of a “good key” is a column that has the following properties:

  1. It is forced to be unique
  2. It is small
  3. It is an integer
  4. Once assigned to a row, it never changes
  5. Even if deleted, it will never be re-used to refer to a new row
  6. It is a single column
  7. It is stupid
  8. It is not intended as being remembered by users
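As a quick illustration (my sketch, not code from the article), a database-generated surrogate key satisfies most of these properties out of the box:

```sql
-- Illustrative sketch: a surrogate key with the "good key" properties.
CREATE TABLE Customer (
    CustomerKey  INT IDENTITY(1,1) NOT NULL  -- small, integer, stupid
        PRIMARY KEY                          -- forced unique, single column
  , CustomerName NVARCHAR(100) NOT NULL      -- what users actually remember
);
```

An IDENTITY value never changes once assigned, and deleted values are not handed out again under normal operation (barring an explicit reseed), which covers properties 4 and 5 as well.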

Read more…

Grade of the Steel: Introduction

May 1, 2011 Leave a comment

There are a lot of things I want to say on this blog, so I do apologize in advance for jumping a bit between subjects. I will shortly write the next installment of my data modeling series – but before that, I want to let you know about some exciting new tests that I am working on in my copious amount of spare time.

Read more…