Upcoming Presentations in May

Next month, I’ll be making a couple of stops at SQL Saturday events in the south.  On May 3, I’ll be attending SQL Saturday #285 in Atlanta and presenting my SSIS Performance session.  I’ll be traveling with my good friend and neighbor Ryan Adams, who is also presenting at the event.  In addition, several of my Linchpin People cohorts will also be in attendance, which means I should probably bring along some extra bail money.  Although I’ve been to Atlanta several times, I’ve never gotten to attend the SQL Saturday there, so I’m looking forward to meeting some new people.

The following weekend, I’ll be visiting the good folks in Houston for SQL Saturday #308.  At this event I’ll be sharing two of my favorite topics: SSIS performance and SSIS scripting.  Having previously attended SQL Saturday in Houston, I know a little about what to expect (Texas barbecue for lunch… holla!), and I always dig hanging out with fellow Texans down south.  This event will be a first for me in that I’m taking along my whole family for the trip.

If you’re in Atlanta or Houston for either of these events, come by one of my sessions and say hello!  I hope to see you there.

SQL Saturday Lisbon

It’s a little over a week until this year’s SQL Saturday festivities kick off in Lisbon, Portugal, and I’m very excited to be a part of it.  Registration is nearly full, so if you’re in the area and are planning on attending, register now!

For this event, I’m delivering a full day workshop entitled “Real World SSIS: A Survival Guide”, during which I’ll share design patterns and practical lessons I’ve learned over my 10-ish years in the BI/ETL space.  This workshop will be held on Thursday, April 10th (the Thursday prior to the main SQL Saturday event), and there are still some seats available.  You can register for this workshop online.  I’ve also recorded a teaser video of what’s to come in this workshop.

In addition to the full-day workshop on Thursday, I’ll also be presenting two one-hour sessions on Saturday.  I’ll be sharing “Handling Errors and Data Anomalies in SSIS” and “15 Quick Tips for SSIS Performance” during the regular SQL Saturday event.

If you plan on attending SQL Saturday in Lisbon, please stop by and say hello!  I’m looking forward to seeing you there.

Lunch with someone new

I met up for lunch with a good friend and former coworker today, and among the topics of discussion was how we as professionals often neglect personal relationships when work and life get busy.  I’ve found that to be especially true since I started working from home last year.  I don’t miss a lot about working in an office setting, but I do long for the days of hallway conversations and working lunches with colleagues. When working in isolation, it can be easy to get into cocoon-mode, shutting out the rest of the world – to the detriment of interpersonal skills and relationships.  Through my work as a professional presenter, I get to talk to a lot of people, but more often than not I’m talking to them in a group setting with little one-on-one interaction.  While the former is useful for building a list of contacts, it doesn’t do much to truly build relationships.

Five years ago, in January of 2009, I set a goal for myself to have lunch or drinks with someone new – not necessarily a stranger, but someone with whom I had not spent any one-on-one face time – on a monthly basis.  I exceeded that goal in a big way.  And I don’t think it’s an accident that 2009 and 2010 were two of the biggest growth years of my career. I didn’t land any work directly as a result of those relationships – in fact, several of the people with whom I met weren’t business associates but personal acquaintances. For me, the bigger benefit was to get out of my comfort zone and get to know more people on a personal basis, whether or not I saw a direct career benefit to meeting with them.  I firmly believe that, five years later, I’m still seeing benefits of getting out of that comfort zone.  And just as importantly, I had a lot of fun!

So I’m going to rekindle this goal.  Since it’s not January, I don’t have to call this a New Year’s resolution, but I’m going to commit to sharing a meal or drinks with someone new at least once a month (including this month) for the remainder of this year.  I hope to exceed the goal as I did in 2009.

If you’re not regularly spending face time with peers and acquaintances, I would encourage you to give it a try.  Go out for coffee with someone you meet at a professional event.  Have lunch with an acquaintance.  Even if it’s uncomfortable for you – no, especially if it’s uncomfortable for you – it can pay big dividends in the long run.

Parent-Child SSIS Architecture

This is the first in a series of technical posts on using parent-child architectures in SQL Server Integration Services.  The index page for all posts can be found here.

In this post, I will provide an overview of the architecture and describe the benefits of implementing a parent-child design pattern in SSIS structures.

Definition

The simplest definition of SSIS parent-child architecture is that it consists of packages executing other packages.  In SSIS, the package is the base executable; it is the most granular component that can be executed independently¹.  Every version of SSIS includes the ability for one package to execute another package through one of the following:

  • The Execute Package Task
  • T-SQL commands
  • The Execute Process Task (using the dtexec.exe utility)

Though the term parent-child architecture implies exactly two layers, that does not have to be the case.  You may have a design in which a package executes a package, which executes another package, and so forth.  If there is a hard limit to how deeply a parent-child architecture can be nested, I have never encountered it.  I have found it useful on a few occasions to go deeper than two levels, particularly when designing a formal ETL framework (to be discussed further in a future post in this series).  When more than two levels exist, finding the right terminology for those layers is important.  You can refer to them by patriarchy (grandparent/parent/child) or by cardinality (level 1, level 2, level n), as long as you remain consistent – especially in your documentation – with those references.

Conceptually, a parent-child architecture is a form of code abstraction.  By encapsulating ETL actions into discrete units of work (packages), we’re creating a network of moving parts that can be developed, tested, and executed independently or as part of a larger collection.

Benefits

As I mentioned in my introductory post, there are several benefits to using parent-child structures in SSIS.

Reusability.  In any ETL environment of significant size or complexity, it’s quite normal to discover common ETL behaviors that are reusable across different implementations.  For a concrete example of this: In my spare time, I’m working on an ETL application that downloads XML files from a Major League Baseball web service.  There are files of various formats, and each file format is processed a different way, but with respect to the download of the files, I always perform the same set of operations: create a log entry for the file; attempt to download the file to the local server; log the result (success or failure) of the download operation; if the download has failed, set the HasErrors variable on the main package.  If I were to load this behavior into a group of tasks in the package for each XML format, I’d have five different copies of the same logic.  However, by building a parameterized child package that performs all of these core functions, I only have to build the file download/logging logic once, and execute the resulting package with the appropriate parameters each time I need to download a file.

Easier development.  Working with large and complex SSIS packages can be a pain.  The larger the SSIS package, the longer it takes for the BIDS or SSDT environment to do its validation checks when the package is opened or modified.  Further, when multiple ETL developers are working on the same project, it is much easier to break apart the project into discrete units of work when using numerous smaller SSIS packages.

Easier testing and debugging.  When working through the test and debug cycles during and after initial development, it’s almost always easier to test and debug smaller packages.  Testing a single task that resides in a large SSIS package requires either running the task by itself manually in the Visual Studio designer or disabling all of the other tasks and redeploying the package.  When working with packages that each perform one unit of work, one can often simply execute the package to be tested through the normal scheduling/execution mechanism.

Clarity of purpose.  An architecture that uses small, single-operation packages lends itself to clarity of purpose by virtue of naming.  When browsing a list of deployed packages, it is much clearer to see package names such as “Load Customers Table”, “Merge Product Table”, and “Remove Duplicates in Vehicle Table” than to find do-everything packages with names like “Load Production DB”, “Update DW”, etc.

Performance.  In some cases, breaking up a multi-step SSIS package can bring some performance gains.  One distinct case that comes to mind is a distributed architecture, where packages within a single execution group are executed on multiple servers.  By distributing packages across different SQL Server machines (either physical or virtual), it may be possible to improve performance in cases where the processing load on a single SSIS server has become a bottleneck.  I want to emphasize that using a parent-child architecture does not automatically improve performance, so it should not be treated as a silver bullet for a poorly performing group of packages.

The Tools

As I mentioned earlier, there are three tools that can be used to execute a package from within another package.

The execute package task.  This is the easiest and most common means of executing a package from within another.  This task can trigger the execution of a package stored on the file system, deployed to MSDB or the SSIS catalog, or residing in the same project.  If using SSIS 2012 with catalog deployment mode, you can also use the execute package task to pass parameter values from the parent package to the child package.  It is important to note that the execute package task behaves differently in SSIS 2012 than it does in older versions.

T-SQL commands (via the execute SQL task).  For SSIS projects using the project deployment model in SSIS 2012, the built-in stored procedures in the SSIS catalog can be used to execute packages.  This method for executing packages, like the execute package task, allows you to specify runtime parameters via T-SQL code.  One significant advantage of using T-SQL commands to execute packages is that, unlike the execute package task, you can use expressions to set the name of the package to be executed at runtime.  This is useful in cases where you are iterating over a list of packages that may not be known until runtime, a pattern commonly found in ETL frameworks.
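
As a rough sketch of what this looks like (the folder, project, and package names below are placeholders, not taken from any real project), the SSIS catalog’s stored procedures can be called like this; note that the package name is just a T-SQL variable, which is what makes this approach handy for looping over a list of packages:

    -- Minimal sketch: execute a catalog-deployed package via T-SQL.
    -- The folder, project, and package names are hypothetical placeholders.
    DECLARE @package_name NVARCHAR(260) = N'Load Customers Table.dtsx',
            @execution_id BIGINT;

    EXEC SSISDB.catalog.create_execution
         @folder_name  = N'ETL',
         @project_name = N'LoadDW',
         @package_name = @package_name,
         @execution_id = @execution_id OUTPUT;

    EXEC SSISDB.catalog.start_execution @execution_id;

By default, start_execution kicks off the package asynchronously; when calling this from an execute SQL task in a parent package, you would typically also set the SYNCHRONIZED property on the execution so the parent waits for the child to finish (a sketch of that appears later in this post).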

dtexec.exe (via the execute process task).  Using this method allows you to trigger package execution via the command-line application dtexec.exe.  Although this method is typically used to execute packages in a standalone environment – for example, when using third-party scheduling tools to orchestrate package execution – dtexec can also be used within SSIS by way of the execute process task.  As an aside, I rarely use dtexec to execute child packages – in most cases, it’s easier to use either the execute package task or T-SQL commands to execute one package from within another.

I’ll also briefly mention dtexecui.exe.  This is a graphical tool that serves the same purpose as dtexec.exe, except that it exposes its functionality through a graphical user interface rather than command-line parameters.  Except for this brief mention, I’ll not cover dtexecui.exe in this discussion of parent-child architecture, as that tool is intended for interactive (manual) execution of packages and is not a suitable tool for executing one package from within another.

Parent-Child architecture in the real world

To illustrate how this can work, let’s model out a realistic example.  Imagine that we have charge over the development of a sizeable healthcare database.  In addition to our production data, we’ve got multiple environments – test, development, and training – to support the development life cycle and education needs.  As is typical for these types of environments, these databases need to be refreshed from the production database from time to time.

The refresh processes for each of these environments will look similar to the others.  In each of them, we will extract any necessary data for that environment, retrieve and restore the backup from production, and import the previously extracted data back into that environment.  Since we are dealing with sensitive healthcare data, the information in the training database needs to be sufficiently anonymized to avoid an inappropriate disclosure of data.  In addition, our test database needs to be loaded with some test cases to facilitate testing for potential vulnerabilities.  Even though there are some differences in the way each environment is refreshed, there are several points of shared – and duplicate – behavior, as shown below (with the duplicates in blue).

[Figure: ParentChild-Duplicate – the refresh process for each environment, with the duplicated shared steps shown in blue]

Instead of duplicating those static elements, we can eliminate some code redundancy and maintenance overhead by encapsulating those shared behaviors into their own container – specifically, a parameterized package.  In doing so, we can avoid having multiple points of administration when (not if) we need to make adjustments to those common elements of the refresh process.  The updated architecture uses parameters (or package configurations, if using package deployment mode in SSIS 2012 or any older version of SSIS) to pass in the name of the database environment to refresh.

[Figure: ParentChild-HighLevel – the updated architecture, with the shared steps moved into a single parameterized RefreshDB package]

As shown, we’ve moved those shared behaviors into a separate package (RefreshDB), whose behavior is driven by the parameters passed into it.  The duplicate code is gone.  We now have just one SSIS package, instead of three, that needs to be altered when those common behaviors change.  Further, we can individually test and debug the child package containing those common behaviors, without the additional environment-specific operations.
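
To make that concrete, here is a hedged sketch of what the call to RefreshDB might look like from one of the parent packages, using the T-SQL approach described earlier.  The folder, project, and parameter names are illustrative only – they aren’t taken from an actual implementation:

    -- Hypothetical call from a parent package's execute SQL task:
    -- refresh the Training environment via the shared RefreshDB package.
    DECLARE @execution_id BIGINT;

    EXEC SSISDB.catalog.create_execution
         @folder_name  = N'DatabaseRefresh',
         @project_name = N'EnvironmentRefresh',
         @package_name = N'RefreshDB.dtsx',
         @execution_id = @execution_id OUTPUT;

    -- Pass the target environment as a package parameter (object_type 30).
    EXEC SSISDB.catalog.set_execution_parameter_value
         @execution_id    = @execution_id,
         @object_type     = 30,
         @parameter_name  = N'DatabaseEnvironment',
         @parameter_value = N'Training';

    -- SYNCHRONIZED = 1 makes the parent wait for the child to complete.
    EXEC SSISDB.catalog.set_execution_parameter_value
         @execution_id    = @execution_id,
         @object_type     = 50,
         @parameter_name  = N'SYNCHRONIZED',
         @parameter_value = 1;

    EXEC SSISDB.catalog.start_execution @execution_id;

Each environment-specific parent package would pass its own value for the environment parameter; none of them would contain duplicated refresh logic.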

Note that we haven’t reduced the number of packages with this architecture.  The goal isn’t fewer packages.  We’re aiming for a modularized, easy-to-maintain design, which typically results in a larger number of packages that each perform just a few functions (and sometimes just one).  In fact, in the parent-child architecture shown above, we could refine this pattern even further by breaking out the individual operations in the RefreshDB package into packages of their own, which would be practical in cases where those tasks might be executed apart from the others.

Exceptions to the rule

Are there cases in which parent-child structures do not add value?  Certainly.  A prime example is a small, simple package developed for a single execution, with no expectation that its logic will be reused.  I call these throwaway packages.  Because of their single-use nature, there is likely little value in going through the effort of building a parent-child architecture around their business logic.

Up Next

In my next post in this series, I’ll work through the mechanics of using a parent-child pattern in SSIS 2005 or SSIS 2008.

¹ Technically, there are lower-level items in the SSIS infrastructure that can be executed independently.  For example, from the BIDS or SSDT design surface, one can manually execute a single task or container within a package.  However, when deploying or scheduling the execution of some ETL behavior, the package is the lowest level of granularity that can be addressed.

Edit: Corrected typo on one of the graphics.

SSIS and PowerPivot training in Baton Rouge

I’m happy to announce that I’ll be teaming up with my Linchpin People colleague Bill Pearson for a day of BI and SSIS training next month.  On Wednesday, February 12th, we’ll each be delivering a full-day presentation in Baton Rouge, after which we’ll be joining the Baton Rouge SQL Server User Group for their monthly meeting.

SSIS Training with Tim Mitchell

I’ll be presenting Real World SSIS: A Survival Guide, which is aimed at beginning-to-intermediate SSIS developers.  In this day-long training session, I’ll be sharing and demonstrating many of the ETL lessons that I’ve learned in my 10+ years working in the SQL Server business intelligence ecosystem.

At the same time, Bill Pearson will be delivering Practical Self-Service BI with PowerPivot for Excel, which will provide a crash course for those who are new to PowerPivot.  Following these day-long presentations, Bill will also share more on PowerPivot at the Baton Rouge SQL Server User Group that evening.

Registration for both of these day-long courses is currently open, and early-bird pricing is available for a limited time.  If you’re around the Baton Rouge area and are interested in learning more about SSIS or PowerPivot, we’d love to have you join us next month!

New Blog Series: Parent-Child Architecture in SSIS

This month I’m kicking off a new series of blog posts discussing the topic of parent-child architectures in SQL Server Integration Services.

I still remember the first SSIS package I ever deployed to a production environment.  It was the mid-2000s, and I was working on a large data migration project which would take my then-employer, an acute care hospital, from an old UNIX-based system to a modern SQL Server-based OLTP back end.  The entire solution, which pushed around a few hundred million rows of data, was completely contained in a single SSIS package.  And this thing was HUGE.  When I say huge, I mean that the package metadata alone was 5 MB in size.  I had a bunch of duplicate code in there, and when I opened or modified the package, it sometimes took a minute or more to go through the validation for the dozens of different tasks and data flows.  In hindsight, I can admit that it was not a well-designed architecture.

Fast forward about a decade.  Having learned some lessons – the hard way – about ETL architecture, I’ve adopted a completely different way of thinking.  Rather than relying on a few do-everything SSIS packages, I prefer to break out tasks into smaller units of work.  By using more packages that each do just one thing, I’ve discovered that this architecture is:

  • Easier to understand
  • Simpler to debug
  • Much easier to distribute tasks to multiple developers
  • In some cases, better performing

As part of my role as an independent consultant, I also do quite a bit of training, and in those training sessions the topic of parent-child ETL architecture comes up quite often.  How many packages should I have?  Should we have lots of small SSIS packages, or fewer, larger packages?  It’s also a topic that generates a lot of questions on SQL Server discussion forums.

To share my experience on this topic, I’m starting a new series of posts discussing parent-child architectures in SSIS.  As part of this series, I will cover:

  • Overview of parent-child architecture in SSIS
  • Parent-child architecture in SSIS 2005 and 2008
  • Parent-child architecture in SSIS 2012
  • Passing configuration values from parent to child package
  • Passing values from child packages back to the parent
  • Error handling and logging in parent-child structures
  • Parent-child architectures in an ETL framework

I’m excited to write this series over the next few months.  As always, I look forward to your feedback.

Goodbye, 2013

For me, 2013 was one of the most interesting and busy years of my life.  It was a good year for me, especially on the career front, and it was certainly the busiest I’ve had in several years.  Among the highlights of 2013:

Going independent

The most significant event for me this year was when I fulfilled a long-time dream of mine to become an independent consultant.  Back in June, I left my full-time (W2) consulting job to launch my independent consulting practice.  At the same time, I joined up with the fine folks at Linchpin People, which allowed me to maintain my status as an independent consultant while aligning myself with other like-minded folks in the same space.  The downside was that this move meant leaving Artis Consulting.  The folks at Artis – the ownership as well as the employees – are some of the best people I know, and the decision to tell them goodbye was one of the hardest things I’ve had to do in my career.  As difficult as that decision was, I think it was a good move for me.  As an independent consultant, I’ve already gotten to work on some exciting consulting projects, as well as focus on other related initiatives (including training and tools development).  There are still a lot of unknowns and a great deal of risk along this path, but I’m glad I made the move and am very excited for the future.

Presenting

I got to see a lot of you in person in 2013.  Last year, I had the opportunity to speak at numerous events here in the States, including the SQL PASS Summit in Charlotte, the DevConnections conference in Las Vegas, six different SQL Saturday events, and five user group meetings.  In addition, I was invited to speak at SQLBits in Nottingham, England, which was my first international speaking engagement.  All told, I delivered 23 technical presentations this year, five of which were full-day workshops.  This is one of my favorite parts of being involved in the SQL Server community.

Volunteering

For the past 4 years, I’ve been a member of the board of directors for my local user group, the North Texas SQL Server User Group.  My time on the board was an incredibly rewarding experience, one that I would not trade for anything.  However, with the demands of my independent consultancy, I found myself with less and less time to focus on user group responsibilities.  My seat on the board was up for election this fall, and I made the difficult decision to step aside and not seek reelection to the board.  Although I’ll miss being a part of the NTSSUG board, I’ll still be around, attending user group meetings and other functions as my schedule allows.  As an aside, I want to extend congratulations and best wishes to my friend Dave Stein, who was elected to the open board position.

Travel

Holy schnikes, this one caught me off guard.  Between travel to client sites and my conference travel, I was gone almost as much as I was home this fall.  I don’t mind some travel, but I got a full year’s worth of travel in about three months.  Particularly with my new role as an independent consultant, there will be at least some travel involved, but I hope to avoid repeating the brutal travel schedule I had during the last three months.

Writing

I wrote – a little.  Very little.  This blog, which used to be a busy highway of information, has evolved into a rarely-traveled side street.  I love to write, and it is a rewarding endeavor in many ways, yet I’ve neglected this part of my career this year.  I don’t want to use the term resolution, but I expect myself to write more in 2014.

Personal stuff

Though my professional highlights are almost all positive, there were a few other things that brought sadness this year.  My former sister-in-law, who is the mother of my 10-year-old nephew, died quite unexpectedly early this year.  I lost an aunt and uncle this year as well.  I also marked a sad milestone on what would have been my late son’s 18th birthday.  After nearly three years in business, my wife and I decided to cease operations on our small photography business, a marginally profitable but time-consuming endeavor that taught us a great deal about choosing the right business.

There were others around me who struggled this year as well.  I have friends and family who have battled with health issues, job losses, family friction, and other hardships.

Although there were some sad events in my personal life, there were many positives as well.  I got to surprise my kids with a couple of vacations and spend some quality downtime with them.  On one of my visits to a client, I was able to visit Fenway Park for the first time and see the Red Sox beat the Rays.  We added a new family member, of the four-legged, canine variety.

Hello, 2014

I had many successes in 2013, as well as areas I want to work on improving during the new year.  I’m excited for what I see on the horizon, and I hope that 2014 is as good to me as its predecessor.

PASS Summit 2013 Keynote, Day 2

Today is the second day of the 2013 SQL PASS Summit, and I’m again live blogging the event.

8:15: Here we go.  Looks like a thinner crowd today.  Everyone sleeping in from #sqlkaraoke last night?

8:19: The morning is kicked off with a video montage about networking.  It’s good to see emphasis on getting to know your peers.

8:21: Douglas McDowell talking about PASS finances.  A full 75% of PASS revenue comes from the annual PASS Summit.  Looks like the new Business Analytics conference contributes significantly (about 20%) to the budget, generating about $100K in net profit.  Interesting that SQL Rally is not mentioned.  Also noteworthy: PASS has been working on building up a rainy-day fund, and the organization now has $1MM in financial reserves.  Good financial update – relatively brief and to the point.

8:30: Bill Graziano recognizes the outgoing board members, Douglas McDowell and Rushabh Mehta.

8:35: Tom LaRock, new PASS president, takes the stage.  He announces that PASSTv has reached 3,000 people in 79 countries this year.  He also recognizes the incoming ExecCo and board members for the PASS board.  He announces that the PASS Business Analytics Conference (BAC) will be held in San Jose, CA in May of next year, and the next PASS Summit will return to Seattle in 2014.

8:41: Dr. David DeWitt takes the stage.  He’s always a crowd favorite.  Heads may explode in the next hour.  He’ll be talking about Hekaton, the in-memory database technology.  He starts off by poking some fun at the marketing team and their habitual renaming of products.

8:45: Hekaton is memory-optimized but durable.  If the power goes out, you haven’t lost anything.  It is fully baked into SQL Server 2014, which was released in CTP2 earlier this week. The aim for Hekaton is 100x performance improvement, which cannot be gained through improvements elsewhere (CPU, etc.).  Hekaton is more than just pinning tables in memory, which in itself would not yield the expected performance gains.  Hekaton is an engine, not a bolt-on.
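
For reference (this is my own illustration, not something shown in the keynote), the general shape of a durable, memory-optimized table in the SQL Server 2014 CTP looks roughly like the following; the table and column names are made up:

    -- Sketch of a durable, memory-optimized table (SQL Server 2014 CTP2 syntax).
    -- Assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup.
    CREATE TABLE dbo.SalesOrder
    (
        OrderID    INT       NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        CustomerID INT       NOT NULL,
        OrderDate  DATETIME2 NOT NULL,
        Amount     MONEY     NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

SCHEMA_AND_DATA is what makes the table durable – the data survives a restart – as opposed to SCHEMA_ONLY, which preserves only the table definition.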

8:56: DeWitt talks about database concurrency, showing a small example of competing write operations.  Concurrency, locks, and serialization, oh my.  I think I just heard the first head explosion of the morning.  The slide deck can be downloaded here.

9:05: Hekaton uses a timestamp mechanism rather than latching.  Related: lock-free data structures.  DeWitt confesses that lock-free data structures are really hard to understand, but gives a couple of brief examples (via animation) of how latches versus lock-free structures perform under a workload.  Hekaton concurrency control is based on three techniques: optimistic data read/write, multiversioning, and timestamping.  When he describes the timestamp mechanism – start/end validity periods, with an end timestamp of infinity representing the current row – it sounds a little like an in-memory version of a slowly changing dimension.  A non-blocking garbage collector cleans up expired or no-longer-valid rows as a transparent background process.

9:37: I have to admit that this stuff blows my mind.  As a BI practitioner, I pay attention to database performance and concurrency issues, but I’ve never dived into the engine at this level of detail.  Though I’m not a performance guru, it’s apparent to me that this stuff is truly revolutionary.  Note to self: find a way to experiment with this soon.

9:57: David DeWitt wraps up.  Great stuff, and it’s obvious the crowd was behind him the whole way (even if he blew minds along the way).

PASS Summit 2013 Keynote, Day 1

Today is the first full day of the PASS Summit 2013 conference.  This year I’m again joining the blogger table and will be live blogging throughout the opening ceremonies and the keynote.  I’m sitting between Colin Stasiuk and Andy Warren.

I’ll be updating this post periodically through the keynote.


7:59 am: Found my seat and got wired up.  It appears that coffee will not be served until after the keynote.  In related news, most of the room is already asleep.  I’ve appealed to Twitter in hopes that someone might kindly bring me some Starbucks.

8:20 am: We’re underway. PASS president Bill Graziano kicks off the event and introduces the board members.

8:27 am: This year, 700,000 training hours have been delivered through 227 chapters and 22 virtual chapters.

8:28 am: Giving some love to SQL Saturday via a brief video presentation.  Showing the locations of SQL Saturday events around the world using Power Map, which really drives home how much reach these events have.

8:30 am: Bill announces that Amy Lewis has been selected as the PASSion Award winner this year.  Big congrats to my friend Amy!  Also, Ryan Adams gets props from Bill as an honorable mention.

8:38 am: Quentin Clark takes the stage, wearing his smart business casual attire.

8:42 am: Interesting analogy.  Quentin compares the relationship between on-prem storage/processing and the cloud to the relationship between brick-and-mortar stores and online retailers.  E-commerce did not end retail stores, and similarly, the cloud will not eliminate the need for on-prem data.

8:46 am: SQL Server 2014 CTP2 is now generally available for download.

8:51 am: Tracy Daugherty takes the stage to demonstrate the in-memory capabilities of SQL Server 2014.  He demonstrates an implementation of in-memory technology on a fictional online store, showing the before-and-after query times when adding memory optimization to a key table.  Queries that previously took several seconds now complete almost instantaneously.

9:02 am: On-prem database backup to Azure?  I’m interested.  Tracy shows the new UI feature where you can select URL as a destination for a backup, along with new encryption options available in 2014.  Automatic log backups on SQL Azure?  Tracy also shows the new feature (available via free download) that allows you to back up databases in older versions of SQL Server to Azure.
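
For context (again, my own sketch rather than the exact demo), backup to Azure blob storage boils down to a credential plus a URL destination; the database, storage account, container, and credential names here are made up:

    -- Sketch: back up a database to Azure blob storage (SQL Server 2014).
    -- The database, storage account, container, and credential names are placeholders.
    CREATE CREDENTIAL AzureBackupCredential
        WITH IDENTITY = 'mystorageaccount',     -- storage account name
             SECRET   = '<storage access key>';

    BACKUP DATABASE AdventureWorks
        TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks.bak'
        WITH CREDENTIAL = 'AzureBackupCredential',
             COMPRESSION;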

9:09 am: Oops.  Network failure during the demo is a perfect example of why you can’t wholly rely on the cloud for the success of your business.

9:28 am: After 20 minutes of mostly marketing hype, it looks like we’re going to see more demos.  A little dose of Power BI – using Power Query to bring together disconnected sets of data into a unified view, a function that can be performed by non-technical business users.  Brief glimpse of Power BI on mobile devices, even fruity ones.

9:40 am: Impressive… querying a database using plain English.  “Show number of calls per capita by country” yields a valid set of data.  Adding “as map” changes the output from bar chart to map.  We’re breezing over the details of how this works, but if it really works as shown, this is going to be a game changer for self-service BI.  Hopefully not just a rehash of English Query.  Go to PowerBI.com to sign up and use this.

9:44 am: Power BI contest is announced.  Facebook.com/microsoftbi to participate.  Tell your Power BI story to win prizes including an Xbox One, a Surface Pro 2, and a trip to the PASS Analytics conference.

9:48 am: That’s a wrap.  Nothing earth-shattering here.  Most interesting to the general populace is the release of SQL Server 2014 CTP2.  A few cool things with Power BI as well, which I’ve still yet to explore.