Advanced SSIS Training in Dallas

I’m very excited to offer a new course entitled “Advanced SSIS” in the Dallas area this spring. My friend and colleague Andy Leonard and I will be delivering this new 3-day course March 9-11, 2015 at the Microsoft offices in Irving, Texas. This course is intended for those who have experience with Integration Services and are looking to take their skills to the next level. In this course, we’ll each be sharing experiences and war stories from 10+ years in the data integration space.

Among the topics we’ll cover:

  • Data flow internals
  • Performance design patterns
  • Learning how to fail properly
  • Security in SSIS
  • Deployment design patterns
  • Data quality
  • Metadata management patterns
  • Biml
  • Known limitations of SSIS, and workarounds for them
  • ETL data edge cases

Also, there may or may not be stories of chicken farming.

We’ll bring the coffee and lunch for all three days. All you need to bring is your experience using SSIS and your readiness to advance your data integration skills.  Space is still available, and early bird pricing is in effect until February 6th. If you’ve got a team of folks who would benefit from this training, contact me about a group discount. If you’re coming in from out of town, there are lots of hotels close by, including the NYLO right next door and the Hampton Inn about a mile away.

Feel free to contact me with any questions. We hope to see you there!

Six practical tips for social media success in 2015

Social media is the new résumé.  In many ways, it’s even better than a résumé – a person’s social media stream can reveal attitudes, biases, and deficiencies that wouldn’t dare appear on a résumé.  Your online thoughts – blogs, Instagram pictures, tweets on Twitter, posts on Facebook, among others – help to make up the digital you, which friends and strangers alike will use to assess who you are and what you can contribute.  The things you share on social media become part of who you are.

Even more importantly, there’s a permanence to social media content that requires us to pay special attention to anything posted on the Internet.  There’s no Undo on the Send button; once you publish something to the Internet, it can be there forever.  Remember that potential clients and employers will most likely review your social media activities before making a hiring decision; in fact, a recent survey of human resources personnel revealed that over 90% of respondents looked to social media when checking out a candidate.  Even if you’re not looking for a job, consider that what you post today may still be around for years afterward.  Sure, you can edit or delete content or restrict its privacy settings, but have you read the terms of service for the medium on which you’re sharing that information?  In some cases, content you share online may be used in ways you don’t expect, according to the provider’s terms of service.  The bottom line is that privacy settings and deletion won’t necessarily keep your content private, so think twice before posting angry rants or NSFW after-hours photos.

With that, here are a few basic rules I try to follow when posting to social media.

Don’t write anything in the heat of the moment, especially if you’re hurt or angry.  Intense emotion often leaves logic behind, and those types of posts tend to be the ones you regret.  If you routinely find yourself posting to social media and later editing or deleting those posts, you might have a problem with this.  Things posted on social media can have a long life span, even when the original media is deleted.  The few minutes of satisfaction you get from sharing that angry tweet, Facebook post, or blog post might cost you years of embarrassment.  Take an hour and walk around the block before you post in an emotional state.

Find your pace.  Everyone has their own speed at which they share on social media.  Some will write a new blog post almost daily, while others do so just once or twice a month.  There are folks who post to Twitter a dozen times each day.  These are all acceptable, but the most important thing to remember is to be consistent.  Don’t publish a dozen blog posts in January and then stop blogging for the year.  Your audience, however large or small, will follow you in part because of your volume and velocity.  Find a pace that you’re comfortable with, and most importantly, that is sustainable for the year.  The right scheduling tool can help with this, especially when the amount of time you have to devote to social media can vary from week to week.  (As a sidebar, I use HootSuite, though it’s just one of many such tools available, many of which are free.)

Check ur grammar.  I’ll admit it – I’m dogmatic when it comes to proper grammar and spelling, and I evaluate the quality of social media entries based in part on those criteria.  If your posts are littered with misspellings and grammatical errors, you could end up being passed over for a job or a gig.  It’s a fact that some folks are simply more attentive to this than others, so if you struggle with spelling and grammar, find a trusted adviser to proofread your posts (especially longer and more permanent compositions, such as web articles and blog posts).

Rid yourself of negative influence.  The things you read will affect how you write, and negativity breeds negativity.  You know the type – the blogger who complains about everything, the person on Facebook who’s all about drama, or the Twitter follower who’s always posting in anger.  I performed a social media purge recently, either hiding or completely removing some folks who were constantly angry and negative.  Following people who post a constant stream of bile will almost certainly affect your mood and attitude, and is an unnecessary distraction.  Don’t disengage from someone over one online rant, but if they demonstrate a pattern of this behavior, cut ‘em off.

Have conversations.  Your social media presence can be an advertisement, an online résumé, and a series of conversations.  Don’t neglect the last one!  You don’t want to be known as someone who simply broadcasts without listening.  The more you establish yourself as an expert on a topic, the more folks will want to chat with you, whether it’s to ask for advice, share an idea, or simply to get to know you.  While you don’t have to engage with everyone who reaches out to you (see the previous tip), it’s usually best to err on the side of openness.

Last and most importantly, be you.  Don’t look to mimic someone else’s blog posts, tweets, or Facebook activity.  Your followers will read what you write because it’s yours, not because it resembles that of someone else in the community.  In fact, being different is a good way to gain even more followers; if you’re writing about things few other people are writing about, or if you’re approaching it on a level or from a perspective others aren’t, you’re likely to be different enough from the crowd that people will seek out your content.

Everyone uses social media differently, and each of us will have our own set of internal guidelines on what to post.   Just remember that your social media stream becomes an extension of, and a window into, your personality.  Take care in what you share, pace yourself, and be accessible.

[OT] Blog is in maintenance mode

I’m spending part of this holiday break repaying some technical debt on my website.  Among other things, I am importing some old content that I never brought over when I did my migration to WordPress a few years ago.  Most of the content I’m bringing over is old (most of it is at least 5 years old), and I’m adding it to this site to integrate all of my content, both recent and historical, in one place.  I don’t expect that the old posts will show up in RSS readers as new content.

In addition, I’m planning to try out a new WordPress layout.  I’ve been using the same simple (read: dull) theme for a while now, and I’d like to dress it up just a bit.

SSIS Parent-Child Architecture in Catalog Deployment Mode

This is the third in a series of posts about SSIS parent-child architecture.  You can find the index page here.

In my previous posts on SSIS parent-child package architecture, I described the benefits of the parent-child package design pattern and demonstrated the implementation of such a pattern in package deployment mode.  In this post, I will demonstrate the parent-child pattern for implementations using catalog deployment mode, which is the default deployment model in SSIS 2012 and 2014.

Catalog deployment mode in SSIS

If you’re reading this post and find yourself asking, “What is catalog deployment mode?”, here it is in a nutshell: Starting with SQL Server 2012, there were significant changes to the architecture of SSIS, most notably the move to a deployment/storage structure called the catalog deployment model (which is also frequently referred to as the project deployment model).  In this model, SSIS code is more project-centric than package-centric; packages are deployed as an entire project instead of individually (though each package can still be executed independently).  Catalog deployment mode in SSIS also brought the addition of parameters, which can be used to externally set runtime values for package executions, as well as project connections, which can be used to easily share connection settings across all packages in a project.  Many other changes were introduced, including a simpler logging model and a dedicated SSIS database.

Among the many changes brought about by the catalog deployment model, the one that had the most impact on the parent-child design pattern is the addition of parameters.  In older versions of SSIS, it was possible to pass runtime values to a package, but the process was clunky at best.  When using SSIS 2012 or 2014 in catalog deployment mode, setting runtime values for a child package (or a standalone package, for that matter) is much easier and more straightforward than performing the same task in previous versions.

It is also worth noting that you don’t have to use the catalog deployment model in recent versions of SSIS.  Although catalog deployment model is the default setting in SSIS 2012 and 2014, you can set your project to use the package deployment model.  You’ll lose many of the new features (including parameterization and simplified logging) by choosing package deployment model, but this might be practical if your organization has made a significant investment in SSIS architecture that would be broken by moving to catalog deployment model.

Parent-child package execution

At the heart of parent-child architecture is the collection of parameters.  In catalog deployment mode, we can set up parameters at the package level or at the project level.  For values that would affect just a single package, using a package parameter would make the most sense.  However, if a value might need to be shared among several (or all) packages in a particular project, a project parameter would allow you to create the parameter once for the entire project rather than one for each package.
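As a quick illustration of the difference in scope, SSIS expressions and parameter mappings refer to the two kinds of parameters through different namespaces – $Project for project parameters and $Package for package parameters (the parameter names below are hypothetical):

@[$Project::pServerName]
@[$Package::pSourceFileName]

Package variables, by contrast, live in the User:: namespace, which is worth keeping in mind when mapping values into a child package.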

Execute package task

When executing a child package, the simplest method is still the execute package task.  In SSIS 2012, the execute package task gained a dropdown list (shown below, on the Package tab) that allows the SSIS developer to specify the target package.


There are a few limitations with this approach.  Most notably, this dropdown list selection only works when calling a package that exists in the same project.  You’ll notice that the selection above the package name, entitled ReferenceType, is set to Project Reference.   Though you can change ReferenceType to use a project located elsewhere, oddly enough you can’t use it to execute a package in a different project deployed to the SSIS catalog (you can read more about that limitation, as well as upvote the issue on Connect here).  I’ll discuss a couple of workarounds for this momentarily.

Clicking over to the Parameter bindings tab, we can specify which values to pass into the child package.  For each child package parameter, we specify exactly one value to be supplied at runtime.  Remember, like the dropdown list for package selection, this piece only works when executing packages in the same project (using the Project Reference setting on the ReferenceType from the Package tab).


Keep in mind that you have to use a parameter or variable (again, choosing from the dropdown list) to map to the child parameter.  You can’t simply type in a static value in the Binding parameter or value field.  Also, remember that you will only see package parameters (not project parameters) in the list of child package parameters that may be mapped.  This is by design – it wouldn’t make sense to map a value to a project parameter when executing a package in the same project, since that child package would already implicitly have access to all of the project parameters.

Another distinct advantage of using the execute package task is the process for handling errors in the child package.  In the event that a child package fails, the execute package task will fail as well.  This is a good thing, because if the child package does fail, in almost all cases we would want the parent package to fail to prevent dependent tasks from improperly executing.  Even better, error messages from the child package would be bubbled up to the parent package, allowing you to collect error messages from all child packages within the parent package.  Consolidated error handling and logging means less development time upfront, and less maintenance effort down the road.

If you have the option of using the execute package task for starting packages stored in the SSIS catalog, I recommend sticking with this method.

Execute SQL task

Another method for executing one package from another is by using the T-SQL stored procedures in the SSIS catalog itself.  Executing a package in the SSIS catalog in T-SQL is actually a 3-step process:

  • Create the execution entry in the catalog
  • Add in any parameter values
  • Execute the package

Catalog package execution via T-SQL, another new addition in SSIS 2012, allows us to overcome the limitation in the execute package task I mentioned earlier.  Using a T-SQL command (via the execute SQL task in SSIS), we can execute a package in any project.  It’s certainly more difficult to do so, because we lose the convenience of having the list of available packages and parameters exposed in a dropdown list in the GUI.  Here there be typing.  However, being able to execute packages in other projects – and for that matter, on other SSIS catalog servers entirely – somewhat makes up for the manual nature of this method.

To execute a child package using this method, you’d create an execute SQL task and drop in the appropriate commands, which might look something like the following:

DECLARE @execution_id BIGINT

EXEC [SSISDB].[catalog].[create_execution] @package_name = N'ChildPkgRemote.dtsx'
	,@execution_id = @execution_id OUTPUT
	,@folder_name = N'SSIS Parent-Child'
	,@project_name = N'SSIS Parent-Child Catalog Deployment - Child'
	,@use32bitruntime = 0
	,@reference_id = NULL

-- Set user parameter value for filename
DECLARE @filename SQL_VARIANT = N'E:\Dropbox\Presentations\_sampleData\USA_small1.txt'

EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id
	,@object_type = 30
	,@parameter_name = N'pSourceFileName'
	,@parameter_value = @filename

-- Set execution parameter for logging level
DECLARE @loggingLevel SMALLINT = 1

EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id
	,@object_type = 50
	,@parameter_name = N'LOGGING_LEVEL'
	,@parameter_value = @loggingLevel

-- Set execution parameter for synchronized
DECLARE @synchronous SMALLINT = 1

EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id
	,@object_type = 50
	,@parameter_name = N'SYNCHRONIZED'
	,@parameter_value = @synchronous

-- Now execute the package
EXEC [SSISDB].[catalog].[start_execution] @execution_id

-- Show status
SELECT [execution_id], [status] AS [execution_status]
FROM [SSISDB].[catalog].[executions]
WHERE [execution_id] = @execution_id

Two things in particular I want to point out here.  First of all, by default when executing a package using T-SQL, the package is started asynchronously.  This means that when you call the stored procedure [SSISDB].[catalog].[start_execution], the T-SQL command will return immediately (assuming you passed in a valid package name and parameters), giving no indication of either success or failure.  That’s why, in this example, I’m setting the execution parameter named SYNCHRONIZED to force the T-SQL command to wait until the package has completed execution before returning.  (Note: For additional information about execution parameters, check out this post by Phil Brammer).  Second, regardless of whether you set the SYNCHRONIZED parameter, the T-SQL command will not return an error even if the package fails.  I’ve added the last query in this example, which will return the execution ID as well as the execution status.  I can use this to check the execution status of the child package before starting any subsequent dependent tasks.


As shown, I’ve set the SQLStatement value to the T-SQL code block I listed above, and set the ResultSet value to Single row, the latter of which will allow me to capture the output status of the executed package.  Below, I’ve set that execution status value to a new package variable.


To round out this design pattern, I set up my control flow as shown below.  Using precedence constraints coupled with SSIS expressions, I execute the package and then check the return value: a successful catalog execution returns a value of 7, and my parent package handles any return value other than a 7 as a failure.
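If the execution status from that single-row result set were captured into a package variable – I’ll call it vExecutionStatus here, though the name is arbitrary – the expression on the success precedence constraint might look like the following, with the failure path simply using != instead:

@[User::vExecutionStatus] == 7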


You may also have to give special consideration for errors in child packages when using T-SQL for package execution – especially when running packages interactively in the BIDS/SSDT designer.  Since the T-SQL command does not report the failure of a package by default, it also doesn’t “bubble up” errors in the traditional SSIS manner.  Therefore, you’ll need to rely on capturing any child package error messages from the SSIS catalog logging tables, especially when developing and testing packages in Visual Studio.

Script Task

It is also possible to execute SSIS packages programmatically from the script task.  This method is significantly more complicated, but also offers a great deal of flexibility.  A fellow SQL Server MVP and acquaintance of mine, Joost van Rossum, has a detailed blog post on how to programmatically execute SSIS packages from the SSIS script task.  I won’t restate what he has already covered in his comprehensive and well-written post on the topic, but if you need to programmatically fire child packages in the SSIS catalog, check out his write-up.


In this post, I’ve covered the essentials of executing child packages in the SSIS catalog, including provisioning for errors in the child package.  Though it can be quirky, especially when executing child packages in a different project or on a different server, there are several different ways to address this parent-child design pattern.

In my next post, I’ll talk a little more about passing values in a parent-child package, and will illustrate how to pass values from child packages back to the parent package.

How to burn down your house while frying a turkey

It’s an odd query, yes, but in preparation to write this post I actually typed the above phrase into my browser.  No, I’m certainly not looking to burn down my house.  In fact, wait here while I clear my search history, just in case.

For the sake of argument, let’s say you’re planning to fry a turkey over the upcoming Thanksgiving holiday.  Think about the research you’d do: What type of equipment will I need? How much oil should I buy?  How big should the turkey be?  How long should I cook it? All valid queries that should be answered before taking on the task of dropping a frozen bird into boiling oil.  But are those the only questions you should ask?  Talk to anyone about the dangers of frying a turkey, even those who have never done it, and they’ll tell stories about a brother-in-law, or a coworker, or some guy on YouTube who set ablaze the family homestead in a misguided effort to cook Thanksgiving dinner.

Statistically, it may seem like a silly question to ask.  What are the odds that frying this turkey will set my house on fire?  All in all, probably pretty low.  But it does happen – and if it does, the consequences can be disastrous.  So, when taking on this task – especially for the first time – asking questions (What factors make it more likely that this turkey fry will turn into a huge bonfire?) that can help reduce the risk seems like a good investment.

Be a data pessimist

If you’ve met me in person, you probably remember me as a glass-half-full guy.  But when it comes to data management, I’m a full-on pessimist.  Any data I get is crap until proven otherwise.  Every data load process will fail at some point.  And, given enough time and iterations, even a simple data movement operation can take down an entire organization.  It’s the turkey burning down the house.  Yes, the odds of a single data process wreaking havoc on the organization are very, very low, but the impact if realized is very, very high.  High enough that it’s worth asking those questions.  What part of this process could wreck our financial reporting?  What factors make this more likely to happen?  How can we mitigate those factors?

For the record, I don’t suggest that we all wear tin-foil hats and prepare for space aliens to corrupt our data.  However, there are lots of unlikely-yet-realistic scenarios in almost any process.  Think about your most rock-solid data operation right now.  What potential edge cases could harm your data infrastructure?  Sometimes it’s the things that might seem harmless:

  • Is it possible that we could run two separate loads of the exact same data at the same time?
  • What if a process extracts data from a file that is still being written to (by a separate process)?
  • What if a well-meaning staff member loads a properly formatted but inaccurate data file to the source directory?

Others, while even less likely, could lead to a real mess:

  • Is it possible for my data load process to be purposefully executed with inaccurate or corrupt data?
  • Could some situation exist within my ETL process that would allow essential rows of data to simply be lost, silently and without errors?
  • Do I have any processes that could make unaudited changes to my financial records?

Each potential scenario would have to be evaluated to determine the cost to prevent the issue versus the likelihood of realization and the impact if realized.

Fortunately, most of the data problems we deal with are not as catastrophic as igniting one’s home with a fried turkey on Thanksgiving.  However, as data professionals, our first responsibility is to protect the data.  We must always pay attention to data risk to ensure that we don’t allow data issues to take the house down.

Join me in DC for a full day of Biml

I’m excited to announce that my Linchpin People colleague Reeves Smith and I will be delivering a full-day Biml preconference seminar the day before the upcoming SQL Saturday in Washington, DC.  This seminar, entitled “Getting Started with Biml”, will introduce attendees to the awesomeness of Business Intelligence Markup Language (Biml).

In this course, we’ll cover the basics of Biml syntax, show how to use BimlScript to make package creation even more dynamic, and will demonstrate lots of design patterns through numerous demos.

Registration is now open for this course.  Lunch will be provided.  We hope to see you there!

SQL PASS 2014 Summit Diary – Day 6

Today is the last official day of the PASS Summit.  The sessions will wrap up at the end of the day, and we’ll all go our separate ways and resume our semi-normal lives.  Having delivered my presentation yesterday, my official PASS duties are over, and I’m planning to spend the day taking in a few sessions and networking.

08:15am: No keynote today, so the sessions are starting first thing in the morning.  I’m sitting in on a Power BI session delivered by my friend Adam Saxton.  He’s an excellent and knowledgeable presenter, and I always enjoy attending his presentations.  Power BI is one piece of the Microsoft BI stack that I have largely ignored, due to the fact that it runs exclusively in the cloud.  However, I’d like to get up to speed on the cloud BI offerings – even though the on-premises solutions will continue to represent the overwhelming majority of business intelligence initiatives (in terms of data volume as well as Microsoft revenue), I expect to be fluent in all of the Microsoft BI offerings, whether “earthed” or cloud-based.

11:00am: After stopping by the Linchpin booth again, I sit down in the PASS Community Zone.  And by sit down, I mean that I collapse, exhausted, into one of the bean bags.  I spent some time chatting with Pat Wright, Doug Purnell, and others, and met up with Julie Smith and Brian Davis to talk about a project we’re working on together (more on that later).

11:45am: Lunch.  Today is the Birds of a Feather lunch, in which each table is assigned a particular SQL Server-related topic for discussion.  I headed over with my Colorado buddies Russ Thomas and Matt Scardino to the DQS/MDS table, at which only two other folks were sitting (one of whom worked for Microsoft).  We had a nice chat about DQS and data quality in general.  I have to admit a bit of frustration with the lack of updates in DQS in the last release of SQL Server.  I still firmly believe that the core of DQS is solid and would be heavily used if only the deficiencies in the interface (or the absence of a publicly documented API) were addressed.

02:45pm: I don’t know why, but I want to take a certification exam.  The PASS Summit organizers have arranged for an onsite testing center, and they are offering half price for exams this week for attendees of the summit.  I registered for the 70-463 DW exam, and after sweating through the MDS and columnstore questions, I squeaked through the exam with a passing score.  I’m not a huge advocate for Microsoft certification exams – I find that many of the questions asked are not relevant in real-world scenarios, the exams are too easy to cheat on, and I’m still very skeptical of Microsoft’s commitment to the education track as a whole after they abruptly and mercilessly killed the MCM program (via email, under cover of darkness on a holiday weekend, no less) – so I’m likely not jumping back into a full-blown pursuit of Microsoft certification any time soon.  Still, it was somewhat satisfying to take and pass the test without prep.

04:00pm: Back in the community zone.  Lots of folks saying their good-byes, others who are staying the night are making plans for later in the evening.  For me?  I’ve been craving some seafood from the Crab Pot all week, and I find 6 willing participants to join me.  I’m also planning a return trip to the Seattle Underground Tour.  For the record, I love having this community zone, and I particularly dig it right here on the walkway – it’s a visible, high-traffic location, and it’s been full of people every time I’ve come by.

06:30pm: An all-out assault on the crab population has commenced.  And by the way, our group of 6 became 12, which became 15, which became 20-something (and still growing).  Our poor waiter is frazzled.  I told him we’ll be back next October, in case he wants to take that week off.

08:00pm: Seattle Underground tour.  I did this a couple of years ago with a smaller group, and it was a lot of fun.  This year, we’ve got 15 or so PASS Summit attendees here, and we get a really good tour guide this time.

09:45pm: My friend from down under, Rob Farley, turns 40 today, and about a hundred of us stop by his birthday party.

10:30pm: This may be the earliest I have ever retired on the last night of any summit.  I’m just exhausted.  I do some minimal packing and prep for tomorrow morning and crash for the evening.

Apart from any last-minute goodbyes at the airport tomorrow, the SQL PASS 2014 Summit is over for me.  Without a doubt, this was the best, most fulfilling, most thoroughly exhausting summit experience I’ve had in my seven years of attendance.  I’m sad to be leaving, but couldn’t feel more satisfied.

SQL PASS Summit 2014 Diary – Days 3-5

The last two days have been an absolute blur.  As I first posted this week, I had planned to blog daily about my goings-on, but I’ve been running nonstop – all good things, fortunately – but it interrupted my plans to blog every day.

Day 3: Tuesday

08:00am: Headed back to the MVP Summit.  Rain again.

06:00pm: Back in Seattle, and off to the BI Over Beers event with my friends from Varigence.

10:30pm: More karaoke at the event sponsored by Denny Cherry and SIOS.  Lots of fun, but it’s really loud and crowded (or perhaps I’m getting old).  I take some pictures, including a few incriminating mechanical bull snapshots, and head back to the hotel.  Surprisingly in bed by midnight again.

Day 4: Wednesday

08:15am: Today is the first full day of the SQL PASS Summit.  It’s keynote time.  Usually the first-day keynote is marketing heavy, and that is the case for today.  There are several interesting demos, including one from Pier 1 in which they are mapping store traffic areas using the Kinect (yes, the Xbox gaming interface) to detect which areas of their stores are most heavily trafficked.

10:30am: I’m sitting in Ryan Adams’ session on AlwaysOn.  This is a bit outside my area of expertise, so it’s good to see some of this administrative stuff.

11:45am: Lunch with the Microsoft executives.  I love how open they are to chatting with community influencers.

12:30pm: Hanging out at the Linchpin People booth in the exhibitor area. Lots of great conversations with friends and passersby.

06:00pm: It’s time for the exhibitor reception.  We are getting lots of folks at the Linchpin booth!  Looking forward to seeing these folks at our party later tonight.

08:00pm: Linchpin People party at the Rhein Haus.  We’re hanging out with about 150 of our closest friends, learning to play bocce ball.  It was great seeing some folks I know and meeting some new ones.

12:15am: Back at the room, exhausted.

Day 5: Thursday

08:00am: Arrived in the keynote room a bit early.  A much smaller crowd than yesterday. Sadly, I fear that the marketing presentation yesterday may have scared away some of the attendees, but today is likely the content they really came to hear.

10:00am: Dr. Rimma Nehme is one of the best speakers I’ve heard at a PASS Summit, ever.  She’s done a great job of laying out the cloud offerings and how they might fit into a larger data ecosystem.

10:30am: Hanging out at the Linchpin booth, thinking through my session for this afternoon.

11:15am: I found the speaker lounge (not to be confused with the speaker ready room).  We have an actual fire pit in here.  And snacks.

01:30pm: My presentation entitled “Building Bullet-Resistant SSIS Packages”. Wow, what a crowd!  Rough guess, 325 people including those sitting and standing in the back of the room.  Thanks everyone for coming and for staying awake and engaged (which I know can be difficult right after lunch).

02:45pm: And my official work at the PASS Summit is officially done.  Now time to enjoy some sessions and networking.  First thing: Meet up with my friend Phil to talk through a Biml problem he’s having.

04:45pm: On my way to a session and I run into one of the guys from Pluralsight.  They’ve been doing some cool things lately, and I’m considering partnering with them to do some online content.

06:00pm: I missed lunch today due to my presentation. Grabbing a quick bite with my friend Rafael Salas.

07:00pm: Stopping by the attendee party at the EMP Museum.  I was here two years ago for that year’s attendee party, but I ended up chatting with a bunch of folks and never even made it past the lobby.  This year I took a little time to explore the museum.  I particularly enjoyed the shrine to Nirvana.

09:30pm: A half-hour of actual downtime in my hotel room, before heading out to meet some friends.

12:45am: Exhausted but happy.  What a great day.

Tomorrow is the last day of the summit.  Normally, I’m ready for some quiet me-time by the end of the week, but this year I’m very much looking forward to networking as much as possible before I leave on Saturday.

PASS Summit 2014 Diary – Day 2

It’s another beautiful day in Seattle. And by beautiful, I mean overcast and threatening rain.  Today will be mostly consumed by the MVP Summit, with some fun stuff scheduled for later in the day.  At 6pm today, I’m headed back to the Tap House for BI Over Beers, a gathering of business intelligence professionals sponsored by Varigence.

08:00am: On the bus to the MVP Summit.

08:30am: Hey look, it’s raining.

08:40am: Hey look, I’m standing in the rain.

05:30pm: MVP Summit finished up for the day, and we’re headed back to Seattle for several events tonight.  Lots of traffic so it’s a slow ride, but I’m getting to catch up with Aaron Nelson.

06:15pm: I’m attending the BI Over Beers event hosted by my friends at Varigence.  We’re in the large billiard room at the Tap House, with a good crowd of 100 or so folks.

08:00pm: Stopping by the Yardhouse to attend the networking event organized by Steve Jones and Andy Warren. Not a huge group here, but they had to change locations at the last minute due to some logistical issues.  Also learned that Andy Warren had to skip the summit this year, so I’ll definitely miss seeing him this week.

09:30pm: A small group of us have arranged to meet up at the Monkey Pub in Seattle.  It’s a relatively small place, with just a few other locals in addition to the 15 or so SQL folks in our group.  Delight of the evening: Brian and Penny Moran entertaining us with Jimmy Buffett songs.  Twitter reports that there is another SQL Karaoke event over at Bush Garden, though I have to admit that I’m enjoying this low-key group tonight.

12:30am: The party breaks up and everyone heads back to their hotels.  Most of us have early activities in the morning, so it’s a race to squeeze in as much sleep as possible.  (And thanks to Argenis Fernandez for the ride back to the hotel.)

Tomorrow is my last day at the MVP Summit this week, with the rest of the week reserved for PASS Summit activities.  Tomorrow night’s big event is the PASS welcome reception, followed by the karaoke event (yes, another one) organized by Denny Cherry.

PASS Summit 2014 Diary – Day 1

Today is the first day of official activities for the week.  The PASS Summit hasn’t yet started, but I’ll be spending the day at the MVP Summit, surrounded by a few hundred people much smarter than I am.  The details of the MVP Summit are all covered under NDA, so today’s update will be brief.

06:00am: I woke up and saw that the clock read 7:00am.  After a brief moment of panic, I realized that I hadn’t slept through my alarm, but had simply neglected to change the alarm clock in the hotel room.  For once, I’m happy about the whole DST time change.

07:15am: Breakfast at the top of the Hilton.  There’s a great view from the 29th floor, with a panoramic look over the Sound (and the picture to the right doesn’t really do it justice).

08:00am: Headed to the MVP summit.

09:00pm: After the MVP Summit activities, I’m back in Seattle to drop my stuff off and meet up with some folks.  I found my friend Keith Tate wandering around in the Sheraton lobby, and we all wandered over to Tap House.  There’s already a sizeable group of folks here.

09:45pm: I still suck at playing pool.

10:15pm: Found my friend and fellow Texan Jim Murphy.  He tells me about how his business is going while I make fun of his oversized fruity drink.  I also got to catch up with Paul Waters, Phil Helmer, and others.

11:30pm: For the second day in a row, and against all odds, I’m headed back to the hotel before midnight.  After a quick stop at the front desk – I left my card key in the room and had to get a replacement.

Tomorrow is another long day, though I expect to be back in Seattle earlier in the day.  I’m looking forward to catching up with folks at two different events (at the same time, of course) tomorrow, followed by a smaller gathering with a few friends.  More tomorrow….