On Perspective

Perspective can make or break a career.  Maintaining a proper perspective is very often the differentiating factor between a good technologist and an incredible one.

In my 15-ish years in IT, I’ve said a lot of dumb things.  Many of them I’ve forgotten, but I can’t shake the memory of one particular phrase I uttered more than a few times back in the early days of my career.  Even today, it still embarrasses me that I ever had the mindset to say these words about other people:

“… those stupid end users …”

Yep. I said that.  Why would I say those words?  Sure, there was some emotion and frustration involved, but even more than that, my perspective was all wrong.  Being new to the IT field, my expectation was that it was our job as technical professionals to dictate standards and practices, and that the end users we supported would modify their business processes and their workflow to match those standards.  I looked at most business problems as the fault of the users for not following our standards, or not using their software tools properly.  Looking back on 15 years of experience, it seems silly that I would have ever held that position.  But in my (at the time) limited field of vision, this was my expectation.

Fast-forward a few years.  With a little experience under my belt, my perspective had changed.  Through a few hard lessons, I had evolved to the point that I fully understood that my principal function as a technical professional was to serve the business, not the other way around.  My attitude significantly improved, and I became a more proficient technical professional.  But my perspective still had one significant shortcoming: I simply solved the business problems that were brought to my attention.  Sure, I had my technical resources in order – my backups were always done and tested, my code used common best practices and was checked into source control, and I did my best to get out in front of performance issues before they ballooned into bigger problems.  But I still considered business problems to be outside my purview until my assistance was specifically requested.  My perspective was limited in that I was still trying to be a technical professional, rather than focusing on being a business professional solving technical problems.

I still remember when it finally clicked for me.  I’d been working in the industry for about four years, and after multiple rounds of meetings to solve a particular business problem, it hit me: my perspective is all wrong.  I’ve been looking at this from the perspective of “Tell me your problem and I’ll fix it,” when the dialog should have been “Let me understand what you do and what you need so we can address our problems.”  That’s right – it’s not that those end users have business problems.  It’s that we have business problems and we need to solve them.  There’s nothing more comforting for a nontechnical person to hear, and rarely a statement more empowering for a technical person to make, than a sincere expression of “I feel your pain. Let’s solve this together.”  This is true whether you’re tasked with front-line technical support, you’re working deep in a server room, or you’re a senior consultant in the field.

I believe a person can be a moderately successful technologist by focusing strictly on understanding and solving technical problems.  Where one becomes a rockstar problem solver is the point at which he or she has the experience and maturity to see things through a perspective other than his or her own, while understanding and feeling the pain points of others.

SSIS Parent-Child Architecture in Catalog Deployment Mode

This is the third in a series of posts about SSIS parent-child architecture.  You can find the index page here.

In my previous posts on SSIS parent-child package architecture, I described the benefits of the parent-child package design pattern and demonstrated the implementation of such a pattern in package deployment mode.  In this post, I will demonstrate the parent-child pattern for implementations using catalog deployment mode, which is the default design in SSIS 2012 and 2014.

Catalog deployment mode in SSIS

If you’re reading this post and find yourself asking, “What is catalog deployment mode?”, here it is in a nutshell: Starting with SQL Server 2012, there were significant changes to the architecture of SSIS, most notably the move to a deployment/storage structure called catalog deployment model (which is also frequently referred to as the project deployment model).  In this model, SSIS code is more project-centric than package-centric; packages are deployed as an entire project instead of individually (though each package can still be executed independently).  Catalog deployment mode in SSIS also brought the addition of parameters, which can be used to externally set runtime values for package executions, as well as project connections, which can be used to easily share connection settings across all packages in a project.  Many other changes were introduced, including a simpler logging model and a dedicated SSIS database.

Among the many changes brought about by the catalog deployment model, the one that had the most impact on the parent-child design pattern is the addition of parameters.  In older versions of SSIS, it was possible to pass runtime values to a package, but the process was clunky at best.  When using SSIS 2012 or 2014 in catalog deployment mode, setting runtime values for a child package (or a standalone package, for that matter) is much easier and more straightforward than performing the same task in previous versions.

It is also worth noting that you don’t have to use the catalog deployment model in recent versions of SSIS.  Although catalog deployment model is the default setting in SSIS 2012 and 2014, you can set your project to use the package deployment model.  You’ll lose many of the new features (including parameterization and simplified logging) by choosing package deployment model, but this might be practical if your organization has made a significant investment in SSIS architecture that would be broken by moving to catalog deployment model.

Parent-child package execution

At the heart of parent-child architecture is the collection of parameters.  In catalog deployment mode, we can set up parameters at the package level or at the project level.  For values that would affect just a single package, using a package parameter would make the most sense.  However, if a value might need to be shared among several (or all) packages in a particular project, a project parameter would allow you to create the parameter once for the entire project rather than one for each package.

Execute package task

When executing a child package, the simplest method is still the execute package task.  In SSIS 2012, the execute package task gained a dropdown list (shown below, on the Package tab) that allows the SSIS developer to specify the target package.

[Image: Execute Package Task editor, Package tab]

There are a few limitations with this approach.  Most notably, this dropdown list selection only works when calling a package that exists in the same project.  You’ll notice that the selection above the package name, labeled ReferenceType, is set to Project Reference.  Though you can change ReferenceType to use a project located elsewhere, oddly enough you can’t use it to execute a package in a different project deployed to the SSIS catalog (you can read more about that limitation, as well as upvote the issue on Connect, here).  I’ll discuss a couple of workarounds for this momentarily.

Clicking over to the Parameter bindings tab, we can specify which values to pass into the child package.  For each child package parameter, we specify exactly one value to be supplied at runtime.  Remember, like the dropdown list for package selection, this piece only works when executing packages in the same project (using the Project Reference setting on the ReferenceType from the Package tab).

[Image: Execute Package Task editor, Parameter bindings tab]

Keep in mind that you have to use a parameter or variable (again, choosing from the dropdown list) to map to the child parameter.  You can’t simply type a static value into the Binding parameter or variable field.  Also, remember that you will only see package parameters (not project parameters) in the list of child package parameters that may be mapped.  This is by design – it wouldn’t make sense to map a value to a project parameter when executing a package in the same project, since that child package would already implicitly have access to all of the project parameters.

Another distinct advantage of using the execute package task is the process for handling errors in the child package.  In the event that a child package fails, the execute package task will fail as well.  This is a good thing, because if the child package does fail, in almost all cases we would want the parent package to fail to prevent dependent tasks from improperly executing.  Even better, error messages from the child package would be bubbled up to the parent package, allowing you to collect error messages from all child packages within the parent package.  Consolidated error handling and logging means less development time upfront, and less maintenance effort down the road.

If you have the option of using the execute package task for starting packages stored in the SSIS catalog, I recommend sticking with this method.

Execute SQL task

Another method for executing one package from another is to use the T-SQL stored procedures in the SSIS catalog itself.  Executing a package in the SSIS catalog via T-SQL is actually a three-step process:

  • Create the execution entry in the catalog
  • Add in any parameter values
  • Execute the package

Catalog package execution via T-SQL, another new addition in SSIS 2012, allows us to overcome the limitation in the execute package task I mentioned earlier.  Using a T-SQL command (via the execute SQL task in SSIS), we can execute a package in any project.  It’s certainly more difficult to do so, because we lose the convenience of having the list of available packages and parameters exposed in a dropdown list in the GUI.  Here there be typing.  However, being able to execute packages in other projects – and for that matter, on other SSIS catalog servers entirely – somewhat makes up for the manual nature of this method.

To execute a child package using this method, you’d create an execute SQL task and drop in the appropriate commands, which might look something like the following:

DECLARE @execution_id BIGINT

EXEC [SSISDB].[catalog].[create_execution] @package_name = N'ChildPkgRemote.dtsx'
	,@execution_id = @execution_id OUTPUT
	,@folder_name = N'SSIS Parent-Child'
	,@project_name = N'SSIS Parent-Child Catalog Deployment - Child'
	,@use32bitruntime = False
	,@reference_id = NULL

-- Set user parameter value for filename
DECLARE @filename SQL_VARIANT = N'E:\Dropbox\Presentations\_sampleData\USA_small1.txt'

EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id
	,@object_type = 30
	,@parameter_name = N'pSourceFileName'
	,@parameter_value = @filename

-- Set execution parameter for logging level
DECLARE @loggingLevel SMALLINT = 1

EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id
	,@object_type = 50
	,@parameter_name = N'LOGGING_LEVEL'
	,@parameter_value = @loggingLevel

-- Set execution parameter for synchronized
DECLARE @synchronous SMALLINT = 1

EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id
	,@object_type = 50
	,@parameter_name = N'SYNCHRONIZED'
	,@parameter_value = @synchronous

-- Now execute the package
EXEC [SSISDB].[catalog].[start_execution] @execution_id

-- Show status
SELECT [status] AS [execution_status]
FROM [SSISDB].[catalog].[executions]
WHERE [execution_id] = @execution_id

There are two things in particular I want to point out here.  First of all, by default, when executing a package using T-SQL, the package is started asynchronously.  This means that when you call the stored procedure [SSISDB].[catalog].[start_execution], the T-SQL command will return immediately (assuming you passed in a valid package name and parameters), giving no indication of either success or failure.  That’s why, in this example, I’m setting the execution parameter named SYNCHRONIZED to force the T-SQL command to wait until the package has completed execution before returning.  (Note: For additional information about execution parameters, check out this post by Phil Brammer.)  Second, regardless of whether you set the SYNCHRONIZED parameter, the T-SQL command will not return an error even if the package fails.  I’ve added the last query in this example, which returns the execution status.  I can use this to check the execution status of the child package before starting any subsequent dependent tasks.
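If you would rather have the execute SQL task itself fail when the child package fails (instead of branching on a captured status value), one option is to raise an error in the same T-SQL batch.  This is just a sketch of that alternative, not part of the pattern above; it assumes the SYNCHRONIZED execution parameter was set so that the package has finished by the time start_execution returns, and it relies on the documented status code 7 (succeeded) in [SSISDB].[catalog].[executions]:

```sql
-- Sketch: fail the calling batch if the child package did not succeed.
-- Assumes SYNCHRONIZED = 1, so the execution is complete at this point.
DECLARE @status INT;

SELECT @status = [status]
FROM [SSISDB].[catalog].[executions]
WHERE [execution_id] = @execution_id;

IF @status <> 7 -- 7 = succeeded
	RAISERROR('Child package execution did not succeed (status %d).', 16, 1, @status);
```

With this in place, the execute SQL task fails on a child package failure, so downstream precedence constraints behave much like they would with the execute package task.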

[Image: Execute SQL Task editor, General tab]

As shown, I’ve set the SQLStatement value to the T-SQL code block I listed above, and set the ResultSet value to Single row, the latter of which will allow me to capture the output status of the executed package.  Below, I’ve set that execution status value to a new package variable.

[Image: Execute SQL Task editor, Result Set tab]

To round out this design pattern, I set up my control flow as shown below.  Using precedence constraints coupled with SSIS expressions, I execute the package and then check the return value: a successful catalog execution returns a value of 7, and my parent package handles any return value other than a 7 as a failure.
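Assuming the captured status was stored in a package variable named User::ExecutionStatus (that variable name is my own; use whatever you mapped the result set to), the expressions on the two precedence constraints would look something like this:

```
Success path:  @[User::ExecutionStatus] == 7
Failure path:  @[User::ExecutionStatus] != 7
```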

[Image: Control flow using precedence constraints to check the child package execution status]

You may also have to give special consideration for errors in child packages when using T-SQL for package execution – especially when running packages interactively in the BIDS/SSDT designer.  Since the T-SQL command does not report the failure of a package by default, it also doesn’t “bubble up” errors in the traditional SSIS manner.  Therefore, you’ll need to rely on capturing any child package error messages from the SSIS catalog logging tables, especially when developing and testing packages in Visual Studio.
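As a sketch of what pulling those messages might look like, the following query reads error messages for a given execution from the catalog’s logging views (message_type 120 is the documented code for error messages; substitute your own execution ID for the variable shown here):

```sql
-- Retrieve error messages logged for a specific catalog execution
SELECT [message_time]
	,[message_source_name]
	,[message]
FROM [SSISDB].[catalog].[event_messages]
WHERE [operation_id] = @execution_id -- for executions, operation_id = execution_id
	AND [message_type] = 120 -- 120 = error
ORDER BY [message_time];
```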

Script task

It is also possible to execute SSIS packages programmatically from the script task.  This method is significantly more complicated, but also offers a great deal of flexibility.  A fellow SQL Server MVP and acquaintance of mine, Joost van Rossum, has a detailed blog post on how to programmatically execute SSIS packages from the SSIS script task.  I won’t restate what he has already covered in his comprehensive and well-written post on the topic, but if you need to programmatically fire child packages in the SSIS catalog, check out his write-up.

Conclusion

In this post, I’ve covered the essentials of executing child packages in the SSIS catalog, including provisioning for errors in the child package.  Though it can be quirky, especially when executing child packages in a different project or on a different server, there are several different ways to address this parent-child design pattern.

In my next post, I’ll talk a little more about passing values in a parent-child package, and will illustrate how to pass values from child packages back to the parent package.

How to burn down your house while frying a turkey

It’s an odd query, yes, but in preparation to write this post I actually typed the above phrase into my browser.  No, I’m certainly not looking to burn down my house.  In fact, wait here while I clear my search history, just in case.

For the sake of argument, let’s say you’re planning to fry a turkey over the upcoming Thanksgiving holiday.  Think about the research you’d do: What type of equipment will I need? How much oil should I buy?  How big should the turkey be?  How long should I cook it? All valid queries that should be answered before taking on the task of dropping a frozen bird into boiling oil.  But are those the only questions you should ask?  Talk to anyone about the dangers of frying a turkey, even those who have never done it, and they’ll tell stories about a brother-in-law, or a coworker, or some guy on YouTube who set ablaze the family homestead in a misguided effort to cook Thanksgiving dinner.

Statistically, it may seem like a silly question to ask.  What are the odds that frying this turkey will set my house on fire?  All in all, probably pretty low.  But it does happen – and if it does, the consequences can be disastrous.  So, when taking on this task – especially for the first time – asking questions (What factors make it more likely that this turkey fry will turn into a huge bonfire?) that can help reduce the risk seems like a good investment.

Be a data pessimist

If you’ve met me in person, you probably remember me as a glass-half-full guy.  But when it comes to data management, I’m a full-on pessimist.  Any data I get is crap until proven otherwise.  Every data load process will fail at some point.  And, given enough time and iterations, even a simple data movement operation can take down an entire organization.  It’s the turkey burning down the house.  Yes, the odds of a single data process wreaking havoc on the organization are very, very low, but the impact if realized is very, very high.  High enough that it’s worth asking those questions.  What part of this process could wreck our financial reporting?  What factors make this more likely to happen?  How can we mitigate those factors?

For the record, I don’t suggest that we all wear tin-foil hats and prepare for space aliens to corrupt our data.  However, there are lots of unlikely-yet-realistic scenarios in almost any process.  Think about your most rock-solid data operation right now.  What potential edge cases could harm your data infrastructure?  Sometimes it’s the things that might seem harmless:

  • Is it possible that we could run two separate loads of the exact same data at the same time?
  • What if a process extracts data from a file that is still being written to (by a separate process)?
  • What if a well-meaning staff member loads a properly formatted but inaccurate data file to the source directory?
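For the first question above – two simultaneous loads of the same data – one inexpensive mitigation is to wrap the load in an application lock.  This is only a sketch; the resource name 'DailySalesLoad' is a made-up example, and the load logic itself is elided:

```sql
-- Sketch: prevent two concurrent loads of the same data set via an app lock.
DECLARE @lockResult INT;

EXEC @lockResult = sp_getapplock
	@Resource = N'DailySalesLoad',
	@LockMode = N'Exclusive',
	@LockOwner = N'Session',
	@LockTimeout = 0; -- fail immediately if another load holds the lock

IF @lockResult < 0
	RAISERROR('Another load of this data set is already running.', 16, 1);
ELSE
BEGIN
	-- ... perform the load here ...
	EXEC sp_releaseapplock @Resource = N'DailySalesLoad', @LockOwner = N'Session';
END
```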

Others, while even less likely, could lead to a real mess:

  • Is it possible for my data load process to be purposefully executed with inaccurate or corrupt data?
  • Could some situation exist within my ETL process that would allow essential rows of data to simply be lost, silently and without errors?
  • Do I have any processes that could make unaudited changes to my financial records?

Each potential scenario would have to be evaluated to determine the cost to prevent the issue versus the likelihood of realization and the impact if realized.

Fortunately, most of the data problems we deal with are not as catastrophic as igniting one’s home with a fried turkey on Thanksgiving.  However, as data professionals, our first responsibility is to protect the data.  We must always pay attention to data risk to ensure that we don’t allow data issues to take the house down.

Join me in DC for a full day of Biml

I’m excited to announce that my Linchpin People colleague Reeves Smith and I will be delivering a full-day Biml preconference seminar the day before the upcoming SQL Saturday in Washington, DC.  This seminar, entitled “Getting Started with Biml”, will introduce attendees to the awesomeness of Business Intelligence Markup Language (Biml).

In this course, we’ll cover the basics of Biml syntax, show how to use BimlScript to make package creation even more dynamic, and demonstrate lots of design patterns through numerous demos.

Registration is now open for this course.  Lunch will be provided.  We hope to see you there!

SQL PASS 2014 Summit Diary – Day 6

Today is the last official day of the PASS Summit.  The sessions will wrap up at the end of the day, and we’ll all go our separate ways and resume our semi-normal lives.  Having delivered my presentation yesterday, my official PASS duties are over, and I’m planning to spend the day taking in a few sessions and networking.

08:15am: No keynote today, so the sessions are starting first thing in the morning.  I’m sitting in on a Power BI session delivered by my friend Adam Saxton.  He’s an excellent and knowledgeable presenter, and I always enjoy attending his presentations.  Power BI is one piece of the Microsoft BI stack that I have largely ignored due to the fact that it runs exclusively in the cloud.  However, I’d like to get up to speed on the cloud BI offerings – even though the on-premises solutions will continue to represent the overwhelming majority of business intelligence initiatives (in terms of data volume as well as Microsoft revenue), I expect to be fluent in all of the Microsoft BI offerings, whether “earthed” or cloud-based.

11:00am: After stopping by the Linchpin booth again, I sit down in the PASS Community Zone.  And by sit down, I mean that I collapse, exhausted, into one of the bean bags.  I spent some time chatting with Pat Wright, Doug Purnell, and others, and met up with Julie Smith and Brian Davis to talk about a project we’re working on together (more on that later).

11:45am: Lunch.  Today is the Birds of a Feather lunch, in which each table is assigned a particular SQL Server-related topic for discussion.  I headed over with my Colorado buddies Russ Thomas and Matt Scardino to the DQS/MDS table, at which only two other folks were sitting (one of whom worked for Microsoft).  We had a nice chat about DQS and data quality in general.  I have to admit a bit of frustration with the lack of updates in DQS in the last release of SQL Server.  I still firmly believe that the core of DQS is solid and would be heavily used if only the deficiencies in the interface (or the absence of a publicly documented API) were addressed.

02:45pm: I don’t know why, but I want to take a certification exam.  The PASS Summit organizers have arranged for an onsite testing center, and they are offering half price for exams this week for attendees of the summit.  I registered for the 70-463 DW exam, and after sweating through the MDS and columnstore questions, I squeaked through the exam with a passing score.  I’m not a huge advocate for Microsoft certification exams – I find that many of the questions asked are not relevant in real-world scenarios, they are too easy to cheat on, and I’m still very skeptical of Microsoft’s commitment to the education track as a whole after they abruptly and mercilessly killed the MCM program (via email, under cover of darkness on a holiday weekend, no less) – so I’m likely not jumping back into a full-blown pursuit of Microsoft certification any time soon.  Still, it was somewhat satisfying to take and pass the test without prep.

04:00pm: Back in the community zone.  Lots of folks saying their good-byes, others who are staying the night are making plans for later in the evening.  For me?  I’ve been craving some seafood from the Crab Pot all week, and I find 6 willing participants to join me.  I’m also planning a return trip to the Seattle Underground Tour.  For the record, I love having this community zone, and I particularly dig it right here on the walkway – it’s a visible, high-traffic location, and it’s been full of people every time I’ve come by.

06:30pm: An all-out assault on the crab population has commenced.  And by the way, our group of 6 became 12, which became 15, which became 20-something (and still growing).  Our poor waiter is frazzled.  I told him we’ll be back next October, in case he wants to take that week off.

08:00pm: Seattle Underground tour.  I did this a couple of years ago with a smaller group, and it was a lot of fun.  This year, we’ve got 15 or so PASS Summit attendees here, and we get a really good tour guide this time.

09:45pm: My friend from down under, Rob Farley, turns 40 today, and about a hundred of us stop by his birthday party.

10:30pm: This may be the earliest I have ever retired on the last night of any summit.  I’m just exhausted.  I do some minimal packing and prep for tomorrow morning and crash for the evening.

Apart from any last-minute goodbyes at the airport tomorrow, the SQL PASS 2014 Summit is over for me.  Without a doubt, this was the best, most fulfilling, most thoroughly exhausting summit experience I’ve had in my seven years of attendance.  I’m sad to be leaving, but couldn’t feel more satisfied.

SQL PASS Summit 2014 Diary – Days 3-5

The last few days have been an absolute blur.  As I posted earlier this week, I had planned to blog daily about my goings-on, but I’ve been running nonstop – all good things, fortunately – and that interrupted my plans to blog every day.

Day 3: Tuesday

08:00am: Headed back to the MVP Summit.  Rain again.

06:00pm: Back in Seattle, and off to the BI Over Beers event with my friends from Varigence.

10:30pm: More karaoke at the event sponsored by Denny Cherry and SIOS.  Lots of fun, but it’s really loud and crowded (or perhaps I’m getting old).  I take some pictures, including a few incriminating mechanical bull snapshots, and head back to the hotel.  Surprisingly in bed by midnight again.

Day 4: Wednesday

08:15am: Today is the first full day of the SQL PASS Summit.  It’s keynote time.  Usually the first-day keynote is marketing heavy, and that is the case for today.  There are several interesting demos, including one from Pier 1 in which they are using the Kinect (yes, the Xbox gaming interface) to detect which areas of their stores are most heavily trafficked.

10:30am: I’m sitting in Ryan Adams’ session on AlwaysOn.  This is a bit outside my area of expertise, so it’s good to see some of this administrative stuff.

11:45am: Lunch with the Microsoft executives.  I love how open they are to chatting with community influencers.

12:30pm: Hanging out at the Linchpin People booth in the exhibitor area. Lots of great conversations with friends and passersby.

06:00pm: It’s time for the exhibitor reception.  We are getting lots of folks at the Linchpin booth!  Looking forward to seeing these folks at our party later tonight.

08:00pm: Linchpin People party at the Rhein Haus.  We’re hanging out with about 150 of our closest friends, learning to play bocce ball.  It was great seeing some folks I know and meeting some new ones.

12:15am: Back at the room, exhausted.

Day 5: Thursday

08:00am: Arrived in the keynote room a bit early.  A much smaller crowd than yesterday. Sadly, I fear that the marketing presentation yesterday may have scared away some of the attendees, but today is likely the content they really came to hear.

10:00am: Dr. Rimma Nehme is one of the best speakers I’ve heard at a PASS Summit, ever.  She’s done a great job of laying out the cloud offerings and how they might fit into a larger data ecosystem.

10:30am: Hanging out at the Linchpin booth, thinking through my session for this afternoon.

11:15am: I found the speaker lounge (not to be confused with the speaker ready room).  We have an actual fire pit in here.  And snacks.

01:30pm: My presentation, entitled “Building Bullet-Resistant SSIS Packages”. Wow, what a crowd!  Rough guess, 325 people including those sitting and standing in the back of the room.  Thanks everyone for coming and for staying awake and engaged (which I know can be difficult right after lunch).

02:45pm: And my official work at the PASS Summit is officially done.  Now time to enjoy some sessions and networking.  First thing: Meet up with my friend Phil to talk through a Biml problem he’s having.

04:45pm: On my way to a session and I run into one of the guys from Pluralsight.  They’ve been doing some cool things lately, and I’m considering partnering with them to do some online content.

06:00pm: I missed lunch today due to my presentation. Grabbing a quick bite with my friend Rafael Salas.

07:00pm: Stopping by the attendee party at the EMP Museum.  I was here two years ago for that year’s attendee party, but I ended up chatting with a bunch of folks and never even made it past the lobby.  This year I took a little time to explore the museum.  I particularly enjoyed the shrine to Nirvana.

09:30pm: A half-hour of actual downtime in my hotel room, before heading out to meet some friends.

12:45am: Exhausted but happy.  What a great day.

Tomorrow is the last day of the summit.  Normally, I’m ready for some quiet me-time by the end of the week, but this year I’m very much looking forward to networking as much as possible before I leave on Saturday.

PASS Summit 2014 Diary – Day 2

It’s another beautiful day in Seattle. And by beautiful, I mean overcast and threatening rain.  Today will be mostly consumed by the MVP Summit, with some fun stuff scheduled for later in the day.  At 6pm today, I’m headed back to the Tap House for BI Over Beers, a gathering of business intelligence professionals sponsored by Varigence.

08:00am: On the bus to the MVP Summit.

08:30am: Hey look, it’s raining.

08:40am: Hey look, I’m standing in the rain.

05:30pm: MVP Summit finished up for the day, and we’re headed back to Seattle for several events tonight.  Lots of traffic so it’s a slow ride, but I’m getting to catch up with Aaron Nelson.

06:15pm: I’m attending the BI Over Beers event hosted by my friends at Varigence.  We’re in the large billiard room at the Tap House, with a good crowd of 100 or so folks.

08:00pm: Stopping by the Yardhouse to attend the networking event organized by Steve Jones and Andy Warren. Not a huge group here, but they had to change locations at the last minute due to some logistical issues.  Also learned that Andy Warren has had to skip the summit this year, so I’ll definitely miss seeing him this week.

09:30pm: A small group of us have arranged to meet up at the Monkey Pub in Seattle.  It’s a relatively small place, with just a few other locals in addition to the 15 or so SQL folks in our group.  Delight of the evening: Brian and Penny Moran entertaining us with Jimmy Buffet songs.  Twitter reports that there is another SQL Karaoke event over at Bush Garden, though I have to admit that I’m enjoying this low-key group tonight.

12:30am: The SQL Karaoke party breaks up and everyone heads back to their hotels.  Most of us have early activities in the morning, so it’s a race to squeeze in as much sleep as possible.  (And thanks to Argenis Fernandez for the ride back to the hotel)

Tomorrow is my last day at the MVP Summit this week, with the rest of the week reserved for PASS Summit activities.  Tomorrow night’s big event is the PASS welcome reception, followed by the karaoke event (yes, another one) organized by Denny Cherry.

PASS Summit 2014 Diary – Day 1

Today is the first day of official activities for the week.  The PASS Summit hasn’t yet started, but I’ll be spending the day at the MVP Summit, surrounded by a few hundred people much smarter than I am.  The details of the MVP Summit are all covered under NDA, so today’s update will be brief.

06:00am: I woke up and saw that the clock read 7:00am.  After a brief moment of panic, I realized that I hadn’t slept through my alarm, but had simply neglected to change the alarm clock in the hotel room.  For once, I’m happy about the whole DST time change.

07:15am: Breakfast at the top of the Hilton.  There’s a great view from the 29th floor, with a panoramic look over the sound (and the picture to the right doesn’t really do it justice).

08:00am: Headed to the MVP summit.

09:00pm: After the MVP Summit activities, I’m back in Seattle to drop my stuff off and meet up with some folks.  I found my friend Keith Tate wandering around in the Sheraton lobby, and we headed over to Tap House.  There’s already a sizeable group of folks here.

09:45pm: I still suck at playing pool.

10:15pm: Found my friend and fellow Texan Jim Murphy.  He tells me about how his business is going while I make fun of his oversized fruity drink.  I also got to catch up with Paul Waters, Phil Helmer, and others.

11:30pm: For the second day in a row, and against all odds, I’m headed back to the hotel before midnight, after a quick stop at the front desk (I left my key card in the room and had to get a replacement).

Tomorrow is another long day, though I expect to be back in Seattle earlier in the day.  I’m looking forward to catching up with folks at two different events (at the same time, of course) tomorrow, followed by a smaller gathering with a few friends.  More tomorrow….

PASS Summit 2014 Diary – Day 0

This week, I’m attending two different summit events in the Seattle area.  On Sunday through Tuesday, I’ll be participating in the Microsoft MVP Summit in Bellevue, Washington.  For the remainder of the week, I’ll be attending and presenting at the PASS Summit in Seattle.  Although there is much I won’t be sharing (especially at the beginning of the week), I’m going to blog each day to share my travel tales and any non-NDA information I can.

Today is Day 0, the day on which I’m traveling from Dallas to Seattle and getting checked into my hotel.  There aren’t any official summit events taking place today, but I expect there will be plenty of goings-on to discuss.

06:45am: The day gets started with a text from my friend Ryan Adams.  He’s my ride to the airport, and he’s just pulled up out front.  My body reminds me that it’s quite early.  I just got back yesterday from a trip to visit a client on the east coast, and coupling the time difference with a short night of sleep, I’m still quite groggy.  First stop: coffee.

09:30am: After a brief delay, I’m onboard.  There’s a dog barking. On the plane.  This could be a long flight.

10:30am: The dog finally stopped barking.  I reminded myself that the thing I’m most excited about, even more than the excellent technical sessions, is the fact that I’ll be spending the next week with scores of treasured friends and colleagues.  This inspired me to crank out a new blog post: Five people you should meet at the PASS Summit.

11:30am: Arrived at Sea-Tac and met up with Reeves Smith; he, Ryan, and I took the train to downtown Seattle.

02:00pm: After dropping stuff off at the hotel, we meet up with Carlos Bossy, one of my favorite Denver people, and decide to get some lunch at Lowell’s in the Pike Place Market.

03:00pm: Back at the Sheraton, we bump into Rob Farley and chat with him for a while.  We learn that he has a twin brother, which is both delightful and frightening.

05:30pm: It’s hard to believe that I’ve been in town for six hours and I’m only just now making it to the Tap House.  Hanging out with Mark Vaillancourt, Kerry Tyler, Tamera Clark, Brad Ball, and others.

07:15pm: Lots more SQL folks are arriving at Tap House.  A game of billiards has erupted.  There’s talk of SQL Karaoke later.

09:15pm: And SQL Karaoke has begun.  It’s a crowded house here at Bush Garden, with a big birthday party, a smaller but louder bachelorette party, and various Saturday night people.

10:30pm: Fatigue sets in all of a sudden, and I’m headed back to the room (and for the first time in #sqlkaraoke history, I’m the first person to leave).  This must be the earliest I’ve retired to my room at a summit week since the infamous PASS Summit of 2005.  I’m looking forward to a decent night of sleep and a great day tomorrow at the MVP Summit.

Five people you should meet at the PASS Summit

As I write this, I’m airborne and on my way to Seattle for the summit week (the Microsoft MVP Summit, followed by the PASS Summit).  I was struck with the notion – and not for the first time – that I’m not really looking forward to these events for the technical content as much as I’m looking forward to networking and reconnecting with my fellow SQLFamily members.  If you are planning to be at either or both of these, I strongly encourage you to make it a priority to meet people and get to know them.  This should be at least as important as attending sessions, if not more so.

If all goes well, I’m going to meet up with dozens of people – some of whom I’ll be meeting for the first time.  If you’re new to the SQL community, there may be lots of new names and faces to meet.  If you’re in that group, I want to share with you five folks whom I know that you should make an effort to meet while at the PASS Summit:

Argenis Fernandez: He’s one of my favorite people in the SQL Server community.  He’s an MCM, MVP, and a former Microsoftie, so his depth and breadth of knowledge are clear.  However, he’s also one of the nicest, most interesting folks you’ll meet there.  When you meet him for the first time, don’t be surprised if he wraps you up in a big ol’ bear hug.

Tom LaRock: Tom is the president of the PASS organization, and someone I’m glad to call my friend.  He’s an incredibly smart guy with a talent for getting things done.  But above that, he’s a very approachable, personable guy who really listens when you’re talking.  Tom is a good guy to know for a lot of reasons, and if you introduce yourself to him you’ll be glad you did.

Allen White: Allen is one of the friendliest folks you’ll meet in the SQL community.  He’s also one of the most versatile people in the industry, with a great deal of knowledge of the database engine, business intelligence, Biml, and many other diverse topics.  If you want an honest opinion on something, ask Allen – he’ll give you a polite but fair and accurate assessment.  Allen is also a runner, but you’d better be in shape if you intend to keep up with him.

Stacia Misner: Of the various business intelligence practitioners you should know, Stacia is near the top.  I’ve known her for several years and always enjoy chatting with her.  She’s wicked smart, but goes out of her way to share what she knows.  Meeting Stacia often comes with a bonus, as you may also get to meet Dean Varga, her fiancé and a new member of the SQL community.

Scott Currie: Scott is easily one of the smartest people I’ve ever met.  He’s the CEO of Varigence, the company that makes Biml (and my affection for that tool is well known).  But apart from that, he’s a very insightful guy, one whose opinion I would trust on just about any matter, technical or nontechnical.

By no means should this be considered a comprehensive list of people whom you should meet; winnowing this list to just five people was quite difficult.  These are just five of literally scores of outstanding people in the SQL community who would be happy to say hello to you at the Summit.