Month: February 2014

SSIS goodness… kind of.

Man, it’s been a WHILE since I’ve worked with SSIS packages. Why, I remember when it was part of something called Business Intelligence Development Studio – and before that something else… now it’s part of SQL Server Data Tools, whatever that means. It still looks the same, though – good ol’ SSIS (and DTS before that). And there’s a part of me that enjoys it… most of me, though, just wants to wash my hands of the “plumbing” and get back to REAL programming. Something about writing VB code (yes, I KNOW C# is available – you’re still working in a dated IDE!) just makes me feel icky.

Here’s the process I hacked out. Yes, I’m sure there’s a more elegant way to do it – but I was done in a few hours, and it’s elegant enough. I’m receiving an XML file like this:

This is an XML fragment, not a true XML document – it’s missing a root node. (That is, it should start <Keys><Key>, not just <Key>.) So, if you try to run a data transform directly against this, you’ll end up with no source columns.
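The fix is simple in concept: wrap the fragment in a root element before the XML source ever sees it. Here’s a hedged sketch of the idea in Python (element names invented) – the actual fix in this post is done inside an SSIS Script Task:

```python
import xml.etree.ElementTree as ET

def wrap_fragment(fragment: str, root_name: str = "Keys") -> str:
    """Wrap an XML fragment (a run of sibling elements) in a root node
    so it parses as a well-formed document."""
    return "<{0}>{1}</{0}>".format(root_name, fragment)

fragment = "<Key><ID>1</ID></Key><Key><ID>2</ID></Key>"

# The bare fragment fails to parse -- two root-level siblings...
try:
    ET.fromstring(fragment)
    parsed = True
except ET.ParseError:
    parsed = False

# ...but the wrapped version parses fine, with a single root and two children.
doc = ET.fromstring(wrap_fragment(fragment))
print(parsed, doc.tag, len(doc))
```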

Create an SSIS package with the following variables:

So, I added a Script Task with the following variables passed in as R/W:

And the following script:

void Main()
{
    String ErrInfo = "";
    String FilePath = Dts.Variables["User::FilePath"].Value.ToString() + Dts.Variables["User::FileName"].Value.ToString();
    //MessageBox.Show("Filename: " + FilePath);

    try
    {
        String FileContent; //Variable to store file contents
        FileContent = ReadFile(FilePath, ref ErrInfo);
        if (ErrInfo.Length > 0)
        {
            Dts.Log("Error while reading File " + FilePath, 0, null);
            Dts.Log(ErrInfo, 0, null);
            Dts.TaskResult = (int)ScriptResults.Failure;
            return;
        }

        //Find and Replace --> modify WHERE clause
        //(the original search/replace strings didn't survive this post; placeholders below)
        FileContent = FileContent.Replace("SEARCH-TEXT-1", "REPLACEMENT-1");
        FileContent = FileContent.Replace("SEARCH-TEXT-2", "REPLACEMENT-2");

        Dts.Variables["User::FileContent"].Value = FileContent;

        String ArchivePath = Dts.Variables["User::ArchivePath"].Value.ToString() + Dts.Variables["User::FileName"].Value.ToString();
        String ProcessPath = Dts.Variables["User::ProcessPath"].Value.ToString() + Dts.Variables["User::FileName"].Value.ToString();

        //Write the contents back to file, clearing out any earlier copy first
        if (File.Exists(ProcessPath))
        {
            File.Delete(ProcessPath);
        }
        WriteToFile(ProcessPath, FileContent, ref ErrInfo);
        if (ErrInfo.Length > 0)
        {
            Dts.Log("Error while writing File " + ProcessPath, 0, null);
            Dts.Log(ErrInfo, 0, null);
            Dts.TaskResult = (int)ScriptResults.Failure;
            return;
        }

        //and move the orig file to the archive folder
        if (File.Exists(ArchivePath))
        {
            File.Delete(ArchivePath);
        }
        File.Move(FilePath, ArchivePath);

        Dts.TaskResult = (int)ScriptResults.Success;
    }
    catch (Exception e)
    {
        Dts.Log(e.Message, 0, null);
        Dts.TaskResult = (int)ScriptResults.Failure;
    }
}

//ErrInfo is passed by ref so the error checks in Main() actually see it
String ReadFile(String FilePath, ref String ErrInfo)
{
    String strContents = "";
    try
    {
        using (StreamReader sReader = File.OpenText(FilePath))
        {
            strContents = sReader.ReadToEnd();
        }
    }
    catch (Exception e)
    {
        ErrInfo = e.Message;
    }
    return strContents;
}

void WriteToFile(String FilePath, String strContents, ref String ErrInfo)
{
    try
    {
        using (StreamWriter sWriter = new StreamWriter(FilePath))
        {
            sWriter.Write(strContents);
        }
    }
    catch (Exception e)
    {
        ErrInfo = e.Message;
    }
}

At this point you’ve kicked out a modified version of the XML fragments – only now they’re readable – to a subfolder called Processed. Add a new for-each loop over this folder, and set up your XML source on it:

Above, I generated the XSD file from a generated XML file and copied it out to a new folder called Schemas. (Yes, it’s great to run this first using hardcoded values for the XML file, versus a for-each loop, to reduce complexity.) Clicking on Columns then gives us a view of the sequence columns, just as if this were a familiar SQL-to-SQL operation.


Then you run a File System task to delete the contents of the Processed folder – you’re done with them now – and fire off a sproc to add any new records from your Raw table to a more formatted, validated regular table in SQL. Voila!


Monitoring your SQL

I’ve seen situations where monitoring SQL falls into a kind of black hole. The DBAs feel (somehow) that this is beneath their notice; they’re focused on strategic issues or otherwise busy – maybe riding horses and pointing majestically off into the distance. And developers feel that their responsibility is to deliver good code; monitoring the health of the SQL boxes isn’t something they have the time (or the desire) to do. As a result, the application’s performance and reliability take a hit.

From my perspective, I’d like a little insurance as a developer, so I can definitively point to the backend environment as the cause of a performance bottleneck. And it’d definitely be helpful if I were collecting execution times, so when we make sproc changes we can determine if they’re harmful to the health of the system. As a monitoring checklist for your SQL boxes, you should be checking the following on a daily basis:

  1. Are your backups working?
  2. Have you set up automated restores and are you monitoring success/fails?
  3. Are you checking error logs?
  4. Are you checking for failed jobs?
  5. Is there sufficient free disk space for production databases?
  6. Did you run integrity checks and check index health? This continually requires fine-tuning.
  7. Did you check failed login attempts for security?

Let’s face it – you’re NOT going to log onto QA or PROD every day and run SQL scripts manually. Other interests beckon! But, it would be great to log this stuff.

So, check this out: Admin creation script

Use this SQL here to set up automatic monitoring. Once you do this, you’ll have a set of tables collecting data over time and a daily report that will show you all the good stuff for the day. It wouldn’t take much work to drop this into a ListView on a website to make this even easier to check. It also, incidentally, will fix any indexes that are excessively fragmented and update statistics.

To implement it, just create a database called “Admin”. Run the attached creation script, and then create a set of views in any target databases to collect fragmentation data. (There’s probably a more elegant way to implement this last piece; I kinda banged this out over a few hours and then moved on.) Create a set of jobs then with the following schedules:

  1. A Daily Report job – run dbo.Daily_Data_Colleciton
  2. A Daily Maintenance job – run dbo.Daily_Database_Maintenance
  3. A Weekly Maintenance job – run dbo.Weekly_Database_Maintenance

#2 and #3 could be combined if desired, I imagine as a weekly run on a Saturday. Right here, combined with a database backup (and an automated restore) you’re well ahead of the game.

Didn’t take much work, and I found pretty quickly that 1) we needed to add a ton more indexing, especially as our tables would start to bloat, 2) the Optimize for Ad Hoc Workloads flag was (incorrectly) set to 0, and 3) our SQL Server Agent was set to manual startup. No bueno!


  • I mucked around a little with using WMI to view not just SQL’s portion of CPU stress, but the entire CPU demand on the production SQL box. However, in most environments SQL is going to be the bulk of the stress. It didn’t seem worth it – and having to turn on CLR on the SQL box isn’t a great solution. The DMVs that come with SQL Server are fine. If you wanted to do this, it’s easy and runs fairly fast – just copy the DBStats.Database.CLRFunctions.dll from the 9781439247708_CH06 code download from this source (Chap06 folder) and copy it out to the same Data folder that you use for all your system databases (for me this was H:\MSSQL11_0.MSSQLSERVER\MSSQL\DATA).
  • There’s a great free reindexing/statistics tool that I want to look into:
  • Note that the fragmentation script from the Apress book I referenced above was fubar – went with a riff on this article instead.
  • And I can’t recommend highly enough the DMVs and SQL found at this site. This guy is a genius, and if you want to start digging deeply into why your server is lagging, this is a great place to begin. I also really like the section on looking for bad NC indexes or missing indexes.
  • Future refinements: Add Backup information to the daily job.

Some explanatory text of the embedded SQL:

  • usp_RunCheckdb
    • runs DBCC CHECKDB(DBNAME) WITH NO_INFOMSGS if the db size is less than 50 GB
  • usp_Collect_login_trace
    • See the LoginAuditTrace table – has records tracking the last login.
  • updateStat_onAllDbs
    • runs exec UpdateStat_OnAllDbs – which basically fires execute sp_updatestats on every database that’s read_write, online, and multi-user, and isn’t master/model/tempdb
  • usp_IndexMaintenance – fixes indexes that are excessively fragmented. A nifty set of SQL here!
  • and an index fragmentation job with two steps:
    • GetLargeDbs (inserts into a table Auto_Exception_Lst a set of databases >50 GB)
    • usp_CollectFragmentation
  • Other sprocs:
    • GetSysConfigDetails – checks the server parallelism and AWE. Run rarely.
    • sp_GetBackupStatus – old, replaced by other jobs to check backup – usp_BackupDB_Info
    • usp_Alter_Index
    • usp_BackupDB_Info – shows most recent backup for each db.
    • usp_cluster_loadbalance_checker – checks if load balancing is running normally on SQL Server clusters.
    • usp_CollectFragmentation – collect fragmentation information (a good alternative)
    • usp_fileconfig – shows the drive location and size of each logical filename in SQL.
    • usp_GetLastRunCheckdb – shows the last run of CheckDB for each db. Shouldn’t be necessary with usp_RunCheckDb above.
    • usp_IndexDefrag – runs a rebuild/reorganize on fragmented indexes. Superseded by usp_IndexMaintenance.

Taking TDD Seriously

There’s a great line from the Bible book of Genesis where a servant to Pharaoh said, “It is my sins that I am mentioning today…” I’m going to mention one of my sins, and record what I plan to do about it in the future.

In a previous gig, one of the first things I did as project manager was meet with the head of QA – we had a large team, with about five full-time testers. He had a long laundry list of complaints, beginning with “Testing is an afterthought here.” Even though we had such a large team – nearly 16 in all – development raced ahead of unit and functional testing. Our unit testing was running about 6-8 weeks behind development, and our code coverage was AWFUL.

I ‘fixed’ this by integrating development and testing so we would meet together daily, by having weekly 1:1s with QA, and by sending out code coverage and unit/functional test coverage numbers as metrics in weekly status reports. I even went so far as to host a company-wide summit on testing best practices. Yet, though things improved, we still weren’t working in sync. Major regression issues would crop up on nearly a weekly basis. We had to hire an overseas team to test the application in different browsers for issues that our automated testing wasn’t catching. It still very much felt like we were working as two teams – one pulling against the other.

Of course, the issue started with me. As a developer who grew up in the era before TDD, I still viewed testing as secondary to development – a “like to have”, one that often got in the way of implementation. As a result, we had two teams with implicit values – the “alpha team” of ace developers shoveling out code and hucking it over the fence, and the QA scrubs playing cleanup.

Of course, I couldn’t have been more wrong. Testing is everyone’s job, and if you’re a competent developer you need to write unit tests alongside your code. Look at the graph below:

That’s two teams from the same company working on projects of similar scale. Notice that the team that implemented TDD took almost twice as long in development – but look at the lower time spent handling regressions and refactoring code! And which customer do you think was happier in the long run, with a higher quality product being deployed the first time?

Legacy code is untested code, as Michael Feathers wrote in his great book “Working Effectively With Legacy Code“. In some projects we’re almost pressured into writing legacy code from the get-go due to timing demands. But that doesn’t excuse us, as responsible professionals, from going back after the push and creating a robust test project.

I’m still not sure if TDD is the cure-all that some say; there may be a middle path that hits a better balance between fast development and 100% code coverage. But I do pledge to take a more in-depth look into testing, and fold TDD into my work practices.


The beauty of patterns – stoneflies and manufacturing database models

The old saying goes, “Make it as simple as possible – but no simpler.” I find that, especially where the project scope is unknown or undefined, keeping a simple design pattern is the difference between life and … unemployment.

Take the fly fishing pattern below. You like?

Believe it or not, that’s not a real bug but a fishing fly – and I don’t want to think how long it took to tie this fly on to a hook. It’s meant to imitate this insect below.

This bug may look a little like a cockroach to you, but to trout they’re like fat, juicy, three-inch-long Twinkies. (Side note – if you haven’t experienced a stonefly hatch, you’re missing out in a buggy kind of way. These bugs grow for 2-3 years and then, in late spring, get a hankering for some college-style drugs and hot sex in the woods. En masse, by the thousands, they make their way from their cool river-bottom home to the river’s edge, where a mass orgy ensues in the bushes. If you’re fortunate enough to be along the Deschutes River in central Oregon around that time – say April or May – they get everywhere: crawling around your clothing and hair, splatting noisily into the water, and clumsily flying around like they’re drunk and/or stoned. After a long cold winter, this kind of feast is just what the doctor ordered – the trout hang out under trees, wait for the next clumsy splash to ring the dinner bell, and gorge. It may only last a few days, or a few weeks – but it’s by far the best fishing of the year.)

So, let’s say you are standing on the banks of the Deschutes, 5-weight fly rod in hand, and you’ve got two flies in your hand – the awesome beautiful intricate fly above, and the ugly, homely, plain-looking lump of fur and feathers below. Which would be better to use – the fly above, or this fly?

Believe it or not, the more effective fly is not the hyper-realistic IMITATIVE pattern at the top of the page, but the REPRESENTATIONAL pattern above. The representational fly picks up light better, can be seen better in murky water – often the case in spring – and moves more realistically in the water. And best of all, if you lose the fly on a branch, you didn’t just waste two hours of your life! So, you’d put fly A up on your mantelpiece as a work of art – but you’d actually hit the river with fly B.

In designing both UI’s and the data model to support them, I try to follow the same guidelines. I don’t want to get too emotionally attached to a particular model. If something is simple and needs refinements – or even a dramatic course adjustment – yes, I can refactor and adjust. But if it’s complex and multitiered, and a radical departure is called for – often, I have to start over. Big tears, remorse, heavy drinking ensue.

A Representational Manufacturing Process Model

So, let me share a pattern used in a recent project. This isn’t a true pattern in the sense that Abstract Factory or Async/Await are patterns, but it illustrates how to apply that Keep It Simple paradigm to a shopfloor model. The heart and soul of the project is kept in a Process/Step table pair (and a set of lookups):


Above, we have two key lookup tables – ProductType and Location. A ProductType represents a product that we assemble, and a Location is a specific area of the manufacturing floor. This is a standard many-to-many relationship built around the key Process table – a product is assembled through a series of Processes at a series of Locations on the manufacturing floor. Each Process, in turn, can be broken down into a set of Steps. The StepType table is there primarily for the frontend – whether the step needs to show a textbox or a checkbox, whether validation needs to happen and over what range, etc. Naturally we could have broken this down into a more normalized store, particularly the StepType table. But the table itself is not that large, and the performance cost of splitting it out into separate tables wasn’t judged to be worth it. I’m happy with this – and if requirements down the road call for a finer-tuned approach, we can refactor as needed.
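To make the StepType idea concrete, here’s a hedged sketch – in Python, with invented field names standing in for the real SQL columns and ASP.NET frontend – of how step-type attributes can drive control choice and range validation:

```python
def render_control(step_type: dict) -> str:
    """Pick a UI control from StepType-style attributes (names invented)."""
    if step_type.get("ShowCheckbox"):
        return "checkbox"
    if step_type.get("ShowTextbox"):
        return "textbox"
    return "label"

def validate_input(step_type: dict, value: float) -> bool:
    """Range-validate a numeric entry when the step type calls for it."""
    if not step_type.get("RequiresValidation"):
        return True
    return step_type["RangeMin"] <= value <= step_type["RangeMax"]

# A hypothetical torque-check step: free-form numeric entry, valid 10.0-15.0.
torque_step = {"ShowTextbox": True, "RequiresValidation": True,
               "RangeMin": 10.0, "RangeMax": 15.0}
print(render_control(torque_step), validate_input(torque_step, 12.5))
```

The payoff is that adding a new kind of step is a row insert, not a schema or frontend change.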

OK, so that represents a very rigid set of steps that a person on the floor will follow in assembling a part. But what happens when they want to introduce a change – even temporarily – to the way the part is assembled? For example, they may want to switch out a material, remove a step for a set of five consecutive serial numbers, or add a new step to double-check a fabric lining. These engineering tryouts (LTO’s) could have a short shelf life – maybe just one assembled part as a trial – or they could become, over time, a permanent part of our process. We COULD have added these as new steps in our Steps table along with some new tryout-specific attributes – but design-wise that seemed a little messy, since LTO’s don’t happen that often and there are so many attributes and special instructions that are LTO-specific. That’s a lot of wasted space in our Steps table if we were to go down that route! So, I created a set of tables as below:

Look at this as a mirror reflection, somewhat distorted, of the Process/Step tables above. A TryOut represents a batch of steps (TryOutStep) that a shopfloor tech will follow – either for a date range or, more commonly, for a series of new parts. There’s nothing here that’s dramatically new or different – it’s just a riff on the Process/Step table design, which itself is neither new nor unique to our application. In our app, the TryOut represents the parent information – the master portion – and the TryOutStep is a list of detail records that a planner can modify on the fly. Because this is simple and generic, both the Step and the TryOutStep rows can be inserted at build time as one lump of “I did this at this date for this part” instructions.

It’s here where we have to talk about the Bill of Materials pattern, another hoary and beaten-down one. See below. Some of this you will recognize from our old friend AdventureWorks – which was, in fact, part of our starting point:

This is very simple and doesn’t represent anything new that might shock a trout. A MaterialMaster represents an individual part – a single screw, a film, anything that needs to be assembled as part of a hierarchy. What really matters is the BillOfMaterial (BOM) table, which assembles these into a hierarchy using the ParentID field. We’ll get into the SerialNumbers and SNHistory tables later – the important thing is, we have a list of parts as they’re assembled into a set of components, and from there into a finished good.
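As a sketch of what the ParentID hierarchy buys you, here’s a toy parts explosion in Python (a stand-in for the SQL recursive query; table rows and column names invented):

```python
# Each row: (BOMID, ParentID, MaterialID, Qty) -- a toy BillOfMaterial table.
bom = [
    (1, None, "Bike",  1),
    (2, 1,    "Frame", 1),
    (3, 1,    "Wheel", 2),
    (4, 3,    "Spoke", 32),
]

def explode(bom_rows, parent_id=None, multiplier=1):
    """Recursively list (material, total qty) under a parent -- the
    parts-explosion a recursive CTE would produce in SQL."""
    out = []
    for bom_id, pid, material, qty in bom_rows:
        if pid == parent_id:
            total = qty * multiplier
            out.append((material, total))
            out.extend(explode(bom_rows, bom_id, total))
    return out

print(explode(bom))  # total spokes = 2 wheels * 32 = 64
```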

Thinking about Hierarchies

Here’s where the “…but no simpler” part of the “Keep it simple” saying comes in. We could have just thrown this together without any forethought – Cargo Cult Programmer style. But we actually did think about this and our application before we started slamming out SQL. We checked the Microsoft SQL Server Bible 2008 by Paul Nielsen (a great resource, particularly the section on traversing hierarchies) and looked at some patterns:

  • Adjacency list
    • This is the most popular and longest-running pattern in the SQL community. Think employee-to-manager parent/child relationships. It’s otherwise known as the self-join pattern; it’s been around FOREVER and remains the go-to solution.
    • To get data out of this, think subtree queries – but these break down past a certain number of levels – or use a recursive common table expression or a looping user-defined function (a recursive CTE is faster; the UDF solution offers more flexibility). You’d use a CTE solution when you need to look up and down the hierarchy in a very complex way.
    • Pros – It’s very easy to manually decode and understand this pattern.
    • Cons – It’s prone to data-entry/cyclic errors, and not quite as performant as HierarchyID at retrieving subtrees. That being said, for our small list of parts it returned results in milliseconds.
  • Materialized path
    • Here, instead of a Child/ParentID self-join, you have a field called MaterializedPath (for example) with entries like “1,2,263”. See this article for more on the pattern.
    • Returning a subtree is SUPER easy with this pattern, unlike the potentially ugly adjacency-list queries – because all the information we need is in a single field, MaterializedPath. A simple LIKE with a % wildcard in the WHERE clause gets it done.
    • Pros – This would probably be your pattern of choice for multiple-level scenarios. It’s both durable and consistent – if you delete a record accidentally there’s no orphaned scenarios and the tree can be reconstructed easily. Paul Nielsen tends to favor this pattern, but it does take some learning to master.
    • Cons – the key sizes can become quite complex, and simple operations like “get me the parent” become a little harder.
  • HierarchyID
    • A new data type introduced in SQL 2008 specifically to solve these kinds of problems.
    • Pros – It’s faster than adjacency list (but slower than Materialized Path).
    • Cons – and these were significant for us – it embeds data within a binary data type, so it’s more difficult to diagnose and navigate. Orphaned nodes happen, and they’re hard to reconstruct.
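For a hedged side-by-side sketch of the first two patterns, here are Python stand-ins for what would be a recursive CTE (adjacency list) and a LIKE-with-wildcard prefix query (materialized path); IDs and paths are invented:

```python
# Adjacency list: node -> parent pairs, the classic self-join pattern.
adjacency = {1: None, 2: 1, 263: 2, 264: 2, 9: None}

def subtree_adjacency(rows, root):
    """Recursively collect a subtree -- what a recursive CTE does in SQL."""
    found = [root]
    for node, parent in rows.items():
        if parent == root:
            found.extend(subtree_adjacency(rows, node))
    return sorted(found)

# Materialized path: each row stores its full ancestry, e.g. "1,2,263".
paths = {1: "1", 2: "1,2", 263: "1,2,263", 264: "1,2,264", 9: "9"}

def subtree_materialized(rows, root):
    """Subtree = prefix match, like WHERE Path LIKE '1,2%' in SQL."""
    prefix = rows[root]
    return sorted(n for n, p in rows.items()
                  if p == prefix or p.startswith(prefix + ","))

print(subtree_adjacency(adjacency, 2), subtree_materialized(paths, 2))
```

Both return the same subtree; the difference is that the adjacency version recurses per level, while the materialized-path version answers in a single scan.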

In summary – adjacency pairs are the most common and the easiest to understand. As Nielsen says, unless your requirement is very high performance, stick with this pattern. Materialized path is the choice where hierarchies are core to the database and are used frequently in functions. HierarchyID to me is a loser, since the binary data type makes it difficult to debug/diagnose in the inevitable mixups (and it’s harder to track down and re-parent orphans). Kent Tegels did an excellent writeup on this in SQL Server 2008 with a pattern that used HierarchyID, but it wasn’t enough to swing the argument that way.

But the key thing is, we did have the argument and we did hash over the options. Once we did some research, the solution became clear – a simple adjacency list was the way to go. For reporting, a simple parts-explosion query did the trick. Again, we didn’t try anything radically complex, new or different – and I think if we had, that would have been the first sign that we were on the wrong path. And by doing some research – and piloting adjacency list, materialized path and HierarchyID sets of queries – we knew the options, were comfortable with our solution, and knew we weren’t missing out on something cooler or more performant.


Back To Our Model – Design Versus Real World

You would think at this point we’d be almost done. We’ve got the following entities –

  • A product that needs to be assembled across a set of locations.
  • A process that follows a set of steps before moving on to the next process.
  • A set of steps (LTO’s) that can be followed on a temporary basis as an experiment.
  • A list of parts that can be assembled as part of a component or finished good.

So far so good. But this represents what SHOULD be built – a model. It’s not reality, and doesn’t really show what’s built on the floor. For this, we need a Build History table.

This is typically bound up in the concept of a Work Order – as with the classic AdventureWorks model, where a bike is assembled from start to finish in an orderly series of steps from a single work order number. That assumes a very linear progression, all from a single order by a customer with a specific date. In real-world manufacturing this concept often doesn’t exist – and it definitely didn’t in my project. Each component is assembled separately and placed in storage racks – it’s only in the final assembly stage that they’re linked together. So, while we definitely will include some form of an initiating action – a work order – in the future, our design initially is based on just two tables: SerialNumbers and BuildHistory.

Above, SerialNumbers is the key table. It contains an identifying serial number, which could represent any one of a set of components. But the real action is in the BuildHistory table. This starts out as a plain, bland copy of all the steps in our process model from the first section, linked by the StepID field. But there’s a set of attributes there that aren’t found in the Step table – things like StepTypeInput, PassedTest, IsCompleted, etc. These tell us what happened to the part as it hit this step of the process. Did it pass every test? Were there notes the operator needed to jot down to help diagnose production issues? We chose to partition the SerialNumbers table into two pieces – the second (Test) holds test-specific attributes that only apply to final assemblies, including the final grade (A/B/F) and test start/end times. One SerialNumbers record (a part) can have one or more Interventions, where a defective part has to be kicked onto a repair spur and either fixed and put back on the line or sent out for repairs. SNHistory tracks the record of our finished good as it is assembled into a chain of components and subassemblies.

I’m kind of going down an “ontogeny recapitulates phylogeny” rathole here. Let’s work backwards from the UI to explain this:

  • So when an operator is ready to assemble a part, he clicks on a button to generate a serial number.
  • A kickoff stored procedure generates a SN for that component and adds a single record to our SerialNumbers table, representing that part as built.
  • It then pulls a set of steps from our process model (including any LTO’s) and fills the BuildHistory table with this sanitized, plain-vanilla list of steps the operator will be following.
  • A barcode label for this part is printed and placed on the component.
  • As the component travels along the assembly line, operators are scanning this label – and using the app form to enter in attributes to BuildHistory to show if the part passed its tests, what the environmental conditions were, and if it had to be kicked out for intervention/repair.
  • All of this can be presented from a set of views and SSRS reports to show the lifecycle of a finished good and its individual components. Steps are modifiable, and so are processes. As each new product comes online, we add new processes and steps and BOM materials for that good – and keep it rolling.
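The kickoff step above can be sketched like this – in Python, with made-up helpers and step IDs standing in for the real stored procedure:

```python
import itertools

_sn_counter = itertools.count(1000)  # stand-in for an identity/sequence

def kickoff(process_steps, tryout_steps, product_type):
    """Sketch of the kickoff sproc: mint a serial number, then seed
    BuildHistory with a plain-vanilla copy of the model steps plus
    any active LTO steps."""
    serial_number = "{}-{:05d}".format(product_type, next(_sn_counter))
    build_history = [
        {"SerialNumber": serial_number, "StepID": step_id,
         "IsCompleted": False, "PassedTest": None}
        for step_id in process_steps + tryout_steps
    ]
    return serial_number, build_history

sn, history = kickoff([10, 11, 12], [901], "WIDGET")
print(sn, len(history))  # one BuildHistory row per model/LTO step
```

Operators then fill in IsCompleted, PassedTest, and so on as the part moves down the line – turning the as-designed copy into the as-built record.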

Baking Cookies

In pitching this design to management, I used a cookies metaphor. When you go to make cookies, you pull out a recipe book, which has both a list of ingredients and a list of steps you need to follow. That’s great, but it may not represent what you actually do – you may be short of vanilla, or need to add an additional step to borrow some sugar from your neighbor. In our model, we have a list of steps (the Process/Step tables) to follow in our recipe, and a list of ingredients that we’ll use in an ideal situation (the BOM/Materials tables). But on any given day, when we pull out the recipe book, in the real world we’ll have the list of ingredients we ACTUALLY used (the SerialNumbers/SNHistory tables) and the list of steps we ACTUALLY followed in the heat of the moment (the BuildHistory table). Having a Step model to use as an ideal-world template, and a BuildHistory model to show it as-built, made the difference in creating a robust and performant application that was done in weeks, not months – and could survive numerous spec changes and new product rollouts.

Wrapping It Up

It’s still kind of a mix of clunky and elegant. But it’s survived a Hunger Games arena of instability and chaos for nearly two months now with only minor tweaks. That’s because we started out simple. For example, the fact that the key BuildHistory table has a key/value-pair design with the step description/input fields helped me when it came time to design and write the operations frontend. In my ASP.NET ListViews, I could read those bit properties and show either a checkbox or a text-entry field, and handle range validation appropriately for each step of the process.

I’m not as happy with the frontend, even though it’s mine, because 1) it lacks a test project, which is massively irresponsible and will cost us dearly in regression errors if not addressed in the near future, and 2) it’s using a Smart UI paradigm, WebForms, which offers quick turnaround but suffers in terms of UX, testability and clean design. We won the battle in rolling something out the door quickly – it took about six weeks from design to ‘completion’ – but may lose the war if we can’t buy a few weeks to rethink the frontend. Down the road I’m actively looking at a Single Page Application UI, or something using an MVC/MVVM pattern. Since the backend is solid, I’m not that worried – we’ll come up with something clean and nifty soon enough.

The best fishing flies aren’t imitative but representational. If the fly could be any one of a number of insects, you’ll have more success on the water than with one hyper-specialized approach. And I feel the same applies to software – you’ll have more success and waste less time by (at least initially) following a generic, nonspecific approach in your design.

Blog hall of fame

Not comprehensive by any means. Here are some blogs I enjoy from People Not Named Scott Hanselman:


What I’m reading now:

An interesting article here on Technical Disobedience – how one team lead started a project with a plain-vanilla build-out with ClickOnce deploys to QA/PROD as a starting point, and – after Azure/Amazon EC2 got shot down – used a local machine to get builds set up. I also enjoyed the story of how they progressed from TFS to a whiteboard… and ended up phasing out TFS! (If it’s simple and it works, you can’t improve on it.)

From this site I also enjoyed a simple definition of survival mode (putting out fires versus learning the skills to stop starting fires to begin with!) – and a list of fixes (starting with a 1-5x increase in time estimates). The team leader manifesto makes for a good read, and I’ve read and enjoyed many of the books he recommends here. Of course, Robert Martin’s book is the standard.

Async and await patterns (SalesForce used these extensively, along with dynamic objects, in their new .NET library) are excellent for anything long-running (read: file or image uploads, multiple web requests, WCF programming) – a great starting article is here. I tend to use async actions by default when setting up new controllers; it’s almost as easy as writing a synchronous method (just use the async and await keywords, silly!) and greatly improves the responsiveness of a UI.
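The same async/await shape exists outside .NET, so here’s a hedged illustration in Python of why the pattern helps with multiple long-running requests (the “requests” and timings are invented stand-ins):

```python
import asyncio
import time

async def fetch(name, seconds):
    """Stand-in for a long-running web request or file upload."""
    await asyncio.sleep(seconds)
    return name

async def main():
    # Awaiting the tasks together lets them overlap, so three
    # 0.1-second "requests" finish in roughly 0.1s, not 0.3s.
    start = time.monotonic()
    results = await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1),
                                   fetch("c", 0.1))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results, elapsed < 0.25)
```

The controller case is the same idea: while one await is pending, the thread is free to serve other work instead of blocking.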

This is an older article here from ScottGu on ListView – and it’s .NET 3.5 – but I found it helpful in solving a problem recently with a form at work.

And on a personal note, I liked this article on self-publishing (using Leanpub, Amazon, CreateSpace, and marketing). $11,000 in profit over two months is not too shabby. I really do want to write that book someday.

What an interesting article here. POCO heresy – “Code first O/R mapping is actually rather silly“!!! And this one on Microsoft losing traction/trust in developer land: “trustworthiness is closely tied to how much responsibility you want to take for the lifecycle of your product…. Microsoft blew this on several fronts and on several occasions in the .NET world.”