Monday, July 18, 2005

The Shade Tree Developer is moving to

Sorry for the inconvenience, but I'm moving The Shade Tree Developer to CodeBetter. Besides keeping some great company at CodeBetter, I'll get a bit better blogging infrastructure, like trackbacks and categories, so you can skip over my recent rantings about legacy code and go straight to the TDD content. I might also try to remember enough about CSS to get a stylesheet that doesn't make my code samples look fugly. I'll put forwarding links on the blogspot location for the older posts.

The new address is

The new RSS feed is

An Atom feed is available at

Friday, July 15, 2005

Resources from the Continuous Integration talk last night

If you were at the Continuous Integration talk last night at the Austin SPIN, here is the set of resources I mentioned. The slides from the presentation are here. Thanks for coming out last night in the rain and putting up with me.



  • I published my company's .Net tool usage with descriptions and links here.
  • CruiseControl (Continuous Integration for Java)


I know there have been a couple of books published on CI now, but I don't know any to recommend. I heartily recommend Michael Feathers's Working Effectively with Legacy Code for retroactively applying Test Driven Development and CI.

Thursday, July 14, 2005

Bunch of Good Links

First, maybe the best blog post I've ever read, from Mike Spille. I've been guilty of the "guardrail to guardrail" effect a time or two. On a project last year we probably did some dumb stuff because of someone else's negative experience with a certain technology on his previous project.

Read the comments too --> "BTW, you left out the obligatory insult to VB."

In a similar vein, James Bach warns us against being absolutist about "Best Practices." One of my favorite quotes is "A maturity model is basically a gang of best practices hooked on crystal meth." Just to add my two cents, I think it's best to always keep an eye on first causes. I've written before that a centralized architecture group handing down black and white mandatory best practices from Mount Olympus to the pitiful wretches in the trenches is a terrible organizational anti-pattern. There is always some kind of exception to every rule, and the guys doing the actual work should be capable of exercising their own judgment.

Lastly, integration guru Gregor Hohpe asks what the real difference is between coding and configuration. Here you go, Gregor: the difference is that I can write unit tests for code and use a debugger to troubleshoot code. I don't know why people keep inviting Gregor to these kinds of conferences, because he just mocks the proceedings in his blog later.

My StructureMap tool is very configuration intensive, and I have an unnatural tolerance for external XML configuration. To compensate, I've tried to support .Net attributes for some of the configuration so it can live closer to the code. I've also had to invest a lot of energy in building better tools for troubleshooting and validating configuration files in StructureMap. Since I dogfood StructureMap, expect more and better troubleshooting tools (because my colleague gets irritated otherwise).
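This isn't StructureMap's actual validation code, and the XML shape below is simplified for illustration, but it shows the flavor of check I'm talking about: scan the configuration up front and report every unresolvable type at once, instead of blowing up on the first bad entry at runtime.

```csharp
using System;
using System.Collections.Generic;
using System.Xml;

// A sketch only: the <Plugin Type="..."/> shape and the class name here
// are invented, not StructureMap's real schema or API.
public class ConfigValidator
{
    public static IList<string> Validate(XmlDocument config)
    {
        List<string> problems = new List<string>();
        foreach (XmlElement plugin in config.SelectNodes("//Plugin"))
        {
            string typeName = plugin.GetAttribute("Type");

            // Collect every unresolvable type so the user sees the
            // whole list of problems in one pass.
            if (Type.GetType(typeName) == null)
            {
                problems.Add("Cannot load type: " + typeName);
            }
        }
        return problems;
    }

    public static void Main()
    {
        XmlDocument config = new XmlDocument();
        config.LoadXml(
            "<StructureMap>" +
            "  <Plugin Type='System.String' />" +
            "  <Plugin Type='No.Such.Type' />" +
            "</StructureMap>");

        // Only the bogus entry gets reported.
        foreach (string problem in Validate(config))
        {
            Console.WriteLine(problem);
        }
    }
}
```

Running a check like this at application startup (or in a unit test) turns a vague runtime failure into an immediate, readable diagnostic.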

Tuesday, July 12, 2005

Impressions from the VSTS talk last night

I went to a talk on Visual Studio Team System last night given by Chris Menegay at the Austin DotNet Group. I tried to go into this with an open mind, but my bias is clearly with open source tools and agile processes. I'm still not convinced VSTS provides any value beyond our current tools and process, but it'll help somebody. Either way, I think VSTS is a valuable addition to the .Net world. If nothing else, it's helped introduce lifecycle practices to shops that have none and started some healthy conversations.

I wrote this several months ago, and nothing I saw last night substantially changes my opinion.

Without further rambling, cue the "wah, wahhh, wahhhhhh" theme music and let the cliche begin...

The Good

  • I think VSTS could be great for project tracking. Being able to intelligently log checkins against a user story or a bug fix, along with the time spent on the task, could be a great boon to project management. The Heisenberg Principle always applies to this kind of tracking, though: if I, the developer, have to go out of my way, my velocity is going to be slower. VSTS might change the cost/benefit equation somewhat. A big part of any Agile process is the ability to accurately forecast user stories during iteration planning, and a good "yesterday's weather" report from past iterations could certainly help. We're still going to take a look at Trac as an alternative, though.
  • The code coverage visualization is awesome. It's not clear yet, but it might be possible to create test coverage reports from other tools outside of VSTS (NUnit, NFit, etc.)
  • It is an integrated suite of tools that will be supported by MS. I personally don't buy the argument that assembling a stack of NUnit/NAnt/CC.NET/NDoc/Subversion is really that hard, but I've been immersed in the agile world for a while and I'm comfortable doing that (I've also worked with some of the NUnit and CC.NET folks, so I'm not really impartial). For a shop with no experience in these tools, or with a hostile policy towards open source, I can see the value of buying the whole stack in one place.
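The "yesterday's weather" report mentioned in the first bullet is a simple idea: commit to roughly what the team actually finished in recent iterations. As a minimal sketch (the velocity numbers are invented):

```csharp
using System;

public class Program
{
    // "Yesterday's weather": forecast the next iteration from what
    // actually got done recently. Here the forecast is the average of
    // the last `window` iterations' completed story points.
    public static double Forecast(int[] completedPoints, int window)
    {
        int start = Math.Max(0, completedPoints.Length - window);
        double sum = 0;
        int count = 0;
        for (int i = start; i < completedPoints.Length; i++)
        {
            sum += completedPoints[i];
            count++;
        }
        return sum / count;
    }

    public static void Main()
    {
        // Completed points from past iterations (invented numbers).
        int[] history = { 18, 22, 20, 24 };

        // Averages the last three iterations: 22, 20, 24.
        Console.WriteLine(Forecast(history, 3)); // prints 22
    }
}
```

The tooling question is just whether the checkin-level tracking VSTS promises makes numbers like these cheap enough to collect that nobody has to go out of their way.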

The Bad
  • Call me close-minded, but I wouldn't touch MSF with a ten-foot pole and a firewall in between. The process templates are editable, but it doesn't sound like there is a GUI yet for doing the editing. I'll be interested to see if anybody (with much more patience and ambition than me) tries to make an XP or Scrum template.
  • At this point it doesn't look like you can use any source control system other than what comes with VSTS. As much of a pain as it has been trying to move from VSS to Subversion, I'm not very excited about another migration. The VSTS source control is clearly better than VSS (how could it not be?), but I didn't see any particular value over Subversion today.
  • VSTS does not support Continuous Integration, only scheduled builds. You'll need to continue using CruiseControl.Net. I fully expect the CC.NET guys to have a VSTS and/or MSBuild plugin for CC.NET by the end of the year.
  • The web testing looks weak to me. I still think we're looking towards either Selenium or Watir.

The Ugly
  • From the initial impression, the MSF for Agile process looks like a heavyweight process. It is clearly meant as an iterative process. While that's certainly an improvement on the older MSF, iterative alone doesn't automatically translate to being agile.
  • VSTS is clearly aimed at very large companies using laborious, high-ceremony processes--everything that my employer is not. I think process automation is a great thing, but if your process is so complex that nobody can know the whole of it, I think you've got some serious issues.
  • Pricing. When alternative tools are available and generally free, I have a hard time justifying the cost.
  • The one piece of the presentation where I thought Menegay was completely off his rocker was the idea of VSTS becoming a directed workflow tool that tells everybody (especially developers) exactly what they should be doing. As a tracking tool for project record keeping, I think VSTS is great. Using something like VSTS as your primary communication channel is absurd. Turning developers into mindless zombies doing exactly what the VSTS workflow tells them to do sounds like a recipe for disaster to me, not to mention a steep drop-off in developer retention.

Monday, July 11, 2005

Don't play with strange databases... don't know where they've been, or who's been touching them.

I've been burned pretty badly in the past by making assumptions about existing databases. Database documentation isn't always helpful either, because it's often out of date or just wrong to begin with. Even if the documentation is "correct," the corresponding code may be using the database differently than the designer intended. Even worse, you often have to interpret the data, and that's dangerous when you don't know the database. The only accurate source of information is usually the subject matter expert for the system.

What's the answer? I don't know. All I can say is to approach someone else's database with great caution. Kinda like going up to a big, mean-looking dog and trying to make friends. I wouldn't make any kind of assumption about the meaning of any table or field. Put some effort into understanding the schema, and expect to go back and forth a little bit.

It's the year 2005. Isn't the existence of SOA supposed to eliminate the need to be doing integration directly against a database? I'm writing a little code to integrate our main application with another 3rd party system. The only way I have to do this right now is to just write SQL queries against their database schema. I feel dirty.

Even worse is when shops start writing their own extensions and back door queries against a 3rd party application. That's a great idea. Take a database that you're not supposed to touch directly, and that you don't fully understand, and write all kinds of custom code into it. That'll really make it easy to upgrade the software package later.

Don't Make It Worse

At a previous employer, they used an ancient inventory system written in a rare 3GL language. If you think of your IT infrastructure as the cardiovascular system of a manufacturer, the inventory system is the heart muscle. Everything else talks to the inventory system. The business logic of the system was bound up in the UI screens, so the only front-door integration point was a screen emulation package from the vendor that was only certified for low volumes. At this point an intelligent IT organization in the SOA era would start looking for alternatives. My old employer beat the integration problem by writing hundreds of PL/SQL procedures to duplicate the business logic in the screens, in lieu of a real service layer. To support other functions they added about 300 custom tables to the out-of-the-box database. Just to make things worse, the database customizations were different from region to region.

At one point I needed to write an integration from the inventory system to my application. I went and asked the SME where I could find a certain piece of information. He told me to look in table A, and I turned around to leave. He then said, "You could also get it out of table B, or table C, come to think of it." Oh, ouch. When I left they were struggling with a batch job that was supposed to run nightly but was taking about 25 hours to run and locking all kinds of database tables. They've experienced some problems with scalability; who would have guessed?

Even though the system is horribly obsolete and arguably an opportunity cost slowing down new development, they have no chance of upgrading to the newer J2EE version with web services because their database customizations are wired in way too deep. Idiots.

Harris Boyce warns us that O/R isn't a silver bullet

Harris Boyce wrote a response to my ADO.NET contagion post: "Soul Vaccination" for Data Access Layers.

I think that Harris is quite right in warning us about overusing Object/Relational mapping.

While there are still plenty of scenarios where I would forgo an O/R approach for the classic data access layer, I still think that the "Persistence Ignorant" (POCO/PONO/POO, whatever) approach for business classes is the way to go for the sake of testability. I think the decision hinges primarily on the amount of business logic in your application (and secondarily on the comfort level of the developers with OO coding in general). Even if I'm not using an O/R tool, I usually use some type of Inversion of Control to keep the rest of the code loosely coupled from the data access.
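To make the Persistence Ignorance idea concrete, here's a minimal sketch; all the names (Invoice, IInvoiceRepository, LateFeeService) are invented for illustration, not from any real codebase. The business class is plain C#, takes its data access through an interface, and can be unit tested with a hand-rolled stub instead of a database.

```csharp
using System;
using System.Collections.Generic;

// A plain business object: no base class, no persistence plumbing.
public class Invoice
{
    public string Number;
    public decimal Total;

    public Invoice(string number, decimal total)
    {
        Number = number;
        Total = total;
    }
}

// The business class depends only on this abstraction, so it never
// touches ADO.NET directly and stays "persistence ignorant."
public interface IInvoiceRepository
{
    IList<Invoice> FindUnpaid();
}

public class LateFeeService
{
    private readonly IInvoiceRepository _repository;

    // Inversion of Control: the data access dependency is handed in,
    // not created inside the business logic.
    public LateFeeService(IInvoiceRepository repository)
    {
        _repository = repository;
    }

    public decimal CalculateLateFees(decimal rate)
    {
        decimal fees = 0m;
        foreach (Invoice invoice in _repository.FindUnpaid())
        {
            fees += invoice.Total * rate;
        }
        return fees;
    }
}

// In a unit test, a hand-rolled stub stands in for the database.
public class StubRepository : IInvoiceRepository
{
    public IList<Invoice> FindUnpaid()
    {
        List<Invoice> unpaid = new List<Invoice>();
        unpaid.Add(new Invoice("A-1", 100m));
        unpaid.Add(new Invoice("A-2", 250m));
        return unpaid;
    }
}

public class Program
{
    public static void Main()
    {
        LateFeeService service = new LateFeeService(new StubRepository());
        Console.WriteLine(service.CalculateLateFees(0.1m));
    }
}
```

In production the same service would be handed a real ADO.NET-backed (or O/R-backed) repository; the point is that the business logic never knows the difference.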

Before diving into O/R mapping, you might take a look at these resources. Understanding some of the underlying patterns and mechanisms of O/R mapping might help alleviate some of the pain or help in making design decisions.

  • Patterns of Enterprise Application Architecture by Martin Fowler has a couple of good chapters about O/R patterns. The chapter on organizing business logic is a canonical read for deciding whether or not to use a Domain Model approach that would lead to using O/R for persistence.
  • Applying UML and Patterns by Craig Larman has a great chapter on creating a small persistence framework. This book is my choice as the best introduction to OO programming around.
  • This article from Scott Ambler introduces the basics of O/R mapping.
  • My article on C# Today is from a couple of years ago, but it's derivative of the guys above.

Sunday, July 10, 2005

Continuous Integration Presentation at the Austin SPIN

I'm giving a presentation on Continuous Integration at the Austin SPIN meeting on Thursday, July 14th. The meeting info is here. It's an adaptation of the talk Steve Donie and I did for Austin Agile a while back, just less .Net-centric for a broader audience.

I'm excited (OT)

Communications technology is undeniably changing our world. "The World is Flat" type effects may act as a downward force limiting our salaries, but my wife just let me sign up for the NFL Sunday Ticket and that's pretty darn cool. I live in Texas, so every Sunday at noon my football viewing choices consist of mediocre Cowboys on one channel and mediocre (but improving) Texans on the other channel. I grew up in Missouri, so I'm a lifetime Kansas City Chiefs fan. Thanks to a little technology I'm gonna be watching my beloved Chiefs in my living room this year instead of trying to sneak over to a sports bar to see the game.

And if the Chiefs' new linebackers still can't stop my Grandmother from running up the middle, I'll just change to a different game.

Friday, July 08, 2005

When I was a Mort...

Yesterday at lunch we were discussing (mocking) Rocky Lhotka's defense of "Morts" on his blog. What the heck is a Mort, you ask? A Mort is supposed to be an opportunistic programmer who is most concerned with delivering business functionality rather than wasting any time on silly ivory tower concepts like technical quality. Unfortunately, the term "Mort" has become a pejorative synonymous with low-skill developers using data-aware widgets to drag'n'drop their way to one-tier applications. I'll admit that I commonly use the term Mort as a putdown, but there was a day and time when I was a Mort, too.

My first programming experience was writing some ASP and Access tools for my engineering group in the late '90s. At that time the engineering and construction world was pretty crude in terms of IT automation. There were a lot of data silos, and quite often the most junior engineer (always me) was stuck manually retyping information from one database into Excel sheets. We had to generate a lot of paperwork and track a lot of audit-style information. I hated the paperwork aspect of engineering (hence my gravitation towards agile processes), so I set out to create some automated tools to build the Word and Excel documents from information in an Access database edited through ASP pages. This work led me to a position with the project automation group, creating an ASP system to verify and manage a very poorly written, but mission-critical, data exchange.

I clearly created a lot of business value and I was pretty proud of myself. Then I left the company to relocate to Austin, and everything I left behind collapsed in a few months because no one could support it. Some of it was rewritten by actual IT folks (and consider this a long overdue apology to you all), but most of it just disappeared. So what did I do wrong? Here's a laundry list of really bad things I did because I just didn't know much about good practices in software development.

  • Source control? What's that?
  • (Boss) Jeremy, where is this stuff running? (Me) It's running on personal web server on my box. Hey wait a minute, what's that awful sound coming from my hard drive and why won't my box reboot anymore?
  • (Boss) Do you have that stuff backed up? (Me) What does 'back up' mean?
  • Coding directly into a production database of a 3rd party product that was notorious for database corruption, on everything from little $100 million projects to multi-billion dollar projects
  • ActiveX controls on ASP pages. Believe it or not we had some issues with installation.
  • ASP pages running against MS Access via ODBC in production
  • Connecting directly to production Oracle databases from ActiveX controls on an ASP page
  • Using the old Remote Data Service (RDS) library. Great stuff for productivity, huge security hole. Using RDS meant that I had the user name and password for the fragile Oracle production database embedded in each ASP page where any yokel could go "View Source" and hack our sensitive data.

I was focused on business needs and I delivered, but I was dramatically ignorant of anything approaching decent development practices or design. I distinctly remember one humbling episode where I asked a senior automation expert for help comparing two different databases. He took a very short look at my several hundred lines of VBScript and banged out a SQL statement that did exactly what I needed in one line of code. His exact words were, "You've got good ideas, but you would be much more effective if you just knew more." Ouch.

All right, I just shared some of the truly stupid and ignorant things I did in my Mort days. What did you do in your Mort days?

What's so wrong with being a Mort? After all, Morts are pretty common and they definitely add value to their employers. They're usually closer to the business and have a better understanding of the problem domain. So why climb out of Mort-hood? For one thing, the Morts of the world are in mortal danger if I have to work with any more awful Mort-ish code (regardless of what language it's written in). For another, more serious reason: the Mort jobs are the first developer jobs to be offshored. Lastly, if you're developing software, learn about your craft and climb the skill ladder. The whole Mort/Elvis/Einstein taxonomy may be crap, but the more you know, the more effective you'll be, and that seems pretty pragmatic to me. Make your great business solutions stand the test of time.