The Web Gambit

Thoughts on Web Development

The Shelf Life of Code

There’s a lot of discussion going around about the evolution of software development toward building long-term quality and maintainability up front using practices like TDD. Much of it was sparked by Joel Spolsky’s praise of the Duct Tape Programmer.  The latest post in this discussion that caught my eye was Scott C. Reynolds’s Quit Living in The Past – Practices Evolve.

While I agree that “time to market” is an important driver for software, it shouldn’t be the catch-all excuse for building low-quality solutions. Too often I’ve seen time-to-market gains erased by the high cost of software maintenance and production support down the road. The codebase I work in on a daily basis was written by a few Duct Tape programmers, and now my company is paying a high cost to satisfy existing customers while attempting to increase market share with a creaky solution.

The fact is, many businesses keep code around for a lot longer than we as developers would like. Software has become the lifeblood of many businesses and is very costly to construct, so getting applications out the door quickly and planning for a full rewrite every few years is rarely a good business decision.

Good developers are also becoming scarce, especially those with recent experience in legacy technologies. Talented developers want to constantly move into new technologies and stay challenged, and the only way to retain and utilize that talent is to have a constantly evolving codebase. If the foundation of the software carries a lot of technical debt that is completely dependent on legacy technologies, a business will have a lot of trouble finding and retaining skilled developers to work on it.

These conditions all appear to run contrary to each other. Businesses want to keep working code around, but developers don’t want to keep working on the same code. Development managers have to keep their businesses happy while still finding and retaining skilled talent.

So what’s the answer? Build quality and long term maintainability into the codebase up front.  Abstract your Persistence from your Service Layer so that you can swap out data access technologies every few years.  Separate your Domain from your Presentation so that you can swap between web technologies when appropriate.  Make unit tests a requirement for your codebase so that you always have a solid safety net when you break ground on new technologies.  Build a suite of automated acceptance tests against your User Interface so that you’ll know that something broke long before your customers find it.  Release early and release often so that your software doesn’t stagnate and you can maintain and increase market share.
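As a rough illustration of what abstracting persistence from the service layer can look like, here is a minimal sketch. Python stands in for brevity since the idea is language-agnostic, and all of the class and method names here are invented for illustration:

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """Persistence abstraction: the service layer depends only on this interface."""

    @abstractmethod
    def find_by_id(self, order_id):
        ...

    @abstractmethod
    def save(self, order):
        ...

class InMemoryOrderRepository(OrderRepository):
    """One swappable implementation; an ORM- or SQL-backed one could replace it later."""

    def __init__(self):
        self._store = {}

    def find_by_id(self, order_id):
        return self._store.get(order_id)

    def save(self, order):
        self._store[order["id"]] = order

class OrderService:
    """Service layer written against the abstraction, never a concrete data store."""

    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order):
        self._repository.save(order)
        return self._repository.find_by_id(order["id"])
```

Because the service layer never names a concrete data store, swapping out the data access technology a few years down the road becomes a constructor-argument change rather than a rewrite.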

Another big red flag is when you have individuals who knowingly cut corners because they don’t believe software should have a lifespan of more than 3-5 years.
These days the shelf life of software is increasing dramatically as more core business functions come to depend on it.

Turning NorthDallas.NET up a notch

I was recently accepted as an officer of the North Dallas .NET User Group.  I’ve been a member of the group for a few years now and have formed many great professional relationships through it, so I’m very pleased to join the ranks of the officers!

North Dallas .NET is going through a few organizational changes that will likely be announced in one of our upcoming meetings.  My role will be to assist in lining up speakers for the user group.  One of my goals is to offer something different in the speakers that we line up for North Dallas as compared to our counterparts in the DFW area.  Rather than focus solely on demonstrations of the latest technology offerings from Microsoft, I want to help Dallas-area developers hone their craft and become better at what they do.

Here’s a list of the kinds of speakers that I would like to see more of at North Dallas .NET:

  • Those that are interested in or experienced with the SOLID principles, Test Driven Development, Domain Driven Design, or Lean Software Development.
  • Those that can demonstrate tools and methodologies that go beyond RAD to construct long term solutions that provide business value without compromising on quality.
  • Those that understand how to utilize Design Patterns to build more robust, testable applications.

Those that want to speak on specific .NET-based technologies are always welcome to submit a talk as well.  Use my contact form if you’re interested in coming to speak in North Dallas.

If you aren’t attending a local developer user group in your area, I highly recommend it.  I can’t say enough about the importance of getting involved in a developer community, as it will help your career by increasing your skill set and exposing you to other talented individuals. Attending a user group meeting might lead you to that dream job opportunity, and having your company sponsor a user group meeting can help you hire highly skilled developers.

Infrastructure is not Business Value

The title of this post is based on a quote that I heard from Chris Patterson, when he was doing a presentation on Event Driven Architecture at the Dallas TechFest a few months ago.

To provide some background, Chris Patterson is one of the founders of MassTransit, an open source service bus implementation built on .NET. MassTransit was originally developed by Chris and Dru Sellers when they were working for two different companies in completely separate industries (healthcare and financial services) but finding a lot of common ground in the infrastructure they were building. The core framework for MassTransit was developed to solve the specific business needs of the companies they worked for, and they obtained permission from their companies to open source it so that the two of them could collaborate on it and bring in the wider development community to help make it more robust.

Chris used the phrase “Infrastructure is not Business Value” with his company’s legal team and leadership in order to justify the move to open source MassTransit.

In a recent developer book club discussion at my work, this topic came up and generated a lot of debate. The group discussed whether or not this was the best move for Chris’s company. Is there real value and competitive advantage to be gained in keeping infrastructure closed source and in house?  Would our company benefit from utilizing MassTransit, or would we be shackled by it? Does Not Invented Here syndrome actually provide competitive advantages?

I took the side that Chris’s company made the right move. The company is not in the business of building and selling a generalized Enterprise Service Bus; it was more interested in its specific healthcare industry needs.  By keeping MassTransit lightweight and usable across other industries, the company reaped the benefits of having two really talented developers collaborate to solve a common problem. In addition, by opening MassTransit up to outside open source developers, the framework can improve and Chris’s company is in a better position to reap the benefits of a more robust, evolving framework.

Companies like Headspring Systems in Austin, TX are based almost exclusively on this model.  Most, if not all, of their project artifacts that provide infrastructure solutions have been open sourced without exposing any of the business value that their clients required. By fostering a community that continues to improve these tools, Headspring is in a good position to provide business value to their clients, without getting bogged down in reinventing infrastructure.

Few companies can maintain an esoteric, domain-specific framework in the long term without the technical infrastructure becoming creaky and obsolete over time. In the long run, this can erode any competitive advantages that a closed-source framework brought in the first place. Thus I believe it is in many organizations’ interest to follow the mantra that “Infrastructure is not Business Value.”

Moving off Graffiti CMS and on to WordPress

After a long time of letting my blog go dormant, I’ve decided that it’s time to start blogging again. Giving my blog a face lift and moving it to WordPress seemed like the most logical start. Others, like Keyvan Nayyeri, are also moving on from Graffiti so I felt inspired to finally make the switch.

It’s been obvious to most that Graffiti CMS has a shaky future. It is clear that there is no dedicated product team focused on Graffiti at Telligent and that the company is still unwilling to open source the product despite pleas from its remaining user base. And as I expected, the lack of attention has given hackers enough time to find one security hole so far. While the hole was patched, it existed long enough that many sites, including my own, were hacked and the default.aspx file was replaced. In my case, my site was infected with malware that caused Google and other browsers to block it. Telligent has since provided a workaround for the exploit, but this was enough of a red flag for me to decide once and for all to move on.

Now on to why I picked WordPress. WordPress was one of the first blogging engines I used back in 2002, so I was already pretty familiar with it. I knew it had come a long way since then, and I was very impressed with the progress the product has made. So far it has been very reliable, and it has a good community that has stood behind it for the better part of a decade. Much of Graffiti CMS was inspired by functionality in WordPress, so for me the choice was easy.

I did contemplate taking on Rob Conery’s Jedi challenge of writing my own blog engine in ASP.NET MVC, but I didn’t feel that I would be able to approach the functionality provided by WordPress or be able to maintain it long term. I also considered using another blogging engine written in ASP.NET, but none could approach the features, plugin model, and simplicity of WordPress.

If anyone is interested in the steps to move your blog from Graffiti to WordPress, I used this post from Jef and was able to follow all the steps to migrate my posts over. Since my blog has been inactive for so long, I’m not too concerned with permalinks being inconsistent, but Jef’s post had a solution for those who need it.

Top 5 things I learned during my time at Telligent

I learned a lot about software development while I worked at Telligent, and I thought it was fitting to list my top 5 while they were still fresh in mind.

1. Short, focused iterations are not optional.  An iterative approach is a must in today’s competitive environment. Tight iterations allow you to nail down attainable estimates, effectively utilize your project’s budget, and create higher quality releases.

2. There is no substitute for a great team. Methodologies, Processes, and Technologies can only take you so far.  You need great talent to make these things work.

3. Tools should not create much friction. Everyday tools that a developer uses for things like Source Control, Build Management, and Task Management should not be painful to use.  Otherwise, a lot of time is wasted and the team is left feeling extremely frustrated.

4. Do what is really needed instead of what people think they want. This can apply to clients, leadership, and peers.  With a client, understand the problem they’re trying to solve and provide a solution rather than blindly implementing what they have proposed.  With your leadership, understand how the information they desire will help them make better decisions so that you can provide it accurately.  With your peers, help them analyze their approach instead of just helping them find their way around one technical issue after another.

5. If you see a problem, take steps to fix it rather than complain about it. If you think your project lacks documentation, start it.  If you think your team needs training on a technology, learn it and train them.  If an in-house tool lacks functionality that you need, download the source and add it.  If an internal process is inefficient, gather the stakeholders and work to improve it.

My last day at Telligent

Yesterday, February 13, 2009 was my last day at Telligent. 

As announced on the official company blog, there was a Reduction In Force at Telligent yesterday that saw myself and many of my colleagues let go en masse. It was a tough day for everyone, including those who survived the RIF, as many of those affected had been with Telligent for a long time and watched the company grow from its relatively humble beginnings into the leader that it is today.

While I have my own reservations about the decisions that the company made, I understand that in these times of economic uncertainty, pragmatism and confidence are often pushed aside due to fear and doubt. Thus for me, the RIF was not completely unexpected. I hold no ill will toward Rob Howard or the remaining leadership at Telligent, and I wish them the best of luck and continued success in the industry.

What was unexpected, however, was the emotional response I received from many of my colleagues as well as the Twitter community. Within hours of tweeting about my layoff, I saw hundreds of retweets and gained a number of new followers. I also got a lot of emails from members of the community offering job postings and condolences. The community response was overwhelming, and I want to thank everyone who has helped get the word out for those of us who are now seeking new employment.

It was a very emotional day for a lot of us. I’ll always have fond memories of the random whiteboard drawings, the tech discussions and banter in the hallways, and the regular office pranks that we all enjoyed. Few people get to experience that kind of open culture at their jobs. I doubt that I will ever get the chance to work with a smarter, more fun group of people who were as dedicated to their jobs as my former colleagues.

Just like Jason Alexander stated, I truly feel that the people of Telligent were like my extended family, and I hope to have lasting relationships with them, both professionally and socially, for years to come. And I will always cherish the fact that I got to be part of something truly great.

As for myself, my job search officially begins next week after a weekend to reflect.  If you’re interested in speaking to me about any opportunities, drop me a line at kar dot hariharan at gmail dot com.

Taking Telligent to the next level

I’m breaking my long time blogging silence for a major announcement.  My company, Telligent,  has now received a round of funding from Intel Capital.  This funding will help us to grow the company and it also values our company very highly, which is a great achievement for our leadership.

When I joined Telligent last year, I was really excited to see and understand how a startup worked and how a company goes from being a small player to an industry leader.  I stopped telling my friends that "I work for a startup" about six months ago.  It’s been clear for some time now that we’re much more than that. Now I tell people that I work for a small but growing social networking software company.  Rob and Jason’s vision has always been to turn Telligent into a great company that will lead the charge of Enterprise Social Networking, not just a flavor of the week on TechCrunch.  From the beginning we’ve been a profitable, revenue-generating business with a focus on growth and industry leadership.

Back when I worked for Deloitte, I often thought about how a "Friendster behind the firewall" would be a great way to facilitate the personal networking which was integral to the Big 4 consulting lifestyle.  It’s been a great experience for me to see that vision turn to reality with Telligent’s Evolution product.  I’m definitely excited to help our clients build their intranet solutions around it.

Suffice to say that I’m very excited about what the next few years will bring and I look forward to the day when I see Telligent on the Fortune 500.  Congrats to Rob, Jason, Scott D, Scott W, and Tom.  You guys rock!


Google releases Calendar Sync

Earlier I blogged about how to sync my Outlook and Google calendar using Plaxo.  This has worked very well for me but now Google has released their own tool to do the same thing.


One of the interesting features of this tool is the ability to do one-way synchronization.  You can get more information on how to set it up here.

For now I’m going to continue to use my Plaxo setup since it already meets my needs.  But if you’re interested in a solution for Google and Outlook calendar synchronization that doesn’t require opening a Plaxo account, then Google Calendar Sync will be worth downloading.

Mastering the Hand-off

Most consulting engagements end with a hand-off of the project’s deliverables to the client’s resources during the consultant’s last few days in contact with the client.  This is often referred to as the “knowledge transfer” and can involve walking the client through the finished code artifacts and any documentation around them.

Most consultants dread this part of the process as it often requires mounds of documentation and tons of meetings with seemingly little accomplished.  Even after the process is completed, many consultants will still get a follow up call from the client six months later when the developer who was supposed to take over their project has left for greener pastures.

There are many reasons why consultant-developed software often follows this pattern.  First, most clients struggle to identify a suitable maintenance resource in a timely manner.  Often this important task is put off until the last minute, reducing the quality of the resources that can be found to take over the project.  And if the client’s resource is tasked with learning the codebase in a very short time, they often don’t absorb enough information and are overwhelmed when they are expected to deliver a round of enhancements soon after the consultant leaves.

To increase the chances of a successful hand-off, the client resource should be identified soon after the project enters its testing phase to ensure that they have sufficient time to get up to speed.  It can also help to let the hand-off resource take a crack at some of the lower priority bugs that inevitably creep up so that they get familiar with the code.  Having them get their hands dirty while the consultant is still around to support them can be extremely valuable to the client.

Consultants should also make every effort to develop a maintainable solution that can easily be transitioned.  A complex custom application framework, a cutting edge technology, or an unfamiliar platform can all reduce the client’s chances of successfully maintaining their shiny new application.  While it can be appealing to try a new technology or build a new skill set, such decisions often create unnecessary risks for the client and their resources.

So does this mean that consultants should simplify their architectures to Typed DataSets and drag-and-drop GridViews?  Not necessarily. 

Even with the most non-technical clients, a consultant should take the time to explain the important architectural decisions in ways where the risks and rewards are absolutely clear to the client. If a technical decision is made that reduces development time but requires a specialized resource, then the risks and rewards of that decision should be clearly communicated to the client before it’s too late to turn back.  Projects often fail because technical decisions made up front later become extremely costly to support and are ultimately irreversible without rewriting large parts of the application.

Finally, consultants should beware of clients that expect to just pay an invoice and get working software with very little personal involvement in the project.  No matter how much they may profess that they “just want something that works,” they will always have opinions on how it should work. So a consultant is better off setting expectations for the client’s involvement as early as possible. This will ensure a healthy development cycle for the project and help transition it to a capable maintenance resource that can support any future needs.

Thoughts on Headspring’s Agile/XP boot camp

I recently had a chance to attend Headspring Systems’ Agile/Extreme Programming boot camp for advanced .NET developers, led by Jeffrey Palermo in Austin, TX.  I had wanted to learn the proper techniques for approaching agile development on the .NET platform from an expert, so when I found out about this course, I eagerly signed up.

The three-day course covered best practices around Agile design and process while incorporating multiple deep dives into Jeffrey’s tools of choice.  The exercises followed a format of first explaining how a normal development process becomes more agile and then implementing that process hands-on with different tools and concepts.  For example, Jeffrey explained how something as simple as source control can become more agile through regular branching and merging, and he demonstrated how to do this with TortoiseSVN.

While a lot of software development training is often presented very academically, Headspring’s was very hands-on and really pushed everyone to pick up the processes and tools very quickly, just as you would in the real world.

On day one, upon entering the classroom we were told to check out a Subversion repository hosted on Google Code that would house the code for the application we would be extending over the next few days.  This application was a Work Order management system designed using Domain-Driven Design and built using a Model-View-Presenter implementation, with NHibernate serving as the ORM between our entities and the database.  The application also had a suite of automated Unit Tests and Integration Tests, which were regularly run as part of its NAnt-based automated build script.  If you’re curious to see the system, download the Visual Studio 2008 solution here.

Over the next two days, we extended and refactored this Work Order system to support new features and workflows by first writing NUnit tests and then designing interfaces around the problem domain. By focusing our whiteboarding sessions on separating concerns into smaller pieces that each group could begin implementing, the whole class of fifteen students was able to work as a team to successfully build and test the new features in just a few hours.  Along the way we touched on many concepts, like Inversion of Control using StructureMap and reliable, targeted refactoring using Resharper.
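The test-first rhythm described above (write a small failing test, then design the interface it implies) can be sketched roughly like this. Python stands in here for the C#/NUnit stack used in the course, and the domain names below are invented for illustration rather than taken from the actual class exercises:

```python
class WorkOrder:
    """Minimal domain entity whose interface was driven out by the test below."""

    def __init__(self, description):
        self.description = description
        self.status = "Open"

    def complete(self):
        # Guard against completing the same work order twice.
        if self.status == "Completed":
            raise ValueError("Work order is already completed")
        self.status = "Completed"

# The test is written first; the entity above is then shaped to make it pass.
def test_completing_a_work_order_sets_its_status():
    order = WorkOrder("Replace server fan")
    order.complete()
    assert order.status == "Completed"
```

The point of the exercise is the ordering: the test expresses the desired behavior in domain terms before any implementation exists, so the resulting interface reflects the problem rather than the plumbing.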

Finally, we ended the class with a review of all of the concepts and tools that were used during the exercises and we identified some of the design patterns that were implemented by the final solution.

I really appreciated that Jeffrey tailored the exercises as per the class’s feedback.  Many of the attendees wanted to see more detail on NHibernate so Jeffrey dedicated a lot of time to going over some best practices around NHibernate mappings and he demonstrated ways to optimize the queries produced by it.

There was obviously a lot more to this course than I could possibly cover in a single blog post. If this content sounds interesting and you’re looking for practical guidance to help you hit the ground running with Agile development, I would highly suggest signing up for one of Headspring’s upcoming sessions.

One final note: If you are considering taking this course, I would recommend familiarizing yourself somewhat with the following tools beforehand: TortoiseSVN, NUnit, and Resharper.  Also, I would recommend having at least a basic understanding of Test Driven Development, Model-View-Presenter, and Inversion of Control. The course moves very quickly and you will have an easier time keeping up if you at least have some working knowledge of these tools and concepts prior to attending the course.

Thanks again to both Jeffrey Palermo and Jimmy Bogard for a great training experience and the chance to inundate them with questions, as I am known to do!