I’ve been involved in a variety of disciplines associated with software development for the better part of 30 years, so this installment is dedicated to some “stuff” that may seem obvious to some readers, but it’s worth restating because I have seen so many examples in our industry of software developed poorly. In fact, poor software design is so pervasive that our community latches onto a new paradigm every few years, hoping it will be the silver bullet that kills the monsters and daemons plaguing us. The latest fad to grip software development is Agile, which brings us a development model with a set of principles and a seminal document, the Agile Manifesto, whose values can sound revolutionary: “Individuals and interactions over processes and tools,” for example, or “Responding to change over following a plan.”
This is quite understandable in light of years of pain and frustration. One of the key elements of Agile is that the people involved matter. It stresses satisfying the customer and accepts change as a natural part of software development, recommending continuous collaboration with the business user rather than simply accepting or rejecting a specification. It holds that face-to-face communication is superior to status reports and piles of documentation. These values represent an important shift in perspective, but they should not be used as an excuse for ambiguous requirements or an unwillingness to generate user documentation. Valuing “individuals and interactions over processes and tools” should not be used as a license for cowboy coding. Processes and tools can be extremely useful in an Agile environment, and I read the statement to mean that the processes and tools should serve the developers, not the other way around. It doesn’t mean we should all stop tracking bugs or stop using source control. It does mean that we need to select tools and processes that fit the development model we’ve agreed to follow.
Let me be clear: I don’t claim to be an expert on methodology, and I’m not one of the giants who founded the engineering discipline that has claimed so much of my time and attention. I have, however, learned some lessons about core elements crucial to a successful project, regardless of the development model. Based on those lessons, I have a few suggestions for anyone who feels compelled to subjugate a computer into their service where an existing program will not meet their needs…
1) Manage the interfaces to the development organization.
2) Clearly define requirements to save time, money, and sanity.
3) Know your SDLC.
4) Use source code control integrated with issue/defect management.
5) Expect developers to produce tangible, demonstrable deliverables.
When work flows into the development organization from a variety of sources, it quickly becomes impossible to do anything predictably. To gain visibility into the quality and delivery of the organization’s output, you must first accept that there will always be planned and unplanned work. Unplanned work may be handed off from a customer support organization through an escalation when a software defect has been identified, or it may be the result of testing by QA or by development itself. Planned work can take the form of a project proposed by a business customer, or of a defect whose fix requires a level of resource commitment that makes planning necessary. There needs to be a single body responsible for controlling the flow of planned and unplanned work – accepting projects for the planned work, managing the unplanned work queue, and mapping this work against release cycles and resources. Because resources are finite, adding work into the queue means that something else must come out for a given release cycle. The important thing here is transparency. It serves the interest of no one to pretend that work can be added without taking something else out.
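To make that trade-off concrete, here’s a minimal sketch in Python of an intake queue with finite capacity per release cycle. The names (`ReleaseCycle`, `add_work`, the point estimates) are all invented for illustration – nothing here comes from a real planning tool. The point is simply that accepting new work forces an explicit, visible decision about what gets displaced.

```python
# Hypothetical sketch: a single intake queue with finite capacity per
# release cycle. Adding work beyond capacity forces a transparent
# trade-off: the lowest-priority item is displaced, never silently kept.

class WorkItem:
    def __init__(self, name, points, priority):
        self.name = name          # short description of the work
        self.points = points      # estimated effort
        self.priority = priority  # higher number = more important

class ReleaseCycle:
    def __init__(self, capacity):
        self.capacity = capacity  # total points the team can absorb
        self.items = []

    def add_work(self, item):
        """Accept an item; return whatever had to be displaced to fit it."""
        self.items.append(item)
        displaced = []
        # Drop lowest-priority items until the cycle fits its capacity.
        while sum(i.points for i in self.items) > self.capacity:
            victim = min(self.items, key=lambda i: i.priority)
            self.items.remove(victim)
            if victim is item:   # the new item itself didn't make the cut
                return [item]
            displaced.append(victim)
        return displaced

cycle = ReleaseCycle(capacity=10)
cycle.add_work(WorkItem("reporting feature", 6, priority=2))
cycle.add_work(WorkItem("nice-to-have UI polish", 3, priority=1))
bumped = cycle.add_work(WorkItem("escalated defect", 4, priority=5))
print([w.name for w in bumped])  # the polish was displaced, visibly
```

Nothing about the displacement policy matters as much as the fact that it is explicit: anyone looking at `bumped` can see exactly what was traded away.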
OK, requirements can change. Heraclitus of Ephesus said the only constant is change – and I think the lesson is to accept it as a part of life. Does that mean requirements don’t have value? “A world of no.” In fact, clear, concise requirements are invaluable. A requirement that can be expressed as use cases and translated into test cases gives the developer clear direction. There’s a practice that originated with XP in which the developer writes the test cases – even codes up the test classes that initially fail – before implementing the methods that solve the problem the tests express. If the tests are automated and the results exposed to the business user, the project has a good chance of being successful. Test-driven development has taken on a life of its own, and that’s attributable to the value perceived by all the parties involved – the business users, management, and the developers themselves. Test-driven development isn’t possible without succinct requirements. Requirements can change, but the new requirements should also translate into something the developer can test. The dialog between the business user and the developer in Agile directly supports this concept, and the idea of QA participating in development as a team member adds synergy to this iterative process by providing a perspective that spans unit testing, integration testing, performance testing, load/stress testing, and user acceptance testing.
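As a sketch of the test-first idea, consider a hypothetical requirement I’ve invented purely for illustration: “an order of $100 or more gets a 10% discount.” In TDD the tests are written first, run red, and only then is the function implemented to make them pass:

```python
# Test-first sketch for a made-up requirement:
# "an order of $100 or more gets a 10% discount."
# In practice the two tests below exist (and fail) before order_total does.

def order_total(subtotal):
    """Written second, to satisfy the already-failing tests."""
    if subtotal >= 100:
        return round(subtotal * 0.90, 2)
    return subtotal

# Each test is a direct translation of the requirement into something
# executable -- which is exactly what gives the developer clear direction.
def test_discount_applies_at_threshold():
    assert order_total(100) == 90.0

def test_no_discount_below_threshold():
    assert order_total(99.99) == 99.99

test_discount_applies_at_threshold()
test_no_discount_below_threshold()
print("all tests green")
```

When the requirement changes – say the threshold moves to $150 – the change arrives as a new failing test, and the cycle repeats.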
The software development life cycle spans every aspect of your development from inception to end-of-life. Not everyone’s SDLC is the same; it can vary widely from one organization to the next depending on methodology, the nature of the business, and aversion to risk. Within an organization, the SDLC can vary between planned projects and emergency bug fixes, and unless you’re a collective of mythical beings who foresee the future with unerring clarity and never write a line of bad code, there’s a good chance that the associated processes will be iterative. It is quite conceivable that separate processes will be necessary for a “full release cycle”, a “partial release cycle”, an “emergency release cycle”, and a “diagnostic release cycle”; however, the closer they approximate a single workflow, the more likely you are to get useful metrics out of your tools. There’s a workflow associated with each iteration through the SDLC, even if it only exists in the minds of the managers and developers. If you intend to use a workflow tool to manage the process – and I would really recommend one – the workflow needs to define the participants, each state, and all of the associated transitions. Bug trackers are OK at what they do, but issue management with a tool built on a workflow engine is far better at controlling your processes and giving you visibility into them.
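To show what “participants, states, and transitions” might look like in practice, here’s a minimal sketch in Python of the kind of enforcement a workflow engine provides. The state names and roles below are invented for illustration, not prescribed by any tool:

```python
# Minimal workflow sketch: states and role-gated transitions for an
# issue moving through a release cycle. Names are illustrative only.

VALID_TRANSITIONS = {
    ("open", "in_progress"): "developer",
    ("in_progress", "in_review"): "developer",
    ("in_review", "in_qa"): "reviewer",
    ("in_qa", "closed"): "qa_analyst",
    ("in_qa", "in_progress"): "qa_analyst",  # QA bounces a bad fix back
}

class Issue:
    def __init__(self, key):
        self.key = key
        self.state = "open"

    def transition(self, new_state, role):
        """Allow only defined transitions, performed by the right role."""
        required = VALID_TRANSITIONS.get((self.state, new_state))
        if required is None:
            raise ValueError(f"{self.state} -> {new_state} is not defined")
        if role != required:
            raise PermissionError(f"{new_state} requires role {required}")
        self.state = new_state

bug = Issue("PROJ-123")
bug.transition("in_progress", role="developer")
bug.transition("in_review", role="developer")
print(bug.state)  # in_review
```

Because every state change passes through one choke point, the tool can log who moved what and when – which is where the useful metrics come from.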
Segue to source code control. Many SCC systems can integrate with the developer’s IDE and with an issue management system. There is a good chance that a time will come when you’ll need to know whether a change for a particular issue is in a given code base. You can fulfill this need by instituting a manual process of entering comments for every revision of every source file checked into your SCC system, or you can automate the task to some degree with an integration. The “richer” the integration, the more likely it is that you’ll get the data you want out of the tools and processes, and the more control you can impart to the process without incurring additional overhead. In fact, you can “disallow” check-ins that aren’t associated with an active project in a given state, owned by the individual attempting the check-in. Pre-commit hooks can be scripted to validate SCC metadata against fields in the associated issue, ensuring that code is checked into the expected project on the expected trunk or branch.
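Here’s a sketch of that kind of validation in Python. The issue fields and the in-memory `ISSUES` store are stand-ins I’ve invented; a real hook would query the issue tracker’s API instead:

```python
# Sketch of a pre-commit check: before accepting a check-in, validate
# the SCC metadata against the associated issue. The ISSUES dict is a
# stand-in for a real issue tracker.

import re

# key -> issue fields (state, owner, branch the fix is expected on)
ISSUES = {
    "PROJ-123": {"state": "in_progress", "owner": "alice", "branch": "release-2.1"},
    "PROJ-200": {"state": "closed", "owner": "bob", "branch": "trunk"},
}

def validate_checkin(commit_message, committer, branch):
    """Return None if the check-in is allowed, else a rejection reason."""
    match = re.search(r"\b([A-Z]+-\d+)\b", commit_message)
    if not match:
        return "commit message does not reference an issue"
    issue = ISSUES.get(match.group(1))
    if issue is None:
        return "referenced issue does not exist"
    if issue["state"] != "in_progress":
        return "issue is not in an active state"
    if issue["owner"] != committer:
        return "issue is not owned by the committer"
    if issue["branch"] != branch:
        return "check-in targets the wrong trunk/branch"
    return None

print(validate_checkin("PROJ-123: fix rounding", "alice", "release-2.1"))  # None
print(validate_checkin("tidy whitespace", "alice", "trunk"))
```

The payoff is the question the article raises: “is the fix for issue X in code base Y?” becomes a query against recorded metadata instead of an archaeology exercise.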
Lastly, successful developers test their code. In bygone days, a development team could generate a package for a release and “throw it over the wall” to QA for testing. The barriers between development and QA are falling like the Berlin Wall, and today’s tools support continuous integration, where a build fires off and unit tests execute whenever the SCC system recognizes a check-in. Integration tests are documented, automated, and driven by tools used by both the development team and QA. Regression test suites are continuously updated to include validations from ongoing fixes, ensuring that problems aren’t re-introduced. Ideally, all of this activity is transparent and the results are published in real time. In an environment like this, the motivation to generate good code is “built-in”. Processes and tools haven’t gone away, but we’ve succeeded in bending them to our will in ways that are extraordinary. These are interesting times, whether you’re a developer, QA analyst, configuration manager, or business user…
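As a toy model of that loop – `run_build` and the registered tests below are invented stand-ins for a real CI server and suite – each fix contributes a permanent regression check, and every build publishes a report anyone can inspect:

```python
# Toy continuous-integration sketch: a regression suite that grows as
# fixes land, run in full on every check-in, with transparent results.

regression_suite = []  # grows over time so old bugs stay fixed

def regression_test(fn):
    """Register a validation that every future build must pass."""
    regression_suite.append(fn)
    return fn

@regression_test
def test_discount_rounding():
    assert round(100 * 0.9, 2) == 90.0

@regression_test
def test_empty_order():
    assert sum([]) == 0

def _passes(test):
    try:
        test()
        return True
    except AssertionError:
        return False

def run_build(commit_id):
    """Run every registered test and publish a pass/fail report."""
    results = {t.__name__: _passes(t) for t in regression_suite}
    status = "PASS" if all(results.values()) else "FAIL"
    return {"commit": commit_id, "status": status, "results": results}

report = run_build("abc123")
print(report["status"])  # PASS
```

The mechanism is trivial; the cultural effect is not. When a red build is visible to everyone the moment it happens, “motivation to generate good code” really is built in.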