It seems to me that a lot of the responses to last week's blogs on using prototypes in development reflect an assumed set of distinctions between coding and application development, distinctions that themselves ultimately rest on organizational assumptions.
Note that I'm talking about the traditional business application (many users, CRUD transactions on one database) and not about embedded or OS and tools development. Traditions there are very different - and so are the organizational structures that facilitate success.
It's almost impossible to think about application development without implicitly assuming a process. For example, even something as simple-minded as "figure out what the code is supposed to do, then make it do that" implies a necessary sequence: requirements first, code second. Worse, even the weakest imposition of process carries organizational design issues with it: that schema, for example, almost forces the organizational separation of coding from requirements analysis.
The more you break the process down, the worse this gets and the more chicken-and-egg confusion enters the picture. Ask whether testing is part of coding or a separate discipline, and the obvious answer is that it depends on organizational structure - except that the organizational structure is itself an answer to that question.
A big part of the reason for this is that most organizational structures now used in IT were inherited from data processing - a much older discipline with agendas and accumulated experience that generally don't apply to IT at all. In other words, I maintain that we separate requirements analysis from coding and coding from testing mainly because experience gained in the 1920s and 30s showed data processing practitioners that doing this made the most sense.
There were four main reasons that was true for them:
Notice, however, that none of these evolutionary drivers holds true today.
In that era, coding was naturally separated from the business; the people specifying requirements had to work with Finance management but not with the users themselves; and testers needed no contact with those specifying the requirements, while talking to coders might corrupt them. Separating these activities organizationally therefore put in some controls against fraud at virtually no cost to productivity and no increase in project risk.
What evolved in response to those realities is the process now categorized as the "waterfall model" for applications development. In it, each step of the development process is placed in its own organizational block, subject only to a steering committee and its delegate, the project manager. Thus, from concept approval to run-time release, each step is separately completed, separately reviewed, and separately signed off in a process perfectly attuned to the problem of creating, managing, and auditing batch jobs for card tabulators like IBM's 1931 Type 600 electro-mechanical multiplying punch and 1934 Type 405 Alphabetical Accounting Machine.
Unfortunately it's also fundamentally inappropriate for today's IT development needs. Consider, for example, this analogy about what happens when change over time and increased complexity enter the picture.
Imagine yourself whispering a sentence to the first of a dozen people, each of whom will count the letters in the sentence before passing it on, and the last of whom will bring the sentence back to you.
Whisper something simple, like "cards", and the odds are good that's what you'll hear when it comes back. Change that to something a little more complex, like "brown dogs jump all cows but orange dogs only jump green cows and red ones only blue ones", and you'll get complete gibberish at the end. And if you think that's bad, add an external change: require that red dogs be treated as brown unless they have already jumped a blue cow, and then think about how you would insert that information into the process.
Since you can't let change bypass the earlier steps - whispering your color-and-logic change to whoever in the line currently holds the sentence - without bringing down the whole structure, the solution is obvious: break the entire sentence into phrases, adding delimiters and a sequence number to each to allow reassembly at the end. Each phrase is then less likely to be corrupted as it passes down the line, and new information can be passed into the chain at any time.
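To make the mechanics of the analogy concrete, here's a minimal sketch in Python of the delimiter-and-sequence-number idea. It is purely illustrative: the function names and phrase size are invented for this example, not taken from any real system or process.

```python
# A minimal sketch of the phrase-passing idea: break a message into
# sequence-numbered phrases so it can be reassembled at the end and
# amended mid-stream. Illustrative only.

def split_into_phrases(sentence, size=4):
    """Break a sentence into small, sequence-numbered phrases."""
    words = sentence.split()
    return [(seq, " ".join(words[i:i + size]))
            for seq, i in enumerate(range(0, len(words), size))]

def reassemble(phrases):
    """Sort by sequence number and join the phrases back together."""
    return " ".join(text for _, text in sorted(phrases))

sentence = ("brown dogs jump all cows but orange dogs only jump "
            "green cows and red ones only blue ones")

phrases = split_into_phrases(sentence)

# Each phrase travels down the chain independently; a change discovered
# later can be handed to whoever holds the message at that moment simply
# by adding another numbered phrase.
phrases.append((len(phrases),
                "and red dogs count as brown unless they have jumped a blue cow"))

print(reassemble(phrases))
```

The point of the sketch is only that small, independently numbered pieces are easier to protect from corruption and easier to amend than one long message passed whole down the line.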
That's the logic behind OOPS, an unfortunately accurate name for an adaptation of the standard organizational structure in data processing to the realities of the greater programming complexity that came about when science-based computing crashed into data processing in the sixties, seventies, and eighties.
But what if you apply the calculus, chopping the project into the smallest possible pieces? What you get is forced organizational change, programming patterns, and tomorrow's subject: 4GLs.