I've developed what I think of as a forearm rule for forming snap judgements about the quality, and therefore likelihood of success, of software development projects. The rule is simple: exceptions are inversely proportional to quality.
The rule applies most directly to project scope, but has a coding and design corollary. The general version says that the more exceptions or special cases the application has to handle, the more likely it is that the project's scope has been too tightly defined. An over-constrained scope compromises the design and predicts, first, that scope creep will drive the development process and, second, that any success the project eventually has will institutionalise inefficiencies in the host organization.
You typically see this in situations where automation is applied to one part of a business process that itself evolved in response to what could be done with one or more predecessor technologies. The work processes in place will reflect those predecessor technologies, and a development project targeting anything less than the whole workflow will, if delivered, perpetuate the compromises made to accommodate them.
Notice that, in general, the older the business process the more of it is likely to be based on a divide-and-conquer strategy, with some combination of manpower and delay used first to divide the job into doable bits and then to bring the output from multiple work processes back together to form the end product. In other words, the older the processes, or the more technology generations embedded in current workflows, the more likely those workflows and organisational practices are to be ones we would consider wasteful if we were designing processes to get from input to output using the best currently available industrial or organisational technologies.
This problem is easy to spot in a small business, but becomes a very difficult forest-and-trees issue when you're dealing with large organizations whose structure reflects workflow adaptation to multiple generations of predecessor technologies. In those organizations, the people who manage each unit typically try to use automation to improve unit performance rather than organisational performance, and consequently act collectively to limit organisational work process change by enforcing unit mandates in new automation.
Thus when you, as a systems architect or developer, think about the organization in terms of your understanding of what the technology you work with can do, your vision of the new workflow will contradict that of internal managers whose views are heavily colored by the processes they have in place. Traditionally that conflict is mitigated through the project scope statement or definition - a kind of peace treaty between two visions of future workflows.
Unfortunately such agreements on scope often contradict reality, meaning that as the scope-constrained design is implemented the developers run into situations that can only be resolved via what's called scope change but is really organisational workflow change. Most often this manifests as the discovery that some externally visible data is shared between several organisational units, all of which assume that everyone knows what the words they use to describe the data mean, but which actually associate slightly differing definitions and work processes with that data.
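To make that concrete, here's a minimal sketch in C; the structures, field names, and business meanings are all invented for illustration. Two units share the word "customer" and even the same field names, but attach different definitions to them, so any code bridging the two ends up full of exceptions:

    #include <stdio.h>
    #include <string.h>

    /* Billing's view: "address" is where invoices go and "active"
       means an account with an open balance. */
    struct billing_customer {
        char address[64];
        int  active;
    };

    /* Shipping's view: "address" is the delivery point and "active"
       means at least one undelivered order. */
    struct shipping_customer {
        char address[64];
        int  active;
    };

    /* Bridging code must special-case every place where the shared
       vocabulary hides two different definitions. */
    int records_agree(const struct billing_customer *b,
                      const struct shipping_customer *s)
    {
        if (strcmp(b->address, s->address) != 0)
            return 0;   /* same word, different referent */
        if (b->active != s->active)
            return 0;   /* same flag, different business meaning */
        return 1;
    }

    int main(void)
    {
        struct billing_customer  b = { "12 Main St", 1 };
        struct shipping_customer s = { "Depot 7",    0 };
        printf("records agree: %d\n", records_agree(&b, &s));
        return 0;
    }

Nothing in either struct is wrong on its own terms; the exceptions appear only in the code that has to reconcile them, which is exactly where developers "discover" the scope problem.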
When that happens, meetings are called and compromises are worked out, resulting in what is then called scope creep for the project and generally implemented via exception processing and some minor workflow change. To those involved this usually seems reasonable, but the net effect is generally to perpetuate existing workflows and thus institutionalise inefficiencies that would have been excised had the initial scope been drawn more broadly.
The application design and coding variant is a simple consequence of this: the less general the workflow, the more exceptions you have to account for. Thus if you look at somebody's code or design and it's filled with whatever the language in use offers in place of C's abused case statements serving as faux gotos, the odds are you're looking at a conceptual clarity problem.
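As a hedged illustration of what that looks like in practice (the product codes and discount rates below are invented), compare a switch built from special cases with a generalised, table-driven version of the same rule:

    #include <stdio.h>

    /* Exception-ridden version: every product is a special case and
       each new product means another branch. */
    double discount_v1(int code, double price)
    {
        switch (code) {
        case 101: return price * 0.95;   /* legacy line: 5% off   */
        case 102: return price * 0.90;   /* key account item: 10% */
        case 205: return price;          /* never discounted      */
        default:  return price * 0.98;   /* everything else: 2%   */
        }
    }

    /* Generalised version: the rule is uniform and the variation
       lives in data, so new products don't change the code. */
    struct product { int code; double rate; };

    static const struct product catalogue[] = {
        { 101, 0.05 }, { 102, 0.10 }, { 205, 0.00 },
    };

    double discount_v2(int code, double price)
    {
        double rate = 0.02;   /* default discount */
        size_t i;
        for (i = 0; i < sizeof catalogue / sizeof catalogue[0]; i++)
            if (catalogue[i].code == code)
                rate = catalogue[i].rate;
        return price * (1.0 - rate);
    }

    int main(void)
    {
        /* both versions agree; only one scales without code edits */
        printf("%.2f %.2f\n", discount_v1(102, 100.0),
                              discount_v2(102, 100.0));
        return 0;
    }

The point isn't that switch statements are bad; it's that when every new case forces a new branch, the variation belongs in data, and the proliferating branches are telling you the underlying concept hasn't been found yet.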
This applies, incidentally, quite generally and not just to business applications. Check out winning open source applications like Apache or the early Linux kernel and you'll see a decrease in exception handling combined with design generalisation in each major new release. Conversely, when you see a software product going the other way, adding restrictions to general cases in favor of separate handling for special cases, you can assume that it's either going to gradually fade away or get taken over by descendants from a more austere fork.