How many times have you heard some mainframe defender announce that his group does the corporate heavy lifting - running thousands of applications? Ask how many financial applications they run and you often get numbers in the hundreds - not because there are that many, but because the original business applications consisted of the dozens of batch jobs that had to be run, in just the right order, to accomplish the transformation from general journal to trial balance - a process lifted straight from 1920s manual accounting practice.
The technology has changed but the terminology hasn't - and that's why the typical Fortune 1000 data center claims 12,000+ applications while actually hosting fewer than a hundred.
Something different happened on the science side of computing. There, user boffins collected boxes of punch cards, known collectively as codes, while tool and OS developers used programming languages to develop programs and subdivided that domain using functional labels like utility, kernel, or file manager - thus leaving the people interested in commercialization free to import the "applications" label for business packages like a general ledger.
When the PC world came along it adopted some terminology from both camps - depending largely on where the individual players came from. Thus one result of the ongoing merger between these two communities is that you now need quite a lot of context to understand what's meant when someone uses a word, like "application", that previously had quite different meanings in the two communities.
Sometimes that context is obvious: vendor application availability lists, for example, provide a context in which Acrobat Reader 6 is not only an independent application but also quite different from Readers 7 and 8.
But ask yourself what these document fragments tell you:
Hendricks, Snell, & Brickenwurst: application programmer II, June 1999 through April 2006C is better for applications programming because it allows close hardware control and produces efficient, easily portable, run-time code.
Applications failure is the least common source of systems failure.
IT's job is to deliver working applications to users.
The obvious answer, of course, is to add modifiers: so applications become business applications, systems applications, or web applications - except that those terms need context too. For example, what the mainframer calls systems applications are usually utilities in Unix and either utilities or applications, depending mainly on licensing requirements, in the Wintel world.
Consider this statement:
The right way to develop and deploy business applications is to prototype them, move the prototypes into production, and keep right on changing them as the user's needs change.
Now this is rather obviously true if you assume user-controlled computing with centralized data and 4GL development tools, and think of applications as necessarily interactive, business-focused CRUD windows on shared data.
Silently deny any one of those contextual assumptions by assuming that you know what an application is, and the prescription appears nonsensical. Prototyping a control application is about as useful as poking the hardware with a stick - and more dangerous. Similarly, trying to use prototyping to fit a new batch job into an existing series, all of which have to execute in precisely the right order to get the job done, is suicidal. Assume a decentralized database and user-driven prototyping becomes a recipe for chaos; assume centralized IT control and planning and the prescription becomes impossible to carry out.
So what does all this mean? That we have to be very careful about understanding the context before interpreting words whose meanings are subject to IT cultural flux. And, as a personal note: if things predicted here in the Future Tech Wing of the Unix Museum strike you as improbably asinine, I'll ask that you check your assumptions about what you know - you may need to actually look at some of the exhibits in the main halls before forming that judgment.