The following questions are all simple examples of question types for which people have developed specialized "recommender" solutions: applications that rank answers, or sources of answers, to user questions in some order of recommendation.
Google and Bing, and their predecessors back to BRS and ORBIT, attempt to answer questions of the first type by enabling boolean word search across multiple documents.
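The mechanics behind that kind of search can be sketched in a few lines: an inverted index mapping each word to the set of documents containing it, with an AND query answered by set intersection. The document texts below are invented examples, not anything from a real engine.

```python
# Minimal sketch of boolean word search: build an inverted index,
# then answer AND queries by intersecting posting sets.
from collections import defaultdict

docs = {
    1: "cheap flights to paris",
    2: "paris hotel reviews",
    3: "cheap paris hotel deals",
}

# inverted index: word -> set of document ids containing that word
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def boolean_and(*terms):
    """Return ids of documents containing every query term."""
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(sorted(boolean_and("cheap", "paris")))  # [1, 3]
```

Note that nothing here depends on who is asking: the same query returns the same documents for every user, which is exactly the context-free property discussed next.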
In the most general sense this type of solution requires no contextual information about either the questioner or the question, so early Google home pages loaded with no embedded JavaScript and no server calls for customized information about the user.
The need to meet advertiser expectations changed that: Google added contextual recommender layers, starting with IP localization and now including search history, to deliver more targeted ads. As a result, Google home pages now require significant load-time processing to produce less general search results than previous generations did.
The most obvious specialization here has been geo-location: using a chipset and some software on board the local device to provide the critical input needed to answer the second type of question.
Still, this kind of thing has its limits, and the communications burden during session set-up can be significant, so some companies turned back to using more general search engines while embedding contextual information in the queries sent to those engines. Apple's Siri application for iDevices, for example, is structured as an expert system interfacing user queries to the backend search engine through customer-specific information stored on the client, not the server. Thus if you use someone else's iPhone to ask Siri for the fastest route home, the people there may be surprised to see you.
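The client-side pattern described above can be sketched as a simple query rewrite: the device holds user-specific context locally and folds it into each query before handing the result to a general backend engine. The names, fields, and rewrite rule here are purely illustrative assumptions, not Apple's actual API.

```python
# Hypothetical sketch: context stays on the client; the backend
# receives a fully specified query and needs no user profile.
local_context = {            # stored on the device, never on the server
    "home": "123 Elm St, Springfield",
}

def expand_query(raw_query: str, context: dict) -> str:
    """Rewrite personal references like 'home' using local context."""
    expanded = raw_query
    for key, value in context.items():
        expanded = expanded.replace(key, value)
    return expanded

query = expand_query("fastest route to home", local_context)
print(query)  # fastest route to 123 Elm St, Springfield
```

The design point is where the personalization happens: the generic engine stays context-free, and the privacy-sensitive mapping lives only on the device, which is also why the borrowed-iPhone joke works.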
The role of social context, and the usefulness of word clouds derived by textual analysis, is obvious in some cases. The movie rental question, for example, is not generically different from the problem you'd face if asked to rank Facebook users by the sales each is likely to generate if sent a free bottle of a new shampoo, and the contextual word cloud idea is pretty obviously where you'd start on that one.
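One plausible way to start on that ranking problem, sketched here under invented data: represent the product and each user as term-frequency vectors over their word clouds and rank users by cosine similarity. The sample texts and user names are made up for illustration.

```python
# Sketch of word-cloud ranking: term-frequency vectors compared
# by cosine similarity, users sorted by closeness to the product.
import math
from collections import Counter

def tf(text: str) -> Counter:
    """Term-frequency vector for a chunk of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

product_cloud = tf("gentle herbal shampoo for dry hair")

users = {
    "alice": tf("my dry hair needs a new shampoo"),
    "bob": tf("watched three hockey games this weekend"),
}

ranked = sorted(users, key=lambda u: cosine(users[u], product_cloud),
                reverse=True)
print(ranked)  # ['alice', 'bob']
```

Notice that this toy model also exhibits the cold start problem discussed below: a user (or product) whose vocabulary overlaps nothing in the existing cloud scores zero and never surfaces.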
Similarly, business intelligence, such as it is, is often concerned with applying contextual data to sales prediction; hence the perception that deciding which items to place in the customer's line of sight near the cash register is best done by combining sales histories with the word cloud surrounding products that have sold well in that position.
Unfortunately, all of these recommender solutions suffer from a practical problem known as cold start: whether it's a physical product, a personal blog, or an entertainment, something new never makes it to the top of any recommender list unless its description copies, or only vaguely extends, an existing product or products.
You can, for example, analyze tweet word clouds to determine what sells shampoo and then advertise your new product accordingly, but this is just another form of "search engine optimization" and thus ultimately a fraud on the consumer. The bottom line on cold start is that the more your product, service, or idea differs from the mass, the less likely it is to be proffered by any of the existing recommender solutions, just as anyone writing a master's thesis is best advised to spend 96 pages praising others, two pages apologizing for offering a new idea, one paragraph describing the idea, and two pages disparaging it.
Unfortunately, this recommender engine behavior meshes perfectly with an aspect of human behavior described by Festinger: we tend to actively seek out information confirming or supporting what we believe, and even more actively seek to avoid or repudiate contrary information.
Thus one result of the mutual support human nature and search engines provide each other is the internet echo chamber, in which it is not currently possible to determine whether the "042-68-4425" story is true or not, largely because both the believers and the deniers just quote fellow partisans.
What we need to balance this is technology that doesn't actively support our willingness to delude ourselves: a way of asking questions that produces results objectively free of both perceiver and transmitter bias, and thus something that expands rather than reinforces our mental horizons.
Wolfram Alpha tries to do this by focusing on the factual context of the question, and for that reason it both illustrates the cold start problem and demonstrates a possible solution to it for quite a large set of questions.
But this won't work for all questions: there are many for which no practical approach free of external context is known. Consider what you'd have to do, for example, if given a million hours of recorded VoIP calls and asked to recommend the three minutes best worth an anti-terrorist team's time.
All the cues you need to do this are in the data, but that's theory: in practice there's no known way to do it without spending a lot of time on contextual information about the speakers. That's the limitation in all of today's recommender technologies: absent a general theory of information content, ordering, and transfer, we've worked out a lot of practical solutions to specific subsets of the problem, but they all depend on context, and context, as demonstrated by everything from Google to the parable about bulletproofing academic work, misleads as often as it serves.