No, Really; Domain General Mechanisms Don’t Work (Either)

Let’s entertain a hypothetical in which your life path led you down the road to becoming a plumber. Being a plumber, your livelihood depends on both knowing how to fix certain plumbing-related problems and having the right tools for getting the job done: these tools would include a plunger, a snake, and a set of clothes you don’t mind never wearing again. Now let’s contrast being a plumber with being an electrician. Being an electrician also involves specific knowledge and the right tools, but those sets do not overlap well with the plumber’s (I think, anyway; I don’t know too much about either profession, but you get the idea). A plumber who shows up for a job with a soldering iron and wire-strippers is going to be seriously disadvantaged at getting that job done, just as a plunger and a snake are going to be relatively ineffective at helping you wire up the circuits in a house. The same can be said for your knowledge bases: knowing how to fix a clogged drain will not tell you much about how to wire a circuit, and vice versa.

Given that these two jobs make very different demands, it would be surprising indeed to find a set of tools and knowledge that worked equally well for both. If you wanted to branch out from being a plumber to also being an electrician, you would need additional tools and training.

And/Or a very forgiving homeowner’s insurance policy…

Of course, there is not always, or even often, a one-to-one relationship between the intended function of a tool and the applications to which it can be put. For example, if your job involves driving in a screw and you happen not to have a screwdriver handy, you could improvise and use, say, a knife’s blade to turn the screw instead. That a knife can be used in such a fashion, however, does not mean it would be preferable to do away with screwdrivers altogether and just carry knives. As anyone who has ever attempted such a stunt can attest, knives do not make the job quick or easy; given their design features, they’re inefficient at achieving that goal relative to a more functionally-specific tool. While a knife might work well as a cutting tool and less well as a screwdriver, it would function worse still if used as a hammer. What we see here is that as tools become more efficient at one type of task, they often become less efficient at others, to the extent that those tasks do not overlap in their demands. This is why it’s basically impossible to design a tool that simply “does useful things”; the request is massively underspecified, and the demands of one task often do not correlate highly with the demands of another. You first need to narrow the request by defining what those useful things are, and then figure out ways of effectively achieving those more specific goals.

It should have been apparent well before this point that my interest is not in jobs and tools per se, but rather in how these examples can be used to understand the functional design of the mind. I previously touched briefly on why it would be a mistake to assume that domain-general mechanisms would lead to plasticity in behavior. Today I hope to expand on that point and explain why we should not expect domain-general mechanisms – cognitive tools that are supposed to be jacks-of-all-trades and masters of none – to exist at all. This will largely be accomplished by pointing out some of the ways that Chiappe & MacDonald (2005) err in their analysis of domain-general and domain-specific modules. While there is a lot wrong with their paper, I will focus only on certain key conceptual issues, the first of which involves the idea, once again, that domain-specific mechanisms are incapable of dealing with novelty (in much the same way that a butter knife is clearly incapable of doing anything that doesn’t involve cutting and spreading butter).

Chiappe & MacDonald claim that a modular design in the mind should imply inflexibility: specifically, that organisms with modular minds should be unable to solve novel problems or solve non-novel problems in novel ways. A major problem that Chiappe & MacDonald’s account encounters is a failure to recognize that all problems organisms face are novel, strictly speaking. To clarify that point, consider a predator/prey relationship: while rabbits might be adapted for avoiding being killed by foxes, generally speaking, no rabbit alive today is adapted to avoid being killed by any contemporary fox. These predator-avoidance systems were all designed by selection pressures on past rabbit populations. Each fox that a rabbit encounters in its life is a novel fox, and each situation that fox is encountered in is a novel situation. However, since there are statistical similarities between past foxes and contemporary ones, as well as between the situations in which they’re encountered, these systems can still respond to novel stimuli effectively. This evaporates the novelty concern rather quickly; domain-specific modules can, in fact, only solve novel problems, since novel problems are the only kinds of problems that an organism will encounter. How well they will solve those problems will depend in large part on how much overlap there is between past and current scenarios.

Swing and a miss, novelty problem…
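To put the rabbit’s situation in computational terms (my analogy, not the authors’): any statistical classifier is in the same boat. The sketch below is purely illustrative, with invented “size” and “speed” features; nothing about it comes from the paper.

```python
# Purely illustrative, not anything from Chiappe & MacDonald: a system
# built entirely from past examples handles inputs it has never seen,
# so long as those novel inputs statistically resemble the old ones.

past_foxes   = [(6.0, 9.0), (5.5, 8.5), (6.5, 9.5)]   # (size, speed) of past foxes
past_rabbits = [(2.0, 7.0), (1.8, 6.5), (2.2, 7.5)]   # (size, speed) of past rabbits

def centroid(points):
    """Average each feature across the past examples."""
    return tuple(sum(vals) / len(vals) for vals in zip(*points))

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(novel_animal):
    """Label a never-before-seen input by its resemblance to past data."""
    near_fox = squared_distance(novel_animal, centroid(past_foxes))
    near_rabbit = squared_distance(novel_animal, centroid(past_rabbits))
    return "fox" if near_fox < near_rabbit else "rabbit"

# This exact fox has never been encountered before, yet it is handled fine:
print(classify((6.2, 9.1)))  # -> fox
```

The classifier was “designed” entirely by past inputs, and every input it will ever label is novel; it succeeds exactly to the extent that past and present overlap statistically – which is all a rabbit’s predator-avoidance system needs, too.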

A second large problem in the account involves Chiappe and MacDonald’s failure to distinguish between the specificity of inputs and the specificity of functions. For example, the authors suggest that our abilities for working memory should be classified as domain-general because many different kinds of information can be stored in working memory. This strikes me as a rather silly argument, as it could be used to classify all cognitive mechanisms as domain-general. Let’s return to our knife example: a knife can be used for cutting all sorts of items – bread, fabric, wood, bodies, hair, paper, and so on. From this, we could conclude that a knife is a domain-general tool, since its function can be applied to a wide variety of problems that all involve cutting. On the other hand, as mentioned previously, the list of things a knife can do efficiently is far shorter than the list of things it can’t: knives are awful hammers, fire extinguishers, water purifiers, and information-storage devices. The knife has a relatively specific function which can be effectively applied to many problems that all require the same general solution – cutting (provided, of course, that the material can be cut by the knife in the first place; that I might wish to cut through a steel door does not mean my kitchen knife is up to the task). To tie this back to working memory, our cognitive systems that dabble in working memory might be efficient at holding many different sorts of information in short-term storage, but they’d be worthless at tasks like regulating breathing, perceiving the world, or deciphering meaning. While the system can accept a certain range of different kinds of inputs, its function remains constant and domain-specific.
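For the programmers in the audience, here is a loose analogy of my own (nothing from the paper): a generic container is “input-general” in precisely the sense the authors describe, while remaining entirely function-specific. A minimal sketch:

```python
# My own illustrative analogy, not the authors': a stack accepts any
# type of input you push onto it -- numbers, strings, lists -- but its
# function never varies: last-in, first-out storage. Accepting diverse
# inputs does not make it a sorting algorithm or a database.

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        """Accepts any kind of input."""
        self._items.append(item)

    def pop(self):
        """But the function is always the same: LIFO retrieval."""
        return self._items.pop()

s = Stack()
s.push(42)          # a number
s.push("a fox")     # a string
s.push([1, 2, 3])   # a list
print(s.pop())      # -> [1, 2, 3]: same fixed behavior, whatever the input type
```

By the authors’ criterion, the stack would count as “domain-general” because it stores anything; by the functional criterion, it is about as specialized as a tool gets.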

Finally, there is the largest issue their model encounters. I’ll let Chiappe & MacDonald spell it out themselves:

A basic problem [with domain-general modules] is that there are no problems that the system was designed to solve. The system has no preset goals and no way to determine when goals are achieved, an example of the frame problem discussed by cognitive scientists… This is the problem of relevance – the problem of determining which problems are relevant and what actions are relevant for solving them. (p. 7)

Though they mention this problem at the beginning of their paper, the authors never actually take any steps to address this series of rather large issues. No part of their account deals with how their hypothetical domain-general mechanisms generate solutions to novel problems. As far as I can tell, you could replace the processes by which their domain-general mechanisms identify problems, figure out which information is and isn’t useful in solving said problems, figure out how to use that information to solve them, and figure out when they have been solved, with the phrase “by magic” and not really affect the quality of their account much. Perhaps “replace” is the wrong word, however, as they don’t actually put forth any specifics as to how these tasks are accomplished under their perspective. The closest they seem to come is when they write things along the lines of “learning happens,” “information is combined and manipulated,” or “solutions are generated.” Unfortunately for their model, leaving it at that is not good enough.

A lesson that I thought South Park taught us a long time ago.

In summary, their novelty problem isn’t one, their “domain-general” systems are not general-purpose at the functional level at all, and the ever-present frame problem is ignored rather than addressed. That does not leave much of an account. While, as the authors suggest, being able to adaptively respond to non-recurrent features of our environment would probably be, well, adaptive, so would the ability to allow our lungs to become more “general-purpose” in the event we found ourselves having to breathe underwater. Just because such abilities would be adaptive, however, does not mean they will exist.

As the classic quote goes, there are far more ways of being dead than there are of being alive. Similarly, there are far more ways of not generating adaptive behavior than there are of behaving adaptively. On those simple statistical grounds alone, domain-general information processors that don’t “know” what to do with the information they receive will tend to get things wrong far more often than they get them right. Sure, domain-specific information processors won’t always get the right answer either, but the pressing question is, “compared to what?” If that comparison is made to a general-purpose mechanism, there wouldn’t appear to be much of a contest.

References: Chiappe, D., & MacDonald, K. (2005). The evolution of domain-general mechanisms in intelligence and learning. The Journal of General Psychology, 132(1), 5–40. DOI: 10.3200/GENP.132.1.5-40
