- using a model outside of its domain of applicability — forgetting that it is wrong;
- criticizing a model solely for being wrong, without showing that it substantially fails to serve the purpose to which it's being put.
Humans are model-builders; while we use a lot of the simple behavioral models that other organisms use, we are better than other organisms at managing hypotheticals, especially hypotheticals well outside of our direct experience, in part because we tend to carry around deeper models of the world with which we interact. This comes through in our uniquely human language; perhaps the barest examples are the use of labels, hierarchical categories ("animal" includes "dog" includes "that kind of little dog that looks like a mop with its handle missing"), and abstraction ("three apples plus two apples equals five apples" and "three oranges plus two oranges equals five oranges"; for that purpose, number can be abstracted away from the thing being counted), but it's perhaps on better display in the use of metaphorical language.

What's key about a metaphor is not that the two things being compared are identical, merely that they share certain characteristics that are relevant to some purpose. All metaphors are wrong, but some metaphors are useful, and they're perhaps more useful when it's clear in what ways they are useful and in what ways they are wrong. This is true, also, of categorical systems; if we classify movies, for example, into action movies, comedies, etc., we may sometimes find a movie that seems to sit on the edge of one category or another, or that clearly doesn't fit any of the categories we had before, or that clearly fits into multiple categories we might have thought of as largely disjoint from each other. Typically the category system will be of some use as long as the exceptions aren't too common, and as long as the use to which it's being put isn't too brittle when an exception comes along.
There are two somewhat more concrete examples I want to end with. The less concrete is that an analogy between X and Y will sometimes be met with "You can't compare X to Y," typically in a situation in which X and Y are in fact very similar in the relevant way but different in an obvious but irrelevant way. I'm certainly careful not to use "Nazis" as "Y" because I expect to trigger this fallacy, but even there it comes off to me as a sign that the respondent either isn't paying much attention or is more interested in some kind of point-scoring debating game than in furthering a serious discussion.

The more concrete example I want to give has to do with religion; in particular, the terms "Muslim" and "Christian". There's a certain politeness in taking at face value a person's own label for his or her own religious beliefs, and that certainly seems like as good a way to handle edge cases as any, but it also seems to me that the labels become uselessly circular if the term Muslim means "person who considers him/herself to be a Muslim". Asking whether ISIS is "Islamic" is ultimately a semantic question; asking whether it is more useful to have a term for most Muslims that includes ISIS or one that excludes ISIS is at least somewhat clearer when it's clear what the relevant "use" is. In any case, if we do call ISIS "Islamic", that certainly doesn't mean that their beliefs or practices are exactly those of other Muslims (and, accordingly, the fact that those beliefs and practices aren't exactly the same doesn't by itself mean we shouldn't call them Islamic). I similarly see occasional assertions that, because people have identified themselves as Christian, it is mandatory that they adopt a particular vision of Christianity, or that it is "hypocritical" if they don't. Christianity is not so narrow a category that the use of that label implies a precise set of moral beliefs, but it is occasionally a broadly useful label nonetheless.
 almost? I'm not up on animal cognition research.
 again, I think uniquely; perhaps not quite, though certainly I mean something more by "language" than a definition that would admit simple alarm calls etc.; "language" in my mind requires an ability to express at least some degree of abstraction. In fact, if you're not slightly critical of my claim that language comprises model-building on the grounds that it's at least very nearly tautological, then I'm not being clear about what I mean by those words.
 There's a big issue, too, of our not just using labels but, for the purposes of language, having to use shared labels, i.e. we need to be using labels in approximately the same way, or at least to be able largely to understand the labels each of us is using. For the time being, I'm relegating that to this footnote.
 in principle. In practice, a person who doesn't fall at least close to the usual category is unlikely to claim to belong to it, which is probably at least part of why we so often fall back on it.