Philosophy of language

The Logicality of Language: Triviality and Logical Form. Noûs (online first). DOI: 10.1111/nous.12235. (Penultimate draft here)

Recent work in formal semantics suggests that the language system includes not only a structure-building device, as standardly assumed, but also a natural deductive system which can determine when expressions have trivial truth-conditions (e.g., are logically true/false) and mark them as unacceptable. This hypothesis, called the 'logicality of language', accounts for many acceptability patterns, including systematic restrictions on the distribution of quantifiers. To deal with apparent counterexamples consisting of acceptable tautologies and contradictions, the logicality of language is often paired with an additional assumption according to which logical forms are radically underspecified: i.e., the language system can see functional terms but is 'blind' to open-class terms, to the extent that different tokens of the same term are treated as if they were independent. This conception of logical form has profound implications: it suggests an extreme version of the modularity of language, and it can only be paired with non-classical (indeed quite exotic) kinds of deductive systems. The aim of this paper is to show that we can pair the logicality of language with a different and ultimately more traditional account of logical form. This framework accounts for the basic acceptability patterns which motivated the logicality of language, explains why some tautologies and contradictions are acceptable, and makes better predictions in key cases. As a result, we can develop versions of the logicality of language within general frameworks which hold that the language system employs a deductive system that is basically classical and is not radically modular with respect to the meaning of its open-class terms.

Meaning, Modulation, and Context: A Multidimensional Semantics for Truth-conditional Pragmatics. Linguistics and Philosophy (forthcoming). (Penultimate draft here)

The meaning that expressions take on particular occasions often depends on the context in ways which seem to transcend its direct effect on context-sensitive parameters. 'Truth-conditional pragmatics' is the project of trying to model such semantic flexibility within a compositional truth-conditional framework. Most proposals proceed by radically 'freeing up' the compositional operations of language. I argue, however, that the resulting theories are too unconstrained, and predict flexibility in cases where it is not observed. These accounts end up in this position because they rarely, if ever, take advantage of the rich information made available by lexical items. I hold, instead, that lexical items encode both extension and non-extension determining information. Under certain conditions, the non-extension determining information of an expression 'e' can enter into the compositional processes that determine the meaning of more complex expressions which contain 'e'. This paper presents and motivates a set of type-driven compositional operations that can access non-extension determining information and introduce bits of it into the meaning of complex expressions. The resulting multidimensional semantics has the tools to deal with key cases of semantic flexibility in appropriately constrained ways, making it a promising framework in which to pursue the project of truth-conditional pragmatics.

The Structure of Semantic Competence: Compositionality as an Innate Constraint of the Faculty of Language. Mind & Language (2015), Vol. 30(4): 375-413. DOI: 10.1111/mila.12084. (Author's copy here)

This paper defends the view that the Faculty of Language is compositional, namely, that it computes the meaning of complex expressions from the meanings of their immediate constituents. I first argue that compositionality and other competing, non-compositional constraints on the ways in which we compute the meanings of complex expressions should be understood as hypotheses about the innate constraints on the semantic operations of the Faculty of Language. I then argue that, unlike compositionality, most of the currently available non-compositional constraints predict incorrect patterns of early linguistic development. This supports the view that the Faculty of Language is compositional. More generally, this paper proposes a way of reframing the compositionality debate which, by focusing on its implications for language acquisition, opens what has so far been a mainly theoretical debate to a more straightforward empirical resolution.
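For readers unfamiliar with the technical notion, the compositionality constraint at issue can be stated in the standard formal-semantics schema below; this is the textbook formulation, not notation taken from the paper itself:

\[
\llbracket [\,\alpha \; \beta\,] \rrbracket \;=\; F\big(\llbracket \alpha \rrbracket,\, \llbracket \beta \rrbracket\big)
\]

That is, the meaning of a syntactically complex node is a function F only of the meanings of its immediate constituents α and β and their mode of combination; no further information (e.g., about the wider sentential or extra-linguistic context) is consulted.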

Dual Content Semantics, Privative Adjectives, and Dynamic Compositionality. Semantics & Pragmatics (2015), Vol. 8. (Open access)

The aim of this paper is to defend the hypothesis that common nouns have a dual semantic structure that includes extension and non-extension determining components. I argue that the non-extension determining components are part of linguistic meaning because they play a key compositional role in certain constructions, especially in privative noun phrases such as 'fake gun' and 'counterfeit document'. Furthermore, I show that if we modify the compositional interpretation rules in certain simple ways, this dual content account of noun phrase modification can be implemented in a type-driven formal semantic framework. I also discuss the shortcomings of more traditional accounts of privative noun phrases which assume that nouns do not have a dual semantic structure. At the most general level, this paper presents a proposal for how we can begin to integrate a psychologically realistic account of lexical semantics with a linguistically plausible compositional semantic framework.
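As a rough illustration of the kind of dual structure at issue (the notation and the particular dimensions below are my own assumption-laden sketch, not the paper's formalism), a noun can be modeled as a pair of an extension-determining component and a structured non-extension-determining component, with a privative modifier needing access to the latter:

\[
\llbracket \text{gun} \rrbracket \;=\; \Big\langle\, \lambda x.\,\mathrm{gun}(x),\;\; \big\{\textsc{form}: \lambda x.\,\mathrm{gun\text{-}shaped}(x),\ \textsc{telos}: \lambda x.\,\mathrm{made\text{-}for\text{-}shooting}(x),\ \dots \big\} \,\Big\rangle
\]
\[
\llbracket \text{fake gun} \rrbracket_{\text{extension}} \;\approx\; \lambda x.\,\neg\mathrm{gun}(x) \,\wedge\, \textsc{form}_{\text{gun}}(x)
\]

On a sketch like this, 'fake gun' picks out things that are not guns but that satisfy a salient non-extension-determining dimension of 'gun' (here, its characteristic form), which is why the modifier requires compositional access to that second component.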


Philosophy of mind and cognitive science

Prototypes as Compositional Components of Concepts. Synthese (2016), Vol. 193(9): 2899-2927. (Penultimate draft here)

The aim of this paper is to reconcile two claims that have long been thought to be incompatible: (a) that we compositionally determine the meaning of complex expressions from the meaning of their parts, and (b) that prototypes are components of the meaning of lexical terms such as 'fish', 'red', and 'gun'. Hypotheses (a) and (b) are independently plausible, but most researchers think that reconciling them is a difficult, if not hopeless, task. In particular, most linguists and philosophers agree that (a) is not negotiable; so they tend to reject (b). Recently, there have been some attempts to reconcile these claims (Prinz 2002, 2012; Hampton et al. 2008; Jonsson et al. 2012; Schurz 2012), but they all adopt an implausibly weak notion of compositionality. Furthermore, parties to this debate tend to fall into a problematic way of individuating prototypes that is too externalistic. In contrast, I argue that we can reconcile (a) and (b) if we adopt, instead, an internalist and pluralist conception of prototypes and a context-sensitive but strong notion of compositionality. I argue that each of these proposals is independently plausible, and that, taken together, they provide the basis for a satisfactory account of prototype compositionality.

Dual character concepts in social cognition: Commitments and the normative dimension of conceptual representations (with Reuter, K.). Cognitive Science (2016), Vol. 41(S3): 477-501. DOI: 10.1111/cogs.12456. (Penultimate draft here)

The concepts expressed by social role terms such as artist and scientist are unique in that they seem to allow two independent criteria for categorisation, one of which is inherently normative (Knobe et al., 2013). Focusing on social role concepts, this paper presents an account of the content and structure of the normative dimension of these "dual character concepts", and tests this account in a series of experiments. Experiment 1 supports the view that the normative dimension of social role concepts represents the commitment to fulfill certain idealised basic functions. We show that background information can affect which basic function is associated with each social role. However, Experiment 2 supports the view that the normative dimension always represents the commitment to the relevant basic function as an end in itself. We conclude by arguing that social role concepts represent commitments because that information is crucial for predicting the future social roles and role-dependent behaviour of others.

"Jack is a true Scientist": On the Content of Dual Character Concepts  (with Reuter, K.)                             Proceedings of the 37th Annual Conference of the Cognitive Science Society, (2015), David C. Noelle & Rick Dale (Eds.). Pasadena, Ca: Cognitive Science Society. (Penultimate draft here)

In a series of experiments, Knobe, Prasada, and Newman (2013) show that some terms, paradigmatically social role terms, allow two independent criteria for categorisation, one of which is inherently normative. This paper presents and tests a novel account of the content of these ‘dual character concepts’. We argue that the normative dimension of dual character concepts represents commitments to fulfill certain idealized functions. We then present evidence that the normative dimension is a central dimension in the conceptual structure of dual character concepts. Finally, we show that our account is both descriptively and explanatorily adequate.

Conceptual centrality and implicit bias (with Spaulding, S.). Mind & Language (online first). DOI: 10.1111/mila.12166. (Penultimate draft here)

How are biases encoded in our representations of social categories? Philosophical and empirical discussions of implicit bias tend to focus on salient or statistical associations between target features and representations of social categories. These are the sorts of associations probed by the Implicit Association Test and priming tasks. In this paper, we argue that these discussions systematically overlook an alternative way in which biases are encoded, namely, in the dependency networks that are part of our representations of social categories. Dependency networks encode information about how the features in a conceptual representation depend on each other, which determines their degree of centrality in that representation. Importantly, centrally encoded biases systematically dissociate from those encoded in salient-statistical associations. Furthermore, the degree of centrality of a feature determines its cross-contextual stability. For these reasons, the view presented in this paper has important empirical and philosophical implications.

Stereotypes, conceptual centrality and gender bias: An empirical investigation (with Madva, A. and Reuter, K.). Ratio (2017), Vol. 30(4): 384-410. DOI: 10.1111/rati.12170. (Penultimate draft here)

Discussions in social psychology overlook an important way in which biases can be encoded in conceptual representations. Most accounts of implicit bias focus on 'mere associations' between features and representations of social groups. While some have argued that some implicit biases must have a richer conceptual structure, they have said little about what this richer structure might be. To address this lacuna, we build on research in philosophy and cognitive science demonstrating that concepts represent dependency relations between features. These relations, in turn, determine the centrality of a feature f for a concept C: roughly, the more features of C depend on f, the more central f is for C. In this paper, we argue that the dependency networks that link features can encode significant biases. To support this claim, we present a series of studies that show how a particular brilliance-gender bias is encoded in the dependency networks which are part of the concepts of female and male academics. We also argue that biases which are encoded in dependency networks have unique implications for social cognition.
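The gloss "the more features of C depend on f, the more central f is for C" can be made precise along the lines of familiar dependency-based models of feature centrality (e.g., the iterative measure of Sloman, Love, and Ahn 1998). The formula below is that standard measure, offered as an illustrative assumption rather than as the specific measure used in the paper:

\[
c_{i,t+1} \;=\; \sum_{j} d_{ij}\, c_{j,t}
\]

Here \(c_{i,t}\) is the centrality of feature \(i\) at iteration \(t\), and \(d_{ij}\) is the strength with which feature \(j\) depends on feature \(i\). Iterating the update lets a feature inherit centrality from the features that depend on it, so features on which many (or highly central) features depend come out as most central.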


Philosophy of cognitive neuroscience

There and Up Again: On the Uses and Misuses of Neuroimaging for Psychology (with Nathan, M. J.). Cognitive Neuropsychology (2013), Vol. 30(4): 233-252. DOI: 10.1080/02643294.2013.846254. (Penultimate draft here)

The aim of this article is to discuss the conditions under which functional neuroimaging can contribute to the study of higher cognition. We begin by presenting two case studies, on moral and economic decision-making, which help us identify and examine one of the main ways in which neuroimaging can advance the study of higher cognition. We agree with critics that fMRI studies seldom "refine" or "confirm" particular psychological hypotheses, or even provide details of the neural implementation of cognitive functions. However, we suggest that neuroimaging can support psychology in a different way, namely, by selecting among competing hypotheses about the cognitive mechanisms underlying some mental function. One of the main ways in which neuroimaging can be used for hypothesis selection is via reverse inferences, which we examine in detail. Despite frequent claims to the contrary, we argue that successful reverse inferences do not assume any strong or objectionable form of reductionism or functional locationism. Moreover, our discussion illustrates that reverse inferences can be successful at early stages of psychological theorizing, when models of the cognitive mechanisms are only partially developed.

Mapping the mind: Bridge-laws at the Psycho-Neural Interface (with Nathan, M. J.). Synthese (2015), Vol. 193(2): 637-657. DOI: 10.1007/s11229-015-0769-2. (Penultimate draft here)

Recent advances in the brain sciences have enabled researchers to determine, with increasing accuracy, patterns and locations of neural activation associated with various psychological functions. These techniques have revived a longstanding debate regarding the relation between the mind and the brain: while many authors now claim that neuroscientific data can be used to advance our theories of higher cognition, others defend the so-called ‘autonomy’ of psychology. Settling this significant question requires understanding the nature of the bridge laws used at the psycho-neural interface. While these laws have been the topic of extensive discussion, such debates have mostly focused on a particular type of link: reductive laws. Reductive laws are problematic: they face notorious philosophical objections, and they are too scarce to substantiate current research at the interface of psychology and neuroscience. The aim of this article is to provide a systematic analysis of a different kind of bridge law: associative laws. These play a central, albeit often overlooked, role in scientific practice.

Two Kinds of Reverse Inference in Cognitive Neuroscience (with Nathan, M. J.). In The Human Sciences after the Decade of the Brain (2017), Leefman & Hildt (eds.), Elsevier. (Penultimate draft here)

This essay examines the prospects and limits of ‘reverse inferring’ cognitive processes from neural data, a technique commonly used in cognitive neuroscience for discriminating between competing psychological hypotheses. Specifically, we distinguish between two main types of reverse inference. The first kind of inference moves from the locations of neural activation to the underlying cognitive processes. We illustrate this strategy by presenting a well-known example involving mirror neurons and theories of low-level mind-reading, and discuss some general methodological problems. Next, we present the second type of reverse inference by discussing an example from recognition memory research. These inferences, based on pattern-decoding techniques, do not presuppose strong assumptions about the functions of particular neural locations. Consequently, while they have been largely ignored in methodological critiques, they overcome important objections plaguing traditional methods.

The Future of Cognitive Neuroscience: Reverse Inference in Focus (with Nathan, M. J.). Philosophy Compass (2017), Vol. 12(7). DOI: 10.1111/phc3.12427

The aim of this article is to present and discuss "reverse inference", the widespread practice of inferring, in certain experimental tasks, the engagement of cognitive processes from locations or patterns of neural activation. After introducing the basic structure of traditional "location-based" reverse inference, we address a well-known methodological objection. Next, we present a more recent development: machine-learning-based, pattern-decoding reverse inference. We argue that this technique overcomes some of the main problems with traditional forms of reverse inference. We conclude by drawing some general philosophical implications.

The Future of Reverse Inference: Lessons from Mirror Neurons. (Draft here)

Cognitive neuroscientists regularly ‘reverse infer’ cognitive processes from patterns or locations of neural activation. This article examines the prospects of this controversial technique. To frame the discussion, I consider a famous reverse inference involving mirror neurons and theories of low-level mindreading. The basic claim is well known: the activity profiles of mirror neurons are taken to support simulationist accounts of how we understand the actions of others. In the first part of this article, I show that the evidence provided by mirror neurons does not favor any of the competing cognitive-level theories of mindreading. Drawing on those results, I defend two claims about reverse inference. First, the most widely used and discussed subclass of reverse inference is inherently problematic. What is distinctive about this familiar class is that the locations of neural activation play a key role in the inferences to cognitive processes. Second, the problems faced by location-based inferences do not apply to inferences based on pattern-decoding techniques. These techniques have been almost completely ignored in discussions of reverse inference in the philosophy of mind and science. This should change. Pattern-decoding methods overcome some of the most resilient objections which have been raised against both location-based reverse inference in particular and cognitive neuroscience in general.