For video footage from past events, you can visit the individual event pages or go to our YouTube channel.

To filter by event category, click on the event category link in the table below or use the menu on the right.

List of Past Events

Constructing Human Concepts: what (I-) meanings are good for (talk recording available)

Dr. Paul M. Pietroski

Tuesday, November 04, 2008, 01:00pm - 02:00pm

University of Maryland, Department of Linguistics and Philosophy


How are words related to the concepts that children lexicalize in the course of acquiring words? According to one simple and familiar picture, a verb like 'kick' is (among other things) an instruction to fetch the concept lexicalized with that verb. On this view, if the concept lexicalized is dyadic--say, KICK(x, y)--then the verb 'kick' is semantically dyadic, requiring two arguments like 'Brutus' and 'Caesar' to form a complete thought-expression, as in 'Brutus kicked Caesar'. This proposal faces a host of well-known difficulties, some of which I'll review. According to a less familiar picture that I want to explore, every word of a naturally acquirable human language is (among other things) an instruction to fetch a monadic concept--like KICK(e), a concept of events. As we'll see, this second view is not only defensible; it has a lot going for it, empirically and theoretically. But on this view, lexicalization is a partly creative process, in which formally new (and perhaps distinctively human) concepts are abstracted from prior concepts that humans may well share with other animals.
This in turn invites a nonstandard but in many ways attractive conception of what makes human language distinctive: lexicalization--i.e., the capacity to create formally new analogs of prior concepts--may have been the important new twist. In particular, the basic operations of semantic composition--like conjunction of monadic concepts, and the introduction of a few "thematic" concepts like AGENT(e, x) and PATIENT(e, y)--may be computationally simple and widely available in nonhuman cognition. From this perspective, words do not merely label diverse concepts that can (somehow) be systematically and recursively combined, given suitably powerful combination operations. The idea is rather that words let us use prior concepts to create monadic concepts that can be systematically and recursively combined via natural computational operations. Put another way: in human languages, semantic composition may be dumb, while lexicalization is just clever enough to make very good use of dumb composition.
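The compositional picture sketched above can be made concrete with a small toy model: monadic event predicates combined by conjunction, plus thematic relations like AGENT and PATIENT, with existential closure over events. This is an illustrative sketch only; the event names, data, and function names here are invented for the example and are not from the talk.

```python
# Toy Neo-Davidsonian sketch (illustrative): events are plain
# identifiers, and a tiny "world" records facts about them.
KICKINGS = {"e1"}                      # KICK(e) holds of e1
AGENTS = {("e1", "Brutus")}            # AGENT(e, x)
PATIENTS = {("e1", "Caesar")}          # PATIENT(e, y)

def kick(e):
    # Monadic concept fetched by the verb 'kick': true of kicking events.
    return e in KICKINGS

def agent(x):
    # AGENT(e, x) with its participant slot saturated, yielding a
    # monadic predicate of events.
    return lambda e: (e, x) in AGENTS

def patient(y):
    return lambda e: (e, y) in PATIENTS

def conj(*preds):
    # "Dumb" composition: conjunction of monadic predicates.
    return lambda e: all(p(e) for p in preds)

def exists(pred, domain):
    # Existential closure over the event domain.
    return any(pred(e) for e in domain)

# 'Brutus kicked Caesar' ~ there is an e such that
# KICK(e) & AGENT(e, Brutus) & PATIENT(e, Caesar)
sentence = conj(kick, agent("Brutus"), patient("Caesar"))
print(exists(sentence, {"e1"}))   # True
```

The point of the sketch is that the only combinatoric operation used is `conj`; all the expressive work is done by the monadic predicates the lexicon supplies, which is the division of labor the abstract describes.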

To view a recording of this talk, click here (you will need a Rutgers NetID and password).