Changmin Duan, Sarah Knox, and Clara E. Hill
Advice giving in psychotherapy has long been an area of interest for theorists and practitioners. However, clear and distinct answers to questions concerning the role of advice in client outcomes have not been as available as one would expect. This state of affairs may be related to discrepant theoretical positions and the lack of consistent empirical evidence. This chapter argues that some evidence does support advice giving in psychotherapy, depending on the cultural and social context as well as on client and therapist variables. This chapter reviews the literature, recommends a specific model for advice giving, and outlines future research directions.
Lyn M. Van Swol, Jihyun Esther Paik, and Andrew Prahl
This chapter examines the psychology of advice recipients, focusing on research predominantly conducted using the Judge Advisor System, in which a participant “judge” receives advice from one or more advisors but has ultimate responsibility for making the decision. First, it reviews the methods of typical Judge Advisor System experiments. Next, it surveys the research to explore why decision makers often do not seek out advice, focusing on the costs of advice and decision-maker overconfidence. It then examines why decision makers underutilize the advice they receive, owing to factors such as confirmation bias, egocentric discounting, and power. In addition, factors that increase the utilization of advice, such as trust, advisor confidence, and advisor expertise, are considered. The influence of advice-recipient power and receptivity to computerized advice are then examined in depth. Finally, advice to decision makers about how to seek and utilize advice to make better decisions is provided.
Assessing the Language Skills of African American English Child Speakers: Current Approaches and Perspectives
Toya A. Wyatt
The purpose of this chapter is to provide an overview of current as well as past special education regulations, litigation, professional association guidelines, clinical models, and best practice approaches for the clinical assessment of the speech and language skills of African American English (AAE) zero-to-three, preschool, and school-age child speakers. It also provides a summary of current AAE child language research within the field of Communication Sciences and Disorders that has implications for: a) the selection of appropriate formal and informal speech-language assessment procedures, b) accurate differential diagnosis of disorder vs. normal dialect difference in children with suspected language impairment, and c) the identification of appropriate therapy goals when relevant. Implications for the development of future theoretical frameworks and standardized assessments that help to minimize the historical misdiagnosis and disproportionate over-identification of African American students for speech-language and other special education placements are also addressed.
The present article poses some fundamental questions related to bilingualism and to the acquisition of two phonological components by very young children. It discusses different types of bilingualism and their outcomes. After a brief consideration of alleged pros and cons of bilingualism brought up in the past decades, two perspectives on bilingualism are sketched—psycholinguistic and sociolinguistic—and certain aspects of bilingual child phonology are presented from each of these points of view. The essential issue is whether different outcomes of bilingual child phonology are predictable, and what crucial criteria can support such predictions. Finally, the discussion addresses some basic questions about bilingual acquisition, and ends with a summary of various types of cross-linguistic interaction.
Lila R. Gleitman, Andrew C. Connolly, and Sharon Lee Armstrong
This article reviews two kinds of experimental evidence from laboratories that challenge the adequacy of prototypes for representing human concepts. The first comprises experiments suggesting that prototype theory does not distinguish adequately among concepts of maximally variant types, such as formal vs. natural kind and artifact concepts. The second is a more recent experimental line demonstrating how theories of conceptual combination with lexical prototypes fail to predict actual phrasal interpretations, such as language users' doubts as to whether Lithuanian apples are likely to be as edible as apples. An extensive body of empirical research seems to provide evidence for the psychological validity of the prototype position. The default to the stereotype strategy (DS) holds that, barring information to the contrary, language users assume that the typical adjective–noun combination satisfies the noun stereotype.
Alissa Melinger, Thomas Pechmann, and Sandra Pappert
Speech production involves the transformation of a to-be-expressed idea, or message, into lexical and grammatical content. This article focuses on case assignment, which is achieved during the grammatical encoding stage of utterance planning. Given the generally recognised separation of functional and positional processes, it has been argued that case assignment falls within the domain of functional processes (or within Dell's syntactic stage). Early proposals for sentence production models were highly influenced by the distribution and characteristics of naturally occurring speech errors. More recent revisions of these models have been further influenced by experimental investigations into structural and word order alternations using a method called syntactic priming. This article first lays out in gross terms the general views of the stages necessary for sentence production. It then discusses the evidence that has supported the various stages of the production models and how they directly or indirectly inform us about the processes responsible for case assignment in sentence production. This includes evidence for and against (radical or weak) incrementality and evidence for lexical guidance (or verb primacy) in functional assignment.
Markus Bader and Monique Lamers
Research on human language comprehension has been heavily influenced by properties of the English language. Since case plays only a minor role in English, its role in language comprehension has only recently become a topic of extensive research in psycholinguistics. In the psycholinguistic literature, the cognitive processes underlying comprehension are called the human parsing mechanism or the human sentence processing mechanism (HSPM). According to the Strong Competence Hypothesis, the syntactic structures computed by the HSPM are exactly those structures that are specified by the competence grammar. This article assumes that the HSPM computes phrase-structure representations enriched by various syntactic features, in particular case features on noun phrases. After providing a short introduction to current research concerned with the HSPM, it explores how syntactic functions are assigned in the face of morphological case ambiguity, the role of case for identifying clause boundaries in languages like Japanese and Korean, the problem of syntactic ambiguity resolution, and whether markedness distinctions that have been postulated to obtain between different cases are reflected in language comprehension.
This article examines the relation between the study of comparative syntax and language disorders. It aims to demonstrate ways in which research on impaired language interacts with syntactic theory. The article shows that the study of impaired language interacts with comparative syntax in particular, a research program which aims at understanding human language by comparing and contrasting the behavior or properties of several languages with respect to certain syntactic structures or types of phenomena. It also discusses several types of language therapy.
This article addresses the issue of compositionality of mental representations from the perspective of a foundational framework for cognitive science. The dynamical cognition framework (DC framework) is inspired partially by connectionism and partially by the persistence of the problem of relevance within classical computational cognitive science. It treats cognition in terms of the mathematics of dynamical systems: total occurrent cognitive states are mathematically/structurally realized as points in a high-dimensional dynamical system, and these mathematical points are physically realized by total-activation states of a neural network with specific connection weights. The framework repudiates the classicist assumption that cognitive-state transitions conform to a tractably computable transition function over cognitive states. Computational Theory of Mind (CTM) states that the causal role of a mental representation is syntactically determined, but this idea of syntactic determination of causal role is ambiguous.
This chapter provides a critical overview of experimental and computational research on the processing and representation of derived words. It begins with an introductory section addressing methodological issues: The pros and cons of various popular experimental tasks, issues with respect to the selection of materials, as well as the relevance of experimental research for morphological theory. The main section reviews two opposing classes of theories for the organization of the mental lexicon: theories building on the dictionary metaphor, and theories seeking to understand lexical processing without a mental dictionary and without theoretical constructs such as the morpheme.
Hayley Blunden and Francesca Gino
This chapter integrates research on advice interactions, motivations for advising, and the psychological consequences of serving in an advisor role to develop a more comprehensive perspective on the psychology of advising. By connecting this work, which spans various methodologies and theoretical foundations, it advances current thinking on advice giving in two primary ways. First, in examining the diversity of motivations for advice giving, it extends the set of advice-exchange outcomes to be considered beyond those previously emphasized. Second, it highlights previously unexplored aspects of the advisor role that are likely to impact the advice-giving experience. The chapter concludes by providing recommendations for advisors and identifying areas ripe for future research to illuminate the advisor side of the advice-exchange process.
Ianthi Tsimpli, Maria Kambanaros, and Kleanthes Grohmann
Universal Grammar (UG) denotes the species-specific faculty of language, presumed to be invariant across individuals. Over the years, it has shrunk from a full-blown set of principles and parameters to a much smaller set of properties, possibly as small as just containing the linguistic structure-building operation Merge, which in turn derives the uniquely human language property of recursion (Hauser et al., 2002). UG qua human faculty of language is further assumed to constitute the ‘optimal solution to minimal design specifications’ (Chomsky, 2001, p. 1), a perfect system for language. Unfortunately, human systems and physiology do not always run perfectly smoothly in an optimal fashion. There are malfunctions, malformations, and other aberrations throughout. The language system is no exception. This chapter presents language pathology from the perspective of the underlying system: What can non-intact language tell us about UG?
Jeffrey R. Cole and Marla J. Hamberger
Epilepsy is a neurological condition that enables systematic study of language organization and reorganization. Although the vast majority of healthy individuals are left-hemisphere dominant for language, people with epilepsy are more likely to have atypical language organization. This chapter reviews two general mechanisms of language plasticity: reorganization due to chronic functional disruption from ongoing epileptic activity, which gives rise to slowly progressive structural disturbances, and acute changes that occur after epilepsy surgery. Evidence is presented from classic “disruption” techniques, such as Wada testing and electrocortical stimulation mapping (ESM), and alternative “activation” techniques, such as functional magnetic resonance imaging (fMRI). Additional findings are also reviewed from more advanced imaging, specifically diffusion tensor imaging (DTI), reflecting changes in the structure of language circuits pre- and postoperatively. These methods have been used to investigate clinical factors that influence the lateralization and localization of language regions in epilepsy—including but not limited to the location of seizure foci, age of seizure onset, presence of lesions, and extent of abnormal EEG activity between seizures—all of which may be associated with both inter- and intra-hemispheric changes in language networks. The process of language organization and reorganization is complex and heterogeneous, and multiple patient variables can affect results from these different, yet complementary, techniques. Understanding these issues is therefore important for optimizing clinical care, especially when definitive identification of language cortex is required for surgical planning in patients with refractory epilepsy.
Matthias Gamer and Kristina Suchotzki
Lying is a very complex behavior, occurring in different forms and situations. It requires the liar not only to constantly keep the perspective of the to-be-deceived person in mind, but at the same time to remember and activate the truth, prevent the truth from slipping out, and flexibly switch between the lie and the truth. The affective correlates of lying seem to range from guilt and the fear of being discovered to delight after successfully getting away with a lie. Because of the observed variability in the affective correlates of lying, most recent research on lie detection has started to explore methods that are based on cognitive rather than affective processes. These methods aim either to measure the increased cognitive load during lying, or to measure lying indirectly by assessing whether a suspect recognizes critical crime-related information.
Given the definitions of lying and self-deception, it would be wrong to understand self-deception as lying to oneself. It seems, however, that any definition of self-deception gives rise to two paradoxes. According to the ‘static paradox’, self-deception involves believing ‘p and not-p’ at the same time. According to the ‘dynamic paradox’, self-deception involves the intention to deceive oneself. If both claims were true, self-deception would seem to be impossible. ‘Divisionists’ try to solve the first paradox by arguing that the human mind is divided into several subsystems such that the self-deceiver consciously believes that p while unconsciously believing that not-p. ‘Non-intentionalists’ try to solve the second paradox by arguing that self-deception is based on a ‘motivational bias’. Since both explanations fall short of accounting for the blameworthiness of self-deception, a third approach examines the phenomenon from the perspective of virtue theory, claiming that self-deceivers have not yet succeeded in developing the virtue of accuracy.
Bella M. DePaulo
The social psychology of lying addresses some of the most fundamental questions about deception: How often do people lie? Why do they lie? To whom do they tell their lies? Do particular types of people lie especially often? Research-based answers to all those questions are reviewed. The investigation of frequency includes a comparison of students and non-students living in the same area, finding a higher incidence of lying among the former. Also discussed are strategies for lying, cognitive factors in lying, and lying in close and casual relationships. The system of personality categories introduced (or rather, updated) by Ashton and Lee (2007) is reviewed. The chief distinction in types of lie is found to be between self-serving lies and other-oriented lies. Strategies are examined in depth using interviews of suspected criminals and frequenters of online forums. The chapter concludes with a pessimistic overview of deception on online dating sites.
Knowledge, it is commonly assumed, can be and often is transmitted via testimony. How exactly this takes place, however, is a matter of controversy. One common thought is that, in order to obtain knowledge via testimony, listeners need to live up to some minimum standard of epistemic conduct. This raises the question of just what this minimum standard might be. Some philosophers have recently attempted to make progress on this question by turning to the psychological literature on mechanisms of ‘epistemic vigilance’, or the methods that people routinely use to track the quality of the testimony they are hearing, to filter out liars and the uninformed. The present chapter briefly canvasses the state of this inquiry and lays out several challenges for it. It concludes with a broader challenge to the thought that there really is some minimal standard that listeners must live up to in order to acquire knowledge via testimony.
This chapter provides a selective overview of recent research on the phonetics and phonology of bilingualism. The central idea put forth in the chapter is that, in bilingualism and second-language learning, cross-language categories are involved in complex interactions that can take many forms, including assimilations and dissimilations. The sound categories of the two languages of a bilingual seem to coexist in a common representational network and appear to be activated simultaneously in the processing of speech in real time, but some degree of specificity is attested. The chapter then goes on to explore some of the characteristics of cross-language sound interactions, including the fact that these interactions are pliable and appear to be mediated by the structure of the lexicon.
Giosuè Baggio, Michiel van Lambalgen, and Peter Hagoort
Compositionality remains effective as an explanation of cases in which processing complexity increases due to syntactic factors only. It falls short of accounting for situations in which complexity arises from interactions with the sentence or discourse context, perceptual cues, and stored knowledge. The idea of compositionality as a methodological principle is appealing, but imputing the complexity to one component of the grammar or another, instead of enriching the notion of composition, is not always an innocuous move, as the resulting theories need not be fully equivalent. Compositionality sets an upper bound on the degree of informational encapsulation that can be posited by modular or component-based theories of language: simple composition ties in with a strongly modular take on meaning assembly, which is seen as sealed off from information streams other than the lexicon and the syntax.
John A. Hawkins
This article presents a research programme in which typological patterns are ultimately explained in terms of language processing and use. It presents three general organizing principles that describe common patterns in grammars and performance: Minimize Domains, Minimize Forms, and Maximize Online Processing. The first is illustrated with patterns involving relative clauses; the second, with morphological data and markedness hierarchies; and the third, with a number of linear precedence regularities that hold across different language types. The article finally outlines some general issues raised by this approach to linguistic typology, and discusses challenges that remain. It concludes that typological patterns can be profitably described, predicted, and to a significant extent explained in terms of principles of efficiency and complexity in processing. These principles, individually and in combination, can motivate a broad range of preference data in performance and in grammars.