Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from "accidental" viewpoints (e.g., a cup from below) is typically impeded compared to processing of objects seen from canonical viewpoints (e.g., the string side of a guitar), this effect is reduced by meaningful scene context information. In the present study we investigated whether these findings, established using photographic images, generalise to 3D models of objects. Using 3D models further allowed us to probe a broad range of viewpoints and to establish accidental and canonical viewpoints empirically. In Experiment 1, we presented 3D models of objects from six viewpoints (0°, 60°, 120°, 180°, 240°, 300°), in colour (1a) and grayscale (1b), in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on performance in Experiments 1a and 1b, we determined canonical (0° rotation) and non-canonical (120° rotation) viewpoints for the stimuli. In Experiment 2, participants again performed a sequential matching task, but now the objects were paired with scene backgrounds that were either consistent (e.g., a cup in a kitchen) or inconsistent (e.g., a guitar in a bathroom) with the object. Viewpoint interacted significantly with scene consistency in that object recognition was less affected by viewpoint when consistent scene information was provided than when inconsistent information was provided. Our results show that viewpoint-dependence and scene context effects generalise to depth-rotated 3D objects. This supports the important role object-scene processing plays in object constancy.
Understanding lecturers have less expertise: effects of linguistic adaptation to laypersons
(2012)
In interactions with students, written online communication has become an important working medium for every teacher. In forming judgments about one another, the interaction partners have only the written text, with its lexical and grammatical features, at their disposal. The degree of lexical adaptation to a student's choice of words can therefore influence students' ratings of their lecturers on various personality traits. In the present study, students each rated two lecturers on understanding, conscientiousness, and intellect (IPIP; Goldberg, Johnson, Eber et al., 2006) on the basis of an email exchange. The degree of the lecturers' lexical adaptation was varied. Students rated lecturers with colloquial wording as more understanding and more conscientious, but tended to rate them as less knowledgeable.
Within the federal-state programme "Qualitätspakt Lehre", Goethe-Universität Frankfurt successfully acquired the programme "Starker Start ins Studium". As a result, the Institute of Psychology now has the staffing resources to improve the academic and social integration of new psychology students in the six-semester bachelor's programme in psychology. To this end, two obligatory two-semester teaching modules were developed. This article describes the overarching teaching concept and illustrates its implementation in psychology as a practical example.
How is semantic information stored in the human mind and brain? Some philosophers and cognitive scientists argue for vectorial representations of concepts, in which the meaning of a word is represented as its position in a high-dimensional neural state space. At the intersection of natural language processing and artificial intelligence, a class of very successful distributional word vector models has been developed that can account for classic EEG findings on language processing, i.e., the ease or difficulty of integrating a word with its sentence context. However, models of semantics have to account not only for context-based word processing but should also describe how word meaning is represented. Here, we investigate whether distributional vector representations of word meaning can model brain activity induced by words presented without context. Using EEG activity (event-related brain potentials) collected while participants in two experiments (English, German) read isolated words, we encode and decode word vectors taken from the family of prediction-based word2vec algorithms. We find that, first, the position of a word in vector space allows the prediction of the pattern of corresponding neural activity over time, in particular during a time window of 300 to 500 ms after word onset. Second, distributional models perform better than a human-created taxonomic baseline model (WordNet), and this holds for several distinct vector-based models. Third, multiple latent semantic dimensions of word meaning can be decoded from brain activity. Combined, these results suggest that empiricist, prediction-based vectorial representations of meaning are a viable candidate for the representational architecture of human semantic knowledge.
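The encoding direction described above can be sketched as a regularised linear mapping from word vectors to EEG patterns, evaluated on held-out words. The data shapes, the ridge penalty, and the simulated "word2vec" vectors and ERP patterns below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 words, 50-dim "word vectors", and an EEG pattern
# of 32 channels averaged over a 300-500 ms window (all simulated).
n_words, n_dims, n_chan = 100, 50, 32
X = rng.standard_normal((n_words, n_dims))              # word vectors
W_true = rng.standard_normal((n_dims, n_chan))
Y = X @ W_true + 0.5 * rng.standard_normal((n_words, n_chan))  # "EEG"

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Encoding models are evaluated on held-out words, not held-out trials
train, test = np.arange(80), np.arange(80, 100)
W = ridge_fit(X[train], Y[train], alpha=10.0)
Y_pred = X[test] @ W

# Score: correlation between predicted and observed response per channel
r = [np.corrcoef(Y_pred[:, c], Y[test][:, c])[0, 1] for c in range(n_chan)]
mean_r = float(np.mean(r))
print(round(mean_r, 2))
```

Decoding simply reverses the mapping (EEG patterns as predictors, vector dimensions as targets) with the same train/test logic.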
Dual coding theories of knowledge suggest that meaning is represented in the brain by a double code, which comprises language-derived representations in the anterior temporal lobe and sensory-derived representations in perceptual and motor regions. This approach predicts that concrete semantic features should activate both codes, whereas abstract features rely exclusively on the linguistic code. Using magnetoencephalography (MEG), we adopted a temporally resolved multiple regression approach to identify the contribution of abstract and concrete semantic predictors to the underlying brain signal. Results showed early involvement of anterior-temporal and inferior-frontal brain areas in encoding both abstract and concrete semantic information. At later stages, occipito-temporal regions showed greater responses to concrete than to abstract features. The present findings shed new light on the temporal dynamics of abstract and concrete semantic representations in the brain and suggest that the concreteness of words is processed first with a transmodal/linguistic code, housed in frontotemporal brain systems, and only afterwards with an imagistic/sensorimotor code in perceptual and motor regions.
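A temporally resolved multiple regression of the kind described here fits one linear model per timepoint, so each semantic predictor gets a beta time course. The sketch below uses simulated single-sensor data and hypothetical predictor names; it is not the authors' analysis code, and the "late concrete effect" is built into the simulation to show the read-out.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical design: 200 words, signal at 100 timepoints (one sensor),
# two semantic predictors (abstract, concrete) plus an intercept.
n_words, n_times = 200, 100
abstract = rng.standard_normal(n_words)
concrete = rng.standard_normal(n_words)
X = np.column_stack([np.ones(n_words), abstract, concrete])

# Simulated signal: the concrete predictor matters only at late timepoints
meg = rng.standard_normal((n_words, n_times))
meg[:, 60:] += np.outer(concrete, np.ones(40))

# Temporally resolved regression: one OLS fit per timepoint (vectorised)
betas = np.linalg.lstsq(X, meg, rcond=None)[0]   # shape (3, n_times)

early = np.abs(betas[2, :50]).mean()   # concrete beta, early window
late = np.abs(betas[2, 60:]).mean()    # concrete beta, late window
print(early < late)
```

Comparing beta magnitudes across windows (early vs late) is what licenses statements like "occipito-temporal responses to concrete features emerge at later stages".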
To characterize the role of the left ventral occipito-temporal cortex (lvOT) during reading in a quantitatively explicit and testable manner, we propose the lexical categorization model (LCM). The LCM assumes that lvOT optimizes linguistic processing by allowing fast meaning access when words are familiar and by filtering out orthographic strings without meaning. The LCM successfully simulates benchmark results from functional brain imaging. Empirically, using functional magnetic resonance imaging, we demonstrate that quantitative LCM simulations predict lvOT activation across three studies better than alternative models. In addition, we found that word-likeness, which is assumed as input to the LCM, is represented posterior to lvOT. In contrast, a dichotomous word/non-word contrast, which is assumed as the LCM's output, could be localized to upstream frontal brain regions. Finally, we found that training lexical categorization results in more efficient reading. Thus, we propose a ventral-visual-stream processing framework for reading involving word-likeness extraction followed by lexical categorization, before meaning extraction.
We propose a framework of individual problem-solving and communicative demands (IproCo) that bridges the gap between models from cognitive psychology and communication pragmatics. Furthermore, we present two experiments conducted to identify factors influencing these demands and to test possibilities for support. The experiments employed a remote collaborative picture-sorting task with concrete and abstract pictures and compared non-interactive with interactive communication conditions. In the first experiment, we analysed the influence of the postulated demands on the collaboration process and outcome and tested the impact of shared applications. In the second experiment, we evaluated instructional support measures consisting of a model collaboration and a collaboration script. The support benefited the collaboration process but not the outcome. However, the support measures fostered the collaboration process even in the particularly difficult conditions with non-interactive communication. We discuss the impact of the IproCo framework and apply it to other tasks.
Effective knowledge communication presupposes common ground (Clark & Brennan, 1991) that needs to be established and maintained. This is particularly difficult in remote communication as well as in non-interactive settings, because speakers cannot use gestures or facial expressions and have to tailor their utterances to the addressee without receiving feedback. In these situations, speakers may achieve mutual understanding, for example, by adopting the addressee's perspective. We present a study conducted to test the impact of instructions that support or hinder individual problem solving and knowledge communication. We used a picture-sorting task requiring individual cognitive processes of feature search (Treisman & Gelade, 1980) in addition to referential communication. As our study focused on the design of utterances, all participants assumed the role of speaker. Participants were told that their descriptions would be recorded and later listened to by a participant in the role of addressee. Eight sets of pictures were used, which varied on two dimensions: the individual cognitive demands of detecting the relevant features (varied as a between-subject factor) and the communicative demands (varied as a within-subject factor). A further between-subject factor was the type of instructions: participants received either a collaboration script as supporting instructions, or time pressure was applied to induce stress, or they were given no additional instructions (control group). We used the speakers' verbal utterances to examine the quality of their descriptions. For both dimensions of difficulty, we found the expected effects. In the conditions with a collaboration script, fewer irrelevant features were mentioned and fewer features were described with delay.
In the conditions with time pressure, fewer irrelevant features were described, but the number of correctly described pictures suffered because relevant features were also neglected. Under time pressure, speakers also tended to provide ambiguous descriptions regarding the frame of reference.
While early-career researchers are trained strategically and on a sound academic basis, complete with various examinations (bachelor's, master's, doctorate, and possibly habilitation), nothing even remotely comparable exists for teaching. The usual "qualification" of novice teachers mostly takes place "on the job" (cf. Conradi, 1983), i.e., through their own trial and error after observing other teachers during their own studies. Under good conditions, teachers have attended continuing-education courses on good teaching beforehand or alongside their work. A strategic embedding of these staff-development measures, as is intended on the research side, does not exist. This article presents possible forms of such embedding and elaborates on one of them as an example.
Objects that are congruent with a scene are recognised more efficiently than objects that are incongruent. Further, semantic integration of incongruent objects elicits a stronger N300/N400 EEG component. Yet, the time course and mechanisms of how contextual information supports access to semantic object information are unclear. We used computational modelling and EEG to test how context influences semantic object processing. Using representational similarity analysis, we established that EEG patterns dissociated between objects in congruent and incongruent scenes from around 300 ms. By modelling semantic processing of objects using independently normed properties, we confirmed that the onset of semantic processing is similar for congruent and incongruent objects (∼150 ms). Critically, after ∼275 ms, we found a difference in the duration of semantic integration, which lasted longer for incongruent than for congruent objects. These results constrain our understanding of how contextual information supports access to semantic object information.
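Representational similarity analysis, as used in the study above, compares the pairwise-dissimilarity structure of a model (e.g., normed semantic properties) with that of neural patterns at a given timepoint. A minimal sketch with simulated data and assumed dimensions (the object count, channel count, and property set are illustrative, not the study's):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical setup: 20 objects, a 64-channel EEG pattern per object at
# one timepoint, and normed semantic-property vectors (all simulated).
n_obj, n_chan, n_props = 20, 64, 30
props = rng.standard_normal((n_obj, n_props))   # semantic features
eeg = (props @ rng.standard_normal((n_props, n_chan))
       + rng.standard_normal((n_obj, n_chan)))  # simulated EEG patterns

def rdm(patterns):
    """Lower-triangle vector of pairwise correlation distances (1 - r)."""
    return np.array([1 - np.corrcoef(patterns[i], patterns[j])[0, 1]
                     for i, j in combinations(range(len(patterns)), 2)])

def spearman(a, b):
    """Spearman correlation via rank transform (assumes no ties)."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return float(np.corrcoef(ra, rb)[0, 1])

# RSA: correlate the model RDM with the neural RDM
rho = spearman(rdm(props), rdm(eeg))
print(round(rho, 2))
```

Repeating this per timepoint (and per condition, e.g. congruent vs incongruent) yields the time-resolved model-fit curves from which onset and duration differences are read off.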