Hanna-Sophia Shine


Life Update: I will be starting my PhD in Cognitive Science with Dr. Roman Feiman at Brown in Fall 2026!

Hello! My name is Hanna and I am the lab manager for the Snedeker Lab (since June 2023) and a lab manager for the Ai4CommSci Lab (since September 2025). I graduated from the University of Maryland, College Park in May 2023 with a B.S. in Psychology and a minor in Classical Mythology. As an undergrad at UMD, I was a research assistant in the Early Childhood Interaction Lab, the Interpersonal Relationships Lab, and the Language and Music Cognition Lab. I wrote my undergraduate thesis under the mentorship of Dr. Bob Slevc on the influence of perceived accent prestige on lexical and syntactic alignment in “dialogue” (OSF, poster 1 & poster 2).

Advice & Mentorship: If you are an undergraduate or post-baccalaureate RA who is interested in learning more about the lab manager application process (or what the job entails), or the graduate school application process, please don't hesitate to reach out! I would be more than happy to talk about my experience and/or provide feedback on materials if I can :)


I am broadly interested in language and cognition and, more specifically, I would like to explore how children tackle the learning problem of mapping meaning onto linguistic form. I am interested in exploring this process in the following ways:

(a) From a developmental perspective, what conceptual representations are innate or early emerging, and how do they guide children as they tackle the learning problem of mapping meaning onto linguistic form? How do children leverage conceptual representations and external input to acquire abstract linguistic expressions whose referents are fleeting or vary across learning instances?

(b) Within the intersection of semantics and pragmatics, as children encounter external input embedded in rich pragmatic contexts, how do they integrate conceptual primitives or precursors with these pragmatic cues to map meaning onto form? What can this reveal about the developmental trajectory of children's ability to infer meanings that extend beyond literal content?

(c) With a focus on informing theory, how can developmental data inform and constrain formal semantic and pragmatic theory? How can a deeper understanding of early-emerging or innate conceptual primitives inform formal linguistic theory? Similarly, how can developmental data deepen our understanding of the mechanisms that allow us to derive meanings that extend beyond the literal content of a sentence?


Currently, I am working on a number of projects through the Snedeker Lab. You can learn more about them below!

Working with Dr. Irene Canudas Grabolosa and Dr. Jesse Snedeker, we are investigating whether individuals can access the Agent-Patient relational concept and use it to explicitly categorize event participants engaged in contact and cause-motion events. Spoiler: they can!

Through an online computer-based task on the PCIbex platform (Zehr & Schwarz, 2018), participants viewed short videos depicting contact events (such as A hitting B or X hugging Y). During a modeling phase, we demonstrated the target responses, guiding participants to select either the agent or the patient. Participants then applied the learned selection rule to new contact and motion events in a test phase, allowing us to assess generalization. This work was done with English-speaking adults, English-speaking children, and homesigners based in Nicaragua. I collected data from all three populations and have presented two posters on this work (the Human Sentence Processing conference in March 2025 and the Boston University Conference on Child Language Development in November 2025). The abstract for HSP can be found here and the posters I presented can be found here!

What did we find? We found that English-speaking adults and children can deploy Agent-Patient concepts for explicit categorization, suggesting that these concepts are readily accessible by the age of four. Importantly, homesigners are also able to do this, suggesting that the Agent-Patient relational concept is not something we extract from language but rather something the learner brings to the language acquisition process.

We are also interested in exploring how event roles interact with linguistic structure and have run two follow-up studies with adults! In the first follow-up study we reversed the direction of generalization (so participants were trained on caused-motion events instead of contact events) and found that adults were again at ceiling during the test phase. In our second follow-up we trained people using the non-linguistic paradigm described above and then explored how they generalized to active and passive sentences, to learn more about the nature of the representations guiding their explicit categorization. We found that people were very good at generalizing to both active and passive sentences, suggesting that they were using some sort of semantic encoding during this categorization process (with syntactic encoding being the relevant alternative of interest). If you would like to learn more about this work, I presented a poster on it at HSP in March 2026 (abstract here!) and am giving a talk about it at ELM4 in June 2026! Please reach out to me if you have any questions or suggestions!

What's next? Currently, we are working on developing stimuli for an extension of this paradigm to the domain of psychological state verbs. While our second adult follow-up is interesting in and of itself, it also functioned as a "proof of concept" that we can use this paradigm to get at what we think is some form of semantic encoding. Looking ahead, we want to explore how this paradigm can deepen our understanding of psychological state verbs. Do fear-type (experiencer-subject) and frighten-type (experiencer-object) verbs evoke the same thematic roles? If this is the case then we might expect people to easily extend from the Agent-Patient training to both forms of psych verbs. If, on the other hand, fear-type verbs are missing a causal head (as proposed by some theories, see Hartshorne et al., 2016) then we might expect extension to frighten-type verbs but not to fear-type verbs. We are still piloting stimuli for this study but I am very excited to see where this goes!

Finally, we are investigating pre-verbal infants' understanding of and ability to represent abstract Agent-Patient concepts. Prior work that has explored this question is informative but limited, either because (a) it found sensitivity to role-binding only for a particular individual (e.g., Papeo et al., 2024) or (b) the class of events tested was narrow (e.g., Leslie & Keeble, 1987; see also a recent replication failure: Meewis et al., 2026). Again, this work is informative, but it doesn't fully support the notion of more abstract relational categories (ones that extend beyond a particular individual or event). We recently started piloting a task in which 11- to 13-month-olds are habituated to videos in which an experimenter selects the agent (or patient) of an event. During test, the experimenter selects the patient (or the agent, if habituated with patient selection). We are interested in whether infants will notice the difference and look longer when the experimenter selects the unexpected event participant.

Working with Dr. Tanya Levari, Briony Waite (doctoral student), and Dr. Jesse Snedeker, we are assessing predictive abilities across a wide range of perceptual, cognitive, and linguistic domains in a single group of children with autism. By working with one well-characterized sample, we aim to see whether shared underlying mechanisms are at work across these domains, or whether any observed predictive impairments are domain-specific, affecting some areas but not others.

This study includes both behavioral measures (standardized tests including the CELF and part of the KBIT) and brain measures (EEG), and a parallel study is being conducted with adults to allow for an exploration of the developmental trajectory of predictive abilities in this population. In addition to recruiting and running 19 ASD participants in both the EEG portion of the study (~2-hour appointments) and the behavioral portion (~1.5-hour appointments), I have supervised two multi-week projects.

I supervised Isabel Sichlau (now a lab manager!) during a 9-week summer internship program. Isabel expressed an interest in the visual adaptation component of this study, and she presented this poster at our summer internship fair in 2024! Currently, I am mentoring Cathlyn Boyle, a former undergraduate research assistant on placement from the University of Bath, as she explores how autism and language ability interact and function as predictors of performance in an online sentence completion task in which we modify the contextual support present across sentence trials. Cathlyn recently presented a poster on this work at the International Society for Autism Research (INSAR) conference (poster here!) and also recently submitted her undergraduate thesis on this work!

Working with Jian Cui (doctoral student in Linguistics at Harvard) and Dr. Jesse Snedeker, we are exploring a typological asymmetry: how feature type (tone vs. segment) and counting (counting vs. non-counting) interact to influence performance on a task that involves learning a morphological (i.e., pluralization) rule in an artificial language.

Through an online computer-based task on the PCIbex platform (Zehr & Schwarz, 2018), participants are trained to learn a pluralization rule that involves some combination of feature assignment and counting (e.g., tonal assignment with counting, or segmental assignment without counting) in an artificial language. We then assess performance on selecting the correct pluralization of novel words in the artificial language and test whether it differs significantly based on the combination of feature and counting. This study was conducted with Cantonese-English bilinguals (more familiar with tonal language systems) as well as monolingual English speakers (little to no familiarity with tonal language systems). We found that adults struggle to learn an unattested segmental counting rule relative to its tonal counterpart. We are now piloting this study with children to better understand whether this asymmetry is rooted in early-emerging learning mechanisms or is driven by linguistic experience.

Working with Elena Marx (doctoral student at CEU), Dr. Eva Wittenberg, and Dr. Jesse Snedeker, we are exploring whether children (4;0 - 4;11) make use of the dynamic properties of situations to infer temporal order in otherwise ambiguous sentences that use a relative clause construction.

Using an act-out task, we presented children with stative main-clause sentences (e.g., "Tiger sat on the wagon that Sheep danced next to") and eventive main-clause sentences (e.g., "Tiger danced next to the wagon that Sheep sat on"). Prior work found that, in an underspecified situation like this, adults tend to treat static states like sitting as the backdrop (i.e., the Ground) against which dynamic events like dancing (i.e., the Figure) unfold. Do children do the same thing?

While data collection is ongoing (one or two kids left!), preliminary results suggest that kids do the same thing! Specifically, in their enactments, children were more likely to follow the linear order of a sentence when the main clause encoded a state and tended to reverse the order when the relative clause encoded a state. This suggests that children, like adults, use event dynamicity as a cue to infer temporal order.

Through the Ai4CommSci Lab, I am working on developing a transitive elicitation task that will be used to better understand word order in the Formosan languages! This is very much in development and is collaborative work with Dr. Joshua Hartshorne and Dr. Jesse Snedeker.


Contact:

Feel free to contact me: hshine[at]fas[dot]harvard[dot]edu | slowly migrating things to: hanna-sophia_shine[at]brown[dot]edu