Moral Computing Lab
Welcome to the Moral Computing Lab!
We are an interdisciplinary research group investigating how humans and artificial intelligence (AI) systems perceive, represent, and reason about morally right and wrong behavior. Our mission is to answer a broad array of interlocking questions, including: How do neural activation patterns give rise to moral judgments? How can we evaluate and modify the moral alignment between humans and Large Language Models (LLMs)? Can deep neural networks learn to recognize the moral information embedded in our multimodal environments? We study these questions using diverse methodologies, from (large-scale) behavioral experiments and functional neuroimaging to natural language processing and computer vision.
Lab Spotlights
- April ’26: PI Frederic Hopp is elected a member of the Junge Akademie.
- August ’25: The MCL will be at the 2025 Computational Cognitive Neuroscience Conference in Amsterdam. We will present research on the visual representation of moral intuitions in artificial and biological neural networks and co-organize a workshop on emotion and morality in brains and machines.
- April ’25: We will present research at the Moral Media Conference in Buffalo, NY, where PI Frederic Hopp will deliver a keynote address.
Who we are
Jun.-Prof. Dr. Frederic Hopp
Junior Professor of Big Data
+49 (0) 651 603419-171
fhopp@leibniz-psychology.org
Recent Lab Publications (selection)
- Musa, N. S., Müller, S. M., & Hopp, F. R. (2026). Validation of the moral foundations questionnaire-2 (MFQ-2) in Germany: Psychometric properties and associations with political ideology, religiosity, and personality. PLOS ONE, 21(3), e0345599.
- Hopp, F. R., Youk, S., Amir, O., Malik, M., Woodman, K., Wheeler, B., ... & Weber, R. (2026). The Moral Foundations MRI Collection: A Multi-Center, Multi-Country Functional MRI Collection for Evaluating Moral Judgment. Pre-print available at http://dx.doi.org/10.23668/psycharchives.21564
- Stecker, M. & Hopp, F.R. (2026). Moral foundation measurements fail to converge on multilingual party manifestos. Political Analysis.
- Hopp, F. R., Youk, S., Sinnott-Armstrong, W., & Weber, R. (2025). A sensitive and specific neural signature robustly predicts graded computations of moral wrongness. Pre-print available at https://doi.org/10.21203/rs.3.rs-7935407/v1
- Lauer, T., Drożdż, K., Müller, S., & Hopp, F. R. (2025). MoralNet: Visual representations of moral intuitions in artificial and biological neural networks. Cognitive Computational Neuroscience. https://2025.ccneuro.org/poster/?id=zYSAS7tTWO
Current Projects and Cooperations (selection)
- Decoding the Moral Brain: We seek to understand how the human brain computes the moral reprehensibility or acceptability of varied moral scenarios. By combining functional neuroimaging with pattern-recognition techniques, we explore which neural systems represent moral severity and translate it into subjective moral judgments. We pursue these questions across different cultures and stimulus types, from controlled moral vignettes to dynamic, naturalistic stimuli.
- Aligning Language Models with Human Morality: The goal of this research is to evaluate and modify the moral alignment between humans and LLMs. Specifically, we study how similarities and differences in moral cognition can be measured and adjusted by distilling the contextual and cultural factors that guide both human and machine moral reasoning. In this process, we quantify moral biases to better understand how these biases emerge and shift. With these insights, we aim to improve the transparency and explainability of ethical AI systems, ensuring a fair and representative alignment with diverse human populations.
- Multimodal Representations of Morality: In daily life, moral information is embedded in rich, multimodal environments. To provide a more accurate cartography of this moral landscape, we explore how deep neural networks can extract and quantify morally salient actions and concepts across multimodal (audiovisual) information channels. By comparing joint visual and textual representations of morality across artificial and biological neural networks, we aim to elucidate how the human visual system processes and organizes moral information.