Moral Computing Lab

Welcome to the Moral Computing Lab!

We are an interdisciplinary research group investigating how humans and artificial intelligence (AI) systems perceive, represent, and reason about morally right and wrong behavior. Our mission is to answer a broad array of interlocking questions, including: How do neural activation patterns give rise to moral judgments? How can we evaluate and modify the moral alignment between humans and Large Language Models (LLMs)? Can deep neural networks learn to recognize the moral information embedded in our multimodal environments? We study these questions using diverse methodologies, from (large-scale) behavioral experiments and functional neuroimaging to natural language processing and computer vision.

Lab Spotlights

  • April ‘25: We will be presenting research at the Moral Media Conference in Buffalo, NY, where PI Frederic Hopp will deliver a keynote address.
  • October ‘24: The Moral Computing Lab is officially founded :)

Who we are

Jun.-Prof. Dr. Frederic Hopp

Junior Professor of Big Data

+49 (0) 651 201-1712

Dr. Tim Lauer

Senior Researcher

+49 (0) 651 201-1706

Recent Lab Publications (selection)

  • Hopp, F. R., Jargow, B., Kouwen, E., & Bakker, B. N. (2024). The Dutch moral foundations stimulus database: An adaptation and validation of moral vignettes and sociomoral images in a Dutch sample. Judgment and Decision Making, 19, e10. https://doi.org/10.1017/jdm.2024.5
  • Weber, R., Hopp, F. R., Eden, A., Fisher, J. T., & Lee, H. E. (2024). Vicarious punishment of moral violations in naturalistic drama narratives predicts cortical synchronization. NeuroImage, 292, 120613. https://doi.org/10.1016/j.neuroimage.2024.120613
  • Hopp, F. R., Amir, O., Fisher, J. T., Grafton, S., Sinnott-Armstrong, W., & Weber, R. (2023). Moral foundations elicit shared and dissociable cortical activation modulated by political ideology. Nature Human Behaviour, 7(12), 2182-2198. https://doi.org/10.1038/s41562-023-01693-8
  • Mokhberian, N., Hopp, F. R., Harandizadeh, B., Morstatter, F., & Lerman, K. (2022, November). Noise audits improve moral foundation classification. In 2022 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) (pp. 147-154). IEEE. https://doi.org/10.1109/ASONAM55673.2022.10068681
  • Hopp, F. R., Fisher, J. T., Cornell, D., Huskey, R., & Weber, R. (2021). The extended Moral Foundations Dictionary (eMFD): Development and applications of a crowd-sourced approach to extracting moral intuitions from text. Behavior Research Methods, 53, 232-246. https://doi.org/10.3758/s13428-020-01433-0
  • Hopp, F. R., Fisher, J. T., & Weber, R. (2020). A graph-learning approach for detecting moral conflict in movie scripts. Media & Communication, 8(3). https://doi.org/10.17645/mac.v8i3.3155

Current Projects and Cooperations (selection)

  • Decoding the Moral Brain: We seek to understand how the human brain computes the moral reprehensibility or acceptability of varied moral scenarios. By combining functional neuroimaging with pattern recognition techniques, we explore which neural systems represent and translate moral severity into subjective moral judgments. We pursue these questions across different cultures and moral scenarios, from controlled moral vignettes to dynamic, naturalistic stimuli.
  • Aligning Language Models with Human Morality: The goal of this research is to evaluate and modify the moral alignment between humans and LLMs. Specifically, we study how similarities and differences in moral cognition can be measured and adjusted by distilling the contextual and cultural factors that guide both human and machine moral reasoning. In this process, we quantify moral biases to better understand how these biases emerge and shift. With these insights, we aim to improve the transparency and explainability of ethical AI systems, ensuring a fair and representative alignment with diverse human populations.
  • Multimodal Representations of Morality: In our daily lives, moral information is embedded in a rich, multimodal environment. To provide a more accurate cartography of this moral landscape, we explore how deep neural networks can extract and quantify morally salient actions and concepts across multimodal (audiovisual) information channels. By comparing the joint visual and textual representation of morality across both artificial and biological neural networks, we aim to elucidate how the human visual system processes and organizes moral information.