Jared Moore
How can we get AI to do what we want? I work on social reasoning and alignment in large language models (LLMs). My research explores the core social abilities of LLMs: whether they can understand what people think, figure out what is “right,” and help us live up to our ideals. Beyond these basic questions, I test how LLMs’ social abilities hold up in real-world settings, such as therapy. As AI systems become more widespread, we need to ensure they act in our best interests and do not cause harm. By improving LLMs’ social reasoning, I aim to make AI systems that are safer, more helpful, and better aligned with human values.