Augmenting Human Intelligence with AI in Psychological Research
• The increasing amount of data about human behavior in psychology presents a challenge for human comprehension.
• AI systems can help augment human intelligence in analyzing large datasets and developing new scientific theories.
• Machine learning models can be fit to behavioral data and their predictions compared with those of human-developed theories, revealing the gap between theory and data.
• The output of a machine learning model can be used to critique and improve psychological theories.
• Scientific regret minimization emphasizes the importance of addressing avoidable errors.
Speaker 1
As the amount of data that we have about human behavior increases, in particular in psychology, it can reach a point where it goes beyond what is easy for a human being to think about. That creates an opportunity for us to use AI systems to augment the intelligence of humans as they’re dealing with these kinds of data, to come up with new ways of making sense of those data and to develop new, meaningful scientific theories. In my lab, we began to explore how we can use those kinds of methods in the context of psychological research. We’ve done this in a couple of different ways. One way is to take a really big data set and a psychological theory and ask: how much are we missing? How good is our theory, really, in terms of explaining these data? What we can do then is take a machine learning model and fit it to the data, that is, make the machine learning model produce the best predictions that it can from the behavioral data. Then we can compare how good the predictions of the machine learning model are to the predictions of a theory developed by human scientists. That gives us a gap: it tells us how big the gap is, how much space there is to fill in terms of trying to understand what’s going on. We can actually use the output of the machine learning model to go back, critique our psychological theory, and come up with a better theory. We call this scientific regret minimization, because the idea is that you should care about the errors that you’re making that you could have avoided.
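A minimal sketch of that gap measurement, assuming a generic binary-choice dataset: the synthetic data, the hand-written logistic “theory,” and the gradient-boosted stand-in for the flexible machine learning model are all hypothetical placeholders, not the models from the lab’s actual work.

```python
# Sketch: compare a hand-specified "theory" against a flexible ML model
# fit to the same behavioral data, and measure the avoidable gap.
# All data and models here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical behavioral dataset: 5 features per scenario, binary choice.
X = rng.normal(size=(10_000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 - 0.5
     + rng.normal(scale=0.5, size=10_000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def theory_predict_proba(X):
    """Stand-in human theory: a simple rule using only the first feature."""
    return 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))

# Flexible model: fit to make the best predictions it can from the data.
ml_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

theory_loss = log_loss(y_test, theory_predict_proba(X_test))
ml_loss = log_loss(y_test, ml_model.predict_proba(X_test)[:, 1])

print(f"theory log-loss:   {theory_loss:.3f}")
print(f"ML model log-loss: {ml_loss:.3f}")
print(f"avoidable gap:     {theory_loss - ml_loss:.3f}")
```

The trials where the flexible model predicts well but the theory does not are the avoidable errors that scientific regret minimization says a theorist should go back and explain.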
Using Machine Learning and Big Data to Improve Understanding of Human Decision-Making
• The speaker’s lab examined human decision-making using a large dataset of more than 10,000 choice scenarios, far more than previous studies.
• A machine learning model fit to the data showed that basic psychological principles still explain human behavior, but they don’t apply uniformly across all problems.
• This strategy improves the accuracy of predictions about human behavior and allows for the exploration of more complex theories.
• The ability to collect and analyze more data is a natural progression that leads to a better understanding of human behavior.

Speaker 1
In one recent paper, we used this strategy to look at human decision making. Previous papers on decision making had used on the order of a few hundred decisions, like choices between different gambles that people might be entertaining: do you want a 10% chance of this prize or a 90% chance of that prize, versus a 20% chance of this prize and an 80% chance of that prize? That is the way we investigate human decision making. Previous research had used a few hundred of these scenarios to try to understand people’s decisions, but we used more than 10,000 of them, and then we were able to use a machine learning model to go in and work out that the basic psychological principles we were using to explain those data are essentially correct, but they don’t apply uniformly across the space of problems. For some problems people behave one way, for other problems people behave another way, and that gives us another kind of thing that we now want to be able to explain about human behavior.
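As a rough illustration of the stimuli being described, here is a hypothetical encoding of a two-gamble choice trial, with an expected-value rule standing in for the psychological theory; the class, field names, and prize amounts are illustrative only and do not reproduce the actual study.

```python
# Hypothetical encoding of one gamble-choice trial: gamble A offers
# prize_a with probability prob_a; gamble B offers prize_b with prob_b.
from dataclasses import dataclass

@dataclass
class GambleTrial:
    prob_a: float
    prize_a: float
    prob_b: float
    prize_b: float

def expected_value_choice(trial: GambleTrial) -> str:
    """Stand-in 'theory': pick the gamble with the higher expected value."""
    ev_a = trial.prob_a * trial.prize_a
    ev_b = trial.prob_b * trial.prize_b
    return "A" if ev_a >= ev_b else "B"

# A trial like the one in the transcript: a 10% chance of a large prize
# versus a 90% chance of a smaller one (prize amounts made up here).
trial = GambleTrial(prob_a=0.10, prize_a=100.0, prob_b=0.90, prize_b=10.0)
print(expected_value_choice(trial))  # "A" (expected values 10.0 vs 9.0)
```

With thousands of such trials, the residuals of a rule like this can be examined region by region of the problem space, which is how the non-uniform behavior described above shows up.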
Speaker 2
In essence, it’s making the science more accurate. Is that what you’re saying?
Speaker 1
It’s doing two things. One is that it’s making the predictions we’re able to make about human behavior more accurate. The other is that it’s allowing us to explore theories that might be more complicated than the theories we were able to explore before. That’s a natural consequence of just having more data.

Machine learning models as research infrastructure: they unblock a specific institutional inefficiency in how research is conducted. Rather than waiting for human theorists to synthesize across studies, an ML model can serve as an aggregator and assembler for different theories. The paradigm shifts what counts as researcher time, from synthesis work to addressing the gaps the model reveals.

Will sentient AI be possible, and will it resemble human intelligence?
• Sentience is a big question in artificial intelligence.
• AI systems should not be expected to be like humans.
• Human cognition and AI engineering operate under different constraints.
• AI systems can intelligently solve problems in ways that are not human-like.
Speaker 2
Let’s talk about one really big-picture question I think a lot of people are asking, especially after that scientist at Google thought that his program was becoming sentient. Will sentience be possible? In your view, do you think we’re going to be able to make an artificial intelligence that has self-awareness? What is sentience?
Speaker 3
Yeah, that’s a big question.
Speaker 1
It’s also one that, I would say, I don’t think about a lot. I’m not sure what difference it makes to the way that we approach these systems, our interactions with them, or the way that we think about things. The first thing I’d say is that we shouldn’t expect AI systems to be like people. The reason is that the constraints that characterize human cognition are different from the constraints under which AI is being engineered. Humans are this weird special case of intelligence that comes from our biological heritage of having limited lifespans, limited amounts of computation, and limited communication. When you take those constraints away, you’re going to get systems that can intelligently solve problems, but they’re not necessarily going to solve problems in ways that look human-like. They’re going to solve problems in ways that are different, because they’re not operating under the same constraints.
