Machine learning (ML) algorithms can already recognize certain patterns better than the humans who deploy them, allowing them to generate predictions and inform decisions in a variety of high-stakes situations. For ML systems to truly succeed, however, they need to understand human values.
Researchers still need to answer empirical questions, such as how human values evolve and change over time. And even once the empirical questions are settled, they must contend with philosophical questions that have no objective answer, such as how those values should be interpreted and how they should guide the decision-making of an artificial general intelligence (AGI).