Building AI Systems That Doubt Themselves

Posted by Peter Rudin on 19 January 2018 in News

Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will give even the most capable AI programs a way to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.

The new approach could be useful in critical scenarios involving self-driving cars and other autonomous machines.

The work reflects the realization that uncertainty is a key aspect of human reasoning and intelligence. Adding it to AI programs could make them smarter and less prone to blunders.
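To make the idea concrete, here is a minimal sketch of one common way to estimate a model's confidence: query an ensemble of slightly different models and use the spread of their answers as an uncertainty measure. This is an illustration of the general principle, not the actual Uber or Google framework work; the toy "models" here are just linear predictors with randomly perturbed weights.

```python
import random
import statistics

def ensemble_predict(x, n_models=50, seed=0):
    """Toy ensemble: each 'model' is the same linear predictor with
    randomly perturbed weights, standing in for networks trained
    under different conditions. The spread of their outputs serves
    as an uncertainty estimate."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        w = 2.0 + rng.gauss(0, 0.1)  # perturbed weight
        b = 1.0 + rng.gauss(0, 0.1)  # perturbed bias
        preds.append(w * x + b)
    mean = statistics.fmean(preds)
    std = statistics.stdev(preds)  # larger std => lower confidence
    return mean, std

# Near familiar inputs the models agree; far away they diverge,
# so the system 'knows' to doubt itself there.
mean_near, std_near = ensemble_predict(1.0)
mean_far, std_far = ensemble_predict(100.0)
```

A self-driving system built this way could, for example, defer to a human or a safe fallback behavior whenever the reported uncertainty exceeds a threshold, rather than acting on a low-confidence prediction.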

