Deep neural networks are effective at image classification and other predictive tasks, achieving higher accuracy than conventional machine-learning methods. However, their predictions are less interpretable than those of conventional methods. While accuracy alone may suffice for applications where errors are not costly, in real-world applications we also want to know when a prediction is likely to be correct. An estimate of the likelihood that a prediction is correct is called its confidence, or uncertainty. Before these methods can be deployed in the public sphere, we need to better understand why they make the predictions they do. This project focuses on one aspect of that understanding: developing methods to estimate the uncertainty associated with a given prediction. This research will allow us to place greater confidence in the models' predictions when we use them.
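To illustrate the idea of a per-prediction confidence score, here is a minimal sketch of the most common naive baseline: taking the maximum softmax probability of a classifier's output as its confidence. This is an illustrative assumption, not the project's method (softmax scores are known to be poorly calibrated, which is part of what motivates research like this); the logits below are hypothetical values standing in for a trained network's output.

```python
import numpy as np

def softmax(logits):
    # Shift by the max logit for numerical stability before exponentiating.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits produced by a classifier for one input image.
logits = np.array([2.0, 0.5, 0.1])

probs = softmax(logits)
prediction = int(np.argmax(probs))      # predicted class index
confidence = float(probs[prediction])   # naive confidence: max softmax probability
```

A low `confidence` value could then be used to flag a prediction for human review, which is the kind of decision a better-calibrated uncertainty estimate would support.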
Faculty supervisor: Adam Oberman
Students: Mariana Prazeres, Aram Pooladian, Ryan Campbell
Partner: Fédération des caisses Desjardins
Discipline: Mathematics
University: McGill University
Program: Accelerate