Don’t want to humble-brag (though, if you think about it, that’s exactly what I’m doing) but this was the first hit in my search for a bit more on the question I asked at the end of an interesting talk today at SCU by Vivek Krishnamurthy, and it was exactly my question. Glad to know I’m not in the far-away rafters when it comes to these issues.
Giving algorithms a sense of uncertainty could make them more ethical
Posted on February 5, 2019 by Michael Rowe

"The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs. Say the AI system was meant to help make medical decisions. Instead of recommending one treatment over another, it could present three possible options: one for maximizing patient life span, another for minimizing patient suffering, and a third for minimizing cost. 'Have the system be explicitly unsure and hand the dilemma back to the humans.'"

Hao, K. (2019). Giving algorithms a sense of uncertainty could make them more ethical. MIT Technology Review.
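The "menu of options" idea in the quote is easy to picture in code. Here is a minimal sketch (all names, scores, and costs are hypothetical, invented for illustration) of a system that, rather than returning a single recommendation, returns one best candidate per objective and hands the trade-off back to the human:

```python
# A hypothetical illustration of "explicitly unsure" decision support:
# instead of one answer, return the best option under each objective.
from dataclasses import dataclass

@dataclass
class Option:
    treatment: str
    expected_lifespan_years: float   # higher is better
    expected_suffering_score: float  # lower is better
    cost_usd: float                  # lower is better

def present_options(options):
    """Return one best option per objective, leaving the choice to a human."""
    return {
        "maximize_lifespan": max(options, key=lambda o: o.expected_lifespan_years),
        "minimize_suffering": min(options, key=lambda o: o.expected_suffering_score),
        "minimize_cost": min(options, key=lambda o: o.cost_usd),
    }

menu = present_options([
    Option("aggressive therapy", 6.0, 8.0, 120_000),
    Option("palliative care", 2.5, 2.0, 30_000),
    Option("standard therapy", 4.5, 5.0, 60_000),
])
for objective, option in menu.items():
    print(objective, "->", option.treatment)
```

The point of the design is that the system never collapses the three objectives into one score; the weighing of lifespan against suffering against cost stays with the clinician and patient.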
This is how I think about clinical reasoning: it’s the kind of probabilistic thinking where we take a bunch of (sometimes contradictory) data and try to make a decision with varying levels of confidence. For example: if A, then probably D; but if A and B, then unlikely to be D; if C, then definitely not D. Algorithms (and novice clinicians) are quite poor at this kind of reasoning, which is why algorithms have traditionally not been used for clinical decision-making and ethical reasoning (and why novice clinicians tend not to handle clinical uncertainty very well). But if it turns out that machine learning algorithms are able to manage conditions of uncertainty and provide a range of options that humans can act on, given a wide variety of preferences and contexts, then machines will be one step closer to doing our reasoning for us.
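The "if A, then probably D" example above can be sketched as a tiny rule-based function that returns a confidence rather than a verdict. This is a toy illustration only (the findings A, B, C, the conclusion D, and the probability values are all hypothetical), but it shows the shape of reasoning that outputs graded confidence instead of a single yes/no:

```python
# A toy sketch of probabilistic rule-based reasoning: each rule adjusts
# the confidence that conclusion D holds, given the observed findings.
# Findings and probabilities are invented for illustration.

def confidence_in_d(findings: set) -> float:
    """Return a confidence in [0, 1] that D holds, given observed findings."""
    if "C" in findings:
        return 0.0   # "If C, then definitely not D"
    if "A" in findings and "B" in findings:
        return 0.2   # "If A and B, then unlikely to be D"
    if "A" in findings:
        return 0.8   # "If A, then probably D"
    return 0.5       # no evidence either way

print(confidence_in_d({"A"}))       # 0.8
print(confidence_in_d({"A", "B"}))  # 0.2
print(confidence_in_d({"C"}))       # 0.0
```

Even this trivial version makes the clinical point visible: the answer depends on rule ordering and on how contradictory findings interact, which is exactly where novice reasoners (human or machine) struggle.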