This is an asinine position to take because AI will never, ever make these decisions in a vacuum, and it’s really important in this new age of AI that people fully understand that.
It could be the case that an accurate, informed AI would do a much better job of diagnosing patients and recommending the best surgeries. However, if there’s a profit incentive and business involved, you can be sure that AI will be mangled through the appropriate IT, lobbyist, and congressional avenues to make sure it modifies its decision-making in the interests of the for-profit parties.
I think your hypothetical is simply false: we can’t even give AI that much potential credit. And this becomes incredibly obvious once you start asking about transparency, reliability, and accountability.
For example, it may be possible to build a weighted formula that looks at various symptoms and possible treatments and produces a suggestion of what to do with a patient in a particular situation. That’s not artificial intelligence. That’s just basic use of formulas and statistics.
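To make the distinction concrete, here’s a minimal sketch of what that kind of weighted formula might look like. Every symptom, treatment, and weight below is invented purely for illustration; the point is that the whole thing is transparent arithmetic, not AI:

```python
# Hypothetical weighted scoring of treatments from observed symptoms.
# All symptoms, treatments, and weights are made up for illustration;
# a real tool would derive them from actual data and expert review.

# Weight of each symptom toward each candidate treatment.
TREATMENT_WEIGHTS = {
    "antibiotics":  {"fever": 0.6, "cough": 0.3, "chest_pain": 0.1},
    "imaging_scan": {"fever": 0.1, "cough": 0.2, "chest_pain": 0.9},
}

def suggest_treatment(observed):
    """Score each treatment as a weighted sum of symptom severities
    (each 0..1) and return the highest-scoring one. Every score is
    reproducible and explainable directly from the weights."""
    scores = {
        name: sum(w * observed.get(sym, 0.0) for sym, w in weights.items())
        for name, weights in TREATMENT_WEIGHTS.items()
    }
    return max(scores, key=scores.get)

print(suggest_treatment({"fever": 0.9, "cough": 0.4}))   # antibiotics
print(suggest_treatment({"chest_pain": 0.8}))            # imaging_scan
```

You can audit every number in a scheme like this, which is exactly what separates it from the black-box situation below.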
So where is the AI? I think the “AI” only comes in when you get into these black-box situations, where you want to throw a PDF or an Excel file at your server and get back a simple answer. And then what happens when you want clarity on why that’s the answer? There’s no real reply, there’s no truthful reply. It’s just a black box that doesn’t understand what it’s doing, and you can’t believe any of its explanations anyway.
They will just add a simple flow chart afterward. If the AI denies the thing, accept the decision. If the AI accepts the thing, send it to a human to deny.
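That flow chart takes about five lines to write down. This is a hypothetical sketch (the function and parameter names are mine), just to show how lopsided the review step is:

```python
# Sketch of the asymmetric review flow described above. The names and
# structure are hypothetical, purely to illustrate the incentive:
# denials pass through untouched, approvals get a second chance to die.

def final_decision(ai_approves: bool, human_denies: bool = True) -> str:
    if not ai_approves:
        return "denied"   # AI denial is accepted as-is, no human review
    # AI approval triggers a human review whose default incentive is to deny
    return "denied" if human_denies else "approved"

print(final_decision(ai_approves=False))  # denied
print(final_decision(ai_approves=True))   # denied
```

Either branch, the house wins.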