
Should AI be explainable or should it just evaluate its predictions?

«Worlds governed by artificial intelligence often learned a hard lesson: Logic Doesn’t Care.»
Yin-man Wei, «This Present Darkness: A History of the Interregnum», CY 11956, in Andromeda

One topic last semester was AI. It is, once again, a trending topic (at least in Germany). And yeah, the benefits seem overwhelming. After all, if we live our lives digitally, if we log what we do and leave digital traces no oppressive government would ever have dreamed of, then there is lots and lots of data for AI to work with.

One problem in many applications is explainability. After all, AI isn’t a god (not yet, anyway). You can hardly point to the computer and say: “I do x because that algorithm told me so.” Just imagine the algorithm decides whether you get a job, or even just the job interview, or that credit.

But I wonder whether the quest to make AI explain itself is the right track. After all, explaining our decisions is something most humans suck at. Sure, we have reasons why we did something, but often these reasons are only post-hoc rationalizations (think of Haidt’s elephant and rider). In contrast, what about an AI that makes predictions of what happens when it [...]
