I’ve become fascinated lately with artificial intelligence (AI), machine learning (ML), and quantum computing. We use AI to find patterns and predict purchasing behavior, but we also use it everywhere else: predictive policing, risk assessment, making art, and even hiring employees.
AI sometimes produces ridiculous outputs, called AI hallucinations. For example, when asked to travel between two points in the fewest steps, one AI experiment answered by tripping and falling from Point A to Point B. Few steps taken. In another, asked to produce the safest conditions for a factory, the AI chose to shut the factory down.
Sometimes AI misidentifies or fails to identify the desired objects. In one case, a self-driving car likely misinterpreted a moving truck as a billboard when viewed from the side, as opposed to from the front or back, where it had typically seen trucks before. This hallucination led the car to crash into the truck.
These hallucinations range from comical to frightening. Think, The Monkey’s Paw. With predictive policing and employee hiring, it seems that algorithms have been producing biased results, skewed against minorities.
There are only two explanations for this: (1) You’re asking the wrong question of the computer, and it’s only giving you what you asked for. (2) You have a bias bias: a preference that the results confirm your interpretation of social equality.
If you’ve asked the best question you reasonably could, then the results are whatever they are. If you care about the truth, then you’ll follow reason and evidence in whatever direction they take you.
Remember that it’s a machine. It doesn’t feel. It doesn’t experience meaningful human connection. The GIGO rule applies: garbage in, garbage out. If what’s coming out makes no sense, then maybe we should look at what’s going in. The solution is to ask a better question.