AI Bias: Good intentions can lead to nasty results
AI isn’t magic. Whatever “good judgment” it appears to have is either pattern recognition or safety nets built into it by the people who programmed it, because AI is not a person; it’s a pattern-finding thing-labeler. When you build an AI solution, always remember that if it passes your launch tests, you’ll get what you asked for, not what you hoped you were asking for. AI systems are made entirely out of patterns in the examples we feed them, and they optimize for the behaviors we tell them to optimize for.

If you care about AI safety, you’ll insist that every AI-based system have policy layers built on top of it. Think of policy layers as the AI version of human etiquette.
I happen to be aware of some very pungent words across several languages, but you don’t hear me uttering them in public. That’s not because they fail to occur to me. It’s because I’m filtering myself. Society has taught me good(ish) manners. Luckily for all of us, there’s an equivalent fix for AI… that’s exactly what policy layers are. A policy layer is a separate layer of logic that sits on top of the ML/AI system. It’s a must-have AI safety net that checks the output, filters it, and decides what to do with it.
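To make the idea concrete, here’s a minimal sketch of what a policy layer might look like in code. Everything in it is a made-up illustration, not an API from the article: the blocklist, the confidence threshold, and the placeholder strings are all assumptions. The point is only the structure: the policy logic is separate from the model and gets the final say over what the model’s raw output is allowed to do.

```python
# Hypothetical policy layer sketch. The model itself is out of scope here;
# we only see its raw output string and a confidence score.

BLOCKED_TERMS = {"badword1", "badword2"}  # placeholder blocklist (assumption)

def policy_layer(model_output: str, confidence: float) -> str:
    """Check the raw model output and decide what to do with it."""
    # Rule 1: filter outputs containing blocked terms (the "etiquette" check).
    if any(term in model_output.lower() for term in BLOCKED_TERMS):
        return "[filtered]"
    # Rule 2: route low-confidence outputs to a human instead of the user.
    if confidence < 0.7:  # threshold is illustrative
        return "[escalated to human review]"
    # Otherwise, the output passes through unchanged.
    return model_output
```

Notice that the rules live outside the model: you can tighten or loosen them after launch without retraining anything, which is exactly why the policy layer is built as a separate layer of logic.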