When Deep Blue defeated world chess champion Garry Kasparov some 20 years ago, chess players shuddered. Would artificial intelligence, or AI, destroy their game? Earlier this year, an AI system called Libratus played some of the world’s best Texas Hold’em poker players and won. Not much fun getting beaten up by a machine.

“When you use AI in gaming, a lot of game developers will say, ‘Oh, it’s not interesting because the characters become super-human and kill you all the time,’” Danny Lange, vice president of AI and machine learning at video game developer Unity Technologies, said at Mobile World Congress in San Francisco last week.

In the gaming community and elsewhere, AI has a scary reputation for being unbeatable, but that’s the fault of developers telling AI what to do. A hot topic among AI experts is the “objective function,” which defines what an AI system is trying to achieve. Early AI systems had objective functions that told them to kick people’s butts, or at least outperform them in some way.

But that’s not really why AI exists.


Here’s how Lange develops AI with an objective function in mind. In the gaming environment, Lange doesn’t program an AI character so much as train it. The AI character might play against itself and learn, play against humans and learn, or imitate a user and learn. In every case, the objective function governs the learning. That is, what does Lange want the AI character to do?

“That objective function could be minutes of play time” instead of winning, Lange says. “Basically, the game will learn to keep you engaged and go and go and go rather than just kill you off. All the magic is in your way of expressing your objective, what you want the system to learn.”
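Lange’s idea of rewarding play time instead of victory can be sketched as a reward function. This is an illustrative toy, not Unity’s actual code; the function names, the per-minute reward, and the quit penalty are all assumptions made up for this example.

```python
# Hypothetical sketch: an objective function that scores an AI game
# character on player engagement rather than on beating the player.

def engagement_reward(play_time_minutes: float, player_quit: bool) -> float:
    """Reward each minute the player stays engaged; penalize driving them away."""
    reward = play_time_minutes      # every minute of play time counts
    if player_quit:
        reward -= 10.0              # assumed penalty for losing the player
    return reward

def winning_reward(agent_won: bool) -> float:
    """The traditional 'beat the human' objective, shown for contrast."""
    return 1.0 if agent_won else 0.0
```

A learning agent trained to maximize `engagement_reward` has no incentive to “just kill you off”; the magic, as Lange says, is entirely in how the objective is expressed.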

Like everything else about AI, the objective function is a complex concept. For instance, it’s easy for developers to focus on an objective function and miss the greater benefit. Lange has seen AI in an exercise bike with an objective function that combines the heart rate and the crank of the bike. But the system’s real benefit comes from learning how to coach the user to become a better bicyclist.
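The exercise-bike example combines two signals into one objective. A minimal sketch of that kind of weighted combination follows; the target heart rate, target cadence, and weights are illustrative assumptions, not details of the actual product Lange saw.

```python
# Hypothetical sketch: a combined objective built from heart rate and
# the bike's crank cadence. Higher score = closer to both targets.

def bike_objective(heart_rate: float, cadence_rpm: float,
                   target_hr: float = 140.0, target_rpm: float = 90.0,
                   w_hr: float = 0.5, w_rpm: float = 0.5) -> float:
    """Score in [..., 1.0]; 1.0 means both targets are hit exactly."""
    hr_error = abs(heart_rate - target_hr) / target_hr
    rpm_error = abs(cadence_rpm - target_rpm) / target_rpm
    return 1.0 - (w_hr * hr_error + w_rpm * rpm_error)
```

A system that merely maximizes this score hits its stated objective, yet the greater benefit Lange points to (coaching the rider to improve) would require folding longer-term progress into the objective as well.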

AI’s objective function can also be as varied as a human being’s desires, which presents a problem if you believe AI ultimately is supposed to improve people’s lives.

Consider healthcare, where AI systems have taken hold. What does an objective function to improve the welfare of human beings look like? Ask people what they want, and you’ll get different answers. Some might want to live as long as possible. Some might not care about longevity at all. Some might want to live as long as possible only while they’re still mobile.
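The differing answers above amount to differing objective functions. A toy sketch, with all names and inputs invented for illustration, makes the contrast concrete:

```python
# Hypothetical sketch: the same outcome data scored under two different
# patient preferences, i.e., two different objective functions.

def longevity_objective(years_lived: float, mobile_years: float) -> float:
    """Preference: live as long as possible, mobility aside."""
    return years_lived

def mobility_objective(years_lived: float, mobile_years: float) -> float:
    """Preference: only years spent mobile count."""
    return mobile_years
```

An outcome of 85 years with 60 mobile years beats one of 80 years with 75 mobile years under the first objective and loses under the second, which is exactly why a single static objective can’t capture everyone’s welfare.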

“We regard the objective function as static,” says Gunnar Carlsson, co-founder of Ayasdi and a retired math professor at Stanford University. “But somehow we should be experimenting all the time with tons of objective functions so we understand the benefit.”

Tom Kaneshige writes the Zero One blog covering digital transformation, AI, marketing tech and the Internet of Things for line-of-business executives. He is based in Silicon Valley. You can reach him at tom.kaneshige@penton.com.