Machine learning has already changed the way we work and process information in the modern business environment. It's helped us become more efficient, make smarter decisions, and target customers better than ever before. But there's a whole new type of learning, reinforcement learning, that promises to do a lot more.
Unlike traditional machine learning, which uses data analysis to let computers learn without being explicitly programmed, reinforcement learning allows computers to learn from experience, much as humans do. Much like Thorndike's well-known experiment where a cat was trapped in a box until it learned to step on a lever to escape, artificial intelligence (AI) is now learning to solve problems faster and faster using this technique. It's almost as if technology is crossing over into humanity as we study its behaviors under various methods of conditioning.
Recently, tech giants in the AI space, including Alphabet, have been making huge strides in reinforcement learning. It has the potential to change everything from how we drive to how we interact with one another, and there are plenty of ethical issues still to be worked out. The following are a few things to know about reinforcement learning.
Machine learning as we've traditionally known it is mostly what you'd call "supervised learning." In this type of learning, developers create a curated, labeled set of data, and computers learn to associate different shapes, sounds, or words with those curated sets. The process is incredibly labor- and time-intensive. In reinforcement learning, the learning happens quite differently. The computer learns by interacting with the world around it, discovering through trial and error what the goal is rather than being told. In essence, it learns to solve problems, not to look for specific solutions.
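To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning, one of the classic reinforcement learning algorithms. The toy "corridor" environment and all parameter values are my own illustration, not something from the article: the agent starts at position 0, is never told where the goal is, and learns purely from the reward it stumbles into at position 4.

```python
import random

# Toy "corridor" environment (hypothetical example): positions 0..4,
# with a reward of 1.0 only upon reaching position 4. The agent must
# discover this goal by trial and error.
N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = (-1, 1)     # step left or step right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

# Q-learning: keep a table of estimated values for each (state, action)
# pair and refine the estimates from experience.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    state = 0
    for _ in range(10_000):             # cap episode length for safety
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
        if done:
            break

# After training, stepping right looks better than stepping left everywhere.
print(all(Q[(s, 1)] > Q[(s, -1)] for s in range(N_STATES - 1)))
```

Note that no labeled examples appear anywhere: the only feedback is the reward signal, which is exactly the contrast with supervised learning described above.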
Why It Matters
At its core, reinforcement learning is teaching computers to think, not just learn. Using their own judgment, they determine their behavior based on feedback from the world around them, just as we humans do when operating in our physical environments. That's a potential game changer for almost every industry, from the military, where AI soldiers could become the norm, to self-driving cars, which could easily take over the highways if we are able to produce vehicles with good "judgment."
It's Gaining Traction
You've probably heard that Google's AlphaGo AI beat the world champion of the game Go simply by playing against itself. But Google's DeepMind division is not the only one investing in the technique. Today, reinforcement learning is being used to do things like identify cancer in MRI scans, and it could eventually be used for everything from public safety and public transit to protecting our energy supply and other natural resources.
It’s Not Perfect—Yet
Though developers have made major breakthroughs in reinforcement learning, there is still much learning to do on the part of humans. Until now, reinforcement learning has dealt with specific tasks. It can be difficult for computers to know what they're looking for unless the goal is immediately clear, as it is in a game like Go. The issue of "delayed" rewards, where the payoff for a good decision arrives many steps after the decision itself, makes reinforcement learning harder still. Ultimately, to create things like personal assistant robots or medical assistants, we'll need to develop AI that is flexible enough to learn "common sense" and manage a wide range of issues. Right now, we're just not there.
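The delayed-reward problem mentioned above can be shown with a few lines of arithmetic. A standard way to value a far-off reward is the discounted return; the reward sequence below is a made-up illustration, not data from any real system: the agent receives nothing for nine steps and a reward of 1.0 on the tenth, and discounting tells us how much that distant payoff is worth at each earlier step.

```python
def discounted_returns(rewards, gamma=0.9):
    """Work backward through time: G_t = r_t + gamma * G_{t+1}."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

# Hypothetical episode: the only reward arrives at the final step.
rewards = [0.0] * 9 + [1.0]
returns = discounted_returns(rewards)
print(round(returns[0], 4))   # value at step 0: 0.9 ** 9, about 0.3874
```

The reward's value at step 0 has shrunk to roughly a third of its face value, which is precisely why a learner struggles to connect an early action to a payoff that arrives much later.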
Ethics Are an Issue
Don’t kid yourself—introducing self-learning robots that can learn faster and better than humans will come with a huge range of issues. On our end, we can only program them to the extent of our human knowledge, which is always going to be limited. If we forget to set system safeties, we could have serious trouble on our hands in terms of public safety. On the other end, the question remains: do we really want to create a world of computers that think—and do—via their own free will, especially when they are smarter than humans? That’s definitely an issue we need to reflect on before jumping too far into the reinforcement learning landscape.
For now, the potential for reinforcement learning is so vast and promising that it would appear to outweigh the risk. There is simply no telling what we—and they—can do.
Additional Resources on This Topic
The Booming Artificial Intelligence Market: Who’s In – Everybody!
Why Deep Learning (and AI) Will Change Everything
This article was first published on blogs.3ds.com.
Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world's largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people, and tech that are required for companies to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author, most recently of "Building Dragons: Digital Transformation in the Experience Economy," Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native, and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future.