There is a common view in popular culture of artificial intelligence as magically powerful and, often, malevolent. The classic example is HAL 9000, from 2001: A Space Odyssey—the omnipresent computer whose implacable calm masks its murderous intent. There is no escaping the unblinking eye, which sees and hears all (and it will read your lips when it can’t hear you).
The simple resonance of this image, the idea that technology will soon outstrip and overwhelm us, has created an enduring misperception: that AI is inherently privacy invasive and uncontrollable. The notion has obvious narrative appeal, but it isn't grounded in reality. The surprising truth is very nearly the opposite: as big data and rapidly accelerating computing power intertwine, AI can be a bulwark of privacy. Real-life AI can, in fact, defend against exactly what its fictional counterparts threaten. But we can't leverage that potential if we mistake misuses of AI for fundamental flaws.
It's not hard to figure out where the image of AI as necessarily omniscient comes from. The information age has brought a spiraling loss of privacy, some of it self-inflicted as we share both mundane and momentous details of our lives online, and some of it generated by the digital footprints we leave on- and off-line through web browsing, e-commerce, and internet-enabled devices. Even as we have created this constellation of data, we have made it easier for entities, be they individuals, private companies, or government bodies such as law enforcement, to share it. Until recently, people could count on privacy by obscurity: even when data existed, it was less accessible and harder to share. The new tide of information has eroded that anonymity.
At the same time, ever-more-powerful systems have made this data flood manageable. Moore's law, the observation that computing power (measured in transistors per chip) doubles roughly every two years, has held for some 50 years, with astounding effects: just think of the power of your first computer versus whatever you're reading this on, possibly in the palm of your hand. Nielsen's law, which holds that high-end users' internet bandwidth grows by about 50% annually, has likewise held up. We now have both unprecedented access to information about ourselves and each other, and the ability to gather, slice, dice, and massage it in extraordinary ways.
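To see what that compounding actually amounts to, here is a minimal back-of-the-envelope sketch in Python; the 50-year horizon is the one cited above, and the printed factors are illustrative arithmetic, not measured industry data.

```python
# Back-of-the-envelope check of the growth the two laws imply.
# The 50-year horizon comes from the text; figures are illustrative.

years = 50

# Moore's law: transistors per chip double roughly every two years.
moore_factor = 2 ** (years / 2)          # 2^25
print(f"Moore's law over {years} years: ~{moore_factor:,.0f}x transistors")

# Nielsen's law: high-end bandwidth grows about 50% per year.
nielsen_factor = 1.5 ** years            # 1.5^50
print(f"Nielsen's law over {years} years: ~{nielsen_factor:,.0f}x bandwidth")
```

Run as written, this prints a roughly 33-million-fold increase in transistor counts and a several-hundred-million-fold increase in bandwidth over the period.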
The development of facial recognition technology is a classic example of the confluence of these trends. While imperfect, it has eased everyday life in numerous ways: individuals use it to unlock phones and mobile apps and to verify purchases. AI-powered systems can also scan innumerable images to find a specific individual, combing through far more information, far faster, than any human could, which makes it possible to locate a missing child or a Silver Alert senior, for example. More sensitive tasks, such as locating a criminal suspect, require more careful human–AI collaboration; there, the algorithms are best used to generate leads and to double-check matches made by humans.
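As an illustration of that lead-generation workflow, here is a simplified sketch of how many face-recognition pipelines compare images: each face is reduced to a numeric embedding by some model (assumed precomputed here), and candidates are ranked by similarity. The rank_leads function and the 0.6 floor are illustrative assumptions, not any particular vendor's API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_leads(probe: np.ndarray, gallery: dict[str, np.ndarray],
               floor: float = 0.6) -> list[tuple[str, float]]:
    """Rank gallery faces by similarity to the probe image's embedding.

    Anything below `floor` is dropped. The output is a ranked list of
    leads for a human reviewer to verify -- not a declared match.
    """
    scored = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    return sorted((s for s in scored if s[1] >= floor),
                  key=lambda s: s[1], reverse=True)
```

The design point is in the return value: a ranked, thresholded list of candidates keeps the human in the loop instead of letting the system assert an identity on its own.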
But, as with all technological advances, carelessness or misuse can lead to negative outcomes. We can track just about anyone with disturbing ease. There is seemingly no escaping the unblinking eye.
Or is there? AI is not itself the problem. It is a tool, and one that can be used to safeguard against abuses. The central issue is the data: precisely what is being collected, who has access to it, and how it is used. That last part matters in a couple of important ways. A critical difference between AI and a traditional computer program is that AI can learn from experience, so the same inputs will not necessarily produce the same outputs. Modern AI can even express uncertainty, in effect admitting confusion. Human users must therefore understand the limits of the available data and account for such results.
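One concrete way a system can express that uncertainty is to attach a confidence score to its output and abstain when no answer clears a bar. A minimal sketch, assuming a hypothetical classifier whose probabilities are already computed (the 0.75 threshold is an arbitrary illustration):

```python
def classify_with_abstention(probabilities: dict[str, float],
                             threshold: float = 0.75) -> str:
    """Return the top label only when the model is confident enough.

    `probabilities` maps labels to model-assigned probabilities that
    sum to ~1. Below the threshold the system defers to a human rather
    than guessing -- the expression of uncertainty described above.
    """
    label, p = max(probabilities.items(), key=lambda item: item[1])
    return label if p >= threshold else "uncertain -- defer to human review"

# A low-confidence result is surfaced instead of hidden:
print(classify_with_abstention({"match": 0.55, "no_match": 0.45}))
# -> uncertain -- defer to human review
```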
But for all its potential, we, the humans who develop and employ AI, decide when and how it is used, and we can design the rules to ensure responsible use while protecting privacy and preserving civil liberties.