A couple of weeks ago, Fox announced their new “science fiction” TV show “neXt.” The show’s description sounds similar to that of Black Mirror or Philip K. Dick’s Electric Dreams, a cautionary tale about the encroaching control technology has over its users:
“neXt is a propulsive, fact-based thriller about the emergence of a deadly, rogue artificial intelligence that combines pulse-pounding action with a layered examination of how technology is invading our lives and transforming us in ways we don’t yet understand.”
However, as far as I can tell from its trailer, the show couldn’t be further from that goal. The trailer opens with an Alexa lookalike, and all seems well, but slowly you realize it’s gaining a will of its own, causing havoc and rebelling against its users and human creators with the subtlety and verbosity of a Bond super-villain. This H.G. Wells-style approach of representing societal fears through an “invader” figure may have worked for War of the Worlds or Godzilla. However, using artificial intelligence (AI) in this context actively shields us from real-life concerns regarding AI and decision-making algorithms, many of which significantly impact human lives in the non-fictional reality we’re living in, as opposed to the harmless fiction the show portrays.
It’s not (just) data privacy
To be clear, data privacy is a component. You should be concerned about where your data, the backbone of AI, is going. It’s becoming increasingly inevitable that by being an active participant in society, you will be forced to yield a sizable portion of your tastes, behaviors and self to data-gathering companies. These companies’ sole purpose is to squeeze as much value as they can from that data, and there is little we can do about it without becoming digital hermits.
Did you know that you can be tracked and identified not only through your clicks and online searches, but also through your physical actions? Those starting to block their laptop webcams may be on the right track, but they probably have not heard of behavioral biometrics: techniques through which details such as the cadence of your typing on a keyboard or the travel patterns of your mouse can tie you back to your identity (learn how to stop that here).
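To make the idea concrete, here is a minimal, hypothetical sketch of keystroke dynamics: it reduces key-press timestamps to a handful of timing features and compares them against a stored profile. The feature set and distance threshold are illustrative assumptions, not a real biometric system, which would use far richer signals (dwell times, flight times, mouse trajectories).

```python
import numpy as np

def keystroke_features(press_times):
    """Reduce a list of key-press timestamps (seconds) to simple timing features:
    mean, standard deviation and median of the gaps between consecutive presses."""
    gaps = np.diff(np.asarray(press_times))
    return np.array([gaps.mean(), gaps.std(), np.median(gaps)])

def looks_like_same_typist(profile, sample, threshold=0.05):
    """Compare a stored typing profile to a new sample.
    The Euclidean-distance threshold is an arbitrary illustration, not a calibrated value."""
    return float(np.linalg.norm(profile - sample)) < threshold

# Toy usage: an enrolled profile versus a new typing sample.
enrolled = keystroke_features([0.00, 0.12, 0.27, 0.40, 0.55, 0.71])
candidate = keystroke_features([0.00, 0.13, 0.26, 0.42, 0.54, 0.70])
print(looks_like_same_typist(enrolled, candidate))  # True: the timing "fingerprints" match
```

Even this crude version shows why clearing cookies or covering a webcam doesn’t help much: the identifying signal is in how you behave, not what you deliberately share.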
Now, I’m not saying don’t try to minimize your digital data footprint. But conversations surrounding data privacy should also focus on public demand for strict safety and privacy policies. On a recent episode of the Vox podcast “Recode Decode,” Gabe Weinberg, CEO of DuckDuckGo, argues that data tracking should require an explicit opt-in, rather than the obscure opt-out approaches many data companies implement. It’s a point of view many European countries have already embraced through policy, and one that in time needs to be adopted worldwide.
Behavior-shifting tactics
Money-hungry data companies have realized that they can use their assets and tools not only for data collection and behavioral prediction, but also to shift behavior, effectively rigging our future actions to deliver the greatest possible benefit to them. In the book “The Age of Surveillance Capitalism,” author Shoshana Zuboff argues that the endgame of capitalist behavioral analytics may threaten not just our privacy but our very free will: a deterministic landscape of preset decisions and self-fulfilling prophecies:
“Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later. Finally, these prediction products are traded in a new kind of marketplace that I call behavioral futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are willing to lay bets on our future behavior.”
The implications of unregulated data tracking and manipulation in the hands of a capitalist mindset should be evident. From threats to our very behaviors to threats to democracy itself, we don’t need a computerized super-villain to imagine a genuine and present AI threat. A robust, ethics-based approach to implementing data policies is fundamental if we want to make it out of this new digital golden age with our autonomy intact.
No such thing as bias-free
Biases exist everywhere, and algorithms aren’t helping; they’re exacerbating the problem by obscuring it. Data and algorithms are not an objective reflection of our world; they carry the very same prejudices, biases and misunderstandings of the cultures and organizations that created them, except they hide inside packages that discourage accountability for those biases, making it easier to mistake determinism for objectivity. By implementing systems that decide whom to prioritize for medical care, whom to suspect of a crime or what content to recommend to children online, we are deepening that lack of accountability without addressing these pressing concerns.
Rachel Thomas, a professor at the University of San Francisco Data Institute and co-founder of Fast.ai, has spoken and written at length on this subject:
“Our algorithms and products impact the world and are part of feedback loops. Consider an algorithm to predict crime and determine where to send police officers: sending more police to a particular neighborhood is not just an effect, but also a cause. More police officers can lead to more arrests in a given neighborhood, which could cause the algorithm to send even more police to that neighborhood (a mechanism described in this paper on runaway feedback loops).”
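The mechanism Thomas describes is easy to reproduce in a toy simulation. The sketch below is purely illustrative (the numbers, the 70/30 allocation rule and the two-neighborhood setup are all assumptions): both neighborhoods have identical underlying crime rates, patrols are sent wherever the data shows more recorded incidents, and a tiny initial difference in the data snowballs.

```python
# A toy "predictive policing" loop: both neighborhoods have the same true
# crime rate, but incidents are only recorded where patrols are present,
# and patrols go wherever the data shows more recorded incidents.
true_crime_rate = [0.10, 0.10]   # identical underlying crime rates
recorded = [11.0, 10.0]          # a tiny historical difference in the data
total_patrols = 100

for day in range(30):
    # The "algorithm": send 70% of patrols to whichever neighborhood the data flags.
    if recorded[0] >= recorded[1]:
        patrols = [0.7 * total_patrols, 0.3 * total_patrols]
    else:
        patrols = [0.3 * total_patrols, 0.7 * total_patrols]
    # Incidents only get recorded where officers are present to record them.
    for i in range(2):
        recorded[i] += patrols[i] * true_crime_rate[i]

print(recorded)  # ~[221.0, 100.0]: the gap balloons despite identical crime rates
```

The loop runs away because the data only captures what the patrols are there to see, and the allocation rule then treats that artifact as ground truth.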
Computer vision algorithms perform poorly on people of color compared to those of European descent, because the overwhelming majority of facial data comes from people of European descent. Most collected behavioral data comes from wealthy parts of developed countries. Autofill algorithms are more likely to suggest STEM-related occupational terms for men and housekeeping terms for women. These errors don’t come from anyone’s maliciousness, but from societal and personal biases reflected in the breadth and depth of the data; regardless of origin, they need to be addressed.
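A first line of defense is to never settle for a single aggregate accuracy number; disaggregating error rates by group makes this kind of disparity visible. A minimal sketch, with hypothetical column names and made-up evaluation data:

```python
import pandas as pd

# Hypothetical evaluation results for a face-recognition model.
results = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "B"],
    "correct": [1,   1,   0,   1,   1,   1,   1,   1],
})

# The single aggregate number looks fine...
print("overall accuracy:", results["correct"].mean())  # 0.875

# ...but breaking it down by group exposes the disparity immediately.
print(results.groupby("group")["correct"].mean())      # A: ~0.67, B: 1.00
```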
Far from arbiters of objective truth, algorithms and data carry our biases, of which there are plenty. A lack of understanding among those who execute, deploy, consume and regulate them will lead us to be carried off a cliff by prejudiced, manipulable machines of our own creation. Ethical considerations have been part of AI’s core since its inception; waiting until the damage is done, or trusting that the threat of hefty financial penalties is anything more than a band-aid on a larger problem, is a mistake.
What to do about the pressing concerns with AI?
Sadly, solving these problems won’t be as easy as calling on a scruffy, no-rules detective to beat Alexa’s evil cousin, as Fox’s TV show “neXt” would have us believe. There is no “other” here, no rogue AI from an experiment gone wrong, just the endorsement of our societal prejudices and misgivings, packaged into shiny, deterministic lines of code.
We already know how to handle biases; humanity has been intelligent enough to recognize and correct them (to varying degrees) before. The challenge is applying those solutions to systems we once imagined to be fair or objective. The first step is to establish limits and accountability for the people responsible for each stage of the data process: collection, storage, exploration, modeling and deployment. The next is to stay aware of how easy biases are to miss and to accept that algorithms make mistakes, sometimes frequently and with more impact than expected. Ultimately, success means being prepared to course-correct proactively, if not discard the algorithm outright, before the damage is irreversible.
In an InfoQ presentation, Rachel Thomas outlines some solutions to the problems discussed above, such as:
- Don’t maximize metrics – It’s easy to want numbers to go up, but the more we reduce data to numbers, the easier it becomes to miss mistakes, reward negative behaviors in the services we provide, or enable restrictive feedback loops that unjustly misrepresent those services.
- Hire diverse teams – Research shows diverse teams perform better; analytically, they bring a greater comprehension of the subject matter your datasets encompass, leading to better, actionable insights and greater assurance that bias is not coming from your team.
- Push for policy – “Advanced technology is not a substitute for good policy,” says Thomas. Bias is hard to catch, no matter how good your algorithms or pipeline are. Regulations and protocols need to be in place so users can appeal decisions dictated by algorithms.
- It’s a never-ending job – We will never reach a point where people and our algorithms are bias-free; that’s how we got here in the first place! It’s our responsibility to train people to always be on the lookout for the feedback loops, biases and prejudices implicit in our data.
In the end, leave the black-and-white scenarios for bad science fiction TV shows.
About Sergio Morales Esquivel
Sergio Morales Esquivel is the Principal Engineer of Analytics Strategy at Growth Acceleration Partners and a professor at the analytics post-graduate program at Cenfotec University. Sergio leads the Data Analytics Center of Excellence at GAP, where he directs efforts to design and implement solutions to complex data-related problems. Sergio holds a B.S. in Computer Engineering and an M.S. in Computer Science from Tecnológico de Costa Rica. Outside of work, he enjoys traveling, making games and spreading the love for open software and hardware. You can connect with Sergio on his website or LinkedIn, or send him an email.