Those inclined to think apocalyptically know that tech, in its purest form, spells civilizational disaster. It is true that we might never see a world filled with violent hypertrophic CRISPR babies, and uncontrollable self-driving cars, and AI intent on twisting humans into paperclips. Our tech-hastened end, if and when it does arrive, will probably look a bit different and will probably suck in ways we cannot yet imagine. In the meantime, though, it’s worth wondering: what’s the most dangerous emerging technology? For this week’s Giz Asks, we reached out to a number of experts to find out.
Zephyr Teachout
Associate Professor, Law, Fordham University
Private workplace surveillance. It worsens the already awful employer-employee power dynamic by allowing employers to treat employees like guinea pigs: with vast asymmetries of information, employers can learn what pushes people to work in unhealthy ways and how to extract more value for less pay. It allows them to weed out dissidents with early-warning systems and to destroy solidarity through differential treatment. Gambling research taught casinos how to build profiles of individual gamblers and tailor appeals to each one's weaknesses so as to earn as much as possible from every player. That technology, now entering the workplace, is on the verge of ubiquity, unless we stop it.
Michael Littman
Professor, Computer Science, Brown University
The 2021 AI100 report, released last month, included a section on the most pressing dangers of artificial intelligence (AI). The 17-expert panel concluded that as AI systems prove increasingly beneficial in real-world applications, their reach has broadened, and with it the risks of misuse, overuse, and explicit abuse.
One of the panel’s biggest concerns about AI is “techno-solutionism,” the attitude that technology like AI can be used to solve any problem. The aura of neutrality and impartiality that many people associate with AI decision-making leads such systems to be accepted as objective and helpful even when they are applied inappropriately or built on biased historical decisions, or even blatant discrimination. Without transparency concerning either the data or the AI algorithms that interpret it, the public may be left in the dark as to how decisions that materially affect their lives are being made.

AI systems are also being used to spread disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. Insufficient attention to the human factors of AI integration has led to oscillation between mistrust of AI-based systems and over-reliance on them. And AI algorithms are playing a role in decisions about allocating organs, vaccines, and other elements of healthcare, meaning these approaches have literal life-and-death stakes.
The dangers of AI automation are mitigated when, on matters of consequence, the people and organizations responsible for the outcomes play a central role in how AI systems are brought to bear. Engaging all relevant stakeholders can drastically slow the delivery of AI solutions to hard problems, but it is necessary: the downsides of misapplied technology are too great. Technologists would be well served to adopt a version of the healthcare dictum: first, do no harm.