Elaine Chou is a law student at Santa Clara Law. She completed doctoral work in cyberethics at Georgetown University and worked at the Center for Democracy and Technology in 2016. Views are her own. Contact: etchou@scu.edu


What the future of AI holds is a crucial question with social, political, and technological dimensions. It was also the topic of this spring’s inaugural Artificial Intelligence and Social Impact series talk, delivered by Ed Felten, who in January 2018 addressed the direction of AI and its long-term impacts on the human experience on the Santa Clara campus. (Link to talk) Drawing on “Preparing for the Future of Artificial Intelligence,” the report he helped write as Deputy U.S. Chief Technology Officer at the Obama White House’s Office of Science and Technology Policy, Felten offered a rational perspective on the future of AI, arguing that its rise should not be feared. His main point was that an AI apocalypse, a sudden, sustained, and unprecedented super-explosion of machine intelligence, is not foreseeable in the near future; the actual course of AI development does not support this outdated theory of the singularity. Rather, those thinking about the future of AI should move toward a multiplicity framework, in which harnessing the power of machine and human together can create unprecedented progress.

The deeply informative, yet all-too-brief, one-hour lecture covered five parts. First, it defined AI, traced its historical development, and described how thought leaders have perceived it over time. Next, it introduced singularity theory and a variety of key concepts. Felten then dissected singularity theorists’ assertions, offering an alternative framework, multiplicity, in which general AI draws on a multitude of skills that give rise to an era, not a singular episode. Multiplicity encourages a society’s gradual transition, not an explosion, and its growth provides an opportunity for evolved human decision-making. Finally, he encouraged thinking about larger AI issues with the understanding that AI development, in conjunction with human prowess, can open up grander possibilities.

When I read the White House report, I was pleased to see that CDT’s comments on the Request for Information, which I primarily authored, were reflected. The comments focused on how policy improvements could reduce inequality in the workforce and promote societal progress as AI advances, by: (1) using AI for the public good, (2) addressing the social and economic implications of AI, and (3) harnessing AI through scientific and technical training. Rather than predicting that AI would replace jobs, our research suggested that it would require workers to gain complementary skills that draw on human traits such as talent, creativity, empathy, and compassion. For this talent revolution to take place, our society will need to profoundly change its approach to education, skills, employment, and cross-industry and cross-sector collaboration. That research left its imprint on Felten’s lecture: in areas of creativity and innovation, AI development is still rudimentary, and it has far to go before we need fear an AI explosion consuming humanity.

Contemporary real-world situations support multiplicity. A recent proof of concept came at the Content Moderation & Removal at Scale conference, held at Santa Clara Law on Friday, February 2, 2018, which included a session on “Humans vs. Machines.” The session illustrated how social media platforms’ use of AI exemplifies the multiplicity theory. Content moderation is one context in which humans and machines work in collaboration, not one versus the other. AI can surpass human ability at certain moderation tasks; in other contexts, humans surpass AI. On the machine side, AI performs the unpleasant job of reviewing and managing offensive material. It is largely consistent and reliable within its programmed algorithmic parameters, and it processes vast amounts of data quickly. It offers a non-human, non-emotional response that resists fatigue and burnout, and it preserves employees’ well-being by shielding them from sensitive material. By reducing the human factor, businesses can cut the labor costs that come with irrational decision-making, emotional states, and the incentive-based reward mechanisms needed to motivate employees.

On the other hand, the use of AI frees employees to work in productive, creative, and thoughtful ways, compensating for AI’s limits by lending diverse human perspectives and situational awareness to moderating content, anticipating trends, and making correlations. Humans have the ability to moderate judiciously, to understand that particular content may serve historical purposes, that it may be deemed newsworthy, or that it may spark a movement. Interpreting content and determining the appropriate level of moderation can demand a nuanced approach to complex situations: taking cultural and social norms into consideration, building relationships with the posters of offending content, building community consensus and peer influence, and explaining moderation procedures. Human content moderation, however, lacks scalability. Though only one example, content moderation illustrates how AI develops within the multiplicity framework, demonstrating the ongoing tensions in a gradually evolving AI environment.
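To make this division of labor concrete, here is a minimal, purely illustrative sketch in Python of how a hybrid triage pipeline might route content. The thresholds and the toy scoring function are hypothetical stand-ins for a platform’s real classifier and policies, not anything described at the conference:

```python
# Illustrative sketch of human-machine "multiplicity" in content moderation.
# All names, thresholds, and the scoring function are hypothetical.

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations: the machine acts alone
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous cases: escalate to a human reviewer

BANNED_TERMS = {"spamlink", "slur_example"}  # toy stand-in for a trained model

def violation_score(text: str) -> float:
    """Toy scoring function: crude proxy for an ML classifier's confidence."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in BANNED_TERMS)
    return min(1.0, hits / len(words) * 10)

def triage(text: str) -> str:
    """Route content: machines handle scale, humans handle nuance."""
    score = violation_score(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto-remove"          # high confidence, no human exposure needed
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human-review-queue"   # newsworthiness, culture, context
    return "keep"

if __name__ == "__main__":
    for post in ["a normal post", "spamlink spamlink spamlink"]:
        print(post, "->", triage(post))
```

The point of the design is the middle band: high-confidence cases scale through the machine, while ambiguous cases, where newsworthiness, cultural norms, and context matter, are reserved for human judgment.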

And yet, as AI evolves, the ethical issues that AI systems raise become ever more complex. While AI poses no singularity-style danger in the foreseeable future, it can operate in unique ways that promote productivity as well as widespread and unfettered maliciousness. Some of the most obvious concerns are the 3Ms: misuse, misdirection, and misinformation, that is, being fed false data.

A more realistic sci-fi AI scenario might suggest that untamed robots’ near-infinite physical might, combined with even unsophisticated artificial intelligence, is sufficient to create tragic conditions. In the hands of an international adversary, or given malign programming, could AI technologies provoke near-nuclear repercussions?

Yet, as multiplicity theory suggests, a hybrid system of mechanical devices powered by humans, one combining biological and engineered parts, grows highly optimized and efficient. Just as machines operate best under human oversight and direction, humans too will be augmented with cybernetics. The next generations have the chance to literally become super-humans and superheroes. However unique humans are in ethical judgment, discernment, and living meaningfully and conscientiously, sophisticated mental prowess paired with strong ethics will be the new currency for ensuring a just world.

From a governance perspective, perhaps one approach to de-stigmatizing AI’s prolific development, and to promoting the concept of multiplicity, is to popularize this era as the “race to AI,” a “challenge to the technological frontier,” much as the U.S. race to the moon promoted progress on the space frontier in the 1960s.

But one must wonder: is it already too late for the U.S.?

Have foreign adversaries already realized and unlocked AI multiplicity’s potential? Is China’s investment in AI a race to harness multiplicity first? Have Russian artificial-intelligence fake-news bot generators already put multiplicity into practice? In essence, Russian bots are highly optimized, efficient hybrid systems of engineering with human oversight: humans develop machine-learning algorithms, built to imitate humans, for the purpose of propagandizing and manipulating susceptible human minds. Are the extraordinary reach and influence of these troll-bots the weapon of choice in this global “race” to win? Perhaps the meddling in our Western democratic process, our U.S. elections, foretells the greater menace.