Does generative artificial intelligence pose a risk to humanity that could lead to our extinction?
That was among the questions put to experts by the head of the U.S. Federal Communications Commission at a workshop hosted with the National Science Foundation.
FCC chairwoman Jessica Rosenworcel said she is more hopeful than pessimistic about artificial intelligence. “That might sound contrarian,” she said, given that so much of the news about AI is “dark,” raising questions such as, “How do we rein in this technology? What does it mean for the future of work when we have intelligent machines? What will it mean for democracy and elections?”
The discussion included participants from a range of industries, including network operators and vendors, as well as leading academics, federal agencies, and public interest representatives.
“We are entering the AI revolution,” said National Science Foundation senior adviser John Chapin, who described this as a “once-in-a-generation change in technology capabilities” that will “require rethinking the fundamental assumptions that underlie our communications.”
“It is vital that we bring expert understanding of the science of technology together with expert understanding of the user and regulatory issues.”
Investing in AI
FCC Commissioner Nathan Simington pointed out that while technology may sometimes give the appearance of arriving suddenly, in many cases it’s a product of a steady but unnoticed evolution decades in the making. He gave the example of ChatGPT as AI that landed seemingly overnight, with dramatic impact.
“Where the United States has succeeded in technological development, it has done so through a mindful attempt to cultivate and potentiate innovation.”
Lisa Guess, senior vice president of Solutions Engineering at the firm Ericsson/Cradlepoint, expressed concern that her company’s employees could “cut and paste” code into the ChatGPT window to try to perfect it, thereby exposing the company’s intellectual property. “There are many things that we all have to think through as we do this.”
Other panelists agreed. “With the opportunity to use data comes the opportunity that the data can be corrupted,” said Ness Shroff, a professor at The Ohio State University who is also an expert on AI. He called for “appropriate guardrails” to prevent that corruption.
FCC Commissioner Geoffrey Starks said AI “has the potential to impact if not transform nearly every aspect of American life.” Because of that potential, everyone, especially in government, shoulders a responsibility to better understand AI’s risks and opportunities. “That is just good governance in this era of rapid technological change.”
“Fundamental issues of equity are not a side salad here,” he said. “They have to be fundamental as we consider technological advancement. AI has raised the stakes of defending our networks” and ultimately “network security means national security.”
Digital equity, robocalls
Alisa Valentin, senior director of technology and telecommunications policy at the civil rights organization the National Urban League, voiced her concerns about the illegal and predatory nature of robocalls. “Even if we feel like we won’t fall victim to robocalls, we are concerned about our family members or friends who may not be as tech savvy,” knowing how robocalls “can turn people’s lives upside down.”
Valentin also emphasized the urgent need to close the digital divide “to make sure that every community can benefit from the digital economy not only as consumers but also as workers and business owners.”
“Access to communication services is a civil right,” she said. “Equity has to be at the center of everything we do when having conversations about AI.”
Global competition
FCC Commissioner Simington said global competitors are “really good, and we should assume that they are taking us seriously, so we should protect what is ours.” But regulations meant to guard against the expropriation of American innovation should not go overboard.
“Let’s make sure we don’t give away the store, but let’s not do it by keeping the shelves empty.”
…