AI: Fear It or Friend It?

In popular culture, AI — artificial intelligence — is often depicted as a malevolent force: machines, built and trained by human programmers, now sentient and self-interested, doing whatever they please.

Yet AI today is abundantly available and integrated into the daily digital lives of millions. Is your inbox spam-free? That’s AI, trained to recognize keywords associated with past junk mail. Does your vehicle have self-driving or autonomous capability? Do your video games include non-player characters? Did your credit card company ask if that’s really you making a purchase? That’s AI, too.
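
For a sense of how simple the underlying idea can be, here is a minimal sketch of keyword-based spam scoring in Python. The keywords, weights and threshold are invented for illustration; real filters learn far subtler signals from millions of past messages.

```python
# A minimal, hypothetical sketch of keyword-based spam scoring.
# The keywords and weights here are invented; in practice they
# would be learned from large volumes of past mail.
SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "urgent": 1.5, "prize": 2.0}

def spam_score(message: str) -> float:
    """Sum the weights of known spam keywords found in the message."""
    words = message.lower().split()
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in words)

def is_spam(message: str, threshold: float = 2.5) -> bool:
    """Flag the message as spam when its keyword score clears the threshold."""
    return spam_score(message) >= threshold

print(is_spam("You are a winner claim your free prize"))  # True
print(is_spam("Lunch tomorrow at noon"))                  # False
```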

So, which is it — AI as threat or AI as convenience? And where is AI heading? 

“As it exists and going forward, [AI] certainly promises to make more things faster, easier and less expensive in a way that is beneficial for all of us,” says Jeffery Atik, Jacob Becker Fellow and professor at the LMU Loyola Law School. “But tied to it are some alarms.”

One development with the potential to alarm, Atik says, is “unsupervised” machine learning, in which systems gain intelligence but it is hard to ascertain how. (Supervised machine learning, by contrast, learns from human-provided labels and corrections: Google Translate improves in accuracy when people edit the machine’s mistakes.) “Unsupervised is a newer technology and the applications for that technology are not as clearly established,” Atik says. “The machine recognizes certain patterns and says, ‘Here’s a jumble of data, but something interesting is going on. … It’s something that you humans might not have noticed before, but I’m picking up something.’”
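
A toy example makes the distinction concrete. In the sketch below, which assumes the scikit-learn library and uses synthetic data, no human labels anything, yet the algorithm partitions a jumble of points into groups on its own, much as Atik describes.

```python
# A toy illustration of unsupervised learning: k-means clustering
# (via scikit-learn, an assumption of this sketch) finds structure
# in unlabeled data without any human-supplied answers.
import numpy as np
from sklearn.cluster import KMeans

# Synthetic, unlabeled data: two hidden groups the code is never told about.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
group_b = rng.normal(loc=5.0, scale=0.5, size=(50, 2))
data = np.vstack([group_a, group_b])

# The machine "picks up something": it partitions the points into
# two clusters without being told what, if anything, the groups mean.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(labels[:5], labels[-5:])  # the two hidden groups land in different clusters
```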

John Nockleby, the Susan Gurley Daniels Chair in Civil Advocacy and founding director of the law school’s Civil Justice Program and Journalist Law School, is an expert in new technology and privacy.

“The idea that machines are going to be The Terminator — we’re not there yet,” Nockleby says. “That’s the fear, that’s the risk, in terms of independent judgment making, when you empower a war machine to make autonomous judgments about whether to fire.”

Nockleby, who has worked on civil rights cases that went before the U.S. Supreme Court, is concerned about some potential uses of facial recognition software and databases. Facial recognition technology, he says, “has the effect of destroying anonymity where it is utilized. Let’s take people who are protesting government policies, or labor protesters who wish not to be identified. Identification is one way of intimidating or exposing people who may have much to lose.”

Nockleby points to the issue of algorithmic bias with a hypothetical: a police department using facial recognition software and databases to predict where crime will occur, and by whom. “A big part of the problem is that if you have a system that has systematically discriminated against certain races or ethnicities, that’s the data that’s being fed in.” The system’s database, in other words, is flush with records produced by practices that had a tremendous adverse impact on those populations.

Atik shares Nockleby’s concerns about algorithmic bias. “AI is built on data, and data is most often historical,” he says. “And so it reflects decisions made in the past” — by human beings. AI can do damage, Atik says, by “inadvertently replicating some of our worst past prejudices.”
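
To picture how that replication can happen, consider a deliberately crude sketch. The neighborhoods, approval counts and code below are invented for illustration, not drawn from any real system: a “model” that learns nothing more than historical approval rates will faithfully reproduce whatever disparity those past decisions contain.

```python
# A deliberately crude sketch of bias replication: a "model" that
# simply learns approval rates from past decisions reproduces the
# disparity baked into them. All data here is hypothetical.
from collections import defaultdict

# Hypothetical past decisions: (neighborhood, approved).
history = ([("north", True)] * 80 + [("north", False)] * 20
           + [("south", True)] * 30 + [("south", False)] * 70)

counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approved, total]
for neighborhood, approved in history:
    counts[neighborhood][0] += int(approved)
    counts[neighborhood][1] += 1

def predict_approval(neighborhood: str) -> float:
    """Predicted approval probability: simply the historical rate."""
    approved, total = counts[neighborhood]
    return approved / total

print(predict_approval("north"))  # 0.8 -- the past disparity, replicated
print(predict_approval("south"))  # 0.3
```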

Nonetheless, Atik cites sectors where AI could offer benefits, such as medical diagnosis and drug discovery. And when AI is eventually combined with faster quantum computing, areas of the legal profession that involve vast troves of documents, such as tax law, bank stress tests and antitrust cases, could also benefit.

Atik says that, most days, he remains more an optimist than a pessimist. “But I can certainly see there’s reason for concern; I don’t want to be pollyannaish about any of this.”

Jeremy Rosenberg, a frequent contributor to LMU Magazine, is a Los Angeles-based writer, editor and consultant. Rosenberg’s writing has appeared in the Los Angeles Times, the OC Weekly, at KCET.org and elsewhere. His “Mind Bridges,” about mental health care during the COVID-19 pandemic, appeared in the winter 2022 issue of LMU Magazine. Follow him @LosJeremy.