PODCAST OVERVIEW
Cultural impact of artificial intelligence on society: Potential benefits and concerns
February 16, 2024 • 5 minutes
There is no shortage of societal concern around the advent of artificial intelligence (AI) in our everyday lives, especially as AI becomes ever more entwined with the cultural fabric of our world.
What implications does artificial intelligence have on creativity and society at large?
AI can cause conflict, but it can also be used to help solve disputes and build consensus.
In our latest Generation AI podcast episode, Ori Freiman, Fellow at the Center for International Governance Innovation, discusses the potential and expected impacts of AI-generated content on culture, the challenges in globally regulating tech giants, and the ethical considerations surrounding AI technologies in health and education.
The potential for artificial culture and how AI plays a role in shaping culture
Historically, new inventions and innovative, emerging technologies have played a large part in spreading culture. Consider the creation of the printing press and the subsequent spread of literature, literacy, and knowledge as a prime example.
If we expect to one day have artificial general intelligence with the ability to create something structured — such as a novel, TV series, or movie — in a way that is virtually indistinguishable from human-created content, it isn’t a far leap to consider that AI might one day be able to build its own culture.
But we’re not quite there…yet.
“Large language models are relatively new within the last year. Who would have believed they would look like this at a year old? It’s not far-fetched to imagine a scenario where LLMs are trained on texts produced by AI,” Freiman says.
We see the beginnings of this today, as AI-generated training data is used to augment models and make them smarter. But what is still missing to bridge the gap between human-created content and AI-generated content?
The ability to plan.
It might not be long before developments in AI algorithms bridge that gap, but what will happen to culture when stories and other creative endeavors aren’t under human beings’ control anymore?
That question may remain unanswered for now, as policymakers scramble to keep pace with regulation and rule-making around artificial intelligence.
Policy: The good, the bad, and the ugly of internet and AI regulation
The internet and artificial intelligence are two key policy areas to watch in the upcoming year. Both are the subject of intense regulatory debate, from the European Union to the United States, Canada, and the UK.
“It’s a problem because laws don’t apply everywhere as these are multinational companies. We need better tools to collaborate internationally,” says Freiman.
There is no regulatory body ready and waiting to address problems or concerns around AI systems, and that absence is a problem in itself, one shared across every jurisdiction.
Still, it's worth noting the situation is something of a catch-22: how would a global governing body even work? Who would drive its decision-making, and why?
The light and dark side of the current regulatory climate
“Some places, like the European Union, are doing wonderful work with privacy regulations through the democratic process. Of course, there’s always room for criticism and improvement, but it’s been positive,” says Freiman.
The EU's efforts have also inspired other jurisdictions, thanks to the bloc's market power.
“However, the ability of those multinational corporations to negatively affect the process is concerning. And when I say negative, I mean actions that are not in favor of the citizens,” Freiman says.
To stay out of the gray area and far from the dark side, it’s important to be mindful of the interests at play both for larger corporations and the general public.
“What we see is only what’s publicly available. The democratic processes should be much more transparent. It’s hard to keep up to date with everything that happens,” says Freiman.
Ultimately, it's about having the right people involved in the democratic process. How decisions are made, such as consulting the key players whose input ethical guidelines require, matters as much as which decisions are made.
The potential dark side of artificial intelligence
If regulatory initiatives prove misguided or fail outright, could we see a dark side of AI? Are there real dangers to artificial intelligence?
One clear danger is the automation of misinformation.
“With the cultural effects of using AI, the pace news gets picked up and travels is concerning. It’s mind-blowing the consequences of things that we didn’t have a few years ago,” Freiman says.
There is still so much that is unknown and can’t be predicted about the potential influence of artificial intelligence on the human mind. After all, the output of these AI systems is in the same language as the operating system of the human mind, leaving a lot of room for potential.
Whether the overall influence of AI development on human well-being will be good or bad, light or dark, depends on how things move forward. It's important to stay mindful of what is doing the influencing, as there are safety and ethical concerns about external influences on people.
Despite the potential for unwanted or unexpected influence and the spread of misinformation, there is still room for positive outcomes from the impact of artificial intelligence on society and future culture.
“It comes together, as the cliche goes. At the end of the day artificial intelligence is also used for drug discovery and saving lives,” Freiman says.
Ethical concerns for future applications of AI systems
Given the concern for current AI models’ potential to spread misinformation, there are some proposed or expected applications of artificial intelligence that pose ethical concerns.
“For example, a therapy chatbot could be a really good idea, but the state of the technology now makes it a really terrible idea,” says Freiman.
With the technology as it stands, an AI therapist can't work for the same reason AI chatbots can't successfully write original novels: they can't make a plan.
Technology may eventually reach the point where a bot can write a book on its own, and there is likely little risk in an AI having a good idea for a novel. But consider the potential risks of dehumanizing therapy.
“We need therapists with the ability to empathize, and that is something AI cannot do,” Freiman says.
Despite these concerns, there is much room for positive outcomes in the adoption and future use of generative AI tools. It will be interesting to see what the future holds.