A standing-room-only audience of Fall Conference attendees was on hand to hear about the risks and opportunities of artificial intelligence (AI).
Presented by Gannett Fleming’s Christian Birch and Anand Stephen, the session began with Birch asking attendees how many had used generative AI such as ChatGPT. Birch was unsurprised when 90 percent of hands went up. He noted that design professionals are natural consumers of AI and, therefore, have a unique responsibility to help manage its risks.
Those risks are very real, Birch noted, on both the technical and the human side. He shared a statistic that drew nervous laughter from the crowd: “Fifty percent of AI researchers believe there is a greater than 10 percent chance that humans will go extinct from our inability to control AI.”
Neither Birch nor Stephen endorsed that specific doomsday scenario, but both agreed that with no accountability for AI companies, all of the risk is assumed by users. That risk, they contend, is not just financial but emotional and societal as well.
The first example of widely adopted AI, Birch noted, was social media. At its inception, social media was lauded as an equalizer: it gave everyone a voice, connected communities, and went a long way toward leveling the marketing playing field for small businesses. All of that has happened, he argued, but the human cost has been high.
Information overload, influencer culture, the proliferation of fake news, political polarization, and record levels of mental health issues among young people are all, at least in part, byproducts of a social media-driven culture.
With generative AI like ChatGPT, we have a second chance. AI has the potential to improve efficiency and to address previously intractable challenges such as climate change and cancer.
But, like social media, its myriad benefits come with an equally significant downside. How does a society function when its ability to differentiate what’s real and what isn’t is compromised?
Birch and Stephen concluded that it is our responsibility as an industry to be part of the conversation surrounding AI, and that firms need to build the technology into their policies. AI is here to stay, they argued, and it’s not too late to build a framework around it that harnesses its potential while mitigating its risks. “We still have time to tie governance around this and be able to achieve a trustworthy AI,” Birch said.