Artificial intelligence is not as smart as it has been made out to be, and fears of its destructive potential are overblown, a summit of AI experts has heard.
Speakers at the Humanising AI Futures event at the University of Technology Sydney on Friday discussed the growing potential and ethical implications of AI, envisioning its impact on society, technology, and humanity at large.
The half-day symposium of experts from a range of fields heard that tighter regulation may not be advisable as a means of controlling AI’s growth, and that managing the risks while reaping the benefits may be less complicated than people believe.
“I think we sometimes overlook how narrow our ideas are about digital systems,” said Dr Michael Falk, from the UTS School of Communication, Digital and Social Media, adding that fear of AI had been largely generated by ‘marketing spin’.
“When ChatGPT says ‘I am a generative text, I cannot do that’, it’s lying. There is no ‘I’ behind that text.
“We are trapped in this Marvel universe where we think that a large server rack is going to come out of our basement at night.”
Dr Falk argued artificial intelligence was not intelligent at all and was misleading and frequently inaccurate.
Instead, he suggested the term “artificial stupidity”.
However, despite the flaws and fears of generative AI, experts said the possibilities for AI to enhance society were transformational.
Keynote speaker Professor Heather Horst, chief investigator at the ARC Centre of Excellence for Automated Decision-Making & Society at Western Sydney University, echoed the need to understand AI, telling the audience that such understanding would be key to shaping its future.
“We need to understand the effects that AI is having on the production of knowledge and culture, industry, academia and other contexts,” Professor Horst said.
She said AI needed to be understood “both in the big innovations as well as the small acts of creativity.”
“Algorithms increasingly govern what we see in our internet searches and on our social media feeds,” she added.
“Systems capable of making automated decisions based on machine learning models govern more and more aspects of our lives, from whether we receive loans, secure jobs, how much time we spend in prison or the likelihood we will be searched by police officers.”
However, the possible pitfalls of a future with AI were not glossed over.
“This is not to say the explosive growth of automated decision-making systems is not startling and that the power they have in our lives does not require close attention,” she said.
Professor Heather Ford, from UTS’s Data Science Institute’s Data and AI Ethics Cluster, said ideas surrounding AI “often centre on the affordances of technology, what it can do or do to us”.
She said the institute had three goals: a) to understand how people work with and without technology; b) to examine ways to govern technology in the public interest, from local to government levels; and c) to reimagine how technologies might be designed in creative and innovative ways.
“Research can help to build better technology, often by refining what the criteria for ‘better’ are,” she said. “We do this because we know that critical, independent, and rigorous research is essential to the development of technologies.”
UTS Deputy Vice-Chancellor (Research) Professor Kate McGrath said: “It’s really challenging to keep up and understand the consequences, both positive and negative.
“This is really the height of quality-based scholarly activity and at the height of importance for us as a nation to be exploring and tackling,” she said.
Professor McGrath said AI could be fundamental in solving some of the biggest issues on both a national and international scale.
“It has the ability to really aid us as a society to tackle some of the most important challenges that we are facing at the moment,” she added. “These include climate change, critical health problems, finding new energy sources and advancing education and learning for all.
“All of us in this room would acknowledge how important it is to solve those things, and anything that can help us do that is something that we should explore actively and proactively.”
Low digital literacy levels appeared to be the biggest concern among the experts.
Dr Adam Berry cited a lack of representation as a major issue for AI.
“I think there are risks that come from generally lower levels of data literacy when it comes to the early stages of stuff like ChatGPT,” he said.
“AI starts to exclude conversations about really important things. We have a history of doing very badly at including people with disabilities in artificial intelligence.”
Dr Berry said including people with disabilities in the research and development of generative artificial intelligence would be crucial going forward.
Professor Kirsty Kitto, from the UTS Connected Intelligence Centre, also said digital literacy and awareness were crucial to managing AI, which she argued has the power to change the way we view data.
“It is hard to opt out with how big this is becoming,” she said. “We need to get better at helping those people who aren’t experts to ask those critical questions.”
However, she doesn’t believe strict regulations or guidelines are the key to managing this new era of technology. “I find guidelines not helpful. I think they’re reactive when we need to be more anticipatory,” she said.
Some experts disagreed, saying tighter regulation of AI was necessary to prevent possible misuse of the technology.
Professor David Lindsay warned vague rules and regulations regarding artificial intelligence were not enough.
“Let’s be as blunt as possible,” he said. “There is an international race regarding the development of AI. Weak and wishy-washy regulation is not going to cut the mustard.
“We need to understand technology and, more importantly, how technology is morphing and changing.”
However, Dr Michael Davis, from UTS’s Centre for Media Transition, said regulation was needed but would not resolve every concern, including the risks generative AI posed to media and journalism.
“Principles are important, but I think what matters more is what happens in practice,” he said. “Generative AI opens a wider range of opportunities as well as deeper risks.
“To manage the risks, it’s important for journalists to be aware of what they are doing and of the potential for misinformation from AI.”
Many speakers said multidisciplinary collaboration was crucial to managing the concerns and benefitting from AI.
Professor Alan Davidson, the Dean of UTS’s Faculty of Arts and Social Sciences, said the key to understanding AI and its real-world impacts was a ‘human-centred view’.
“Together, we can better prepare and shape ourselves for impact, positive or negative, foreseen or unforeseen, deliberate or accidental, of AI on our world,” he said.
“Of particular concern for me is the impact of AI on social media and the functionality of our people.
“As an academic community, we have a duty to bring our best ideas and scrutiny to these concerns.”
Main Image: Deepak Pal/Flickr