The Alumni Network Blog

The latest from the Alumni Network at Lund University

We ask Lund: What are four things we should fear when it comes to AI?

In a dystopian future, a battle is raging between humanity and an artificial intelligence whose purpose is to eradicate humans. Does that sound familiar? Or perhaps advanced biotechnical beings of the future, practically indistinguishable from humans, harbor their own emotions, intentions and goals – sometimes even terrifying ones. Countless movies about AI have been made over the years, long before the technology existed. But are these dystopian Hollywood films an accurate depiction of our future if we are not careful? We asked Kalle Åström, Professor of Mathematics, and members of the AI Lund coordination group.

If AI were to “paint” a dystopian AI future, it would look like this. Photo: AI prompted by the writer.

AI Lund is an interdisciplinary open network for research, education, and innovation in the field of artificial intelligence, coordinated by Lund University.

When Lundensaren posed the question, “What should we be afraid of when it comes to AI?” the network’s members had differing opinions on the dangers of AI. Several primarily saw benefits. Nevertheless, the group’s experts do recommend taking certain risks seriously – even if the dystopian Hollywood scenarios are likely still a long way off.

Here are four fears associated with AI and the future, according to AI Lund.

1. Fear of fear

Perhaps one of the few real dangers of AI is that we become too afraid to use it in the future, so that effective and genuinely useful AI becomes difficult to promote because public fears take over.

2. AI dependency

There is a risk that we build our society in a way that makes us dependent on AI solutions. What happens if those systems suddenly malfunction or are disrupted? This is something that policymakers need to consider.

3. Abuse of AI

As the tools become more powerful, there is also a growing risk of them being misused – for example, to spread misinformation, to conduct surveillance, or in autonomous weapons.

4. Human alienation

Increased automation and the use of AI can leave people feeling alienated from work and society. Will we only have robot doctors in the future, and thus lose the personal touch? Will research and political decisions be carried out by AI systems, with reduced human understanding and insight?


Learn more about AI Lund

Research in artificial intelligence and machine learning at Lund University is conducted in many departments across most faculties. AI Lund is an interdisciplinary open network for research, education, scientific collaboration and innovation in the field of artificial intelligence, coordinated by Lund University.
Visit the AI Lund website.

2023-10-26

This entry was posted in Featured in Lundensaren.