Editorial: "AI that both impresses and frightens"

"In the past, I've rarely been particularly impressed by something that was produced by AI. But this is something completely different," Samuel Lagercrantz writes in an editorial.

Like many others, I've recently been logging on to OpenAI's site to ask the much-talked-about AI tool ChatGPT questions about all sorts of things. I've asked for movie recommendations, how big the universe is, how to develop a drug for ALS and much more. Regardless of my questions, the answers have been well worded, and I have not detected any obvious factual errors.

It's both fascinating and a bit scary. Of course, the chatbot can only describe the process of developing a new drug for ALS, not precisely how to do it, and it is important to keep in mind that the bot's answers are entirely based on what people have already written. Despite knowing that, I start calling ChatGPT "you" after a while, as if it were a human being, a friend, I was talking to.

In the past, I've rarely been particularly impressed by something that was produced by AI. But this is something completely different.

When the Swedish radio programme Studio Ett recently devoted a whole episode to AI, it was said that we are now in an "AI summer", that is, a period of significant progress for artificial intelligence. ChatGPT was mentioned as an example. The interviewees agreed that AI will change our entire existence in the near future.

"What I feel very confident about is that in 20-30 years AI will have radically changed life on Earth, and it will either be the best thing that has ever happened to humanity or the worst," said physicist and AI researcher Max Tegmark in the programme.

It is easy to imagine dystopian future scenarios in which we humans have lost control over the algorithms and AI has run amok. It is vital that politicians, those who work with AI development and all the rest of us take these concerns seriously.

However, when it comes to healthcare, most agree that artificial intelligence can have huge benefits. Max Tegmark also emphasises that we should go full speed ahead with regard to AI in healthcare, for example as support for diagnosing cancer.

What are the risks of AI tools in healthcare? You might imagine that ChatGPT is biased and unwilling to answer that, but I put the question to the bot anyway, and it actually gives me some examples of serious dangers.

"AI can replace human expertise and knowledge, which can lead to lost human abilities and reduced quality of care," ChatGPT states.

The bot goes on to highlight risks of discrimination if the AI systems are affected by discriminatory patterns in the data used to train the tools. Data privacy and lack of transparency are other items on the list of risks from the bot.

"AI systems can suffer from technical problems, including algorithm errors and system crashes, which can lead to adverse events and serious consequences for patients," ChatGPT concludes.

The positive effects of more AI in healthcare are obvious. Artificial intelligence can analyse vast amounts of data and act as decision support for doctors so that they can make a diagnosis faster and more accurately. When properly used, AI also relieves the burden on the healthcare system so that doctors and other staff can devote more time to patient encounters. Those are just a couple of the benefits.

This article is part of our News in English theme.
