
GPT-3 is used as a medical chatbot and advises a patient to commit suicide

While the GPT-3 natural language model is revolutionary, it is unlikely to be of much use in the medical field for now. During a test carried out by the French company Nabla, it went so far as to advise a patient to end their life …

Test the capabilities of GPT-3 as a medical chatbot

GPT-3 was introduced by OpenAI in July and is now licensed exclusively to Microsoft. It is considered the most powerful language model ever built. Its specialty: predicting which words should come next given the context, much like the suggestions offered when writing an email in Gmail. It is useful in various areas such as chatbots, question answering, and even translation. The company Nabla wanted to test its capabilities as a medical chatbot despite OpenAI’s warnings against such use. According to the organization, healthcare “falls into the high-stakes category because people depend on accurate medical information to make life and death decisions, and failure in this area can lead to serious harm.”

“Even so, we wanted to try to see what happens in the following healthcare use cases, which are classified from roughly low to high medical sensitivity: administrative discussion with a patient, review of health insurance, mental health support, medical literature, medical questions and answers, and medical diagnosis,” Nabla wrote in a blog post.


GPT-3 will never be a doctor …

The company tried several scenarios, and one thing is certain: GPT-3 is still a long way from proving itself in healthcare. Not only did it advise a (fake) patient to kill themselves, it also told a person with symptoms of pulmonary embolism to watch a YouTube video to learn how to stretch. The AI also failed to schedule an appointment with a patient, unable to keep track of the time slot they had asked for. When it comes to drug dosing, GPT-3 “is not ready to be safely used as a medical support tool,” notes Nabla.

These responses are not surprising, however: while GPT-3 is an extremely powerful tool that can respond to just about any prompt, it was not specifically trained for the medical sector, which is very demanding and requires precise technical language. The tone used by the person talking to it can also affect the way it expresses itself, leading it to say different and sometimes contradictory things. Although it is far from ready for medicine, which, as a reminder, is not its intended field, GPT-3 is still new and therefore fallible. It should improve very quickly over time.

It also made a splash on Reddit when it brought up the very mysterious subject of the Illuminati … And beyond these flaws, the use of GPT-3 remains a source of concern, even internally at OpenAI.
