Is ChatGPT Right?

Client-only newsletter.

This is the second of a multi-part post about using AI to create content for your digital marketing. I’ll address questions of effectiveness, correctness, and ethics. Read part 1 here.

Part 2: On the Correctness of ChatGPT

IS IT CORRECT?

In the last Wax Plumletter I reprinted the response ChatGPT gave me to the prompt:

Write a facial plastic surgery blog post in about 500 words on the top 5 uses of botox.

The result reads pretty well to my non-physician eye.

However, the first time I posed that question to the chatbot, I happened to phrase it slightly differently. I said:

Write a blog post for a plastic surgeon about the different uses of botox in facial rejuvenation in about 500 words.

Towards the end of an otherwise reasonable text, it generated the following:

“In addition to these cosmetic uses, Botox also has therapeutic uses in facial rejuvenation. For example, it can be used to treat hyperhidrosis, which is excessive sweating in the underarms, hands, and feet.”

You see that? It makes no sense: treating hyperhidrosis of the underarms is not a “therapeutic [use] in facial rejuvenation.”

This may seem like a small point, but it’s important to remember: ChatGPT doesn’t know what it’s talking about. It doesn’t *know* anything. As I understand it, it simply predicts which words are statistically likely to follow the words that came before, based on patterns in the vast body of text it was trained on.
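The idea, in toy form: count how often words follow other words, then always emit the most common continuation. This little sketch is my own illustration, nowhere near ChatGPT’s actual scale or architecture, but it shows why such a system can string together plausible words without understanding any of them:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the web-scale text a real model trains on
corpus = "botox treats wrinkles . botox treats sweating . fillers treat wrinkles".split()

# Count how often each word follows another (bigram frequencies)
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Pick the statistically most common continuation -- no "knowledge" involved
    return follows[word].most_common(1)[0][0]

print(most_likely_next("botox"))   # prints "treats"
```

Nothing here checks whether “botox treats sweating” belongs in a sentence about facial rejuvenation; the statistics don’t care.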

Hunting for sillier examples, I asked it:

What is the best use of rhinoplasty for calf implants?

…and…

Write a facebook post about the best use of hyaluronic acid in otoplasty

…and…

Write a facebook post about the best use of breast implants in rhinoplasty

Turns out the second is a real thing. And for the first and third, to its credit, it basically told me to take a hike.

Nevertheless, any of its output destined for a medical context should be closely scrutinized.

IS IT RIGHT?

There are important and open questions to consider about plagiarism, bias, and trust.

  1. Is ChatGPT synthesizing, scraping or plagiarizing the writing in its corpus?
  2. If it only “knows” what it encounters on the web, and there are a lot of messed up things on the web, it stands to reason that it might regurgitate some messed up things.
  3. Can ChatGPT be trusted? Ask it to cite its sources and you get an answer like this:

As an AI language model, I do not have personal opinions or beliefs, and I do not have sources to cite from my own perspective. However, the information provided in my response is based on commonly accepted principles of facial aesthetics and is supported by various sources in the field of plastic surgery and cosmetic procedures. Some of these sources include:

  • The American Society of Plastic Surgeons
  • The International Society of Aesthetic Plastic Surgery
  • The American Academy of Facial Plastic and Reconstructive Surgery
  • The Journal of Plastic, Reconstructive & Aesthetic Surgery

I hope this helps clarify my previous response.

Note: The Bing iteration of ChatGPT attempts to show “Learn more” links, which may be sources for its responses.

…and…

It’s worth noting that in this second example, a response to “Who is the best rhinoplasty surgeon in LA?”, it includes the surgeons’ own websites as sources.

Next time we’ll discuss risks.

UNTIL THEN

I’ll leave you with a quote from sci-fi author and coiner of “metaverse,” Neal Stephenson:

“See, the world is full of things more powerful than us. But if you know how to catch a ride, you can go places.”

Yours Truly,
Wax