
OFF TOPIC: chatGPT glibly produces a lot of wrong answers?

4 messages · Bert Gunter, Stephen H. Dawson, DSL, Calum Polwart +1 more

#
**OFF TOPIC** but perhaps of interest to some on this list. I apologize in
advance to those who may be offended.

The byline:
********************************
"ChatGPT's odds of getting code questions correct are worse than a coin flip

But its suggestions are so annoyingly plausible"
*************************************
from here:
https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/

Hmm... Perhaps not surprising. Sounds like some expert consultants I've
met. ;-)

Just for amusement. I am ignorant about this and have no strongly held
views,

Cheers to all,
Bert
#
Thanks.

https://www.wsj.com/articles/with-ai-hackers-can-simply-talk-computers-into-misbehaving-ad488686?mod=hp_lead_pos10

Ever heard of AI prompt injection?


*Stephen Dawson, DSL*
/Executive Strategy Consultant/
Business & Technology
+1 (865) 804-3454
http://www.shdawson.com
On 8/13/23 13:49, Bert Gunter wrote:
#
It does often behave better if you say to it "that doesn't seem to be
working" and perhaps include the error message.

It is, after all, a language tool. Its function is to produce text that seems
real.

If you ask it a science question and ask it to provide references in
Vancouver format, it can format the references perfectly. They will be from
real authors (often ones who have published in the general field), and they
will be in real journals for the field. But the titles are entirely
fabricated, though plausible.

Expect many a scammer to get caught out...
On Sun, 13 Aug 2023, 18:50 Bert Gunter, <bgunter.4567 at gmail.com> wrote:

#
Hi Bert,
The article notes that chatGPT often gets the concept wrong, rather
than the facts. I think this can be traced to the one who poses the
question. I have often encountered requests for help that did not ask
for what was really wanted. I was recently asked if I could
graphically concatenate years of derived quantities that were based on
an aggregation of daily cycles. You get an average daily cycle for
each year, but this doesn't necessarily connect to the average daily
cycle for the next year. It didn't work, but it was a case of YKWIM
(You Know What I Mean). In this failure of communication, the
questioner frames the question in a way that may be figured out by a
human, but is not logically consistent. It is the basis for some very
funny scenes in Douglas Adams' "Hitchhiker" series, but can be
intensely frustrating to both parties in real life.

Jim
On Mon, Aug 14, 2023 at 3:50 AM Bert Gunter <bgunter.4567 at gmail.com> wrote: