22 Comments
Paul Black:

What a brilliantly articulated example to use. However, many users turn a blind eye, lazily trading off feeding the "Beast" for convenience and expediency on their way to their own obsolescence and redundancy.

The Underdog:

I think quite a few people genuinely don't realise the pitfalls. Many people pre-GPT acted like computers were (somehow) psychic (which was a headache when trying to explain the computer cannot guess their intentions). Post-GPT I imagine that belief has gotten worse.

I read about how a lawyer used ChatGPT to build a legal case and lost his licence. You might say 'it was well deserved', but I think he was of the view that ChatGPT, being a computer, "could do no wrong" and "make no mistakes" - that is to say, he thought it was referencing legitimate materials and couldn't lie about citations.

It's a bit like if a search engine gave you a link saying "peer reviewed study!", but only when you click on it does it turn out to be a spam website.

I think many people have the (naive) worldview AI is magic or 'really good', and it takes someone who knows both about AI and specialist subjects to recognise the pitfalls.

Paul Black:

I remember the lawyer one. Didn't AI make up fake case law? Must admit I have been tempted to cut corners on occasion but not tried it.

The Underdog:

Essentially it did, yes. From my understanding, it produced legal quotes but attributed them to the wrong cases. ChatGPT is horrible at citations (whether legal, peer-reviewed, etc.). Whilst verbatim quotes can *sometimes* be correct, it nearly always misattributes the source. Sometimes it will even fabricate the titles of papers and cases out of thin air.

The fact that OpenAI only has a generic disclaimer saying "it may produce incorrect results", rather than a set of explicit disclaimers, including one saying "do not trust any citations, as these are typically wrong", shows OpenAI refuses to explicitly warn people of major pitfalls. "May be wrong" is a lot weaker than "nearly always wrong".

Sophocles:

My brother makes a sport of catching ChatGPT out. It can tie itself in real knots when you challenge suspicious “truths”, like the population of India, or when a laser was first reflected off the moon.

Peter Taylor:

One last thing: this AI Revolution is fodder for the masses, many of whom are brain-lazy, ignorant, and just not motivated to do the yards of researching, reading, understanding, and then discerning. In other words, applying critical thought - doing the very basics of what's needed to identify truths and find the answers to much of what assails and assaults them today. It is both sad and real, an indictment of what many have become: dumbed-down non-thinkers, products of the world they have been programmed and manipulated into thinking is reality. Little wonder so much has been taken. Kia kaha from Bali, Indonesia

The Great Reject:

But this is how technology has always been marketed. As convenience to the lazy and ignorant.

EK MtnTime:

Good to know! Also good to know because I have never been inclined to try any AI out… just on principle, and because the programmers have programmed in so much bias! 🤣

Franklin O'Kanu:

As someone who uses ChatGPT regularly, I can appreciate this article. So much so that I saved it.

Thanks for the input and background knowledge

2mg:

Not sure if links are permitted here, but I would like to share my most egregious experience with ChatGPT. When I asked it to calculate degrees of freedom for a statistical test, it consistently failed to correctly evaluate the fractions in the variance formula, and even after I gave it the correct averages and variances it failed to produce the correct result: https://www.scribd.com/document/760656810/ChatGPT-Math
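For anyone curious, here is a quick sketch (mine, not from the linked document) of the kind of calculation being described, done deterministically in Python. The sample-variance formula and the Welch-Satterthwaite degrees of freedom for a two-sample t-test are standard; the data values below are made up for illustration:

```python
def sample_variance(xs):
    """Unbiased sample variance: sum of squared deviations over (n - 1)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def welch_df(a, b):
    """Welch-Satterthwaite degrees of freedom for a two-sample t-test."""
    va, vb = sample_variance(a), sample_variance(b)
    na, nb = len(a), len(b)
    num = (va / na + vb / nb) ** 2
    den = (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    return num / den

# Illustrative samples (not the data from the linked chat):
a = [1.0, 2.0, 3.0, 4.0]
b = [2.0, 4.0, 6.0, 8.0]
print(sample_variance(a))  # ≈ 1.6667 (exactly 5/3)
print(welch_df(a, b))      # ≈ 4.4118 (exactly 75/17)
```

The point being: this is four lines of grade-school arithmetic per function, and a pocket calculator gets it right every time, while a language model evaluates the fractions by pattern-matching rather than computation.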

The Underdog:

Related to your point, one thing I forgot to mention is that prior to ChatGPT's release, OpenAI *knew* GPT-3 (the precursor to ChatGPT) was absolute garbage at mathematics, despite their best efforts to "fix" (brush under) the problem. They never notified the public of this glaring flaw upon ChatGPT's release. There are a number of known major flaws that OpenAI's generic, watered-down disclaimer does not mention.

If you pick sufficiently large multiplications or additions, or ask it uncommon sums not typically found on the Internet, ChatGPT will consistently get them wrong. The errors also compound: if it makes a mistake in a given thread, it will keep repeating that mistake until you start a new one.

The best descriptor of ChatGPT is it believes in whatever is popular, not what is right.

The Underdog:

Links are fine here, although I will ban any posters making use of either spam sites, proxy-to-spam sites, blatant propaganda or suspected IP loggers/malware/other hostile web materials, subject to review.

Eccentrik:

a popular job position right now is "AI trainer"

if only these people realized they're training their replacement...

Eris:

Right, AI is a big illusion, and we need to be worried about it being pushed into all areas. AI models are human-made products, and they “represent the views and positions of the people who developed them".

The consequences of their adoption, use, and the trust granted to them are, and will continue to be, harmful to society.

Peter Taylor:

Thank you, a much-needed reveal. Keep up the great work - you are doing a stellar job and performing a great role, which is why I support you.

Kia kaha from Bali, Indonesia

FourthIndustrialRevolutionBot:

The "add 0.004" solution will of course itself fail for numbers in certain ranges, e.g. 0.4505. I personally do find ChatGPT very useful for coding in languages I'm not familiar with, for example CSS animations where I can describe an animation and it will do a decent first-pass job of creating code that would take me 10 times as long to do from scratch on my own. For quick and dirty searches of the scientific literature, perplexity.ai is great, I use it far more than ChatGPT.

The Underdog:

"e.g. 0.4505" - that's addressed by the fact it only applies for 2 floating points (2 zeroes for 2 floating points). .4505 is 4 floating points. ; )

Mara:

Yes, I was surprised it didn't say, "add 0.005". Sure it doesn't work for 0.45000, but how often is the floating point number going to be all zeroes?

And if you know it could be and you know what the smallest increment is, you can "subtract 0.000019073".

Personally, though, I think I'd just round it: if the result is greater than the input, it's good; if it's less, add 0.01. Then I don't need to know the mantissa precision and I don't have to worry about absurd edge cases.
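That round-then-correct idea can be sketched in a few lines of Python. This is my own illustration of the approach described above, assuming the goal is rounding up to two decimal places without inspecting the float's binary representation:

```python
def round_up_2dp(x):
    """Round x to 2 decimal places, bumping up by one increment
    if rounding happened to go below the input."""
    r = round(x, 2)
    if r < x:  # rounding went down: add the smallest 2dp increment
        r = round(r + 0.01, 2)
    return r

print(round_up_2dp(0.4505))  # 0.46 - the edge case from the thread
print(round_up_2dp(0.45))    # 0.45 - exact inputs are left alone
print(round_up_2dp(1.239))   # 1.24
```

No knowledge of the mantissa needed: whichever way the half-way case falls, the comparison catches a round-down and corrects it.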

On coding in general, I agree, it's a great resource for beginners. It works as a super specific search engine.

While I agree with OP that it has limitations, it's impressive that a language mimicry program can achieve so much.

What it needs to make it truly great is backend modules. The language processor is really just a front end. Connect it to a calculator, a spatial-cognition engine, and various other functions, including the ability to incrementally improve and resolve flaws in its training data, and you'll be pretty close to a world-ending AI.

The Underdog:

"it's a great resource for beginners"

Given the examples, it really isn't. It teaches bad habits by extracting bad code examples from a limited subset of resources. One upvote for a suggestion is sufficient to teach all students everywhere? Yikes.

Mara:

Yes, that is shit.

If the user could filter and improve its database, then such garbage could be removed. But I think it's important not to lose sight of what it can do well.

Any tool can be abused in the hands of idiots. It would be equally foolish to perform a Google search and blindly copy the code on the first results page. But Google can still be very useful when its limitations are recognised.

The kind of person who lazily writes garbage code they copied without thinking, is the kind who shouldn't have any responsibility in the first place.

Tl;dr the real problem is the fact that so many people are really fucking stupid.

FourthIndustrialRevolutionBot:

A non-tech use case I do like a lot is writing copy when I'm in the wrong state, for example if a situation requires me to adopt an enthusiastic or encouraging tone but I'm actually feeling a bit frazzled. Everyday things like chasing somebody up by e-mail while trying not to add to the sum of human misery too much. ChatGPT is always so sunny and polite. It's a little verbose for my taste, I can often prune out every second sentence.

The Great Reject:

I also asked ChatGPT whether the present British Prime Minister is a Jew. Its answer is recorded as yes (and saved as screenshots on an earlier post of mine). Try asking today and see the response.
