How intelligent is Artificial Intelligence?

Here’s a simple math question posed to a popular chatbot that uses artificial intelligence for its answer:


Here’s the AI answer:


The correct answer — it should be 60 seconds!

Explanation: You have requested 2 sets of 30 seconds each so at the expiration of both of the 30 second requests you will have received 60 seconds in total.
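The arithmetic itself is trivial, which is the point. A one-line Python sketch of the intended reading (the "two sets of 30 seconds" interpretation comes from the explanation above; the variable names are mine):

```python
# Two requests of 30 seconds each, taken at face value.
sets = 2
seconds_per_set = 30
total_seconds = sets * seconds_per_set
print(total_seconds)  # 60
```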

Conclusion — AI is not ready for primetime!

No nuclear codes or self-driving cars and airplanes, please!

This AI replied to the question thus…

It probably read your post. :rofl:


:lol: Doesn’t it only use stuff from up to a few years ago, or has that changed?

ChatGPT 4 (paid version) has several plugins that allow it to browse the web in real-time.

The official one, Browse with Bing, is currently disabled, as it allowed people to read content behind paywalls. However, there are various others in the plugin store.

Without plugin functionality, ChatGPT has a knowledge cutoff date of Sept 2021.




I was totally unaware of AI’s limitations in that respect.

My reply was just an attempt at facetiousness during a rather mild state of boredom. :rolleyes:


Limitations could very well be an understatement!

AI might just have a difficult time finding a location, at least for now.

Here’s another simple question whose answer is way off!


I’ll be glad to give you the answer if needed.


That is a limitation of ChatGPT, not AI in general.

ChatGPT is probably assuming a definition of “current”. You say 2 sets, but ChatGPT is interpreting “current” literally. You are assuming sets, but ChatGPT probably was not told that.

Years ago programmers used the acronym GIGO for Garbage In, Garbage Out. That has applied since the beginning of computers.

If someone asks “do you mind if I have a seat?” then most people say yes, but if the words are interpreted literally the answer is no, since the person does not mind (does not have a problem with) the other person sitting. The implied question is “may I have a seat?”. Sometimes people talk without thinking about what they are saying. AI must think even in situations where many people do not, and if the knowledge base does not have the relevant exceptions and definitions, it might yield results that humans think are wrong.

I didn’t mention “current” or “2 sets” in my question to ChatGPT.


This is just an explanation of the correct answer, not something posed to ChatGPT. It came from a friend of mine who is an Electrical and Mechanical Engineer with 110 US Patents. It was something I wanted to run by him for fun to see his answer.

It’s really a simple math problem with no implications. Math at this level is black or white; I don’t see any gray in this at all.

Even look at ChatGPT’s explanation:


If you add the remaining 28 seconds to the first 30, you get 58 seconds remaining; only 2 seconds have been used. Therefore it should be 30 + 28 + 2 = 60. Adding 4 makes no sense at all. Hence my point about AI, which in this case was totally incorrect.
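Spelled out as a short Python sketch (the variable names are mine; the 30/28/2 breakdown is taken from the post above):

```python
first_set = 30                       # the first 30-second set, counted in full
elapsed_in_second = 2                # seconds already used from the second set
remaining = 30 - elapsed_in_second   # 28 seconds still to run

# Total requested: 30 + 28 + 2 = 60. There is no term that would add 4.
total = first_set + remaining + elapsed_in_second
print(total)  # 60
```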

Totally Agree!

I did not say you said “current”. ChatGPT said “current” in its answer. Look at the answer from ChatGPT that you posted.

It is not a simple math problem. It is a complex problem of parsing words and understanding what is meant. Most people do not appreciate how complex that can be.

Well, I think we will just have to disagree on the complexity of the question. I have asked 5 people that all got the answer correct including a person who is terrible at math. I have been paid for 40 years writing questions for licensing exams and believe me, that is not complex at all. If it’s that difficult to “parse” the words then moving forward, AI is going to be a BIG problem. If you look at my last post, there is no logic in adding the “4” to arrive at the final answer. Sounds like “parsing” is going to be the issue with AI not math, and if that is the case, then I wouldn’t want to rely on AI for anything.

And that should be the end of that. Any further comments imply you do not agree to disagree.

Artificial Intelligence’s intelligence varies based on its design and capabilities, ranging from specialized tasks to human-like understanding, but it’s still limited compared to human intelligence.

There’s a study from Stanford and U.C. Berkeley which studied ChatGPT and found that its ability to correctly answer math questions regressed over a three-month period, and that it also became less likely to answer certain types of questions as it went. It seems to hint that this was a problem specific to GPT-4, but I haven’t finished reading the entire paper (I personally can’t read scientific papers in one sitting…)
