That is a limitation of ChatGPT, not AI in general.
ChatGPT is probably assuming a definition of "current." You say 2 sets, but ChatGPT is interpreting "current" literally. You are assuming sets, but ChatGPT probably was not told that.
Years ago programmers used the acronym GIGO for Garbage In, Garbage Out. That has applied since the beginning of computers.
If someone asks, "Do you mind if I have a seat?" most people say yes, but if the words are interpreted literally, the answer is no, since the person does not mind (does not have a problem with) the other person sitting. The implied question is "May I have a seat?" Sometimes people talk without thinking about what they are saying. AI must think even in situations where many people do not, and if the knowledge base does not have the relevant exceptions and definitions, then it might yield results that humans think are wrong.
I didn’t mention “current” or “2 sets” in my question to ChatGPT.
This is just an explanation as to the correct answer, nothing proposed to ChatGPT. It came from a friend of mine who is an Electrical and Mechanical Engineer with 110 US patents. It was something I wanted to run by him for fun to see his answer.
It’s really a simple math problem with no implications. Math at this level is black or white; I don’t see any gray in this at all.
Even look at ChatGPT's explanation:
If you add 2 seconds to the remaining time of 28 seconds, you get 58 seconds remaining; only 2 seconds have been used. Therefore it should be 30 + 28 + 2 = 60. Adding 4 makes no sense at all. Hence my point about AI, which in this case was totally incorrect.
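For what it's worth, the sums above are easy to check mechanically. This is just a sketch of the arithmetic as stated in the explanation (the 60-second total and the stray "4" are taken from the posts above; I'm only illustrating why the extra 4 overshoots):

```python
# Figures taken from the explanation above (assuming a 60-second total).
used = 30       # seconds already used
remaining = 28  # seconds left on the clock
extra = 2       # the 2 additional seconds in question

total = used + remaining + extra
print(total)  # 60, the sum the explanation arrives at

# Adding 4 instead, as ChatGPT apparently did, overshoots the total:
print(used + remaining + 4)  # 62, not 60
```

Trivial, of course, but it shows the disagreement is about which number to add, not about the addition itself.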
Well, I think we will just have to disagree on the complexity of the question. I have asked 5 people, all of whom got the answer correct, including a person who is terrible at math. I have been paid for 40 years to write questions for licensing exams, and believe me, that is not complex at all. If it's that difficult to "parse" the words, then moving forward, AI is going to be a BIG problem. If you look at my last post, there is no logic in adding the "4" to arrive at the final answer. Sounds like "parsing" is going to be the issue with AI, not math, and if that is the case, then I wouldn't want to rely on AI for anything.
There's a study from Stanford and U.C. Berkeley which looked at ChatGPT and found that its ability to correctly answer math questions regressed over a three-month period, and that it also became less likely to answer certain types of questions as time went on. It seems to hint that this was a problem specific to GPT-4, but I haven't finished reading the entire paper (I personally can't read scientific papers in one sitting…)