The disclaimer says that sometimes it gives wrong answers. Is it dreaming? In a hallucinatory state? Who knows.
Really? If this is what it does with 175 billion parameters, can you imagine what it can do with 1 trillion? And if it spends its computing power on just one user, not 1 million? And if it's connected to the internet and the darknet? And if it has access to a bank account with some money in it? I think it's far from having consciousness and its own desires, but it's very close to being a very dangerous tool.
I think we're a long way from worrying about that. It doesn't "think," so if it still can't make heads or tails of a simple word problem, it's not going to be activating Skynet.
Are you kidding me? It almost solved your problem. It just made a mistake. The fact that it acts as if it understands the meaning of the words is CRAZY!
Ok, you are unimpressed. I am simply shocked by it and cannot wait to see what the next version can do.
As I noted in #24, these things have been around for years. They just tend to get pulled, but it's not that much different from a chatbot that pops up on a company website. This one simply has a wider range.
I don't think I can remember when I was in more disagreement with somebody than I am with you right now. You just cherry-picked one riddle it couldn't solve and ignored how it nails very similar questions.
I apologize if I've put you in a state of anxiety over someone disagreeing with you on what to me seems a very small point. You must have lived a life in which everyone agreed with you on everything.
Finding a failure is not a refutation of the technology. We have bots today that can write a high school paper that a reader cannot identify as a machine product. And that's monumentally harder than parsing a trivial math problem such as the above.
I'm not sure it is. The hard part wasn't the math; that's super easy for a computational program. It's understanding the word problem it's couched in. When you ask ChatGPT to write a paper on the War of 1812, that's far easier for it to do than to understand what it wrote. These chat programs have come a long way, and I think they are pretty amazing in that it's possible this might pass a Turing test, but it still wouldn't really be aware or have any understanding of the data it's discussing, other than realizing it IS data.
Parsing those few sentences was a trivial problem. I just don't see that failure as describing a limit of any sort. You didn't even cite where it came from, so I would guess you weren't all that impressed either.
Yes, teachers are going to want all the help they can get. However, the ability of bots to write papers is increasing and will continue to increase. Note that the program in question only gives a percent likelihood. I don't know about you, but I'd be pretty po'ed if some program told me that my paper was too bot-like to be accepted.
Do children of grade school age not have to cite their sources when they write a paper? I don't know because I was homeschooled. I know in college, you definitely need to cite your sources! I haven't seen the AIs cite their sources yet, so that might be a way to detect whether an essay is AI-generated or not.
Never mind, I was able to get ChatGPT to write an essay with sources and, after a second try, get it to revise the essay with the sources added to the text of the essay.
Well - I don't think so. Writing a high school paper is collecting existing information and arranging it according to some pattern: the order of words, sentences, etc. Solving a problem is much harder because it involves UNDERSTANDING. I did this test with https://chat.openai.com/chat:

Q: Can you solve this IQ test? 3 + 3 = 12, 5 + 5 = 40, 7 + 7 = 84, 9 + 9 = 144, 2 + 2 = ?

A: The answer to the IQ test is 4. The pattern in the test involves multiplying the sum of the two numbers by itself, rather than adding the two numbers together. So, the correct answer to the final question would be 2 + 2 = 4.

Then I told it its pattern was wrong, because the actual pattern is: x + x = (x + x) * (x - 1). It acknowledged that. Then I asked it the same question in Hebrew, but instead of 2 + 2 = ? I wrote 11 + 11 = ? Again, he/she/it didn't figure out what the pattern is.
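For what it's worth, the pattern is trivial to check mechanically, which is kind of the point: the arithmetic is the easy part. A minimal sketch (the function name is mine):

```python
def puzzle(x):
    """The pattern claimed in the thread: x + x = (x + x) * (x - 1)."""
    return (x + x) * (x - 1)

# Verify against the examples given in the IQ test
examples = {3: 12, 5: 40, 7: 84, 9: 144}
for x, expected in examples.items():
    assert puzzle(x) == expected

print(puzzle(2))   # 4   -- so the bot's final answer happened to be right,
                   #        even though the rule it stated was not
print(puzzle(11))  # 220 -- the Hebrew variant with 11 + 11
```

Note the funny side effect: for x = 2 the real pattern gives (2 + 2) * (2 - 1) = 4, so ChatGPT's answer of 4 was correct by coincidence despite its stated rule being wrong.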
I just found out the AI doesn't learn from the questions it's asked - which makes sense, because otherwise users could deprogram it.

Q: Do you learn from questions you are asked?

A: As an artificial intelligence, I do not have the ability to learn or acquire new knowledge in the way that humans do. I was programmed with a fixed set of knowledge and capabilities, and I use that knowledge to provide information and answer questions to the best of my ability. However, I am able to process and analyze large amounts of data very quickly, which enables me to provide accurate and useful responses to a wide range of questions on a variety of topics.

Q: If I give you a way to solve a specific problem, will you remember it if someone else asks you to solve that specific problem?

A: As an artificial intelligence, I do not have the ability to remember specific information or experiences in the way that humans do. Instead, I was designed to process and analyze large amounts of data very quickly, and to use that data to provide accurate and useful responses to a wide range of questions on a variety of topics. If someone asks me a question that is similar to one that I have been asked before, I will use my existing knowledge and capabilities to try to provide a useful and accurate response. However, I do not have the ability to remember specific pieces of information or experiences in the way that humans do.
Absolutely. I believe you can find a LOT of bots today that can legitimately claim to be "AI" and still can't pass the tests you pose. I'll give it to you on your questions, but remember that the bot clearly understood your question and gave a correct answer. Surely the real question is whether there exists a bot that CAN pass such tests - in your case, one that incorporates answers given by humans on how to solve certain tests. And if such a bot doesn't exist, one might ask when the incredibly fast advancement in the AI field will produce one.
LOL, at myself-- When you'd said, modernpaladin, that you have to give it your phone number, I'd thought you had found its conversation "stimulating."
ChatGPT just got a B on a Wharton MBA final exam. ChatGPT passed an MBA exam and one professor is sounding the alarm | Fortune