RedditFuelsMyDepress t1_jdvzdtf wrote
Reply to comment by speedywilfork in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Well, I tried asking ChatGPT the coffee question and this is what it said:
"As an artificial intelligence language model, I do not have personal preferences or opinions. However, I can provide some general information about coffee.
Coffee is a popular beverage enjoyed by millions of people around the world. It is made by brewing roasted coffee beans, which contain caffeine, a natural stimulant that can help increase alertness and reduce fatigue. Coffee also contains antioxidants and other beneficial compounds that may offer health benefits when consumed in moderation.
However, it is important to note that excessive consumption of coffee can have negative effects on health, such as increasing the risk of insomnia, anxiety, and digestive issues. Additionally, some people may be more sensitive to the effects of caffeine than others, and may experience negative side effects even with moderate consumption.
Overall, whether or not to consume coffee is a personal choice that depends on an individual's preferences, health status, and caffeine tolerance. It is always a good idea to consult with a healthcare professional to determine if coffee consumption is right for you."
In that first paragraph it does acknowledge the intent of your question, but just says that it isn't able to answer it. The facts about coffee that it spits out are, I believe, just part of the directives given to ChatGPT.
speedywilfork t1_jdw5jyl wrote
But that is the problem: it doesn't know intent, because intent is contextual. If I was standing in a coffee shop the question means one thing, on a coffee plantation another, and in a business conversation something totally different. So if you and I were discussing ways to improve our business and I asked "what do you think about coffee", I am not asking about taste. AI can't distinguish these things.
RedditFuelsMyDepress t1_jdwpkfj wrote
>AI can't distinguish these things.
I'm not sure how true that is though. Even with GPT-3, it would actually take the context of the whole conversation into account, not just the most recent sentence, when I asked something.

Hard to say how well it would handle itself in a real-world environment though, since it's just a chatbot atm.
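For what it's worth, here's a rough sketch of why it can "see" earlier turns at all: with the chat API you resend the whole message history on every request, so the model reads the new question in light of the previous ones. This is just an illustration, assuming the pre-1.0 openai Python package and gpt-3.5-turbo; the example messages are made up.

```python
# Minimal sketch: the model's "memory" is just the message list you resend each call.
# Assumes the (pre-1.0) openai package and an API key set in OPENAI_API_KEY.
import openai

history = [
    {"role": "user", "content": "We're brainstorming ways to improve our cafe's revenue."},
    {"role": "assistant", "content": "You could look at pricing, loyalty programs, or supplier costs."},
    # Because the business context above is included in the request,
    # "what do you think about coffee" can be read as a business question,
    # not a question about taste.
    {"role": "user", "content": "What do you think about coffee?"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=history,
)
print(response.choices[0].message["content"])
```

The flip side is that if the earlier messages get trimmed out to fit the context window, that context is gone, which is part of why it can still miss intent.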