xott
xott t1_jecegvu wrote
China and the CCP are deeply invested in keeping the country stable. Xi is not a mad dog. He seeks economic power rather than military power, so I'd imagine any AI models will be aimed at market dominance rather than warfare.
xott t1_je747l7 wrote
New Zealand has had no big conversations about AI since the introduction of ChatGPT.
Previously it looked like we were moving well, with a Digital Strategy and an Algorithm Charter.
They weren't great initiatives, but they were well intentioned, aimed at XAI/accountability and at preventing harm or bias against our citizens.
The biggest citizen group is called NZ AI forum. I don't like them very much as they come across as real pearl-clutchers, but at least they're promoting conversation.
There's been such a great advance in the last 6 months that the AI landscape has entirely changed. Like most countries, our government looks like it will end up being reactive instead of proactive.
xott t1_je2vshr wrote
Reply to comment by NotKoreanSpy in ChatGPT browsing mode plugin now available to certain users. by Savings-Juice-9517
That's impressive.
While I don't think it was a good prompt for assessing intelligence, that's a really good result.
xott t1_jczzxf1 wrote
Reply to A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
I agree with your idea that the rich will not kill the poor with a robot army.
More likely it would be an asymmetric action by a single actor via a biological vector.
An action like this could be carried out without contradicting any of your reasons why a class war won't happen.
xott t1_ja4m1kx wrote
Is there any point in doing woodwork at home when you can just buy furniture from IKEA?
Some would say no but there's a satisfaction from creating something yourself.
Make your games for yourself, not to compete with AI.
xott t1_ja2fzi2 wrote
Reply to do you know what the "singularity" is? by innovate_rye
I get the feeling OP aspires to be a MENSA member.
xott t1_j9wz7hz wrote
Reply to The unequal treatment of demographic groups by ChatGPT/OpenAI content moderation system by grungabunga
It's interesting that OpenAI has somehow become the decider of what is hateful or even moral.
"a small handful of unelected anons, mostly with engineering backgrounds and probably in their 20s and probably adherents to a system of moral reason that is quite controversial"
https://www.jonstokes.com/p/lovecrafts-basilisk-on-the-dangers
xott t1_j9qrok6 wrote
Reply to comment by RepresentativeAd3433 in Two Deans suspended after using ChatGPT to write email to students by Neurogence
Well, seeing as we still walk in spite of cars, that's probably an overblown worry.
xott t1_j9o6cr6 wrote
Now do feet;
that's where the fetish money is at.
xott t1_j9nj194 wrote
Reply to comment by CommunismDoesntWork in Stephen Wolfram on Chat GPT by cancolak
Integrating different modules into large language models is extremely interesting from both a research and a usability perspective.
Whether or not people need calculators to find square roots, it's still a useful function to have access to.
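The module-integration idea above can be sketched in a few lines of Python. This is a minimal illustration, not any real API: the `CALC(...)` request format and the `answer` routing function are assumptions invented for the example, standing in for whatever protocol a model and a calculator module would actually agree on.

```python
import ast
import operator

# Minimal, safe arithmetic evaluator standing in for a "calculator module".
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calc(expr: str) -> float:
    """Evaluate a plain arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(model_output: str) -> str:
    """Route a hypothetical CALC(...) tool request to the calculator;
    pass any other model text through unchanged."""
    if model_output.startswith("CALC(") and model_output.endswith(")"):
        return str(calc(model_output[5:-1]))
    return model_output
```

So a model that emits `CALC(2**0.5)` gets an exact square root back from the calculator, while ordinary text is left alone. The point is how little glue the integration takes, not the calculator itself.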
xott t1_j9mkfhv wrote
Reply to comment by RiotNrrd2001 in Stephen Wolfram on Chat GPT by cancolak
The addition of a calculator seems so simple and straightforward that I'm amazed there's no calculation subroutine present.
xott t1_j9ihhe1 wrote
Reply to comment by CustardNearby in OpenAI has privately announced a new developer product called Foundry by flowday
Goodbye, middle management roles.
xott t1_j9i2zg3 wrote
Reply to comment by Hands0L0 in A German AI startup just might have a GPT-4 competitor this year. It is 300 billion parameters model by Dr_Singularity
It's the new Space Race
xott t1_j9gzjbr wrote
This is a really interesting case. I thought the email from the Vanderbilt deans about the Michigan State shooting was spot-on in terms of tone and style. I mean, using an AI language model is basically the same thing as using a communications team or a speech writer, so I'm not sure why people are saying it's inauthentic. In reality, it's not so different from what a human would have eventually produced.
To be honest, I think if they hadn't included the 'made by ChatGPT' disclaimer, no one would have even known it was generated by AI. It's not like the email lacked feeling or anything.
xott t1_j9gj8fy wrote
Reply to comment by 69inthe619 in Would the most sentient ai ever actually experience emotion or does it just think it is? Is the thinking strong enough to effectively be emotion? by wonderingandthinking
Your body is biologically programmed to feel pain. Why do you think a machine could not be programmed the same?
xott t1_j9eq4l2 wrote
Reply to Would the most sentient ai ever actually experience emotion or does it just think it is? Is the thinking strong enough to effectively be emotion? by wonderingandthinking
>Even if it can’t experience emotion for real, does its thinking it experiences emotion effectively mean it is experiencing emotion because it will react in a way that it has learned is appropriate for the given emotion?
Since emotions are subjective to individuals, I think the answer to this question is yes.
Thinking you are experiencing an emotion and actually experiencing that emotion are the same thing.
xott t1_j9bbqz8 wrote
Reply to comment by arckeid in Crime and punishment in a post-singularity society by [deleted]
Maybe "crimes of passion" could still be attempted.
Although you could view these as "crimes of emotional dysfunction" and be very hopeful that interaction with a post-singularity AI would help identify/cure these before they happen.
xott t1_j99pv9h wrote
It's probable that a post-singularity society will also be a post-scarcity society, where poverty won't cause crime as it does today.
Crime prevention needs would be a lot lower.
xott t1_jedqb0n wrote
Reply to We have a pathway to AGI. I don't think we have one to ASI by karearearea
You're suggesting GPT7 won't be much smarter than GPT6?
Neither of those models even exists yet.