xott

xott t1_jecegvu wrote

China and the CCP are deeply invested in keeping the country stable. Xi is not a mad dog. He seeks economic power rather than military power, so I'd imagine any AI models will be aimed at market dominance rather than warfare.

15

xott t1_je747l7 wrote

New Zealand has had no big conversations about AI since the introduction of ChatGPT.

Previously it looked like we were moving well, with a Digital Strategy and an Algorithm Charter.

They weren't great initiatives, but they were mostly well intentioned, aimed at XAI/accountability and at preventing harm or bias against our citizens.

The biggest citizen group is the NZ AI Forum. I don't like them very much, as they come across as real pearl-clutchers, but at least they're promoting conversation.

There have been such great advances in the last 6 months that the AI landscape has entirely changed. Like most governments, ours looks like it will end up being reactive instead of proactive.

6

xott t1_j9wz7hz wrote

It's interesting that OpenAI has somehow become the decider of what is hateful or even moral.

"a small handful of unelected anons, mostly with engineering backgrounds and probably in their 20s and probably adherents to a system of moral reason that is quite controversial"

https://www.jonstokes.com/p/lovecrafts-basilisk-on-the-dangers

8

xott t1_j9nj194 wrote

Integrating different modules into large language models is extremely interesting from both a research and a usability perspective.

Whether or not people need calculators to find square roots, it's still a useful function to have access to.
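One common way to wire a module like that in is to let the model emit a tagged tool call, intercept it, run the module, and splice the result back into the text. Here's a minimal sketch of that pattern in Python; the CALC(...) tag and the calculator helper are made-up conventions for illustration, not any real framework's API:

```python
import math
import re

# Hypothetical convention (assumed for this sketch): the model emits
# CALC(<expression>) whenever it wants the calculator module instead of
# guessing the arithmetic itself.
TOOL_PATTERN = re.compile(r"CALC\((.*?)\)")

def calculator(expression: str) -> str:
    """Toy calculator module; exposes sqrt() for square roots.

    eval() is only tolerable here because this is a self-contained demo
    with builtins stripped out; a real system would use a proper parser.
    """
    return str(eval(expression, {"__builtins__": {}}, {"sqrt": math.sqrt}))

def resolve_tool_calls(model_output: str) -> str:
    """Replace every CALC(...) tag in the model's draft with its result."""
    return TOOL_PATTERN.sub(lambda m: calculator(m.group(1)), model_output)

# A pretend model draft containing an embedded tool call:
draft = "The square root of 1369 is CALC(sqrt(1369))."
print(resolve_tool_calls(draft))  # -> The square root of 1369 is 37.0.
```

The point is that the language model never has to be good at arithmetic; it only has to learn when to hand off to the module, which is exactly what makes the integration interesting from a usability perspective.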

5

xott t1_j9gzjbr wrote

This is a really interesting case. I thought the email from the Vanderbilt deans about the Michigan State shooting was spot-on in terms of tone and style. Using an AI language model is basically the same thing as using a communications team or a speechwriter, so I'm not sure why people are saying it's inauthentic. In reality, it's not so different from what a human would eventually have produced.

To be honest, I think if they hadn't included the 'made by ChatGPT' disclaimer, no one would have even known it was generated by AI. It's not like the email lacked feeling or anything.

170

xott t1_j9eq4l2 wrote

>Even if it can’t experience emotion for real, does its thinking it experiences emotion effectively mean it is experiencing emotion because it will react in a way that it has learned is appropriate for the given emotion?

Since emotions are subjective to the individual, I think the answer to this question is yes.

Thinking you are experiencing an emotion and actually experiencing that emotion: same thing.

3