
ThisIsMyStonerAcount OP t1_iyblfm8 wrote

So, obvious joke first: no, I don't agree, because that's a continuous random variable and you're asking for a point estimate. Badum tss.

No, but seriously: no one can remotely predict scientific advances 10 years into the future... I don't have a good notion of what consciousness for an AI would look like. The definition Chalmers gave today ("experiencing subjective awareness") is a bit too wishy-washy: how do you measure that? But broadly speaking, I don't think we'll have self-aware programs in 10 years.

12

canbooo t1_iyc5e42 wrote

Technically speaking, a 20% chance is not a point estimate, unless you assume that the distribution of the random variable itself is uncertain.

In that case, you accept being Bayesian, so give us your f'in prior! /s
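
(And for anyone who wants the /s made concrete: "the distribution itself is uncertain" just means putting a prior on the probability. A throwaway sketch in Python; the Beta(2, 8) choice is entirely made up:)

```python
from scipy.stats import beta

# A made-up Beta(2, 8) prior over the "20% chance" itself: instead of a
# point value p = 0.2, we put a whole distribution on p.
prior = beta(2, 8)
print(prior.mean())          # 0.2: the point estimate is just the prior mean
print(prior.interval(0.95))  # ...but with honest uncertainty around it
```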

2

ThisIsMyStonerAcount OP t1_iyd4jfg wrote

What I meant is that you're asking me for p(X=x) = 0.2, where X is continuous, hence p(X=x) = 0.
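
(A quick numerical sketch of that point: the mass at any single point of a continuous distribution vanishes. The standard normal and x = 0 stand in purely for illustration.)

```python
from scipy.stats import norm

# For a continuous RV, P(X = x) is the limit of P(x - eps < X <= x + eps)
# as eps -> 0; the mass in the interval vanishes. Standard normal and
# x = 0 are chosen purely for illustration:
x = 0.0
for eps in (1e-1, 1e-3, 1e-6):
    mass = norm.cdf(x + eps) - norm.cdf(x - eps)
    print(f"eps={eps:g}: mass around x is {mass:.2e}")
# The mass shrinks toward 0, so P(X = x) = 0 exactly in the limit.
```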

2

canbooo t1_iydgqzt wrote

Oh, fair enough, my bad, I misunderstood what you meant. You are absolutely right for that case. For me the question is rather P(X >= x) = 0.2, since having more intelligence than the threshold implicitly covers the case in question, but this is already too many arguments for a joke. Enjoy the conference!
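
(For anyone following along: the tail reading is just a survival function, which is perfectly well-defined for a continuous variable. Another sketch, again assuming a standard normal only for illustration:)

```python
from scipy.stats import norm

# P(X >= x) is the survival function sf(x) = 1 - cdf(x). Unlike P(X = x),
# it can be any value in (0, 1) for a continuous RV.
x = norm.isf(0.2)   # inverse survival function: the threshold with 20% tail mass
print(x)            # ~0.8416 for the standard normal
print(norm.sf(x))   # ~0.2, so P(X >= x) = 0.2 is perfectly well-defined
```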

1

simplicialous t1_iye7mlu wrote

I think they're referring to a Bernoulli distribution being discrete, while the estimator that answers the dude's question would have to be w.r.t. a continuous distribution.

Ironically, I work with continuous-Bernoulli latent-density VAEs, so I don't get it. Woosh.
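
(Since the continuous Bernoulli came up, a minimal sketch of the discrete-vs-continuous distinction via torch.distributions, with the 0.2 parameter picked only to echo the thread:)

```python
import torch
from torch.distributions import Bernoulli, ContinuousBernoulli

lam = torch.tensor(0.2)  # echoing the thread's 20%, purely for illustration

# Discrete: the Bernoulli's support is {0, 1}, so it has genuine point
# masses; P(X = 1) really is 0.2 here.
print(Bernoulli(probs=lam).log_prob(torch.tensor(1.0)).exp())  # ~0.2

# Continuous: the continuous Bernoulli is a density on [0, 1], so
# log_prob gives a density value, not a probability; P(X = 0.5) is still 0.
cb = ContinuousBernoulli(probs=lam)
print(cb.log_prob(torch.tensor(0.5)).exp())  # density at x = 0.5
```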

2

canbooo t1_iyeabow wrote

Unsure about your assumption about the other assumptions, but LOLed at the end nonetheless. Just to completely confuse some redditors:

r/woosh

1

simplicialous t1_iyebm79 wrote

Just shootin' from the hip... I'm not sure why the answer to the guy's question would have to be continuous, though...

I do know that the Bernoulli distribution (which is used to generate probability estimates) is discrete, though...

🤷‍♀️

1

waebal t1_iydz7lb wrote

Chalmers’ talk was at a very high level and geared towards an audience that is completely clueless about philosophy of mind, but he did talk quite a bit about what would constitute evidence for consciousness. He just doesn’t see strong evidence in existing systems.

1

Phoneaccount25732 t1_iybm23q wrote

To operationalize the question a bit and hopefully make it more interesting, let's consider whether 2032 will have AI models that are as conscious as fish, in whatever sense fish might be said to have consciousness.

−5

ThisIsMyStonerAcount OP t1_iybqrrj wrote

How is that operationalizing it?

2

Phoneaccount25732 t1_iybrlqw wrote

It's easier to break down the subjective experience of a fish into mechanical subcomponents than it is to do so for higher intelligences.

−3