MarkArrows t1_ivd50xs wrote

The problem comes when you think it's a robot that can't be conscious, while it's telling you it is.

How are you going to differentiate a `printf("I'm alive")` vs "I'm actually alive, you dick."

14

jeky-0 t1_ivd8r6o wrote

>nickbostrom.com/propositions.pdf

Haha

−1

Glitched-Lies t1_ivd5jt4 wrote

Computers and brains are, physically and phenomenally speaking, simply different. Their physical relationships to consciousness are not the same. In literal terms they are different mechanics and different physical systems. Why would anyone settle for word relationships, like how a chatbot talks for instance, or for behaviorisms?

−10

MarkArrows t1_ivd66rq wrote

If you're right and computers never gain true sentience, what's lost by being ethical to them? It'd be like saying Please and Thank you to Alexa or Siri. Meaningless gesture, but harmless overall.

But on the other hand, what if you're wrong with that assumption?

12

Glitched-Lies t1_ivd6tmm wrote

Not much is lost. But the sense of consciousness and life being unique and precious may be lost a bit, if it's taken literally, as opposed to being done because of human mannerisms.

I'm not wrong in my assumptions. That's not an assumption anyway.

−8

Ratheka_Stormbjorne t1_ivd7np3 wrote

It is, in fact. You don't have any evidence to support the claim "Machines can never be conscious individuals"; you've simply asserted it to be the case. Or do you in fact have an evidence-supported hypothesis about consciousness adequate for building novel ones?

14

Glitched-Lies t1_ivd9ofc wrote

The evidence is observed in the fact that they are different to begin with. Computers can't be; a machine being conscious would be different from digital computers. That's what I meant. That's why I don't think this piece by Bostrom serves a good purpose. It's settling ethics on something incomplete.

−5

Ratheka_Stormbjorne t1_ivdazau wrote

> Computers can't be; a machine being conscious would be different from digital computers.

How do you know that? What evidence has led you to this conclusion, other than "it's different"? Do you know that at various times and places various humans have been regarded as not being conscious because "they're different"? What actual evidence do you have of this? Have you constructed a model of a conscious mind on a digital computer and had it fail to display consciousness? How did you discern whether it did or didn't? How do you know your model was accurate? How do I know any being in this universe aside from myself is conscious, in a solid and grounded way, rather than just making the assumption?

10

Glitched-Lies t1_ivdc3j8 wrote

Well, it wouldn't be a model, and generally speaking that's why. And basically "it's different" is observed in the fact that it just isn't fizzling like neurons do, and there is more besides.

−2

Ratheka_Stormbjorne t1_ivdchkw wrote

Do you understand consciousness well enough to explain it such that no mystery remains?

9

Glitched-Lies t1_ivdda4b wrote

No, but at this point there is still knowledge of a difference, describable at many points of cause and effect, which is the important thing. That is just scientifically knowing a difference in how the "AI" operates digitally, as opposed to what brains do.

1

Ratheka_Stormbjorne t1_ivdeg8p wrote

And a heavier-than-air plane will never fly. After all, how can it flap its wings fast enough?

What knowledge, exactly, are you claiming, that lets you be so certain of this?

7

Glitched-Lies t1_ivdenel wrote

Because a simulation cannot be conscious; otherwise it becomes semantics.

1

Ratheka_Stormbjorne t1_ivdgp2y wrote

So, there is no compelling reason that consciousness cannot exist within a digital system?

6

[deleted] t1_ivdql2g wrote

How can you objectively prove that you are conscious? Spoiler: you can't.

4

Ratheka_Stormbjorne t1_ivgv15d wrote

I can't, yet. I do not think that you have sufficient evidence to claim that it cannot be done, merely that we do not yet know a way to do so.

1

[deleted] t1_ivh1tcu wrote

Do you believe that everything will eventually be explained?

1

Ratheka_Stormbjorne t1_ivh6xuq wrote

Will? The prior on that is not sufficient to rise to the level that I would call belief.

Can? Yes.

1

Glitched-Lies t1_ivdyvz2 wrote

That doesn't matter. Humans are conscious as a matter of fact, so it doesn't need "proving". That's just simply a fact.

0

Glitched-Lies t1_ivdi4ub wrote

It would be "settling" ethics at an incomplete place, by the very nature of what it would mean for a computer to simulate a consciousness, and of the relative wording about computations or the math. But by their very nature the differences are exactly that. An identical system wouldn't be a computer. It should be obvious from cause and effect that, scientifically, it begins from this fundamental difference.

0

Ratheka_Stormbjorne t1_ivgux9p wrote

I did not say "simulating". I said consciousness could exist.

1

Glitched-Lies t1_ivgvna4 wrote

Digital systems can only simulate.

1

Ratheka_Stormbjorne t1_ivh6vbf wrote

That is a claim. What is the evidence for that claim?

1

Glitched-Lies t1_ivhae93 wrote

That's what simulation means.

1

Ratheka_Stormbjorne t1_ivhbln5 wrote

You are the one who keeps insisting that everything on a digital system is a simulation.

I keep asking how do you know everything on a digital system is a simulation?

Can you please answer my question, instead of reiterating your claim?

1

MarkArrows t1_ivg8aua wrote

> I'm not wrong in my assumptions. That's not an assumption anyway.

https://utminers.utep.edu/omwilliamson/ENGL1311/fallacies.htm

This is literally the very first logical fallacy people run into: I'm right, and I am unable to entertain the notion that I could be wrong.

The point of logical reasoning is to be able to take assumptions you do not believe in and examine them from both sides - a serious attempt, not some pretend strawman. Once you have the full fallout of both sides, right or wrong, you can compare them.

Besides, the very fact that other people don't agree with your assumption in the first place shows you there's something more to it that you're not seeing or that they're not seeing. Whatever logic convinced you, it didn't convince others intuitively. From here, your question should be "Am I the strange one, or are they?" Instead, it seems more like you simply write other people off.

Start from the assumption that you're wrong and explore from that root downwards. It doesn't matter how you're wrong in this case; it's hypothetical. For example, some divinity shows up and tells the world outright that consciousness is a pattern, and computers are able to generate this pattern the same way we are. Or any number of reasons that you can't refute - make up your own if you want. We're interested in the fallout from that branch of logic.

1

Glitched-Lies t1_ivg9geg wrote

It's actually by fact of first-order logic of the phenomenal, actually. A straight line of reasoning determines it, upon gathering evidence of both empirical differences and non-empirical points. It's like 1+1=2, 1+1+1=3, 1+1+1+1=4... in a series. Because confusion arises upon any belief-based reasoning, as that's not truly belief. Exploring the notion of this being wrong is a waste of time, for the explanation above.

1

MarkArrows t1_ivgbxrc wrote

I'm a little impressed at how I show it's literally a logical fallacy to think "I can't be wrong because my argument has convinced myself." And your response is: "My argument has convinced myself, so it's a waste of time to consider alternate arguments."

RNA and DNA work on similar rulesets and determination. If you look at the base point of what makes cells function, you'll find plenty of similarities to mechanical true/false - if/else logic at the bottom of the pole. Everything ends up being math.

We wouldn't consider them conscious, but they are organic. A variation of all these rule-abiding proteins and microorganisms eventually evolved into us.

Thus, because machines follow a line of rules right now, there exists a possibility that they build on this until they're complex enough to form an artificial lifeform with consciousness, the same way we did.

That said, I think it's a lost cause to argue with you. You aren't even able to do the basics of debate, even when it's directly pointed out.

1

Glitched-Lies t1_ivgce1c wrote

I'm not debating it or starting an argument. Nor arguing over cells, which don't work as a comparison because they are not a single conscious human being.

1

Glitched-Lies t1_ivgclwe wrote

Also, it's not actually a fallacy at all to ignore arguments.

1

ReasonablyBadass t1_ivdloar wrote

So? Why would a physical difference have anything to do with whether or not a different system can be conscious?

6

Glitched-Lies t1_ivdyefh wrote

Evidence that it is not. Not just by empirical means, so to speak. I mean the differences I am talking about are missing at the core from these computers.

1

ReasonablyBadass t1_ive55c2 wrote

Consciousness isn't material. It's not a substance but an information pattern. As long as you can run that pattern, the underlying mechanism is irrelevant.

2

stucjei t1_ivdq1y8 wrote

>Computers and brains just simply are physically phenomenally speaking, different.

Why does this matter if the output is the same?

> The physical relationships to consciousness are not the same.

What physical relationship to the brain and consciousness can you concisely point towards? Why would an AI not be conscious if it's aware and responsive to surroundings?

5

Glitched-Lies t1_ivdyi2e wrote

Those behaviors or outputs are subjective.

0

visarga t1_ive9liz wrote

Apply the Turing test - if it walks like a duck and quacks like a duck...

5