SeaworthinessFirm653

SeaworthinessFirm653 t1_jadelan wrote

Consciousness is a function whose input is environmental stimulus and whose output is a cyclical thought and/or a physical action (muscle contraction). The more environmental-semantic information a system encodes in its memory, the more “conscious” it is; consciousness is not binary.
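
As a rough illustration of this framing (my own toy sketch, not a claim about real brains), "consciousness as a function" can be caricatured as a loop that maps stimulus plus stored memory to a thought and/or an action; the names and rules below are invented purely for the example:

```python
# Toy sketch of "consciousness as a function": stimulus + memory -> thought/action.
# Purely illustrative; the stimuli and responses here are made up.

def conscious_step(stimulus, memory):
    memory.append(stimulus)                       # encode environmental-semantic info
    thought = f"reflecting on {stimulus!r}"       # cyclical thought, fed back next step
    action = "withdraw hand" if stimulus == "heat" else "no-op"  # muscle contraction
    return thought, action, memory

memory = []
for stimulus in ["light", "heat", "sound"]:
    thought, action, memory = conscious_step(stimulus, memory)
    print(stimulus, "->", thought, "/", action)
```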

Logic gates form if/then statements that, when assembled, create a system of behavior that acts in somewhat logical ways. Biological neurons in the human brain implement these gates.

Consciousness inherently requires at least some memory, input, and processing. Every neuron in the human brain is technically computable because its behavior is just the input and output of electrical signals.
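
To make the "a neuron is just input and output" point concrete, here is a minimal McCulloch–Pitts-style threshold unit (my sketch, with arbitrarily chosen weights) that behaves like an AND gate, i.e. an if/then rule built from nothing but weighted inputs and a threshold:

```python
# A single threshold neuron: fires (outputs 1) only if the weighted sum of its
# inputs crosses a threshold. With these weights and threshold it computes AND.

def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0   # the "if/then" step

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights=[1, 1], threshold=2))
# 0 0 -> 0, 0 1 -> 0, 1 0 -> 0, 1 1 -> 1  (an AND gate)
```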

A nerve cell is effectively just an analog neuron with a few extra properties. It's not logical to assume that consciousness is just a bundle of nerve cells; it is, rather, a highly architecture-dependent bundle of if/then clauses and memory that, when combined, simulates consciousness.

If a system can be described by if/then, then it is computable.

Also, if you cut a living brain in half, it ceases to be conscious. The reason is that the architecture becomes incoherent. When you are asleep (outside of REM/dreaming), you are also unconscious.

Regardless, all of my points amount to this: consciousness is computable through architecture, not simply through nerve cells. Biological human nerve cells are neither necessary nor sufficient for consciousness.

−4

SeaworthinessFirm653 t1_jadd3a0 wrote

Consciousness is logically computable. Consciousness is defined by architecture, not by whether something is organic or responds to electric pulses. You can theoretically store consciousness on a computer as a program with sufficient input/output.

Worrying about nerve cells becoming conscious is a little bit of a misdirected concern. Advanced AI deep learning architectures are far more concerning.

−6

SeaworthinessFirm653 t1_iru86y6 wrote

Yes, I agree with that. I recall the analogy of taking the brain's neurons and connections and magnifying them until they cover an entire block of a large city: the density of connections would still be too immense to make any meaningful observations, even with our current technology.

I don't believe any optimism is required, though, to claim that we can be simulated. Unless we exist outside the realm of physical things, that much is a given. It's impossible to make good predictions about the future when the sample size is n = 0.

1

SeaworthinessFirm653 t1_irq7a9n wrote

Yes, I made my comment with the presumption that we are talking about AGI, not just a smart calculator bot making a slightly faster calculator bot. We have created multi-modal AI that can accomplish different tasks, but the models themselves are computationally inefficient and predictive rather than truly learning (just as GPT-3 doesn't actually think logically; it's just a very advanced language-prediction model).

As far as I am concerned, the difference between consciousness and AI is that an AI is an advanced look-up table using only simple logic, while consciousness involves processing stored information for semantic meaning rather than adhering to an algorithmic process that only manipulates syntax. See: Searle's Chinese Room thought experiment.
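
The look-up-table picture can be made concrete with a toy "Chinese Room" responder (my own illustrative sketch; the rule table and symbols are invented): it maps input symbols to output symbols by pattern matching alone, with no representation of what any symbol means:

```python
# Toy "Chinese Room": purely syntactic symbol manipulation via a rule table.
# Plausible-looking replies come out with zero understanding of the symbols.

RULES = {
    "squiggle squoggle": "squoggle squiggle!",    # looks like a fluent greeting
    "squoggle blork":    "blork blork squiggle",  # looks like a sensible answer
}

def room_reply(symbols: str) -> str:
    # Pure lookup: no semantics, only rule matching with a canned fallback.
    return RULES.get(symbols, "squiggle?")

print(room_reply("squiggle squoggle"))  # fluent-looking output, nothing "understood"
```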

AI today uses low-level logic en masse to produce (relatively) high-level thinking. With increasingly advanced neural networks, image-generation AI has used increasingly complex network structures: detecting edges, then simple shapes, then complex shapes, with some fuzziness across these levels. If we extend this notion to an AI that takes in simple features such as moving shapes, and we ask it to predict the shapes' locations, we may be able to reapply this scalable logic until the AI can understand complex ideas, given sufficient inputs and sufficient training data. This is far-fetched from a modern technological standpoint, but not unbelievably so given how quickly AI is advancing.
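
A minimal sketch of that feature hierarchy (assuming PyTorch is available; the layer sizes and channel counts are arbitrary illustration values): stacked convolutions that roughly correspond to edges → shapes → complex shapes, with a small head that predicts an (x, y) location for a moving shape in a frame:

```python
# Sketch of a hierarchical feature extractor with a location-prediction head.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),    # low level: edge-like features
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),   # mid level: simple shapes
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # high level: complex shapes
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                             # head: predicted (x, y) location
)

frame = torch.randn(1, 1, 64, 64)                 # one grayscale video frame
print(model(frame).shape)                         # torch.Size([1, 2])
```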

If the human brain is made up of computations, then an elaborate series of computations is, by definition, what constitutes our consciousness, and thus it can be recreated with sufficiently advanced AI models. Switching to amplitude computers for computational efficiency, or to compressed memory models (current memory-cell models scale linearly with space rather than logarithmically), may allow us to break through this barrier.
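
For what it's worth, the linear-versus-logarithmic gap alluded to here is enormous; a quick back-of-the-envelope comparison (illustrating the scale of the claim, not endorsing it):

```python
# How many "cells" would n stored items need under linear vs. logarithmic scaling?
import math

for n in (10**3, 10**6, 10**9):
    linear = n                              # one cell per item (per the claim above)
    logarithmic = math.ceil(math.log2(n))   # hypothetical compressed encoding
    print(f"n={n:>13,}: linear={linear:>13,} cells, logarithmic={logarithmic} cells")
```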

edit: sorry for the ramble

1

SeaworthinessFirm653 t1_irq2vcf wrote

That's actually a fair point; I hadn't considered the actual rate of growth in detail.

Edit: There is an additional facet. If we assume that the AI in question is simply an AI designed to create superior AI, and that this does indeed cycle, then even if you restrict it to its available computational power, it would still run more efficiently than humans by an enormous margin. It takes roughly 12 watts to power a human brain, and if a computer has access to enormously larger amounts of energy, it is not unthinkable that a machine could self-enhance to an extreme degree. Sure, there may be logarithmically declining returns to some extent, as with virtually any system, but the gap between a human and a machine at the point of diminishing returns would remain unimaginably wide. Humans were not designed to think; we were designed to be energy-efficient, decent thinkers. A machine that can evolve at a million times our pace and is designed purely to think will inevitably pass us by a very long margin, even if its improvement falls short of a true exponential growth function. The main caveat is that creating an AI that can produce an AI superior to itself, with that same goal of creating superior AI, is incredibly difficult.
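
To put rough numbers on the energy point (a back-of-the-envelope sketch: the 12 W figure is from the comment above, the 1 MW machine budget is an arbitrary assumption, and the logarithmic capability curve is a toy stand-in for "diminishing returns"):

```python
# Back-of-the-envelope: brain power budget vs. an assumed machine power budget,
# and a toy logarithmic "diminishing returns" capability curve.
import math

BRAIN_WATTS = 12            # figure cited in the comment above
MACHINE_WATTS = 1_000_000   # assumed 1 MW budget for a large machine

print(f"Raw energy ratio: {MACHINE_WATTS / BRAIN_WATTS:,.0f}x")

def capability(power_watts):
    # Toy model: capability grows only logarithmically with power.
    return math.log10(power_watts)

print(f"Capability ratio under log returns: "
      f"{capability(MACHINE_WATTS) / capability(BRAIN_WATTS):.1f}x")
```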

1