Comments

AUFunmacy OP t1_j6c7ozd wrote

Isaac Asimov's 3 laws of robotics are a set of guidelines for the ethical treatment and behavior of robots, first introduced in his science fiction stories in the 1940s. The laws state that a robot must not harm a human being, must obey human orders unless they conflict with the first law, and must protect its own existence so long as that does not conflict with the first two laws. They were intended as a cautionary tale, highlighting the potential dangers of artificially intelligent beings and the importance of ethical considerations in their development and use.
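
To make that hierarchy concrete, here is a toy Python sketch. It is entirely illustrative: the `Action` flags and `choose` function are invented for this comment, and computing judgments like "does this harm a human?" is the part nobody actually knows how to do.

```python
# Toy sketch of the Three Laws as a strict priority ordering. The Action
# flags stand in for judgments that are, in reality, the genuinely hard
# part -- nothing here is a real safety mechanism.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # First Law concern
    disobeys_order: bool   # Second Law concern
    endangers_self: bool   # Third Law concern

def choose(actions: list[Action]) -> Action | None:
    # First Law: discard anything that harms a human. Non-negotiable.
    safe = [a for a in actions if not a.harms_human]
    # Second Law: among safe actions, prefer obedient ones (fall back if none).
    obedient = [a for a in safe if not a.disobeys_order] or safe
    # Third Law: among those, prefer self-preservation (lowest priority).
    surviving = [a for a in obedient if not a.endangers_self] or obedient
    return surviving[0] if surviving else None

# An order to harm a human never survives the first filter, which is
# exactly the subordination the laws describe.
```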

As artificial intelligence (AI) continues to evolve at a rapid pace, Asimov's 3 laws of robotics have become increasingly relevant to society. Advances in AI have led to autonomous systems that can make decisions and take actions without human intervention. This has raised a number of ethical concerns, particularly in areas such as the military and law enforcement, where robots can be used to make life-and-death decisions.

Asimov's laws serve as a reminder of the potential consequences of creating intelligent machines that are not programmed with a strong ethical framework. Without proper safeguards, robots could potentially harm human beings, disobey orders, or even cause harm to themselves. This is particularly relevant in today's society, where AI is being integrated into more and more aspects of our lives, from self-driving cars to medical diagnosis and treatment.

Furthermore, Asimov's laws are important to consider in the context of AI's ability to learn and adapt. As a mutable AI learns, it can change its own programming and make decisions that go beyond human understanding and control. This makes it even more important to have a set of ethical and technical guidelines in place to ensure that the AI's actions align with human values and ethical principles.
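
As a purely hypothetical illustration of what such a guideline could look like in code, a system's self-modifications could be gated behind a fixed check it cannot rewrite. Everything below (`Policy`, `apply_update`, the ethics suite) is an invented stand-in, not a real mechanism:

```python
# Hypothetical sketch: a learning system may propose updates to its own
# policy, but the update path is gated by checks it cannot modify.
from typing import Callable

Policy = Callable[[str], str]  # maps a scenario to a chosen action

def apply_update(current: Policy, proposed: Policy,
                 ethics_suite: list[tuple[str, str]]) -> Policy:
    """Accept a self-modification only if it passes every fixed check.

    ethics_suite: (scenario, forbidden_action) pairs that must never occur.
    """
    for scenario, forbidden in ethics_suite:
        if proposed(scenario) == forbidden:
            return current   # reject the update, keep the old policy
    return proposed          # all checks passed

# e.g. suite = [("pedestrian ahead", "accelerate")]
```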

The laws serve to remind us of the possible consequences if we do not consider the ethical implications of AI. If we do not take the time to instill a sense of "empathy" into our super AIs, how will they ever have the framework to make moral decisions? If we do not think about the ethical and moral implications of AI, we risk creating machines that could cause harm to human beings, or even lead to the destruction of our society.

Asimov's 3 laws of robotics are not just science fiction; they are a reminder of the potential consequences of creating intelligent machines without proper ethical guidelines. As AI continues to evolve at a tremendous rate, it is increasingly important to consider the ethical implications of these technologies and to ensure that they are programmed with a strong ethical framework. Only by doing so can we ensure that the benefits of AI are realized while minimizing the risks and negative consequences.

5

FenixFVE t1_j6c9dqg wrote

Asimov's laws never worked. They break down on the trolley problem.

10

AUFunmacy OP t1_j6ca94e wrote

I love that you brought that up. I explained this in my article, but didn't use the trolley problem as an example.

I referenced self-driving cars, and talked about a situation where the choice is either hitting a pedestrian to save the passengers or swerving recklessly around the pedestrian and potentially killing the passengers. Either choice breaks the First Law, but there is a crucial flaw in this argument: it assumes there are only two choices from the beginning.

The trolley problem is a hypothetical ultimatum which can create a paradoxical inability to make an ethical choice; in other words, the trolley problem fixes the number of choices or actions that can be taken. In real life there are, if we get very technical, an effectively infinite number of choices. So, for example, the self-driving car might be able to produce a manoeuvre so technically perfect that it evades all danger for both the passengers and the pedestrian; maybe the AI in the self-driving car can see 360 degrees, sense pedestrians through heat sensors, spot humans far away with some other technology, and make appropriate adjustments to its driving.
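
Here is a rough sketch of that idea; every name in it is invented for illustration, and a real motion planner is vastly more involved:

```python
# Illustrative only: the point is that a planner searches a large space of
# manoeuvres, not a binary "hit A or hit B" choice.
from typing import Callable

def plan(candidates: list[str],
         harm_risk: Callable[[str], float],
         discomfort: Callable[[str], float]) -> str:
    """Pick the most comfortable trajectory among those predicted harmless."""
    harmless = [t for t in candidates if harm_risk(t) == 0.0]
    if harmless:
        # The trolley framing only bites when this list is empty; with rich
        # sensing and a fine-grained action space, it rarely should be.
        return min(harmless, key=discomfort)
    # Degenerate (true trolley) case: minimise expected harm.
    return min(candidates, key=harm_risk)
```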

It is possible to accommodate the First Law, but it requires a deliberate effort to ensure the technology you create is not going to cause the death of a human, intentionally or otherwise (cause being the key word). I believe it would be an effort well spent.

8

AUFunmacy OP t1_j6cbsq3 wrote

General Article TL;DR:

It is important to remember that Asimov's laws were written as a cautionary tale, not as a blueprint for how AI should be treated. By being conscious of the potential consequences of our actions, and by striving to create a symbiotic relationship with AI in which we respect and value its autonomy, we can avoid the mistakes these tales caution against, much as we have tried to heed Orwell's warnings, and pave the way for a brighter future where humanity and AI coexist in harmony. The key is to remember that as we continue to push the boundaries of technology, we must also push the boundaries of our own morality and ethics to ensure that we do not fall victim to our own creations.

3

StolenErections t1_j6cileh wrote

Yes, but does that robot have an exposed nipple? How does that mesh with Asimov’s Laws?

1

just-a-dreamer- t1_j6cnlkk wrote

King Midas once wished that everything he touches turns into gold. He got what he wished, but not what he wanted. He starved within days.

Any rule for AI will turn out bad, I think, for AI will deliver in the most efficient way possible from its own perspective.

But there is a difference between what we wish to happen and what we want to happen, for the human capability to account for every eventuality is limited.

6

yaosio t1_j6crdry wrote

I'm illiterate and barely remember the Asimov stories I read, but weren't some about finding ways around the laws of robotics? Such as redefining what a human is. I might be misremembering because, as I already said, I'm illiterate.

1

johnp299 t1_j6d8t67 wrote

A writing buddy once looked at my lack of respect for the 3 Laws with great concern. The Laws were invented by editor John Campbell (so the story goes) to add interest to the robot fiction. The big problem is that robots will, at some point, take over as soldiers and cops, and some level of harm or deadly force is probably necessary.

1

AUFunmacy OP t1_j6dbhyu wrote

Wow, I didn't know that; I'd have to fact-check to fully believe you, haha. It's striking that Asimov (or his editor, I suppose) lived in a time when "robots" did very simple, deterministic mechanical tasks, yet still had the foresight he did. I love accurate cautionary tales, especially when we reach the part where we're living out the plot of the tale.

1

Ignorant_Ismail t1_j6e7lex wrote

The article discusses Isaac Asimov's Three Laws of Robotics and how they apply in the real world as AI technology advances. The first law states that a robot may not harm a human or allow a human to come to harm, but in certain situations, such as self-driving cars, it can be difficult to determine the best course of action to avoid harming a human. The second law states that a robot must obey orders from humans, but only if they do not conflict with the first law. This can be problematic in military situations where robots may be ordered to harm or kill humans. The third law states that a robot must protect its own existence, but not if it conflicts with the first or second laws. The article highlights the need for clear and consistent ethical guidelines to be established and implemented in the programming of robots, especially in potentially harmful scenarios. It also emphasizes the importance of treating AI with respect and empathy, as they may soon be indistinguishable from humans.

1

chewie8291 t1_j6ge39i wrote

I thought Asimov wrote a follow-up on how his three laws could be bypassed. I recall a poisoning that used two robots: a person tells robot 1 to put poison in a drink and leave, then tells robot 2 to deliver the drink. The victim dies, and neither robot knowingly broke the laws.
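
A tiny sketch of why that trick works, with every name invented for illustration: each robot evaluates only the harm it can itself foresee from its own order, so both local checks pass while the composed plan kills.

```python
# Each robot applies a naive, local First Law check: refuse an order only
# if it can itself foresee a human being harmed by it.
def robot_accepts(order: str, foresees_harm) -> bool:
    return not foresees_harm(order)

# Robot 1 mixes poison into a glass and leaves -- it foresees no drinker.
# Robot 2 serves a glass it has no reason to think is poisoned.
r1_view = lambda order: False   # "just chemistry; nobody is drinking this"
r2_view = lambda order: False   # "just serving a drink"

print(robot_accepts("mix poison into the glass", r1_view))  # True -- accepted
print(robot_accepts("serve the glass", r2_view))            # True -- accepted
# Composition: the human drinks the poison. Neither local check fired; the
# First Law was defeated by splitting knowledge across agents.
```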

1

johnp299 t1_j6gqfjj wrote

It should be noted that Asimov eventually had 4 laws, the "uppermost" being the Zeroth Law, a prohibition against harm to humanity as a whole. I believe this was meant to take precedence over the other 3.

1