
Chemical_Ad_5520 t1_jdwnnaw wrote

I agree that this is how things should go. It's not that publicly funded and freely available AI systems can't become corrupted by the interests of a powerful few, but it would be best to at least try to create and maintain them, because profit-driven control by a powerful few is the default future if we can't agree on another plan of action.

It's not only that equitable sharing of the benefits of AI creates positive change for humanity; it also eases the critical problem of class division, which is set to create real issues for everyone going forward, particularly the working/consumer class. If we give everyone the benefits of the best information and tools available, then we solve the bulk of the class division problem and can move on to figuring out how to mitigate the risks of the highly dynamic economy that would result. If we just let the entire economy get automated and monopolized in the hands of a few, then things will be weird and/or shitty, and the playing field might never get leveled if we drift too far apart. Maybe that's okay, but I bet it would suck.

Government funding to provide people with a range of information and digital tools would be good. Nonprofit products could do the same thing with enough donations. In the absence of those options, we could always hope for an enterprise to be responsible with this technology and find ways to fund it while keeping it accessible and fair. But most likely, if we don't find a way to organize and lobby for the interests of the masses in a unified way, the best tools will continue to be owned by a small group.

1

Chemical_Ad_5520 t1_jd12s78 wrote

I think there will be a mix of self-driving car producers and third-party app developers trying to capture this market opportunity. Whichever companies have the best mobile apps will dominate the market. Car manufacturers will make money either way and would likely end up with a competitive advantage.

Maintenance and storage depots are fairly likely to be outsourced or franchised, so those business models would be more accessible. The companies with the leading robotaxi apps might share ownership of the cars with the maintenance and storage depot owners, but I feel it's less likely that individual people would contribute their personal robotaxis the way Uber works. I think it would start with a company like Uber testing purpose-made experimental robotaxis that Uber owns, and then they'd just keep buying more of their own purpose-built robotaxis. GM, or Tesla, or whoever starts making robotaxis specifically for this service might just make their own app and push out companies like Uber, because the car manufacturers control access to the cars' computer systems.

You could always buy stocks in companies that seem like they could dominate the future of the market, but I understand the preference for ownership of tangible capital, like your own fleet of cars to provide this service. Maybe think of some way to make a business out of the local services needed to support a national fleet, because that's the part that would be harder for GM, Tesla, or Uber to manage themselves.

1

Chemical_Ad_5520 t1_jcra7g7 wrote

Since you're not giving me any substantial reasoning to argue against, I'll just elaborate on my position. I'm not claiming any absolutes, but I do think that what we do now can affect the probabilities of one long-term outcome or another.

I basically feel like a dead universe is less interesting than one with life in it, because a dead universe is going to decay and destroy itself relatively predictably, but a universe with intelligent life existing for long periods of time is more dynamic and might do some pretty interesting things. There's no apparent objective meaning to the two possibilities; it's just my opinion that the dynamic nature of intelligent life is more interesting.

In light of this, I prefer that human life survives and figures out how to colonize space without destroying itself, because that would increase the potential longevity of earth life in the universe, which is the only life we have reasonable evidence of. If we can achieve space colonization - a near-term goal compared to the lifespan of the universe - then the probability of earth life/intelligence organizing the universe such that the nature of its demise is affected goes from zero to potentially non-zero. You don't know that there is definitely no way for this to happen, except that it's obvious that life or intelligence can't change the universe if it doesn't exist. Thus, if we colonize space, we create the possibility of outliving the solar system, which allows for some potential to affect the universe at extremely long time scales.

On the topic of determining the best moves to prevent our extinction in this century, I'd say that being very careful about ethics in the development of AGI, nanorobotics, and genetic engineering is probably most important. Mitigation of ecological damage feels like it'd be next, then climate change, then we'll probably need cheaper desalination to curb conflict as we get past the middle of the century, and hopefully the risk of all-out nuclear war doesn't get too high. It would be nice to have a backup human colony in case something goes really wrong on earth during this dangerous period of technological development, but not at significant expense to these priorities. But on that note, society is nowhere close to optimally addressing humanity's risks and desires. Whether or not you think space colonization is worth any of our resources seems secondary to the ridiculous waste and inefficiency of the economy in general, which raises the question: how would you actually want to try to change things?

It's already hard to see how we can get future technologies developed with equity in mind, and I really can't see how someone could expect to get all powerful people to forever abstain from creating incredibly powerful technologies, short of killing everyone, which defeats the purpose. So we have to deal with these risks and challenges, and we don't seem to have the option of doing it in an optimal fashion, so it's best to focus on what can actually be done to improve the future. Contributing to certain social movements or technological developments makes up the bulk of people's options for impactful contributions. Working to make the development of technologies safe and their implementation ethical, and mitigating risks to communities and civilization at large, are good ways to try to contribute in my opinion. If space colonization is something you can find a way to contribute to, I see that as positive.

0

Chemical_Ad_5520 t1_jckgs4o wrote

What we do with technology in this century can determine whether life gets off this planet and survives the death of the solar system. If earth life can colonize space, then the organizing forces of life and intelligence may persist until entropy is defeated.

For people who are interested in preserving life in the universe for extremely long periods of time, these topics are interesting to think about because of how many future events hinge on the present - the fate of the only life we know of depends so much on what we do today, which is awesome in the literal sense of the word.

I'm not saying that we can definitely accomplish anything in particular; I'm saying that a lot is possible if life and intelligence continue to exist, possibly including extending the lifespan of the universe. That's why some people feel it's important to do what it takes to preserve life.

I'd be happy to debate this in more depth if you'd be willing to provide an argument grounded in evidence and logic. You just keep saying "it's too much time for anything to make a difference." Based on what? Give me a real argument to respond to.

0

Chemical_Ad_5520 t1_jci4978 wrote

But since we're on the topic of how to control for effects over extremely long timelines, what do you think about the fact that earth life will die off in a relatively short period of time unless it can intelligently organize to colonize other solar systems? This solar system has a relatively near expiration date as far as habitability for life as we know it is concerned. Earth life has probably been around for more time than it has left before the sun kills everything here.

This period of time is so dynamic with regard to extremely long-term outcomes because we're so close, technologically, to being able to save earth life from this expiration date, but it's a damaging and dangerous time too. We're on the edge of destruction and salvation simultaneously, and the outcome depends on how successful we are at working together as a group to wield technology in favor of our interests (including long-term ones).

The point of the above being that earth life is middle-aged or elderly at 4 billion years old, considering the life cycle of this solar system. The only chance earth life has to make an impact on a future more distant than a few billion more years is for a species like humans to make space colonization possible. Could another intelligent species have performed this whole process better? Maybe, but a lot of the ills of our society and our impact on ecology are integral to how a society must develop technologies like this; it just depends what kind of instincts you have to fight against as a group while doing it.

I feel like saving earth life from a relatively near-term death sentence is better than barring technological advancement because it created an ecological disaster. Lots of natural things cause ecological disasters, but instead of getting nothing out of it, we could be saving the only life we know of in the universe. Since we've already found ourselves in this position, I think the responsible thing to do is our best to control and stabilize climate and ecology while we take advantage of a potentially fleeting opportunity to help life get off this planet. Earth spent 4 billion years cooking up different creatures and destroying them, and now it's produced one that might be strong enough to leave the nest and make something of itself before this incubation chamber dries up. I feel compelled to take advantage of the opportunity.

There's a popular analysis called the Fermi Paradox, which points out that scientific estimates of how likely technologically advanced alien life is within a given proximity to earth are higher than what we actually observe in space. We don't see robust evidence of technologically advanced alien life anywhere, which raises the question: why do we find ourselves so alone in our observable section of the universe? The possible answers are:

• Maybe life is really difficult to get the right conditions for in the first place.

• Maybe technologically intelligent life is really difficult for life to evolve into.

• Maybe technologically intelligent life overwhelmingly tends to destroy itself with its own technology before it can use it to save itself and exist for a long time.

• Or maybe there are plenty of other aliens, and we either live in a simulated universe just for us, made by an alien, or the aliens overwhelmingly use technology that doesn't produce recognizable electromagnetic signatures for whatever reason.

The mainstream interpretation is that the evidence feels a little stacked against the first possibility - that life is difficult to start - just because of the vast scope of the observable universe. The same goes for the idea that technologically intelligent life would be too difficult to evolve, given the competitive edge intelligence affords and the variety of intelligence we see across the animal kingdom. The third idea feels particularly compelling because this advanced technology does indeed feel dangerous to wield. The fourth possibility doesn't have robust evidence supporting it, but it's a possibility and should be included for the sake of rigor.
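If you want a rough feel for the quantitative side of that analysis, the standard framing is the Drake equation: multiply a chain of factors (star formation rate, fraction of stars with planets, and so on) to get an expected number of detectable civilizations. Here's a minimal sketch in Python; every parameter value below is just an illustrative assumption I'm plugging in, not a real estimate from anywhere:

```python
# Minimal sketch of the Drake equation, the usual quantitative framing
# behind the Fermi Paradox. All parameter values are illustrative
# assumptions, not measured estimates.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Expected number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

if __name__ == "__main__":
    n = drake_equation(
        r_star=1.5,    # stars formed per year in the galaxy (assumed)
        f_p=0.9,       # fraction of stars with planets (assumed)
        n_e=0.5,       # habitable planets per star with planets (assumed)
        f_l=0.1,       # fraction of those where life arises (assumed)
        f_i=0.01,      # fraction of those that evolve intelligence (assumed)
        f_c=0.1,       # fraction that become detectable (assumed)
        lifetime=1000  # years a civilization stays detectable (assumed)
    )
    print(f"Expected detectable civilizations: {n:.4f}")
```

Reasonable-sounding guesses for those factors span so many orders of magnitude that the result can land anywhere from "we're alone" to "the galaxy should be crowded," which is exactly the tension the Fermi Paradox points at.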

Futurists (Futurologists?) talk about what may be the "great filter" which has kept the universe so devoid of technologically advanced alien life, and worry that we may be close to encountering it. Considering how profoundly alone we find ourselves in the universe, I don't feel comfortable being so quick to throw away the one chance we know of to preserve life for the future.

1

Chemical_Ad_5520 t1_jchv9ql wrote

I'm just explaining that the distant future can be affected by current events, and that some people do place value on those future outcomes. You're the one claiming that this is absolutely untrue and impossible. That sounds like a better example of hubris than my analysis is.

I get that your message is that you'd prefer to focus on nearer-term outcomes, and that's valid, but that doesn't mean it's pointless to talk about how distant futures could be affected by near-term developments.

Most of what you're doing here is just expressing your emotions, hyperbolically claiming absolutes and adding nothing to an actual analysis of this topic. I'm being literal and a little more specific about the content of this topic, which I think is a better contribution than what you've made here.

1

Chemical_Ad_5520 t1_jchcxxc wrote

There are a variety of theories about how entropic forces may decay and destroy the universe, but consider that life and intelligence are able to organize parts of the universe in ways that resist entropy. That suggests it may be possible to organize the universe such that it finds a sustainable equilibrium. Heat death is not absolutely guaranteed.

2

Chemical_Ad_5520 t1_jchc5my wrote

Just because billions of years is a long time compared to our lifespans doesn't mean these possibilities are irrelevant. You could argue that there's no objective meaning to it, but the same is true of all the decisions you make that affect the present, too. Most people don't think very deeply about where humanity, life, intelligence, and the universe are headed, but some people do feel that these long-term outcomes have subjective meaning to them, the same way your choices to eat, work, and enjoy Reddit have some sort of subjective meaning for you.

I care about the fate of humanity, life, intelligence, and the universe, but some people just aren't interested in that. It doesn't make one set of interests right or wrong, it's just a matter of what people are trying to leave behind.

Many people would say that billions of years is too much time for what we do now to have a lasting effect, but we're actually living in a very dynamic and impactful time, which could very reasonably have bearing on the nature of the death of the universe. Entropic forces in the universe seem to be opposed in some ways by the organizing forces of life and intelligence. It's possible that the continued technological advancement of our society makes the difference between the universe eventually destroying itself or finding a sustainable equilibrium. So much is at stake right now, and the people who look far into the future as part of trying to steer humanity toward positive outcomes are the ones best equipped to keep technology and society from going off the rails. Certain technological, political, and social developments in this century could determine the fate of the universe.

−1