
eliyah23rd t1_j2j64fk wrote

If I understand the author of this blog correctly, their reading of Murdoch leads to the following observations:

  1. Being good to the other is a matter of identifying, disabling and removing one's own ego in the relationship.
  2. In the relationship with an inanimate object, the object itself loses nothing if you fail to disable your ego. The loss is yours, probably due to epistemic vices resulting from your ego deflecting correct reflection regarding the object.
  3. In the case of an animate object, a person, animal or group, the harm imposed by the involvement of your ego is felt by them.
  4. The last point assumes that, without the ego, what remains of your desire is to benefit the other. This would require correctly learning what they need and desire, and then spending the energy to implement the benefit. Your ego would lead both to epistemic vices in learning about the person and to decision-making influenced by your own needs rather than theirs.
103

eliyah23rd t1_ixdzjjc wrote

That might happen, and it's a danger, but it's not the mainline scenario.

Data being collected on facial expressions in the billions is more likely. Then you correlate that with other data. Bottom line: it's as if the cameras were installed in the privacy of your home, because mountains of public data fill in the missing private data.

Then you correlate the inferred private data with still more data. That's how you build "Minority Report".

−1

eliyah23rd t1_ixdr4b4 wrote

Fantastic video. Thank you.

This is the biggest thing happening on an ethical and social level IMO.

I am proficient with the tech. I can write Transformers, download HuggingFace models, and I know what these words mean. But I have no idea about the ramifications of this stuff on society. The people making policy, I am sure, know even less than I do, and probably nothing about the technology.
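For what it's worth, the mechanism at the core of a Transformer is small enough to sketch. Here is a minimal, illustrative scaled dot-product self-attention in NumPy (toy sizes, no learned projection matrices, just the bare mechanism, not a full model):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Numerically stable row-wise softmax
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Toy self-attention: 3 tokens, embedding dimension 4
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = attention(X, X, X)
print(out.shape)  # (3, 4)
```

Each output row is a weighted mix of all token vectors, with weights given by how strongly each token "attends" to the others.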

We need to give control of these changes to the broadest group possible.

The light of the sun has the power to purify.

1

eliyah23rd t1_ixdmlo5 wrote

>but only in the limited domain within which it practices

I agree. I'm not much into metaphysics, but complex social constructs with multiple meanings don't do well. Billiard balls, and components engineered to replicate the tested prototype exactly, do great. However, the power of these replicants is getting ever larger.

2

eliyah23rd t1_ixdhhc7 wrote

So, as I think we have agreed, the prescription is hypothetical.

I think the prescription you offer uses both 1 and 2: (1) you may think you only have value X, but you also have unspoken values Y and Z; (2) the best way for both you and me to achieve Y and Z is to cultivate empathy and sympathetic joy.

1

eliyah23rd t1_ixdgl80 wrote

Oh, I wasn't retracting anything about the value of the distinction. However, you had made me realize that the descriptive project can record the fact of one partner pressuring the other to accept a categorical, and not just a hypothetical, value.

I think I need to retreat to a usage that involves logic/reason. My position is that this pressure cannot succeed as a logical argument for accepting a categorical, only a hypothetical. It can try, but it must fail. However, limbic, non-linguistic pressure to accept a categorical is found everywhere.

1

eliyah23rd t1_ixddjv7 wrote

Thank you. I have never tried to use the feature before and was not aware of what the protocol was.

Do you think, BTW, that for an older movie and such a general comment this precaution is necessary?

Anyway, fixed it. If this had been the first thing I learned today, I would say that it was worth getting up this morning. But, thankfully, my day has been full of such experiences. ;)

1

eliyah23rd t1_ixdd4cl wrote

I remember in one of our earlier conversations, I proposed that "reason" should be turned into two terms: "cause" and "plan-given-knowledge". You weren't impressed.

In general I do believe that separating out different senses is important for reasoning, because logic cannot allow one meaning in one clause and another in a different clause of the same argument. This fallacy is omnipresent in anything but the equations of hard science, IMO.

Yes, of course I identify with Whorfism. I would go further than the strong version: non-linguistic neural modules programmed by our society generate assertions, and assent to them, at very advanced points in the chain of reasoning. Foundationalism as a realistic model of human reasoning is quite laughable, really.

3

eliyah23rd t1_ixd8ne1 wrote

Amazed that the article does not mention "Minority Report". Spoiler! >!The movie posits a future where the tech is so advanced that the police know in advance when the crime will be committed. (Pity the movie turned to psychics instead.)!<

If today the program can tell you the neighborhood, tomorrow it will tell you the street. Will we hit quantum effects before we can tell which house, and when?

However, algorithm and computing power are not the only parameters. If we add extensive and invasive data collection to the process, the path from today to that moment is quite evident.

The questions are: (1) Do we want to continue increasing the data collection levels? (You could argue that it will correlate with safety for some.) (2) Do we want to keep this data collection in the hands of opaque institutions? (OTOH, if you make it more public, the chance of a leak arguably increases.)

One last point. You'd be amazed how useful "innocent" incidental data is. Just the expressions on faces or even clothing style and gait may correlate to other data in unexpected ways.
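A toy illustration of that point (all numbers synthetic, and the "gait score" is entirely hypothetical): a seemingly innocent public measurement that is even loosely coupled to a private attribute leaks a lot of information once you have enough samples.

```python
import numpy as np

# Synthetic illustration only: suppose a hypothetical "gait score" is an
# incidental public measurement that happens to be loosely coupled to some
# private attribute. With enough samples, the coupling becomes obvious.
rng = np.random.default_rng(42)
private = rng.normal(size=10_000)  # attribute never directly observed
gait_score = 0.7 * private + rng.normal(scale=0.7, size=10_000)  # "innocent" public data

r = np.corrcoef(gait_score, private)[0, 1]
print(f"correlation: {r:.2f}")  # strong, despite the data looking incidental
```

Each observation in isolation reveals almost nothing; the correlation only emerges at scale, which is exactly why "mountains of data in public" matter.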

12

eliyah23rd t1_ix97ve4 wrote

Yes. According to this paradigm we program each other.

If truth is a socially-constructed concept developed to allow human coordination, then we learn from each other when to assent. The education process is more one-sided than "each other" might indicate, but in the long term it works by "pass it forward".

Also, "socially constructed" is not meant in the sense of constructed by society, but in the sense of constructed in an interactive, multi-person context.

Personally, I think a subjective model can do most of the work by itself, with other agents being a feature of the landscape of the evolving subject. However, the two alternatives might succeed in mapping to each other.

4

eliyah23rd t1_ix80x39 wrote

It seems like the article's problem is with the fact that they are being called "conspiracy theories".

The real problem seems to be with theories that (1) have very little or poorly-related evidence and (2) rely on some form of partisan or tribalist hate for their uptake.

It is true that conspiracies seem to figure prominently in this class of theories - for understandable reasons.

However, perhaps we should just propose to rebrand the class.

This comment intends to express no opinion as to the actual status of any specific theory that some may brand as "conspiracy theories".

100

eliyah23rd t1_ix7z1r0 wrote

I liked the speculative genealogy at the end of the article.

I wonder why, instead of phrasing it as a genealogy, the author does not make that account the definition of truth:

"truth" is an expression of assent to a partner's verbally expressed position.

Of course, it robs the word of most of its authority, but given that the context is an argument that denies truth altogether, I think that's excusable.

5

eliyah23rd t1_ix4aeqh wrote

It's not quite clear to me what specific issue you are getting at here. It might come to me with more reflection, but I'll give my best answer for now.

I am not arguing that my current values are necessarily selfish. They might not even be for the purpose of "thriving". They are what they are regardless of how these values came to be. These values may be sacred to me and I would lay down my life for them. Yes, a researcher might identify a causal pathway that included the search for meaning or the pressure of my parental context. I might even be aware and accept the findings of the research. However, regardless of cause, the sacred remains sacred and may take precedence over any thriving.

So what would be the agenda of my prescriptive research?

  1. To identify the structure of the values I do have. For example, to highlight the fact that there is usually a multiplicity of values that could easily conflict in practice. Or to highlight that consistency and universalism are some of my goals.
  2. To figure out ways multiple people, each with their own and differing goals can work together.

For both these goals the prescriptive and descriptive researchers must collaborate. Or, at least, the descriptive researcher has much to teach the prescriptive practitioner.

1

eliyah23rd t1_ix48b3j wrote

I know that I'm responding to your post from three days ago but I've been thinking a lot about our discussion.

In the light of your response, I think the categorical-hypothetical distinction is not sufficient. The pressure that one person exerts on the other (partner) is to accept a categorical. Since this pressure may be a direct appeal to a non-linguistic, "irrational" motivator, it may not say explicitly "IF you want to partner THEN you must seek X". For example, the parent just encourages "seek X", even though the unwritten motivator is that the child desires to align with the parent.

However, this still leaves the analysis in the realm of the descriptive. The researcher identifies these pressures between partners.

But when I switch out of the role of observer to the rational subjective, I am not considering the observed objects. I ask only whether my partner has any hypothetical suggestions for me given the goals I already have. I reject any attempt to request the categorical (without a justifying hypothetical) as manipulation. As a rational actor I still have no reason I "should" accept a new categorical or modify the goals I already have.

The idea that I should accept any categorical because it has in the past been the cause of the current state of affairs, holds no appeal for me. That is the naturalistic fallacy.

1

eliyah23rd t1_ix45dij wrote

I am not against other people raising money for a cause. I am not interested in raising money. I don't want to be controlled by the wishes of those who give the money, nor am I interested in buying talent and telling other people what I want them to create.

I just want to inspire, be corrected by and collaborate with other people who are trying to achieve the same goal. If we have differing goals but can find some goals in common, then that is fantastic too. If one or more of the team wants to raise money to further the ideas we've worked on, that would be great. It is just not the role I want.

That may all change. For now, I just want to create a forum where people who care can discuss the issues. I've got some ideas and these ideas need criticism. I want to hear the ideas other people have. Once there's some momentum, let's see where we all want to go from there.

1

eliyah23rd t1_iwu93ag wrote

I'm not trying to get $$$

I believe in collaboration instead.

Look at the open source movement. They produce code far superior to that of all the big corporations with budgets in the billions. Yes, some corporations hire teams of independent-minded open source creators and, IMO, exploit them, because they control the real gold: eyeballs and data.

I want to work without being tied to $$. I want to find the first collaborators and roll on from there.

:heart: > $

3

eliyah23rd t1_iwqyyo8 wrote

Rather than pour too many words onto the page, can I refer to the comments/captions in these posts:

https://www.instagram.com/p/Cj4_NgYsxUw/

https://www.instagram.com/p/CkIg-UkMbhw/

https://www.instagram.com/p/CkNx6RkMvQo/

I try to put a lot of the details directly in the description.

2