Harvey_Rabbit t1_j2wxegi wrote
Can I pay to have my algorithm maximize my happiness instead of someone else's profit? I'd legit pay for a news feed that showed me all the things I'd be interested in without feeding into all the things that are bad for my mental health.
12beatkick t1_j2xep8k wrote
The issue at hand is defining what “interested in” is. Algorithms have defined this as clicks/engagement. By all measures it is delivering. “If it bleeds, it leads” remains true in the current age because by and large that is what people are interested in.
Harvey_Rabbit t1_j2xgcj0 wrote
Right, it's in their interest to maximize my engagement. I want to subscribe to an algorithm that maximizes for making me a happier, more well rounded person, who maybe isn't on my phone so much.
mdjank t1_j2xwigt wrote
You don't need to pay for that. You just need to delete your social media accounts.
Harvey_Rabbit t1_j2xxv7s wrote
I guess I'm just wondering if this power can be used for good instead of evil. Like, if I'm trying to lose weight, make it show me things that make eating healthy and working out look fun and eating crap look bad. The way it is now, they might identify that I engage with pictures of candy or fast food, so they sell ads to crap food companies to get me to buy their crap food. Or, how about we incorporate smart watches into it, so they can measure my vitals. Say I want to get to bed at 10; I want my algorithm to start showing me soothing things at like 9:30. Over time, it might learn the kind of things that keep me up at night and avoid them.
mdjank t1_j2y0ffi wrote
Gamification of self-improvement activities is its own industry. You can go buy a piece of software that already does that.
All social media can do is share your progress or lack thereof.
Think of it this way. You're not going to stop alcoholics by putting a salad bar in a tavern and charging people to eat their salad.
Riotmakrr t1_j30fsiz wrote
I like this saying lol
Still_Study_6059 t1_j31cdei wrote
That wasn't entirely what he was saying though. Obese people usually come from obese environments, and so it is with other stuff. I need to read up on the science again, but commercials doing their best to make eating shit look great is definitely a thing.
What if you could pay to simply avoid that? In the Netherlands we've opened up the gambling market, and with that came a flurry of gambling advertisements. And lo and behold, we suddenly have a lot more people addicted to gambling, so now the ads are getting axed by 2025 or something.
Food works through the same mechanisms, as did smoking.
N=1 here, but if I watch the food channel or shows like our version of The Great British Bake Off, I find myself craving "comfort food" (diabetes on a fork). I've rid my life completely of that and have found a couple of communities that actually are about that healthy lifestyle, and I'm doing much better now. From 110 to 80 kg at 1.87 m. Avoiding exposure has made that process infinitely easier for me.
Now imagine you could tailor social media to do that for you. Maybe you already can, btw, simply by using the current algorithms to look for health food etc.
mdjank t1_j32oxg0 wrote
I already explained how algorithms worked in this post.
Tailoring your own social media to work for you is possible in principle. It would require disciplined responses directed by unbiased self-analysis. In other words, it's not bloody likely.
Then there's the question of limiting the dataset in your feed. You do not have direct control over the data in your feed. You can only control which people can publish to your feed.
You can cut people out of your feed for some level of success. The more people you cut, the less it is a "tool to keep you connected". It stops being social media.
The only sure way to keep from seeing material on social media is to not look at social media. You remove the drunk from the tavern. Change your environment by removing yourself from it.
Perfect-Rabbit5554 t1_j2yqbu1 wrote
In theory it's possible, but the incentives aren't there and when the technology is developed, it'll come after the initial mind flaying phase we're seeing now.
D2G23 t1_j2zw9rx wrote
I liked one kettlebell video, once in 2018, instagram has shown me kb workouts every 5th video since. To be fair, I now swing a lot of bells, so…
0james0 t1_j2xs04d wrote
It shows you things you engage with. You essentially have to retrain it now.
If you see a video that you might usually watch, but you know it's going to be a negative for your brain, quit the app. Doing that is the ultimate flag for the algo, because the last thing it wants you to do is leave.
Then after watching only positive videos, you'll then end up with a feed full of only that.
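A toy sketch of that retraining idea (all the signal names and weights here are made up for illustration, not any platform's real values): the feed keeps a score per topic, watching bumps a topic up, skipping bumps it down, and quitting the app right after a video counts as the strongest negative signal of all.

```python
# Hypothetical engagement-driven feed scorer. Quitting the app is
# weighted far more heavily than an ordinary skip.
SIGNAL_WEIGHTS = {"watched": 1.0, "skipped": -0.5, "quit_app": -3.0}

def apply_signal(scores, topic, signal):
    scores[topic] = scores.get(topic, 0.0) + SIGNAL_WEIGHTS[signal]

def ranked_feed(scores):
    # Topics the feed will favor, best first.
    return sorted(scores, key=scores.get, reverse=True)

scores = {}
apply_signal(scores, "rage_bait", "watched")    # old habit
apply_signal(scores, "kettlebells", "watched")
apply_signal(scores, "kettlebells", "watched")
apply_signal(scores, "rage_bait", "quit_app")   # the strongest "no" you can send
print(ranked_feed(scores))  # kettlebells now outranks rage_bait
```

Under this sketch, one quit outweighs several watches, which is why closing the app on a bad video is such a loud signal.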
collin-h t1_j2yhs0v wrote
at least on tiktok I 100% quickly skip past videos that I know might be interesting, I just don't want to see more of them for the next hour.
a good algorithm is like a bonsai tree... need to perpetually prune it into shape.
collin-h t1_j2yhlyh wrote
sure, just like I'm interested in what's going on with that terrible car wreck over there.... but that doesn't mean rubbernecking is my passion. c'mon algorithms.
futurespacecadet t1_j2xsz8g wrote
If a social media website made a happiness algorithm, people would gladly pay for it rather than have it rely on profit from advertisers.
mdjank t1_j2xze5c wrote
There are major problems with a happiness algorithm.
First how do you measure a person's level of happiness? The person's emotional state is not a metric in the system.
An algorithm can decide if a piece of media is uplifting but it cannot say if that media would produce the desired effect on an individual. It can only predict the media's effect on a group of individuals.
You can ask individuals about their mental state and measure changes after presenting stimuli. That introduces all the problems of self reporting. e.g. People lie.
Second, a solution to happiness already exists. It's called "delete your social media". Any "happiness algorithm" has to compete with this as a solution.
"Delete your social media" is such an effective solution that Social Media will lie to you to make it seem incomprehensible. It tells you "social media is the only way to be connected with others" and "you're 'in the know' because you use social media and that makes you special".
futurespacecadet t1_j2y0sme wrote
Well, I don’t think there is some magic happiness algorithm, I’m just talking about the concept of it. What would create happiness in an algorithm form? I think control.
I think control over what people see is pivotal to how they interact, and I think we need to give control back to the users.
So maybe when you sign up, you can choose what you want to see. If you do want politics, maybe you can choose the level of politics you see. Do you want to be challenged, or do you want to be in a bubble? I mean, that in itself could cause problems.
But I also think we don’t need any of that. I think what people really liked was the fact that Facebook used to just be about connecting with your friends, purely a communication tool, before it was bloated with wanting to be everything else: a marketplace, an advertisement center, pages for clubs, etc.
It’s the same thing that’s happening with LinkedIn. It used to be effective as just a job search tool, and now it’s bloated with politics I don’t care about. I would rather have more services that do one specific thing than one service that tries to do it all, and I think that’s where people are getting overwhelmed and depressed.
mdjank t1_j2ycwdp wrote
The way statistical learners (algorithms) work is by using a labeled dataset of features to determine the probability a new entry should be labeled as 'these' or 'those'. You then tell it if it is correct or not. The weights of the features used in its determination are then adjusted and the new entry is added to the dataset.
The points you have control over are the labels used, the defined features and decision validation. The algorithm interprets these things by abstraction. No one has any direct control on how the algorithm correlates features and labels. We can only predict the probabilities of how the algorithm might interpret things.
In the end, the correlations drawn by the algorithm are boolean. 100% one and none of the other. All nuance is thrown out. It will determine which label applies most and that will become 'true'. If you are depressed, it will determine the most depressed you. If you are angry, it will determine the most angry you.
You can try to adjust feature and label granularity for a semblance of nuance. This only changes the time needed to determine 'true'. In the end, all nuance will still be lost and you'll be left with a single 'true'.
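The loop described above (labeled examples, a probability-style score over weighted features, weights nudged when the decision is validated or corrected) can be sketched in a few lines. This is a toy online learner, not any platform's actual code; the 'these'/'those' labels and feature names are just placeholders.

```python
# Minimal sketch of the statistical-learner loop: predict a label from
# weighted features, compare with feedback, adjust weights, repeat.
def predict(weights, features):
    score = sum(weights.get(f, 0.0) for f in features)
    return "these" if score >= 0 else "those"

def update(weights, features, label, lr=0.1):
    # Nudge every observed feature toward the confirmed label.
    direction = 1.0 if label == "these" else -1.0
    for f in features:
        weights[f] = weights.get(f, 0.0) + lr * direction

weights = {}
feedback = [({"candy", "video"}, "these"),
            ({"salad", "article"}, "those"),
            ({"candy", "article"}, "these")]
for features, label in feedback:
    if predict(weights, features) != label:
        update(weights, features, label)

print(predict(weights, {"candy"}))
```

Note the output is exactly the boolean collapse described above: whatever nuance went into the weights, every prediction comes out as one label or the other.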
People already have the tools to control how their algorithms work. They just don't understand how the algorithms work so they misuse the tools they have.
Think about "Inside Out" by Pixar. You can try to make happy all the time but at some point you get happy and sad. The algorithm cannot make that distinction. It's either happy or sad.
Pseudonymico t1_j341oz6 wrote
> Second, a solution to happiness already exists. It's called "delete your social media". Any "happiness algorithm" has to compete with this as a solution.
>"Delete your social media" is such an effective solution that Social Media will lie to you to make it seem incomprehensible.
That really depends on who you are though. Social media really is a huge game changer for people who can’t get out of the house for whatever reason - in particular people with disabilities, parents of young children, and the elderly, along with other marginalised groups who can have trouble connecting in person such as the queer community. “Deleting your social media” for these kinds of groups means going back to being an isolated shut-in, and that’s part of why a lot just won’t do it. Regulating algorithms is a better solution by far imo.
mdjank t1_j34gvh4 wrote
Social media makes it easier for people to find their communities specifically because of the way statistical learners (algorithms) work. Statistical learners use statistics to predict the probabilities of specific outcomes. They match like with like. Regulating the functionality of statistical learners would require the invention of new math that supersedes everything we know about statistics.
Regulation is easier said than done.
It is not possible to regulate how the algorithms work. That would be like trying to regulate the entropy of a thermodynamic system. John Nash won a Nobel Prize for his work on equilibria. Statistical learners solve Nash equilibria the hard way.
One thing people suggest is manipulating the algorithm's inputs. This only changes the time it takes to reach the same conclusions. The system will still decay into equilibrium.
Maybe it's possible to regulate how and where algorithms are implemented. Even then, you're still only changing the time it takes to solve the Nash Equilibrium. I would love to see someone disprove this claim. Disproving that claim would mean the invention of new math that can be used to break statistics. I would be in Vegas before the next sunrise with that math on my belt.
Any effective regulation on the implementation of statistical learners would be indistinguishable from people just deleting their social media. Without the Statistical Learners to help people more effectively sort themselves into communities, there is no social media. These algorithms are what defines social media.
To claim that people wouldn't be able to find their communities without social media is naive at best. People were finding their communities online long before social media used statistical learners to make it easier. If anything, social media was so effective that other methods could not compete. It has been around so long; it just seems like the only solution.
P.S. Your thinly veiled argumentum ad passiones isn't without effect. Still, logos doesn't care about your pathos.
Pseudonymico t1_j34hg0r wrote
> P.S. Your thinly veiled argumentum ad passiones isn't without effect. Still, logos doesn't care about your pathos.
Good grief, are we back in Plato’s Academy or something?
mdjank t1_j35exxw wrote
Going back to school might do you some good.
Pseudonymico t1_j35gr7p wrote
Argumentum ad latinum ≠ argumentum ad verecundiam
mdjank t1_j35n8pt wrote
Which raises the question: why do you think appealing to the needs of the downtrodden and infirm is a valid argument for not deleting your social media?
Or maybe you're confusing reference to a specific field of mathematics as an appeal to authority?
Gloriathewitch t1_j2yh213 wrote
This could also be bad for you. Not necessarily your mental health, but people's world views becoming even more insular and echo-chamber-like.
The reality of life, unfortunately, is that a well-rounded person should have at least some experience with things they don't want to see, so that they can have a good sense of empathy for how others live, experience life, and think.
There's a good reason people who travel have more empathy for different cultures and are usually less racist, and why people who are told their country is the best and never experience other cultures are less empathetic and often more racist.
I'm really wary of echo chambers like a lot of people are in nowadays because they radicalize people a lot.
Pistolf t1_j2yji1l wrote
Make an RSS feed, you don’t need to pay for that
ehhh_yeah t1_j2yr1r2 wrote
RIP StumbleUpon
dogonix t1_j2y390h wrote
>Harvey_Rabbit
That's definitely part of the solution to the dilemma. It's necessary but not sufficient.
For an algorithmic recommendation engine to truly serve the interests of consumers, it has to not only be paid for directly by the end users but also:·
1/ Unbundled from the platforms.
2/ Be run locally by users instead of by a central organization.
The tech may not be ready yet but it's a potential path out of all the currently occurring manipulations.
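A rough sketch of what such an unbundled, locally-run recommender could look like: it re-ranks whatever posts the platform delivers, using goals the user sets rather than engagement metrics. Everything here (the post format, the word lists, the weighting) is hypothetical.

```python
# Hypothetical user-owned re-ranker, running on the user's machine.
# Boost words reflect the user's stated goals; mute words are penalized
# heavily so unwanted content sinks to the bottom of the feed.
def rerank(posts, boost_words, mute_words):
    def score(post):
        text = post["text"].lower()
        s = sum(text.count(w) for w in boost_words)
        s -= 5 * sum(text.count(w) for w in mute_words)
        return s
    return sorted(posts, key=score, reverse=True)

posts = [
    {"text": "Ten outrage takes you must see"},
    {"text": "Beginner kettlebell workout plan"},
    {"text": "Healthy meal prep on a budget"},
]
my_feed = rerank(posts,
                 boost_words={"workout", "healthy"},
                 mute_words={"outrage"})
print([p["text"] for p in my_feed])
```

Because the ranking logic lives with the user, the platform only supplies candidates; the ordering answers to the user's goals, which is the unbundling being proposed.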
craybest t1_j2ynhwn wrote
r/UpliftingNews :D
krectus t1_j3091w8 wrote
I got bad news for you: that’s what it already does. It feeds you what you want and you engage with it. That IS how it maximizes profits.
Chaos_Ribbon t1_j30on7k wrote
That's exactly what TikTok does. The problem is there's no such thing as being perfectly fed what you want without it impacting your mental health in a negative way. It can quickly lead to confirmation bias and prevent you from growing as a person when all you're receiving is information you want. It's also extremely addictive.