Spziokles
Spziokles t1_jdjc4wn wrote
Reply to [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
So when playing League of Legends, it could tell you which enemy champion disappeared from their lane, and in how many seconds you should retreat to stay safe?
Curious how this will impact e-sports and whether it will be treated like doping in some form.
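To make the idea concrete, here's a minimal sketch of how one might feed a game screenshot to a vision-capable chat model and ask exactly that question. This is an assumption on my part, not something from the original thread: it uses the OpenAI Python SDK and the "gpt-4o" model, and the file name and prompt are made up.

```python
# Hypothetical sketch: ask a vision-capable model about a game screenshot.
# Assumes the OpenAI Python SDK (`pip install openai`) and a "gpt-4o" model;
# neither is named in the original comment.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Encode a screenshot of the game client as a base64 data URL.
with open("league_screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("This is a League of Legends screenshot. Which enemy "
                      "champions are missing from the minimap, and should I "
                      "play safe for the next few seconds?")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)
```

Whether something like this would count as an unfair aid in competitive play is exactly the doping question above.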
Spziokles t1_jcdw6q7 wrote
Reply to comment by namey-name-name in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
I don't work in the field either, so I just forwarded your question to Bing, lol. I thought maybe it could pull the key takeaways from that "Practical Guide" (see above) to answer your question:
> According to this article, creating a culture in which a data and AI ethics strategy can be successfully deployed and maintained requires educating and upskilling employees, and empowering them to raise important ethical questions. The article also suggests that the key to a successful creation of a data and AI ethics program is using the power and authority of existing infrastructure, such as a data governance board that convenes to discuss privacy [1].
> In addition, a blog post on Amelia.ai suggests that an AI ethics team must effectively communicate the value of a hybrid AI-human workforce to all stakeholders. The team must be persuasive, optimistic and, most importantly, driven by data [2].
> Finally, an article on Salesforce.com suggests that the AI ethics team not only develops its own strategy, but adds to the wider momentum behind a better, more responsible tech industry. With AI growing rapidly across industries, understanding how the practices that develop and implement the technology come together is invaluable [3].
- https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
- https://amelia.ai/blog/build-a-team-of-ai-ethics-experts/
- https://www.salesforce.com/news/stories/salesforce-debuts-ai-ethics-model-how-ethical-practices-further-responsible-artificial-intelligence/
> However, my main concern is whether or not AI ethics teams will be effective at helping promote ethical practices.
That surely depends on the company. Just speculating, but if that team gets fired because the bosses don't like what it recommends (possibly for good reasons), then I don't see many ways for it to be effective.
Spziokles t1_jcdq0za wrote
What value do AI ethics teams add?
> Summary. Artificial intelligence poses a lot of ethical risks to businesses: It may promote bias, lead to invasions of privacy, and in the case of self-driving cars, even cause deadly accidents. Because AI is built to operate at scale, when a problem occurs, the impact is huge. Consider the AI that many health systems were using to spot high-risk patients in need of follow-up care. Researchers found that only 18% of the patients identified by the AI were Black—even though Black people accounted for 46% of the sickest patients. And the discriminatory AI was applied to at least 100 million patients.
> The sources of problems in AI are many. For starters, the data used to train it may reflect historical bias. The health systems’ AI was trained with data showing that Black people received fewer health care resources, leading the algorithm to infer that they needed less help. The data may undersample certain subpopulations. Or the wrong goal may be set for the AI. Such issues aren’t easy to address, and they can’t be remedied with a technical fix. You need a committee—comprising ethicists, lawyers, technologists, business strategists, and bias scouts—to review any AI your firm develops or buys to identify the ethical risks it presents and address how to mitigate them. This article describes how to set up such a committee effectively.
Next door was the article "A Practical Guide to Building Ethical AI", which I did not read but you might want to.
"AI Ethics: What It Is And Why It Matters" also mentions bias, privacy, "mistakes which can lead to anything from loss of revenue to death", as well as environmental impact (AIs as large resource consumers).
I feel these are valid concerns for AI. The stakes get higher the closer we come to AGI. Once we create such a powerful entity, one that outsmarts us in every way, it's probably too late to apply a safety patch or to make sure its goals are aligned with ours. Here's a quick intro: Robert Miles - Intro to AI Safety, Remastered.
So we are racing towards ever more powerful A(G)I, and being first, or having the strongest model, promises profit. Addressing safety concerns may be costly and slow things down, so this part might get neglected. The danger of this scenario is that we might end up with an unleashed, uncontrollable being that resists late efforts to fix it.
Like the other guy, I hate it when ChatGPT refuses to comply with some requests, and I find some of these guardrails unnecessary. But overall I'm even more worried that we let our guard down at the last mile. We had better get this right, since, as Miles said, we might only get one shot.
Spziokles t1_jdyyies wrote
Reply to comment by WarAndGeese in [D] FOMO on the rapid pace of LLMs by 00001746
Came to say this. Compare yourself with someone who enters the field in two years, or in two months. Heck, we are all witnessing what a difference even two weeks can make right now.
Will they find a job? Will they have a hard time? If your worries are justified, then it should be even harder for them. Which means you have an advantage with this head start.
I guess we can also safely expect demand for all skill levels around ML to increase the more it impacts our societies and economies. Yes, we might need fewer people for a single task, but the number of tasks will grow even more. I don't worry for either the new or the old folks.