edjez t1_jchqnxm wrote
Reply to comment by Hydreigon92 in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
Awesome!
edjez t1_jchqj0v wrote
Reply to comment by Hydreigon92 in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
Agree 100% that it is important to have people embedded in product teams who have accountability for it.
AI ethics teams are also useful because they understand and keep track of the metrics, benchmarks, and methods used to evaluate biases, risks, and harms. This is a super specialized area of knowledge that the whole company and community can capitalize on. It is also hard to keep up to date: it needs close ties to civil society, academic institutions, etc. Think of it as setting up a "pipeline", a supply chain of practices, that starts with real-world insight and academic research and ends with actionable, implementable methods, code, and tools.
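To make the "methods and code" end of that pipeline concrete, here is a minimal sketch of the kind of group fairness metric such a team might standardize on, using Fairlearn. The dataset, column names, and model here are illustrative assumptions, not anything from the original comment:

```python
# Sketch: scoring a trained model on a group fairness benchmark with Fairlearn.
# The CSV, column names, and model choice are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

df = pd.read_csv("applications.csv")          # hypothetical labeled dataset
X = df.drop(columns=["approved", "gender"])   # features, minus label + protected attribute
y = df["approved"]
sensitive = df["gender"]                      # hypothetical sensitive-feature column

model = LogisticRegression(max_iter=1000).fit(X, y)
y_pred = model.predict(X)

# Per-group accuracy, so a regression in one group can't hide in the average.
frame = MetricFrame(metrics=accuracy_score,
                    y_true=y, y_pred=y_pred,
                    sensitive_features=sensitive)
print(frame.by_group)

# One standard headline number an ethics team could track release over release.
print("Demographic parity difference:",
      demographic_parity_difference(y, y_pred, sensitive_features=sensitive))
```

The value of the specialized team is exactly in choosing and maintaining these metrics, since which number is appropriate depends on the domain and on current research.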
In very large orgs, having specialized teams helps scale up company-wide processes for incident response, policy work, etc.
You can see some of the output of this work at Microsoft if you search for Sarah Bird’s presentations.
(cheers from another ML person who also worked with recommender systems)
edjez t1_j7t9rp3 wrote
Another emergent capability (and this depends on the model architecture; for example, I don’t think Stable Diffusion could have it, but DALL·E does) is generating written letters / “captions” that look like gibberish to us but actually correspond to internal language embeddings for real-world clusters of concepts.
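A rough, hedged way to probe this kind of claim (not the original experiment; the checkpoint and candidate concepts below are assumptions, and the gibberish string is the one reported in the DALL·E 2 “hidden vocabulary” write-ups) is to embed a gibberish prompt with a CLIP text encoder and see which real concepts it lands near:

```python
# Sketch: check whether a "gibberish" prompt lands near a real concept cluster
# in text-embedding space. Model choice and concept list are illustrative.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

texts = ["Apoploe vesrreaitais",   # gibberish reported to render as birds/insects
         "a bird", "an insect", "a car", "a building"]
inputs = processor(text=texts, return_tensors="pt", padding=True)

with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)   # unit-normalize for cosine similarity

# Similarity of the gibberish embedding to each real concept.
sims = emb[0] @ emb[1:].T
for concept, s in zip(texts[1:], sims.tolist()):
    print(f"{concept}: {s:.3f}")
```

Whether this transfers to any given text-to-image model depends on its text encoder, which is the architecture dependence mentioned above.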
edjez t1_j7egs8x wrote
Reply to comment by GreenOnGray in [D] Are large language models dangerous? by spiritus_dei
Conflict: created by the first person in your example (me), followed up by you, with outcomes scored by mostly incompatible criteria.
Since we are talking about language oracle-class AIs, not sovereigns or free agents, it takes a human to take the outputs and act on them, thus becoming responsible for the actions; it doesn’t matter what or who gave the advice. It’s no different than substituting “Congress” or “parliament” for the “super intelligent AI”.
(The Hitchhiker’s Guide outcome would be that the AIs agree to put us on ice forever… or, more insidiously, constrain humanity to just one planet and keep its progress self-regulated by conflict so it never leaves its planet. Oh wait a second… 😉)
edjez t1_j785poj wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
People debate so much whether LLMs are dangerous on their own, while the biggest clear and present danger is what rogue actors (including nation states) do with them.
edjez t1_j47kmx2 wrote
Reply to comment by ThePerfectCantelope in [N] GPT rumors by [deleted]
It is satire; classify under news/theOnion.
edjez t1_jcyz2nu wrote
Reply to comment by SmackMyPitchHup in [P] TherapistGPT by SmackMyPitchHup
Curious: what is it using, the OpenAI APIs or Azure OpenAI?
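For context on the distinction being asked about, here is a minimal sketch with the openai Python SDK (v1+); the endpoint, API version, deployment name, and environment variable names are placeholders:

```python
# Sketch of the two client setups in the openai Python SDK (v1+).
# Endpoint, deployment name, and env vars are placeholders.
import os
from openai import OpenAI, AzureOpenAI

# Option 1: the public OpenAI API.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Option 2: Azure OpenAI -- same interface, scoped to your own Azure resource.
azure_client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
)

# Both expose the same chat surface; on Azure, `model` is your deployment name.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
```

The practical difference is mostly compliance and data handling, which matters a lot for an app handling therapy-adjacent conversations.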