memberjan6

memberjan6 t1_j7i5be2 wrote

Google should make its AlphaFoo family of models available. It's the ultimate game player, for competitive games broadly defined, which would include court trials, purchase bidding, negotiations, and war games, but yes, entertainment games too. It would totally complement the generative talk models. They solve different problems amazingly well separately, but combined, well..... dominance.

1

memberjan6 t1_j5035fk wrote

We all hope for meritocracy and equal opportunity.

Hold up, though. Bad managers in government were too often found to have misused their unchecked hiring authority to bring in droves of family, friends, and the favorites of their buddies in the government. The defense against these fiefdoms of low-merit but loyal hires was to impose micromanaged credential rules, including degree requirements. I'm sorry, but this happened and still happens, and now those defenses have been lowered. The shitty career managers in PA may well stuff all the openings with their favorites again, because now they can.

The federal government hiring process, by comparison, is still stringent about credential requirements, because of bad past experiences with shitty managers who hired whoever they wanted instead of hiring on merit. Loyalty is often substituted for merit and capability. Legions of useless fiefdoms result when government lifers are given full hiring authority, and by useless I mean for the public they are hired to serve. A fiefdom is of great value to the hiring manager and their buddies, and not much else, because it keeps their gravy train running longer.

The fiefdom owners will quickly fill the jobs with loyalists, and fewer openings will remain open for those with merit.

0

memberjan6 t1_j31y67c wrote

Marketing, i.e., getting other humans to behave in ways that benefit your institution with no concern for themselves, is the number one most widespread application of machine learning.

Show me a marketing operation without any ML, and I will directly message my marketing director pal to assess this opportunity and its scope, for real.

Btw, marketing is psychology with a particular purpose.

2

memberjan6 t1_j0g470y wrote

GPT-3 is always at some risk of hallucinating its responses, due to its architecture. Your steps to prevent hallucinations in the medical application are steps in the right direction and may turn out to be helpful guidance for other application developers. Your steps toward traceability of the model's answers are also wise moves.

By contrast, the Deepset.ai Haystack QA pipeline framework, and perhaps others, is designed specifically for non-hallucination as well as answer-provenance transparency. In the medical context, I think you'd need to demonstrate some empirical evaluations of both types of systems to medical stakeholders, after gathering such evidence privately for your own decision about the better architecture for a medical app.

I can say the slower responses of GPT-3-style LLMs are also a potential challenge. By contrast, the Haystack design uses a combination of two or more model types in a pipeline to dramatically speed up responses and show you exactly where it sourced its answer in the document base.
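The retriever-reader pipeline idea can be sketched roughly like this. Note this is a toy illustration of the concept, not the actual Haystack API; the document texts, scoring, and function names here are all made up for the example:

```python
# Toy two-stage QA pipeline: a cheap retriever narrows the corpus first,
# then a "reader" stage extracts an answer and reports its source document.
# Illustrative sketch only -- NOT the real Haystack API.

DOCS = {
    "doc1": "Aspirin is commonly used to reduce fever and relieve mild pain.",
    "doc2": "Ibuprofen is a nonsteroidal anti-inflammatory drug.",
    "doc3": "Paracetamol overdose can cause severe liver damage.",
}

def retrieve(query, docs, top_k=2):
    """Keyword-overlap retriever: score each doc by shared terms with the query.
    A real system would use BM25 or dense embeddings here."""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def read(candidates):
    """Reader stand-in: take the top candidate and return the answer text
    together with its document id, so the answer stays traceable."""
    doc_id, text = candidates[0]
    return {"answer": text, "source": doc_id}

query = "what is aspirin used to reduce"
result = read(retrieve(query, DOCS))
print(result["source"])  # provenance: which document the answer came from
```

Because the reader only ever returns text pulled from the retrieved documents, it cannot invent an answer the way a free-form generative model can, and the returned `source` id is exactly the provenance trail a medical reviewer would want.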

4

memberjan6 t1_iy849k2 wrote

The responsibility for executing the UK's censoring would better belong to UK public service employees or conscripts, not the websites. A service corps is what I am talking about. The paving company that constructed the streets and sidewalks in the brick-and-mortar parts of the UK, as well as the pubs and stadiums, is not equipped and should not be equipped to perform public policing jobs at all. The police are the best ones to be the police.

5

memberjan6 t1_ixaf8o3 wrote

It's too big for any single application. It's like needing to tighten a doorknob with a screwdriver but calling in a massive Snap-on truck delivery just for that. Sure, that truck can probably handle most jobs, but it's overkill for any one job.

2