KvanteKat t1_j5j3ewk wrote
Reply to comment by TonyTalksBackPodcast in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
>I think the worst possible idea is allowing a single person or handful of people to have near-total control over the future of AI
I'm not sure regulation is the biggest threat to the field of AI being open. We already live in a world where a small handful of people (i.e. decision-makers at Alphabet, OpenAI, etc.) have an outsized influence on the development of the field, because training large models is so capital-intensive that very few organizations can really compete with them (researchers at universities sure as hell can't). Neither compute (on the scale necessary to train a state-of-the-art model) nor well-curated large training datasets are cheap.
Since it is in the business interest of incumbents in this space to minimize competition (nobody likes to be disrupted), and since those incumbents already have an outsized influence, some degree of regulation to keep them in check may well be beneficial rather than detrimental to the development of AI, its derived technologies, and their integration into wider society (at least I believe so, although I'm open to other perspectives on this matter).