Submitted by dryuhyr t3_125ltpj in Futurology
In light of the recent post calling for a halt to GPT-4+ development, it's got me thinking. Of course, I don't think any of us trust our beloved lawmakers to grasp the intricacies of AI any further than they could throw a microchip, but what about others in the field?
I know in philosophy there are many subfields where experts basically solved issues ages ago that still plague us, simply because the experts aren't the ones making the rules. Guiding the development of AI seems like a topic that was just about as easy to theorize about in the 1990s as it is today. Is there any sort of consensus among those in the field about rules we should really be following going forward, which are of course being ignored by everyone with money and investments in this tech?