mrstubali t1_j5ykxri wrote
More predictable behavior from goons who haven't been paid off yet. Ladies and gentlemen, the message of the education and publishing racket: "Hey, don't reference where you actually got your information from." Dude, we're in for a wild ride in the next 5-10 years.
An-Okay-Alternative t1_j5zg33n wrote
If an academic publisher referred to ChatGPT as a source of information, they should be laughed out of the business. At best the tech can take the busy work out of writing copy. Any factual statement the AI makes would have to be independently verified to have any veracity.
mrstubali t1_j5zuqsy wrote
Right, ChatGPT isn't a good source, and it's a particularly bad reference because it doesn't even provide references with its answers. The issue is that people can use ChatGPT or similar tools to morph their sentences into something "novel." The problem will get worse with time as these tools sound more and more human. New writing software could help people construct something useful, and if it's used for that purpose, it needs to be documented; the article does cover all of those bases, which makes sense.
However, there is an issue: does an AI program itself make deductions and conclusions based on the data it receives, and do those deductions contribute meaningfully to the whole project? It's not just calculation; it's about stringing together complex techniques or coming up with a formula, for example a type of chemotherapy. I'd like to know whether the computer/AI or an author did most of the heavy lifting in coming up with a specific treatment, because if it isn't clear who did what in a complicated process like that, everything gets murkier when something goes wrong. Right, I get the intent of all of it, but knowing when the AI is put to good use is, well, pretty useful.
An-Okay-Alternative t1_j5zxt1g wrote
> Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.
Not listing an AI as an author doesn't mean the use of it is being discouraged or hidden. For the foreseeable future the technology is still a tool used by humans, not a general intelligence that could serve as the originating researcher.
marketrent OP t1_j5yl9jl wrote
>mrstubali
>More predictable behavior from goons who haven't been paid off yet.
>Ladies and gentlemen, the message of the education and publishing racket: "Hey, don't reference where you actually got your information from." Dude, we're in for a wild ride in the next 5-10 years.
In my excerpt comment, quoted from the linked content:
>The argument against giving AI authorship is that software simply can't fulfill the required duties, as Skipper and Springer Nature explain.
>“When we think of authorship of scientific papers, of research papers, we don’t just think about writing them,” says Skipper.
>“There are responsibilities that extend beyond publication, and certainly at the moment these AI tools are not capable of assuming those responsibilities.”
>Software cannot be meaningfully accountable for a publication; it cannot claim intellectual property rights for its work, and it cannot correspond with other scientists and with the press to explain and answer questions on its work.
Further reading:
Tools such as ChatGPT threaten transparent science; here are our ground rules for their use, 24 Jan. 2023, https://www.nature.com/articles/d41586-023-00191-1