Submitted by Neurogence t3_118fok7 in singularity
xott t1_j9gzjbr wrote
This is a really interesting case. I thought the email from the Vanderbilt deans about the Michigan State shooting was spot-on in terms of tone and style. I mean, using an AI language model is basically the same thing as using a communications team or a speech writer, so I'm not sure why people are saying it's inauthentic. In reality, it's not so different from what a human would have eventually produced.
To be honest, I think if they hadn't included the 'made by ChatGPT' disclaimer, no one would have even known it was generated by AI. It's not like the email lacked feeling or anything.
Spire_Citron t1_j9hpmw3 wrote
Yeah, in this case the only real issue is that tacking on 'made by ChatGPT' feels kinda weird. In society we like to keep up appearances and pretend there's some level of special sincerity in those kinds of messages, but we all know that when we sit down to write them, we're just borrowing ideas from similar messages we've seen elsewhere and doing our best. It's rarely something truly personal, but it's a bit crass to pull back the curtain and show that.
Neurogence OP t1_j9gzrti wrote
Yup. All they had to do was remove that part at the bottom.
But I do think an email addressing fatalities requires more of a human touch.
MarginCalled1 t1_j9hp5hw wrote
I use ChatGPT to write all of my emails that require more than a line or two, then I go through and make edits as necessary.
We live in strange times. This is like someone in the late '80s writing an E-MAIL to address fatalities: they couldn't even write a normal letter and send it in the mail like everyone else??!
In a couple of years we'll look back and nod at how dumb this all was.
saleemkarim t1_j9hspjp wrote
Yeah, reminds me of a late Seinfeld episode about whether or not it's impolite to make an important phone call on a cellphone.
HermanCainsGhost t1_j9i1gfl wrote
Wow that's straight r/agedlikemilk right there
hijirah t1_j9i1ugg wrote
If they had removed the citation, they'd have been on the hook for plagiarism. It's like they can't win for losing.
End3rWi99in t1_j9iisv2 wrote
> requires more human touch
As a counterargument, I believe a 'human touch' can cover a lot of ground. I don't blame someone for wanting a more unbiased or more reserved draft before sending anything important, especially if they conclude it conveys their meaning better than they could. I sometimes struggle to get to the core of what I mean with words. It's one of the reasons emojis were created.
hijirah t1_j9i1pjk wrote
Exactly. Ppl are so archaic in their thinking. They did everything right, including properly citing ChatGPT.
ghostfuckbuddy t1_j9iwqjf wrote
The email's content is not as important as what it is supposed to represent - that the person writing it cared enough to invest time to personally craft a message. The whole point is completely undermined by outsourcing it to an AI.
ground__contro1 t1_j9j8b8a wrote
Genuinely curious, is the whole point completely undermined by contracting an outside PR person for a draft of a public announcement?
ghostfuckbuddy t1_j9j93rq wrote
I think if you're getting them to proofread after it's been written, then no, but if you're getting them to write the whole thing for you then yes.
ground__contro1 t1_j9j9x6b wrote
Maybe it's not just about the right combination of words; it's that they came from someone who is supposed to be an authority figure in that sphere. If you're the dean of a school, thinking and speaking about these issues should resonate with you in a way it wouldn't with a PR team or a chatbot, because neither is responsible for students.
[deleted] t1_j9kvrq0 wrote
[deleted]
ground__contro1 t1_j9lawrm wrote
I’m not sure I agree with the comparison
Stakbrok t1_j9mc2w8 wrote
But it was outsourced with much love. Just like my wife who heats up frozen pizza with much love.
crua9 t1_j9ibw4y wrote
>To be honest, I think if they hadn't included the 'made by ChatGPT' disclaimer, no one would have even known it was generated by AI. It's not like the email lacked feeling or anything.
I honestly wonder if, after a few months or whatever, they'll just do it again but not include the disclaimer. Maybe run it through QuillBot for good measure.
GlobusGlobus t1_j9iu4ny wrote
It is indeed very strange that they included the 'made by ChatGPT' line. The way to do it is, obviously, to make some edits and take full ownership. You used a tool, nothing more, nothing less.
I mean, you have to check everything from ChatGPT anyway. I have used it for composition and sometimes it (obviously) gets things completely wrong.
GlobusGlobus t1_j9itz1h wrote
100%
External-Explorer330 t1_j9k6wja wrote
This is interesting. I think people are rightfully upset because these administrators' specific role is to support the student community. The aftermath of the shooting would be a time when their duties are of the utmost significance. The problem isn't that they used AI; they could just as easily have reused a template from a previous tragedy themselves (that's basically what ChatGPT did). The problem is that they did not reflect on the incident themselves and craft a genuine message that specifically pertained to the school and its students. Instead, they essentially did a "copy-pasta," which is insincere and beneath their roles. It probably would have been better if they had sent a delayed message. It is acceptable to use a template or AI in many other cases, such as a mass email about fire-drill testing or a club event, but a school shooting is a uniquely shocking and evil tragedy that deserves time, sensitivity, and care. Using a template is simply careless. I believe they should be suspended. I honestly can't believe no one said, "maybe this isn't a great idea" or "we should give it more thought."
Edits: Spelling, clearly I didn’t use ChatGPT lol
Typo_of_the_Dad t1_j9kwnp7 wrote
Seriously? It shows they didn't even read it, beyond perhaps making sure it followed the Diversity, Inclusivity, and Equity messaging from one of their HR folders, which proves the very lack of respect and listening mentioned in the letter. Of course it's hard to be personal about people they probably never even met, but c'mon.
Not that I disagree that it could have sounded pretty much the same if it had been entirely human-written; it's more that it shows the detachment and inadequacy of the system.
RepresentativeAd3433 t1_j9p9vtu wrote
The logical problem then becomes: why would humans ever say anything at all again? Why would we write another word? If the robot can spit out what we'd write, only faster, why bother?
xott t1_j9qrok6 wrote
Well, seeing as we still walk in spite of cars, that's probably an overblown worry.