leondz
leondz t1_j9w1cuu wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Nah, there are a bunch of reasoning steps missing. Conjecture on conjecture on conjecture is tough to work with.
leondz t1_j8cc933 wrote
Reply to comment by berryaroberry in [D] Quality of posts in this sub going down by MurlocXYZ
As an academic, the non-academic nature of the sub has always been one of its great advantages. I get enough academic research in the day job.
leondz t1_j0wkn5y wrote
Reply to comment by tpm319 in [D] Will there be a replacement for Machine Learning Twitter? by MrAcurite
<3 NORMCONF
leondz t1_j0wkigh wrote
sigmoid.social was started ages ago by The Gradient; all the cool people are already there.
leondz t1_j0cugwd wrote
Reply to comment by WikiSummarizerBot in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
Surely you're not contending that autopilots
> Airplanes can fly on autopilot. Autopilot is part of the autopilot-using plane.
are only used in the handful of autonomous flights? Also: if autonomous flight were reliable, it would be used far more. But it isn't, because the problem isn't solved: good autonomous flight isn't there, and autopilots can't reliably fly planes on their own.
leondz t1_j0cjg4x wrote
Reply to comment by respeckKnuckles in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
Autopilot helps the pilot. It requires the pilot, who flies the plane.
leondz t1_j0a9tdd wrote
Reply to comment by mocny-chlapik in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
People fly airplanes. Airplanes don't fly on their own.
leondz t1_j0a9rp8 wrote
Reply to comment by bballerkt7 in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
It's only just appeared on arXiv, so I wouldn't expect this yet.
leondz t1_izckeuh wrote
This happens all the time and it's awful. Please put this up on arXiv.
leondz t1_ixi3xse wrote
A little bit, yeah. Completely ignoring some of the tools you have at your disposal limits your power and efficiency.
leondz t1_ix96sfz wrote
Reply to comment by ReasonablyBadass in [R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai Pablo Villalobos et al - Trend of ever-growing ML models might slow down if data efficiency is not drastically improved! by Singularian2501
Yeah, this gives you an idea of how little of the data is actually worth going through: most of it repeats structures found elsewhere in the data and isn't very diverse. Going through huge low-curation datasets is inefficient; the data diversity just isn't there.
leondz t1_ix96ivb wrote
Reply to [R] Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning - Epochai Pablo Villalobos et al - Trend of ever-growing ML models might slow down if data efficiency is not drastically improved! by Singularian2501
We already did for most languages other than English. For them, data efficiency is the only way to catch up.
leondz t1_ja9dk7x wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Depends on who and what you're using it on, doesn't it? Just like a driver's license. Do what you like on your own private property, but if you want it to be critical in decision-making that affects others, some rudimentary training makes a ton of sense.