fortunum OP t1_j1hzjrx wrote
Reply to comment by Phoenix5869 in Hype bubble by fortunum
Idk how, to be honest. I am not holier-than-thou lol, and I keep repeating myself in here, but I will give up now. If the singularity is imminent, I don’t need to prove that it is not; the person making the claim needs to prove that it is. (If God is real, I don’t need to disprove it, someone needs to prove it.) This is not an elitist, holier-than-thou attitude; every idea will be dissected and scrutinized. I am in fact trying to go outside of my bubble, where the topics of AGI and the singularity are treated with ridicule tbh, which I disagree with. Also, again, my point is not to disprove the singularity, but to comment on the state of this sub
fortunum OP t1_j1hoskr wrote
Reply to comment by AndromedaAnimated in Hype bubble by fortunum
I am explicitly stating my bias: I am a student under people who believe the singularity is far away. I am saying I am no authority because I don’t study the singularity or AGI; a PhD, professorship, or other title does not implicitly make you more qualified to talk about just any subject. Maybe that is a problem that you are projecting here.
fortunum OP t1_j1hi7wi wrote
Reply to comment by fingin in Hype bubble by fortunum
I think you are right. With this particular sub it also seems that some people really ‘need’ it to be true - I saw in some threads that people say the singularity gives them hope with their particular mental health problems etc. I’m glad it does, but that doesn’t make it any more true
fortunum OP t1_j1g63rc wrote
Reply to comment by [deleted] in Hype bubble by fortunum
See, the big shiny things we see in “AI” today are driven by a single paradigm shift at a time; think convolutions for image processing and transformers for LLMs. Progress could come from new forms of hardware (as it historically tends to, btw, more so than from actual algorithms), like when we started using GPUs. The current trend shows that it makes sense to build the hardware more like we build the models (neuromorphic hardware); this way you can save orders of magnitude of energy and compute, because it operates more like the brain. This is only one example of what could happen. It could also be that language models stop improving, as we are apparently nearing the limit of language data.
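To give a rough intuition for where the savings come from: in a spiking/event-driven system you only pay for synapses whose presynaptic neuron actually fired, whereas a dense matmul touches every synapse every step. A toy operation count (all numbers here, including the spike probability, are made up for illustration, not measurements of real neuromorphic hardware):

```python
import numpy as np

# Toy comparison: dense matmul every timestep vs. event-driven updates
# that only touch the synapses of neurons that actually spiked.
rng = np.random.default_rng(0)
n, T, p_spike = 1000, 100, 0.02  # neurons, timesteps, spike probability

spikes = rng.random((T, n)) < p_spike  # boolean spike raster

dense_ops = T * n * n                  # every synapse touched every step
event_ops = int(spikes.sum()) * n      # only spiking neurons propagate

print(f"op ratio: {dense_ops / event_ops:.0f}x")  # roughly 1 / p_spike
```

The ratio is roughly 1/p_spike, so at biological-ish sparsity levels the multiply-accumulate count drops by orders of magnitude; real neuromorphic gains also depend on memory movement and analog/in-memory tricks, which this toy count ignores.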
fortunum OP t1_j1g4yvb wrote
Reply to comment by CommentBot01 in Hype bubble by fortunum
Maybe I’m wrong here, but is the purpose of this sub to argue that the singularity is almost here? I made this post because I was looking for a more low-brow sub than r/machinelearning to talk about the philosophical implications of AGI/the singularity. Scientists can be wrong and are wrong all the time; in science, everyone is always skeptical of your ideas. And I would say it is the contrary with the singularity: I don’t have to give you a better, significant, or alternative research paper lol. That is definitely not how this works. Extraordinary claims require extraordinary evidence
fortunum OP t1_j1g3rto wrote
Reply to comment by Sashinii in Hype bubble by fortunum
How does this address any of the points in my post though?
Extrapolating from current trends into the future is notoriously difficult. We could hit another AI winter, all progress could end, and a completely different domain could take over the current hype. The point is to have a critical discussion instead of just posting affirmative news and theory
fortunum OP t1_j1g2wqj wrote
Reply to comment by Comfortable-Ad4655 in Hype bubble by fortunum
You would need to define AGI first. Historically, the definition and measurement of AGI have changed. Then you could ask yourself whether language is all there is to intelligence. Do sensation and perception play a role? Does the substrate (simulation on a von Neumann architecture vs. neuromorphic hardware) matter? Does AGI need a body? There are many more philosophical questions, especially around consciousness.
The practical answer would be that adversarial attacks are easy to conduct against, for instance, ChatGPT. You can fool it into giving nonsensical answers, and this will likely remain true for succeeding versions of LLMs as well
fortunum t1_iz2v4li wrote
Reply to comment by modeless in [R] The Forward-Forward Algorithm: Some Preliminary Investigations [Geoffrey Hinton] by shitboots
Check out e-prop for recurrent spiking NNs
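For anyone curious: the core idea of e-prop is to replace backpropagation through time with per-synapse eligibility traces that are combined online with a broadcast learning signal. A minimal toy sketch for the input weights of a recurrent LIF layer (the learning signal here is a random placeholder standing in for a task error, and all constants are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_rec, T = 3, 5, 100
alpha, thr, lr = 0.9, 1.0, 1e-3   # membrane decay, threshold, learning rate

w_in = rng.normal(0.0, 0.5, (n_rec, n_in))
v = np.zeros(n_rec)                # membrane potentials
x_bar = np.zeros(n_in)             # low-pass filtered presynaptic activity
dw = np.zeros_like(w_in)           # accumulated e-prop updates

for t in range(T):
    x = (rng.random(n_in) < 0.1).astype(float)  # random input spikes
    x_bar = alpha * x_bar + x
    v = alpha * v + w_in @ x
    z = (v > thr).astype(float)    # output spikes
    v -= z * thr                   # soft reset after spiking
    # surrogate derivative of the spike w.r.t. the membrane potential
    psi = 0.3 * np.maximum(0.0, 1.0 - np.abs((v - thr) / thr))
    # eligibility trace: local, per-synapse, computed online
    e = psi[:, None] * x_bar[None, :]
    # learning signal: placeholder for a task-specific error broadcast
    L = rng.normal(0.0, 1.0, n_rec)
    dw -= lr * L[:, None] * e

w_in += dw
```

The point of contrast with forward-forward is that e-prop still follows an (approximate) gradient of a task loss, but does so with purely local, online quantities instead of unrolling the network in time.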
fortunum OP t1_j1jb5w8 wrote
Reply to comment by sumane12 in Hype bubble by fortunum
Yea, thanks for the reply, that’s indeed an interesting question. With this approach it seems that intelligence is a moving target; maybe the next GPT could write something like a scientific article with actual results, or prove a theorem. That would be extremely impressive, but like you say it doesn’t make it AGI or get it closer to the singularity. With the current approach there is almost certainly no ‘ghost in the shell’. It is uncertain whether it could reason, experience qualia, or be conscious of its own ‘thoughts’. So it likely could not be self-motivated, autonomous to some extent, or have a degree of agency over its own thought processes, all of which are true for life on Earth at least. So maybe we are looking for something that we don’t prompt, but something that is ‘on’, similar to a reinforcement learning agent.