Submitted by Ok_Telephone4183 t3_117pmhr in singularity
[removed]
Thank you so much!!!!
Note: I work in AI, and have friends who work at OpenAI.
Computer science.
The reason the other two subjects don't matter is that they're essentially not used now. Neither neuroscience nor cognitive science is relevant to current AI research. Current methods have long since stopped needing to borrow from nature. The transformer and current activation functions for ANNs borrow nothing but the vaguest ideas from old neuroscience data.
Current AI research is empirical. We have tasks we want the AI to do, or output we want it to produce, and we will use whatever actually works.
The road to AGI - which may happen before you graduate, it's happening rapidly - will likely come from recursion: task an existing AI with designing a better AI. By this route, fewer and fewer human ideas and less prior human knowledge will be used, as the AI architectures are evolved in whatever direction maximizes performance.
For an analogy: only for a brief early period in aviation history did anyone study birds. Later aerofoil advancements were made by building fixed shapes and methodically studying variations on those shapes in a wind tunnel. Eventually control surfaces like flaps and other active wing surfaces were developed, still nothing from birds - the shapes all came from empirical data, and later CFD data.
Similarly, the other key element of aviation, engines, didn't come from studying nature either. The Krebs cycle was never, ever used in the process of making ever more powerful combustion engines. They are so different there is nothing useful to be learned.
Aren't there only like 400 employees at OpenAI or something? That's like saying you have a friend who won the lottery. That's pretty amazing. What's their experience like? Anything they can share? Is it all secretive?
Several friends. Others at AI startups. Somehow they're self-taught. One is good at Python and has a framework that uses some cool hacks, including automated function memoization.
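(If "memoization" is unfamiliar: it just means caching a function's results so repeat calls with the same arguments are free. A minimal Python sketch of the idea, nothing from their actual framework:)

```python
# Minimal sketch of function memoization: cache results keyed by
# the arguments so the expensive body runs at most once per input.
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(x: int) -> int:
    return x ** 3  # stand-in for a slow computation
```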
Note that until very recently, like 2 months ago, OpenAI was kind of not the best option for elite programmers. It was all people on a passion project. The lottery winners were at DeepMind or Meta.
Have several friends there also. The Meta friends all have the usual background, with graduate degrees and 15+ years of experience in high-performance GPU work.
Why is python so widely used in AI when it’s a really inefficient language under the hood? Wouldn’t Rust be better to optimize models? Or do you just need that optimization at the infrastructure level while the models are so high level it doesn’t matter?
Also it’s really cool there’s people in the forefront of AI on this sub. I’m at a big tech company right now, and I want to transfer into infrastructure for AI there. Then hopefully, I’ll build a resume to get into a top PhD program. After that I could work in AI research.
>Why is python so widely used in AI when it’s a really inefficient language under the hood? Wouldn’t Rust be better to optimize models? Or do you just need that optimization at the infrastructure level while the models are so high level it doesn’t matter?
You make calls to a high-level framework, usually PyTorch, that have the effect of creating a pipeline: "take this shape of input, run it through this architecture using this activation function, calculate the error, backprop using this optimizer."

The Python calls can be translated to a graph. I usually see these in *.onnx files, though there are several other representations. These describe how the data will flow.

In the Python code, you form the object, then call a function to actually run one step of inference.

So internally it's taking that graph, creating a GPU kernel modified for the shapes of your data, compiling it, and then running it on the target GPU (or, on the project I work on, compiling it for what is effectively a TPU).

The compile step is slow, using a compiler that is likely C++. The loading step is slow. But once it's all up and running, you get essentially the same performance as if all the code were in C/C++, while all the code you need to touch to do AI work is in Python.
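Concretely, the Python side looks something like this (a minimal sketch assuming PyTorch; the architecture, shapes, and filename are arbitrary toys):

```python
# Build the pipeline, run one training step, then dump the graph to *.onnx.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 16)             # "this shape of input"
y = torch.randint(0, 4, (8,))      # dummy targets

logits = model(x)                  # inference through the architecture
loss = loss_fn(logits, y)          # calculate the error
optimizer.zero_grad()
loss.backward()                    # backprop
optimizer.step()                   # apply the optimizer

torch.onnx.export(model, x, "model.onnx")  # one of the graph representations
```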
Python is just scripting for whatever talks to the metal…
What's your prediction for when a ChatGPT that doesn't make mistakes in answering and has 10x more memory will occur? What's your timeline for AGI, singularity?
Mistakes: depends on the outcome of efforts to reduce answering errors. If self-introspection works, months.
More context memory: weeks to months. There are already papers that lay the groundwork: https://arxiv.org/abs/2302.04761 . Searching the past log from this same session (beyond our token window) is easy to integrate with the Toolformer architecture.
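Roughly the shape of the idea (a toy sketch; `relevant`, `answer_with_memory`, and the `llm` callable are all hypothetical stand-ins, not Toolformer's actual API):

```python
# Toy sketch: retrieve earlier turns of the session that have fallen
# out of the context window and prepend the relevant ones to the query.
def relevant(query: str, turn: str) -> bool:
    # naive relevance test: any shared words between query and logged turn
    return bool(set(query.lower().split()) & set(turn.lower().split()))

def answer_with_memory(query, session_log, llm, budget=4096):
    hits = [turn for turn in session_log if relevant(query, turn)]
    context = "\n".join(hits)[-budget:]   # crude character budget
    return llm(context + "\n" + query)    # llm: any text-in, text-out callable
```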
There are also alternate architectures that may also enormously increase the window.
AGI: possible within a few years. Whether it happens depends on the trajectory of outside investment. If Google and Microsoft go into an all-out AI war where each is spending 100B+ annually? A few years. If current approaches "cap out" and the hype diminishes? It could take decades.
Singularity: shortly after AGI is good enough to control robotics for most tasks. So shortly after AGI probably. (shortly meaning a matter of months to a few years)
A combination of math, neuroscience and machine learning (not computer science).
So, computational neuroscience.
For my high school subjects, should I pick biology or physics to better suit a computational neuroscience major?
Also take some courses in philosophy/ethics, sociopolitical theories, constitutional/international law, etc. While contributing to the decisions on what people should or should not be allowed to use GPT for, I had to refer back to the underlying reasoning (not just the arguments) of Hobbes, Locke, Rousseau, etc. Responsible AI should be an integral part of AI research, development, and productization, not a patch to be added afterwards. Having the philosophical foundations for thinking about responsible AI can be a differentiator from those with the typical technical backgrounds.
That's also a good idea.
The thing is that I can't really recommend one subject fully or leave any out, because they each contribute something different. And there is also a lot of overlap between math and physics.
For high school specifically:
Physics: mathematical reasoning, electronic circuits
Biology: inner workings of cells
Math: calculus and linear algebra
There are also a lot of other topics that are beneficial to your education even though they aren't relevant to computational neuroscience. For example, it's good to know about DNA, evolution, ecosystems, optics, Newton's laws of motion, chemistry, etc., because it makes you smarter and able to connect more dots.
So I'd say take both. And also, if available, a class focusing specifically on anatomy and physiology (they may spend a couple months on brains).
In my opinion you should ensure you have a broad education in high school, which will help you a lot more to decide what to pick in college.
And one more bit of unsolicited advice: as a freshman in college look for a research group that will take you in. You will learn much, much faster in a research group than in classes.
If you have to pick just one in high school, I'd say physics, but really it should be all of them. Learning math and physics is more a matter of practice than high school biology (which is mostly memorization), so it's better to get started with those earlier.
But that's just my opinion. I don't really know anything lol. You should consult with someone you trust in real life instead of some rando on the /r/singularity board
Thanks so much for the detailed response!! I've done some searching online and found that few schools offer computational neuroscience as an undergraduate degree, so which major is the best path to it? And why not a computer science major?
I guess you could do computer science, why not. That would also be educational. There are several AI approaches that are more traditional computer science topics that would be good to know about, if only just to compare to ML and brain methods.
Maybe neuroscience major if offered, then. Or some mix of neuroscience, math, and computer science major.
Data science would probably be a better choice than computer science though (or an AI degree if offered).
Don't give advice on things u don't understand.
Terrible advice tbh
What do you think? What should I choose?
It's the usual STEM-centric response u get from someone who lacks critical thinking skills. Someone who thinks all u need is math and science but ignores everything else. A foot-soldier cog to be used by people who can actually think for themselves.
I won't tell u what to do since it's not my place. You have to figure that out yourself.
Politician.
True. You would need power and influence to implement AGI
Soon I suspect corpo executives will become political forces. Google is going to be stronger than states; it's a matter of time. Politics in general will be very important. More than today.
I have two bachelors, one in Bioengineering (focused on mechanical engineering), one in pure mathematics (with enough classes taken in CS to have a minor if that were allowed at my school). I currently am doing an MD/PhD with that PhD being computational and systems biology. ML and AI are things I want to apply to my field, and I have enough in my background to understand some of the seminal papers in the field. I say this because I have studied core ideas in all of the majors you have put out there.
My recommendation between CS, Math, Neuroscience, and Cog Sci is, in order of priority, Computer science, then applied math, then pure math, then cognitive science, then neuroscience.
Neural networks now borrow nearly nothing from neuroscience and cognitive science. The relevant equations in neuroscience and cognitive science are intractable to do actual computation on, and while cognitive science (and some neuroscience) does try to use some SOTA stuff, it isn't where the ideas really come from. Also, the perceptron is from the late 1950s. ConvNets are from the 1980s, and so is backprop. What made these old things actually work was advances in hardware, and what brought them further was educated recursion and iteration. People had ideas, mostly driven by deep mathematical and empirical understanding of what they were working with, and then iterated until it worked.
That said, if we had arrived at machine learning and AI through a more formalism-driven, proof-based route, math would be more useful. This is not the case. While ideas from mathematics can be helpful (for example, there is deep mathematical theory for understanding neural networks), many of these ideas are applied post-hoc. To my knowledge, we have basically one important theorem in play here, the universal approximation theorem, and it doesn't say much beyond the fact that a single hidden layer is sufficient to approximate continuous functions with densely connected neural networks. I'm not doing it much justice, because the math behind it is deep and hard and beyond pre-collegiate mathematics (hard enough that this subject was the first math class to make me physically cry). But it illustrates how ill-equipped the mathematical world is to understand SOTA neural networks.
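For reference, here is roughly Cybenko's 1989 form of the theorem (a sketch, not the most general statement): a single hidden layer of sigmoidal units can get within any ε of a continuous function f on the unit cube.

```latex
% Universal approximation (Cybenko 1989), informal sketch:
% for a continuous sigmoidal \sigma, finite one-hidden-layer sums
% are dense in C([0,1]^n): for any f and \varepsilon > 0 there is
G(x) = \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\top} x + b_i\right)
\quad \text{with} \quad
\sup_{x \in [0,1]^n} \bigl| f(x) - G(x) \bigr| < \varepsilon .
```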
This isn't to say knowledge of mathematics won't help you. For example, we know that the loss landscape of a VAE is similar to that of PCA. There is a cool math trick that makes training diffusion models tractable. There is work on bringing self-attention down to more tractable memory sizes that involves some numerical analysis. This means that if your goal really is to help build AGI, you will need to know some math.
What is important for actual AGI are scientific insights (what is sentience? How can a machine generate new ideas? How can a machine learn about the world?) and engineering solutions (how can we make machine learning tractable? How can we fit the processing power into our current hardware?). Computer science teaches you both. You will learn to analyze how algorithms scale (important for fitting things into hardware), and you'll have electives that teach how we have conceptualized machine learning and artificial intelligence. What you should supplement is solid numerical and continuous mathematics. Learn some numerical analysis. Learn some control theory. Learn some statistics. These are the core ideas behind the problems we currently want AGI to solve. Neuroscience won't care about making AGI work (and neither will CogSci). Mathematics is deeply beautiful and useful, but the reliance on proofs makes it lag a bit behind the empirical fields.
If you have any questions, I've chosen a very different path in life, but I'll be happy to answer stuff from my perspective. Best of luck with your major choice.