
AsheyDS t1_jdmxe1c wrote

This feels very 'on the nose', but I'll take the bait anyway.

In my case, you don't stay secret. Not too secret, anyway. Secrecy = paranoia = loss of productivity and sanity. Way too much stress keeping it quiet, and to whose benefit? So I'll be releasing my theory-of-mind stuff to the public soon, maybe some details on some other things, but overall keeping the technical stuff private for now, as it's in flux anyway.

Assuming the Corp/RL gets funding soon and nothing impedes our work, I would hope that the next decade or so of development and 'training' will yield additional safety measures to add to the several we already have in mind. I'm not looking to rush things too much, and I hope that LLMs will essentially act as training wheels for people to get used to AI, that the misuses people find for them will be swiftly tamped down, and that we'll begin to develop a legal framework for use and misuse. But misuse is certainly inevitable, as is AI/AGI development, so that's something we need to discuss across the world, right now and continually. And it's not just that it's inevitable; it's also needed. Parts of the world facing population decline or an aging population are going to need solutions soon just to maintain their infrastructure, and I think AGI and robotics can help with that.

Now, I'm not going to say this is an official plan or roadmap, but ideally, and fairly realistically, we would put the theoretical stuff out first, which we're currently organizing and expanding on. Hopefully we get a relatively small amount of funding (we're financially considered low-risk/high-reward, at least for the first few years before more equipment and people are needed). The first 1-2 years would be spent laying out the 'blueprint' we'll work from, then a few years in development to see if the parts work and the technical design is sound. Then we'd put it together, 'hard-code' and 'train' the knowledge base and the weights in the parts that are weighted, get it up to the rough equivalent of a human 5-year-old, and have it learn from there, largely through unsupervised learning and possibly a curriculum similar to what a human child learns, including social development, but at a faster pace. That should still take some time, though, and both cohesion and coherence need to be checked over time, as do the safety measures.

But once it's working... well... SaaS may be the ideal first step, because we want people to be able to use it while we're still testing, training, and programming it where necessary, making sure it can develop (not necessarily expand) at a predictable and practical rate while maintaining consistency, and continuing to adapt to misuse cases that may crop up. Now, I'm not all for centralization, or even making massive amounts of money. Everyone should be able to make money with this when we're done, and perhaps money won't even be a thing one day. But for as long as the current economic system survives, it should be able to adapt and help with your money-making endeavors. However, we may need to start with this distribution model as a functional necessity, because I'm not sure how far down it will be able to scale yet; and as the technical requirements drop, the capability of host machines will go up... so it's very hard to predict timetables beyond a certain point. Right now it's looking like it will require a small supercomputer or cluster to be effective, possibly a data center, so I'm not sure how it would all scale up or down.

In this model, to minimize privacy risks and increase trust, the best split may be to keep memory, user settings/preferences, and a few other things localized, with the rest of the functionality in the cloud. But obviously the security risks would have to be weighed, and honestly, it's hard to fathom how that will change once AGI is in the mix; it may be able to handle that just fine, and it may be perfectly safe that way. We just can't know yet. In this context, I mean safe as in user privacy: whether user data is safe from being exposed in a split topology like that.
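Just to make the split-topology idea concrete: the rough shape is that memory and preferences live on-device and only the request (plus minimal context) goes to the remote service. This is purely an illustrative sketch of that idea; every name here (`LocalAgent`, `cloud_infer`, the memory format) is hypothetical and not any actual API of the system being described:

```python
# Hypothetical sketch of the split topology: user memory and preferences
# stay on the local device; only the prompt plus a trimmed context slice
# is sent to the remote reasoning service. All names are illustrative.

def cloud_infer(prompt: str, context: list[str]) -> str:
    """Stand-in for the remote reasoning service (stubbed locally here)."""
    return f"response to {prompt!r} given {len(context)} context items"

class LocalAgent:
    def __init__(self, preferences: dict[str, str]):
        self.preferences = preferences   # stays on-device, never uploaded
        self.memory: list[str] = []      # stays on-device, never uploaded

    def query(self, prompt: str) -> str:
        # Only the prompt and the last few memory entries leave the device;
        # raw preference data is applied locally, not shipped to the cloud.
        context = self.memory[-5:]
        answer = cloud_infer(prompt, context)
        self.memory.append(f"Q: {prompt} / A: {answer}")
        return answer

agent = LocalAgent({"tone": "concise"})
print(agent.query("summarize my day"))
```

The privacy property the comment is gesturing at falls out of the structure: whatever the cloud side learns is limited to what `query` explicitly forwards, which is what would have to be audited in a real deployment.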

Ideally, it would be able to operate as either a program/app on your PC or personal device, or possibly as an operating system (so it would be entirely software-based), but it may be most effective as a separate computer that operates your devices for you. In time, there would be many scaled-up, scaled-down, and optimized variations for different needs (though it should be fairly adaptable), all open-source so everyone can use them. From there, I guess it depends on everyone else, but we're willing to at least try to be as transparent as is reasonable under the circumstances, and to eventually get it small enough, easy enough, and available enough that everyone can use it, on their own devices.

Realistically, I feel like the 'getting funding' period will be unreasonably protracted, development will go faster than expected, and in the end nobody will believe it's a 'real' AGI; even when they see a fraction of its capability, they'll assume it's some sort of LLM, etc. So, what can you do, y'know? From my perspective, it's like giving birth out of your head. It's a design that needs to come out. At the same time, I'm also aware of my own mortality, and that time keeps moving... On the business end, there's a great wave that I'm currently moving with but don't want to be swept under... And people need the help it can offer, so time is a multi-faceted dilemma: I need plenty of it for development and training, but don't have much of it. I'm optimistic the system will be quite safe, but I'm not sure people will be. In that sense, it may even be ideal if most people don't believe it's real, but I think that to combat misuse, people will probably want to use it to protect themselves. Over time, I think people will use it the way they use ad blockers, virus scanners, and malware detectors, to filter their access to the internet, and that in itself will reshape the internet. I think it's best to talk about these things now so people can prepare, but most likely much of what I say will be ignored or otherwise dismissed, so... yeah. But like I've said, development will continue. I think AGI will be developed by multiple people, companies, and organizations, and will be everywhere in time. So why keep it a secret?

Also, as a dev, I have to say it's very surreal (and exciting) working on this stuff. But it's also very isolating, so keeping it quiet wouldn't be good for my mental health, and really, not good for others either. Quite often I catch myself doing the dishes or something, just staring into space thinking about algorithms and the technical design of it all, realizing this is real life and this stuff is happening, and yet I'm just a human in this world and still have to do the dishes! It's absurd, really... and I only expect that feeling to intensify. A 'Futurama' future wouldn't surprise me one bit. Things are going to get weird.
