brain_overclocked t1_je467oa wrote
Reply to comment by Neurogence in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
When the first post was made I decided to reserve any opinion on the matter. I figured patience would be rewarded with more information and shed more light on it. If what you posted is true, then patience may have been the right call.
brain_overclocked t1_je45oor wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
If that is indeed the case, that's quite unfortunate. Even in the top thread on this topic, a fair number of users are expressing anger and resentment toward the people behind the supposed signatures on the letter. The damage to reputations has already been done, and undoing it will be challenging.
brain_overclocked t1_j75pak2 wrote
Reply to comment by needadvicebadly in ChatGPT: Use of AI chatbot in Congress and court rooms raises ethical questions by mossadnik
On the one hand, that is true. But there is a burgeoning field dedicated to understanding AI biases and developing techniques to minimize them.

On the other hand, we humans are riddled with biases, some stronger than others and expressed differently in each person. With training and education it is possible to minimize our various biases, though perhaps not eliminate them, and that can take a long time.

With an AI, if a bias is detected and a means is developed to reduce it, maybe even eliminate it, the AI can be updated accordingly far more quickly. Additionally, even if we are not able to eliminate or minimize every bias in an AI, reducing the number of expressed biases below what a human exhibits may be valuable in and of itself. And if, for whatever reason, we do come across a bias in an AI that cannot be reduced to an acceptable degree for its designated task, then simply being aware of it may let us mitigate or guard against it in some other, external way.
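As a toy illustration of what "detecting a bias" can look like in practice, here's a minimal sketch that measures a demographic-parity gap in a model's decisions. The predictions, group labels, and threshold for concern are all hypothetical stand-ins for the example, not any particular system:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates): the largest difference in positive-outcome
    rates between groups, plus the rates themselves.

    predictions: list of 0/1 model decisions (hypothetical)
    groups: list of group labels, one per prediction (hypothetical)
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: a large gap would prompt retraining, reweighting, or an external safeguard.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)           # e.g. {'A': 0.8, 'B': 0.2}
print("gap:", gap)     # large gap -> investigate before deployment
```

The point isn't this particular metric; it's that once a bias is quantified it can be monitored, and the model updated or guarded against, far faster than retraining a person.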
brain_overclocked t1_j1vxk41 wrote
Reply to comment by Accomplished_Box_907 in Can we ban AI written posts please. by katiecharm
Query and ye shall download:
>ChatGPT, please define "fad".
>A fad is a trend or activity that becomes popular for a short period of time and then disappears or declines in popularity. Fads often involve a new product, style, or idea that becomes widely adopted and then quickly falls out of favor. Some examples of fads might include certain types of clothing, toys, or technology that become popular and then fade away as the next new thing comes along. Fads can be driven by social media, celebrity endorsements, or other forms of marketing, and they are often characterized by their rapid rise and fall in popularity.
brain_overclocked t1_j1vqo0c wrote
Reply to comment by jsseven777 in Can we ban AI written posts please. by katiecharm
As you point out: these types of posts are certainly not unique to this sub. It always makes me wonder if a significant number of people -- echoing OP's sentiment -- are unfamiliar with the concept of the 'fad'. From Wikipedia:
>A fad or trend is any form of collective behavior that develops within a culture, a generation or social group in which a group of people enthusiastically follow an impulse for a short period.
>...
>Similar to habits or customs but less durable, fads often result from an activity or behavior being perceived as emotionally popular or exciting within a peer group, or being deemed "cool" as often promoted by social networks. A fad is said to "catch on" when the number of people adopting it begins to increase to the point of being noteworthy. Fads often fade quickly when the perception of novelty is gone.
So many of the topics that rise so suddenly on various subs, whether it's ChatGPT, Elon Musk, or whatever else, even to the point of incredible irritation, are going to fade away soon enough. So much energy is wasted pushing back against a torrent that recedes on its own anyway.
brain_overclocked t1_j1tq74l wrote
Reply to comment by Onlymediumsteak in I created an AI to replace Fox and CNN by redditguyjustinp
Truth and objectivity have certainly been long-debated topics, at least since antiquity. But while I understand that there is still much to discuss, aren't skepticism and scientific inquiry founded on the idea that objective truth exists and is discernible?
brain_overclocked t1_j1tptxo wrote
I suppose it was only a matter of time before somebody attempted to use AI in this manner, and most likely you're not the only one trying already. It's an interesting challenge to tackle, for sure: there is much to consider regarding an AI news agency, not just on the technical side of things but especially in the areas of bias, ethics, and perhaps a few other things we haven't considered yet!
Bias will certainly be an interesting challenge -- as some of the other commenters have already brought up, it's a hard problem, and there exists the possibility that it may never be entirely eliminated. But understanding bias in AI and in training data, and how to identify and reduce it, is still a very active area of research, just as it is in journalism.
Still, with transparency of training data, public evaluation of techniques, adherence to a journalistic code of ethics, and a framework for accountability, producing an AI model capable of delivering news in a trustworthy manner may well be an attainable goal.
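On the training-data point, here's a purely illustrative sketch of one of the simplest checks you could run before training: measuring how skewed a news corpus is toward particular outlets or topics. The corpus structure and field names are assumptions made up for the example, not anything from the project above:

```python
from collections import Counter

# Hypothetical training corpus: each item tags an article with its outlet and topic.
corpus = [
    {"outlet": "OutletA", "topic": "economy"},
    {"outlet": "OutletA", "topic": "politics"},
    {"outlet": "OutletA", "topic": "politics"},
    {"outlet": "OutletB", "topic": "politics"},
    {"outlet": "OutletC", "topic": "science"},
]

def share_by(corpus, key):
    """Fraction of the corpus contributed by each value of `key`."""
    counts = Counter(item[key] for item in corpus)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

print(share_by(corpus, "outlet"))  # e.g. {'OutletA': 0.6, 'OutletB': 0.2, 'OutletC': 0.2}
print(share_by(corpus, "topic"))   # heavy skew here is a bias risk before training even starts
```

A real pipeline would have to go much further (framing, loaded language, labeling quality), but even publishing numbers like these would support the transparency mentioned above.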
If you're serious about the endeavor, then perhaps you may want to ruminate on some of these questions:
- Can you formally explain how you define and identify political bias, and how your AI model is able to minimize it?
- Can you do the same for loaded language?
- How do you prep your model's training data?
- In journalism there exists a bias termed 'false balance', where viewpoints, often opposing in nature, are presented as being more balanced than the evidence supports (e.g., the climate change consensus vs. denialism). How does your model handle or present opposing viewpoints with regard to the evidence? Is your model susceptible to false balance?
- How do you define what a 'well-researched' story looks like? How would your model present that to the user?
- A key problem in science communication is balancing the detail of a scientific concept or discovery against the comprehension of a general audience: if a topic is presented too formally or in too much detail, you risk losing the audience's interest, their ability to follow the topic, or both. If it's presented too informally, you risk miscommunicating the topic and possibly perpetuating misunderstanding (how much context is too much context? At what point does it confuse rather than clarify?). This balancing problem holds for just about every topic. How does your model present complicated ideas? How does it balance context?
- Why should people trust your AI news model?
- One way for a reader to minimize bias is to read multiple articles or sources on the same topic, preferably ones with a strong history of factual reporting, and compare the common elements between them. To help facilitate this there exist sites like AllSides, which presents several articles on the same topic from a variety of biased and least-biased news agencies, or Media Bias/Fact Check, which maintains a list of news sites with a strong history of highly factual, least-biased reporting. Given that you intend to build your model as 'the single most reliable source of news', how do you plan to guarantee that reliability?
- How do you plan to financially support your model?
- Given that clickbait, infotainment, rage, and fear are easier to sell, how can people trust that you won't tweak your model for profitability?
Having taken a peek at your FutureNewsAI, it seems it's still a fair ways from your stated goal. I would hazard it's more for entertainment than anything serious yet.

But I wish you the best of luck with the endeavor.
brain_overclocked t1_j1f37ur wrote
Reply to How individuals like you can increase the quality, utility, and purpose of the singularity subreddit by [deleted]
I would also suggest keeping this tool handy:
A Life Preserver for Staying Afloat in a Sea of Misinformation
brain_overclocked t1_j17avz4 wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
>ChatGPT is designed to predict the next set of words, or more accurately 'characters' that should come after an input. It does this 100% of the time, and does it's very best at it every single time.
That's not anywhere close to what we could call a 'sentient' AI. And we have already seen that LLMs can develop emergent properties. It does do what it's designed to do, but it also relies on some of the aforementioned emergent properties. Likewise, ChatGPT is static: it's incapable of learning from new data in real time, something that more advanced AI may be capable of doing.
>ChatGPT never attempts to predict something wrong, never refuses to answer a question, unless and excepting if it's programming tells it that it should give those generic stock answers and refuse.
Sure it does. The internal predictor that ChatGPT uses does include false answers: it ranks several candidate answers by how well they match the criteria it's designed to follow and goes with the best match, irrespective of its truthfulness or correctness. If you look at the ChatGPT page before you start a conversation, there is a warning that it can provide false or misleading answers. There are situations where ChatGPT answers from a perspective it cannot have experienced, and it can make faulty logical inferences even on really basic logic. Sometimes it answers with grammatical errors, and sometimes with garbled nonsense text.
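To make the "ranks several answers" point concrete: at each step the model turns raw scores over candidate tokens into a probability distribution and samples from it (or picks greedily); nothing in that procedure checks truth. A minimal sketch with made-up candidates and scores, not OpenAI's actual code:

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and scores after "The capital of Australia is".
candidates = ["Canberra", "Sydney", "Melbourne", "kangaroo"]
logits = [2.1, 1.9, 0.5, -3.0]  # made-up numbers

probs = softmax(logits)
choice = random.choices(candidates, weights=probs, k=1)[0]
print(list(zip(candidates, [round(p, 3) for p in probs])))
print("picked:", choice)  # a plausible-but-wrong "Sydney" can easily be sampled
```

The probabilities only encode what text is likely given the prompt and the training data, which is exactly why that warning about false or misleading answers is on the page.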
>My side of the field does have evidence, and plenty of it.
If that is the case, then present it in a thesis. Have it pass through the same rigors as all other scientific evidence. If it passes muster, then we may be one step closer to solving this puzzle. Until then it's all speculation.
>I'm taking the historical stance, that AI will continue to act as AI does right now. More advanced AI will get bigger tasks and more complicated solutions, but not fundamentally different until we're past AGI.
This is faulty reasoning, given that the technology and algorithms underlying AI have gone through, and will continue to go through, revisions, changes, updates, and discoveries. AI has advanced by leaps and bounds from the days of the Paperclip assistant to today's LLMs. Even LLMs have gone through numerous changes that have given them new properties, both coded and emergent.
>Really, the biggest question I have, beyond possibilities and theories and unknowns, is why you would assume that things will change in the future, going against historical precedent, to look more like sci-fi?
Historical precedent is change. The AIs of today look nothing like the AIs of the 1960s, 2000s, or 2010s. And they are displaying new behaviors that are currently being studied. The discussions happening in the upper echelons of software engineering and mathematics have nothing to do with sci-fi; they are about observing the newly discovered properties of the underlying algorithms in current-gen AI.
>Honestly that's the only source of information that has AI look anything like what people are worried about right now.
Informal discussions on any topic tend to be speculative; that's usually how it goes. Besides, speculating can be fun, and depending on the level of the discussion it can reveal interesting insights.
>Even for the sake of being prepared and evaluating the future, it just doesn't make sense for so many people, that are pro-AGI no less, to be worried that there's a chance that some level of complexity gives rise to possibility of a great AI betrayal.
People barely trust each other and discriminate based on looks; it's no surprise that the general population may have concerns regarding something we can barely identify with. And pro-AGI folk are no exception. Likewise, we humans often have concerns about things we don't understand, and about what the future holds. It's normal.
>I don't know, maybe I'm looking at it wrong, but it really feels like if someone told me that Tesla self driving cars might decide to kill me because the AI in it personally wants me dead. That's the level of absurdity it is for me, I just cannot fathom it.
You should read the short story Sally by Isaac Asimov. Who knows, it could happen one day. Chances are, though, that if sentient AI can develop its own internal goals, then we probably wouldn't want to put it in a car. But this does bring up a point: even though Teslas are designed not to injure or kill occupants or pedestrians, it may still happen given lapses in code or very rare edge cases, and it's in these areas that an AI's goals could manifest.
>In the end, I can say with plenty of evidence, that it is currently impossible for an AI to have internal motivations and goals.
How would you define motivations in software? How would you define goals? Internal goals? Do you have a test to determine either? Do we understand the nature of motivations and goals in biological neural networks? Does your evidence pass the rigors of the scientific method? Are you referencing a body of work, or works, that have?
I do agree that right now we don't seem to observe what we would informally refer to as 'internal goals' in AI, but we're far from being able to say it's impossible for them to arise. Just be careful with the use of words in informal and formal contexts and try not to confuse them ('theory' and 'hypothesis' being one such example).
>I can say with evidence and precedent, that in the future AI will change but will be limited to stay as perfectly obedient pieces of software.
We'll see, I guess.
brain_overclocked t1_j171tyx wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
There are multiple points in your post I would like to address, so I will change up my format:
>Why is the assumption that the AI will be capable of diverging like this, when everything we've seen so far has shown that it doesn't?
In the context of discussing the possibilities of AI, there are many assumptions and positions one can take. In formal discussions I don't think people assume it in the sense of 'an inevitable property', but rather treat it as one of many possibilities worth considering so that we're not caught unaware. In informal discussions, however, it may well be assumed to be 'an inevitable property', largely because people cannot experience what sentience without internal goals would be like, and because of the overwhelming amount of media that portrays AI sentience as developing its own internal goals over time.
What people are referring to when they talk about AI displaying internal goals is AI far more advanced than what we see today, something that can display the same level of sentience as a human being. Today's AIs may not display any internal goals, but tomorrow's AIs might; right now we don't know whether they could or couldn't.
However unfathomable it may seem, right now there is not nearly enough evidence to come to any kind of solid conclusion. We're treading untested waters here, we have no idea what hides in the deep.
>As for sentience having the emergent issue of self goals, I'd argue that it's coming from an observation of biological sentience. We have no reason to assume that synthetic sentients will act as anything but perfect servants...
Certainly we're making comparisons to biological sentience, since it's the only thing we have available to compare against at the moment, but also in part because artificial neural networks (ANNs) are modeled after biological neural networks (BNNs). Of course, we can't assume that everything we observe in BNNs will necessarily translate to ANNs. While there is as of yet no evidence that internal goals arise emergently, there is also no evidence to suggest that they can't. For the sake of discussion we could assume that AIs will act as perfect servants, but we should also consider that they may not. In practice we may want to be a bit more careful and thorough than that.
> My reasoning is that unlike the halting problem, there is an easily observable answer. If the AI does anything it isn't suppose to, it fails.
This is a lot harder than it seems. Reality is messy; it shifts and moves in unpredictable ways. Situations may arise where a goal, no matter how perfectly defined it may appear, has to be given some leeway in order to be accomplished, situations where behavior is no longer defined. An AI could 'outwit' its observers by pursuing its desired goals in those edge cases. To its outward observers it would look like it's accomplishing its goals within the desired parameters, but in truth it could be operating its own machinations camouflaged by its designed goal.
>Again my argument is based on the fact there will be multiple of these sentient AI, and creating tens, hundreds, thousands of them to monitor and overview the actions of the ones that can actually interact with reality is entirely feasible.
There are some gaps in this reasoning: without a very clear understanding of whether it's possible to create AI sentience that does not also develop internal goals, you're relying on AI agents that may already have their own internal goals to create other AI agents (which could in turn develop their own internal goals) to monitor themselves or other AI agents. If any such agent decided that it doesn't want to interfere with its own goals, or with the goals of other agents, then the whole 'hive' becomes untrustworthy. Such an agent could attempt to pump out more AI agents that are in agreement with its internal goals and overwhelm the agents needed to keep it in check.
But really, there is no evidence -- just as there is no evidence of whether AIs would act as perfect agents or not -- that sentient AI agents would, or even could, be assembled into any kind of hive mind.
brain_overclocked t1_j16r7sn wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
> why not just use it to create a separate one and test it to make sure it's still working as intended?
I'm not sure if you're aware, but you're touching upon a very well known problem in computer science called the 'halting problem', which is unsolvable:
>Rice's theorem generalizes the theorem that the halting problem is unsolvable. It states that for any non-trivial property, there is no general decision procedure that, for all programs, decides whether the partial function implemented by the input program has that property.
Even if you could create an AI with all the qualities of sentience (or sapience?), the halting problem and Rice's theorem suggest that testing for undesired properties such as '[self-]goals, desires, or outside objectives' with another program may be an impossibility.
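For intuition, here's the classic diagonalization argument behind the halting problem sketched as Python pseudocode. The `halts` oracle is hypothetical, assumed only so the contradiction can be derived; no correct general implementation of it can exist:

```python
def halts(func, arg) -> bool:
    """Hypothetical halting oracle: True iff func(arg) eventually halts.

    The classic argument: assume this exists, then derive a contradiction.
    This stub is only a placeholder; no total, always-correct version can exist.
    """
    raise NotImplementedError("assumed only for the sake of contradiction")

def paradox(func):
    """Do the opposite of whatever the oracle predicts about running func on itself."""
    if halts(func, func):
        while True:   # oracle says it halts -> loop forever
            pass
    return            # oracle says it loops -> halt immediately

# Feeding paradox to itself: whichever answer halts(paradox, paradox) gives is wrong,
# so no such oracle can exist.
```

The same structure applies if you swap `halts` for a hypothetical "does this program ever pursue self-goals" checker; that jump from halting to any non-trivial behavioral property is exactly the generalization Rice's theorem makes.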
Another thing to consider though: if you're using an AI to create a program to test for undesirable self-goals, but self-goals are an emergent property of sentience (sapience?), then can you trust the program that it provides you with to give you the power to identify and possibly interfere in those self-goals?
brain_overclocked t1_iwd9wq6 wrote
Reply to comment by Friendly_Parrot_ in Meta Introduces 'Tulip,' A Binary Serialization Protocol That Assists With Data Schematization By Addressing Protocol Reliability For AI And Machine Learning Workloads by Shelfrock77
The article that OP posted has a link to the following article, perhaps it may be more comprehensible:
Tulip: Schematizing Meta’s data platform
>* We’re sharing Tulip, a binary serialization protocol supporting schema evolution.
>* Tulip assists with data schematization by addressing protocol reliability and other issues simultaneously.
>* It replaces multiple legacy formats used in Meta’s data platform and has achieved significant performance and efficiency gains.
>There are numerous heterogeneous services, such as warehouse data storage and various real-time systems, that make up Meta’s data platform — all exchanging large amounts of data among themselves as they communicate via service APIs. As we continue to grow the number of AI- and machine learning (ML)–related workloads in our systems that leverage data for tasks such as training ML models, we’re continually working to make our data logging systems more efficient.
>Schematization of data plays an important role in a data platform at Meta’s scale. These systems are designed with the knowledge that every decision and trade-off can impact the reliability, performance, and efficiency of data processing, as well as our engineers’ developer experience.
>Making huge bets, like changing serialization formats for the entire data infrastructure, is challenging in the short term, but offers greater long-term benefits that help the platform evolve over time.
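For a general feel of what "a binary serialization protocol supporting schema evolution" means (this is a generic tag-length-value toy, not Meta's actual Tulip wire format): each field is written with a numeric tag, so an older reader can simply skip tags it doesn't recognize when a newer writer adds fields.

```python
import struct

def encode(fields: dict[int, bytes]) -> bytes:
    """Encode {tag: payload} pairs as tag(2 bytes) + length(4 bytes) + payload."""
    out = bytearray()
    for tag, payload in fields.items():
        out += struct.pack(">HI", tag, len(payload)) + payload
    return bytes(out)

def decode(buf: bytes, known_tags: set[int]) -> dict[int, bytes]:
    """Decode only tags this reader knows about; silently skip the rest.

    Skipping unknown tags is what lets old readers tolerate new schema fields.
    """
    fields, offset = {}, 0
    while offset < len(buf):
        tag, length = struct.unpack_from(">HI", buf, offset)
        offset += 6
        payload = buf[offset:offset + length]
        offset += length
        if tag in known_tags:
            fields[tag] = payload
    return fields

# A "v2" writer adds tag 3; a "v1" reader that only knows tags {1, 2} still works.
msg = encode({1: b"user_42", 2: b"click", 3: b"new-in-v2"})
print(decode(msg, known_tags={1, 2}))  # {1: b'user_42', 2: b'click'}
```

That skip-what-you-don't-know property is the basic trick that lets writers and readers upgrade their schemas independently.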
Supporting info:
brain_overclocked t1_jeb3l2f wrote
Reply to When people refer to “training” an AI, what does that actually mean? by Not-Banksy
Little late to the party, but if it helps, here are a couple of playlists made by 3Blue1Brown about neural networks and how they're trained (although the focus is on convolutional neural networks rather than transformers, much of the math is similar):
https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
https://www.youtube.com/playlist?list=PLZHQObOWTQDMp_VZelDYjka8tnXNpXhzJ
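If a bare-bones picture helps before the videos: "training" boils down to repeatedly nudging weights in the direction that reduces a loss. Here's a deliberately tiny sketch (one weight, one bias, plain gradient descent on made-up data), nothing like production-scale training:

```python
# Fit y = w*x + b to toy data by gradient descent on mean squared error.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # roughly y = 2x + 1
w, b = 0.0, 0.0
learning_rate = 0.05

for step in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y          # prediction minus target
        grad_w += 2 * error * x / len(data)
        grad_b += 2 * error / len(data)
    w -= learning_rate * grad_w          # step downhill along the gradient
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

A transformer differs in scale and structure (billions of weights, attention layers, gradients computed by backpropagation through the whole network), but the "compute the loss, follow the gradient" loop is recognizably the same.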
Here is the original paper on the Transformer architecture (although in this original paper they mention they had a hard time converging and suggest other approaches that have long since been put into practice):
https://arxiv.org/abs/1706.03762
And here is a wiki on it (would recommend following the references):
https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#Training