jugalator
jugalator t1_jeb10ef wrote
Reply to Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
If this were to come true, it would only punish the public as governments around the world would of course write their own regulations allowing their AI arms race to proceed. The AI cat is out of the bag.
jugalator t1_jd427y8 wrote
I guess it's a pretty good version of DALL-E 2, but the generated images still look like they share the DALL-E DNA to me. I think it's far behind Midjourney V5 and maybe even V4. It succeeded pretty well at my five fingers test though.
Given the recent trend with Bing Chat and GPT-4, I'm a little surprised they broke their streak by not underpromising and overdelivering here. Enthusiasts probably won't turn to this one and it'll be more of a gimmick for now.
jugalator t1_jcxw5k8 wrote
Reply to comment by S3ndD1ckP1cs in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
> Teach the concepts, then teach a better, more efficient way of doing things.
You just stated why they don't want them in elementary school.
Also, it's hardly punishment to teach young kids why they arrive at certain answers. The calculator skips the steps, so you're doing them a disservice in the long run by not "punishing" them. It's going to be a much harsher punishment to struggle with understanding later on, because math is really unforgiving about that. The greatest problem in math is generally that students can't follow along because of gaps in understanding from preceding courses.
jugalator t1_jcfcy6v wrote
Reply to comment by Veleric in Can you use GPT-4 to make money automatically? by Scarlet_pot2
Yes, I'm just now trying Whisper out. Yesterday evening I was writing a little tool to use its speech transcription, feed the transcribed text to ChatGPT API and then retrieve the response and have it spoken back to me via Microsoft Azure Neural Voices.
I haven't quite finished it yet, but I got most of the way there in a few hours. It'll feel funny to leapfrog Google Home, Alexa and Siri like this in a rare opportunity lol.
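A minimal sketch of that loop, purely for illustration: the function and parameter names here are my own invention, and the three stages are injected as plain callables standing in for the real SDK calls (Whisper transcription, the ChatGPT API, and Azure Neural Voices TTS), so the glue logic can be shown without API keys.

```python
# Hypothetical glue for the voice-assistant loop described above.
# Each stage is passed in as a callable, so the orchestration can be
# exercised with stubs instead of real network clients.

def assistant_turn(audio_bytes, transcribe, chat, speak):
    """One round trip: audio in -> transcription -> chat reply -> speech out."""
    user_text = transcribe(audio_bytes)   # e.g. Whisper speech-to-text
    reply_text = chat(user_text)          # e.g. ChatGPT API completion
    speak(reply_text)                     # e.g. Azure Neural Voices TTS
    return user_text, reply_text

# Stubbed usage, standing in for the real API clients:
spoken = []
user, reply = assistant_turn(
    b"<wav data>",
    transcribe=lambda audio: "what's the weather like?",
    chat=lambda text: "You asked: " + text,
    speak=spoken.append,
)
print(user)    # what's the weather like?
print(reply)   # You asked: what's the weather like?
```

In a real version each callable would wrap the corresponding API client; the stubs just make the data flow visible.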
It's easy to make though, so there's no money in it. It's already been done anyway (there's a Siri shortcut for ChatGPT too), so it's just hacking for fun.
It's pretty wild that an amateur can make this, though, while none of these big billion-dollar companies have a product that does it.
Very strange feeling and moment in science.
jugalator t1_jcbczxl wrote
Reply to comment by Nanaki_TV in GPT4 makes functional Flappy Bird AND an AI that learns how to play it. by gantork
That's eerie: they think this is worth trying now, that it may be within reach. And they aren't redditors falling for the hype either, but experts in their field.
jugalator t1_jacupt4 wrote
Reply to the town of "Fucking, Austria" shows up as OwO in apple weather on iOS 16.3.1. how does this even happen??? by squabbledMC
lmao, it's infamously a real town but this looks like an Easter egg someone sneaked in
Was more common back in the day, now it's more frowned upon. :p
jugalator t1_ja7zdc4 wrote
Reply to Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
And last month it was revealed the Chinese government wished to secure an even stronger influence on Tencent operations: https://www.ft.com/content/65e60815-c5a0-4c4a-bcec-4af0f76462de
jugalator t1_j9xznxx wrote
AI of today is already useful, not as an "answer machine" (which is unfortunate, because that's the impression both Microsoft and the AI itself give, and it'll mislead lots of people using Bing AI now) but as a very powerful guidance tool.
It may not write software for me, but what it can do is give me large chunks of almost-correct code that I just need to do some quality assurance on. So I don't need to do as much problem solving myself and can focus on bug fixing instead. Guess which part of software development consumes more time?
This is just one example.
We're also looking at it from other angles in my company. Midjourney is making professional logotypes for our internal and external projects, we're looking into using AI for remote sensing science etc etc.
So, I think criticism like this often boils down to a worldview that's too simple, with no greys, only blacks and whites: if AI can't solve it all, it's useless.
It's like in politics when people only look for the simple solutions and quick fixes. We have plenty of parties directly courting these folks because it's well known they are there. They just don't know they're being exploited; politicians play them like fiddles, presenting quick fixes in time for elections.
AI won't do a single thing that makes a company go "Welp, that's that. Now we can sit on our asses and cash in!" Instead it's about identifying the places where it can aid your processes.
Taken together, yes, at a large company and depending on the kind of business, the time savings may well earn you $1 million in a year. Salaries aren't cheap in engineering, for example.
jugalator t1_j9vgkvo wrote
Reply to Massive 'forbidden planet' orbits a strangely tiny star only 4 times its size. by Rifletree
Maybe it didn't even form in that star system, so it doesn't have to comply with formation theories; it could just be a captured rogue planet.
jugalator t1_j9jadh0 wrote
Reply to What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
I think there is still a ton to learn about the usefulness of the training data itself, and about what an optimal "fit" for an LLM looks like. Right now, the big LLMs simply have the kitchen sink thrown at them. Who's to say that will automatically outperform a leaner, high-quality data set? And again, "high quality" for us may be different for an AI.
jugalator t1_iy339dx wrote
Reply to comment by gbersac in Google Has a Secret Project That Is Using AI to Write and Fix Code by nick7566
Yeah sure, my argument isn't that it will be poor at creativity. It's already great at that. But whether it can match fluctuating client specs, which depend on the business situation and on which boss they just hired and his or her vision, and work within their lifecycle policy, is still unproven, and this can introduce a ton of human, illogical factors.
Or if you don’t work as a consultant like me and maybe write iOS games, the tricky bits instead turn into market analysis and understanding what your gamers want.
The act of programming is sort of the easiest problem in software development, lol
But yeah, if that's all you do and you're commanded by someone "higher up" in one-way communication from the top, those jobs are probably most at risk?
My experience, however, is that this is often only a part of our jobs. I transitioned out of that role alone within my first three years or something.
jugalator t1_iy0ajhd wrote
Reply to comment by Noname_FTW in Google Has a Secret Project That Is Using AI to Write and Fix Code by nick7566
Yes, I'm not that convinced of an imminent "end" to human software development. Sure, programming may become less manual, but I think software architecture/design will remain manual for the foreseeable future.
I can compare it to me already getting an awesome oil painting out of Midjourney. It feels like anything is possible with a ton of power at my fingertips and the text prompt I give it.
BUT! That's not helpful at all for matching a client specification. Let's say a new tool is supposed to integrate with the output files of a financial program that was made obsolete a few years ago but still has a decade before being phased out, so they need something to bridge it. This is a quite normal scenario where I work.
An AI won't help you there, just like Midjourney won't help me create a drawing that matches a client spec to the letter. It'll create something, sure, but it's only going to impress under the assumption that there is no clearly defined spec and it has a ton of leeway in what it creates. If it can handwave something out for you, and that's all you ask of it, then sure, it's a great tool. If not, it's awful. I can tell Midjourney to recreate the Mona Lisa, but only because it's been trained on that popular painting specifically. Instead try to give it instructions to recreate her without her name and you're facing hell, even if Midjourney is fantastic at painting.
So, I think these jobs will involve a ton of guidance, and sure, jobs will disappear. Not the field of software development involving humans, though. A current programmer who keeps reasonably on top of things will probably transition naturally into similar roles, maybe just on a slightly higher level. But you can rest assured that not just any guy will start whipping together custom AI-guided Python apps anytime soon, even as AI guidance exists. You'll still need to know Python to deal with the AI quirks left behind and to fill in the gaps, to begin with. Packaging, distribution, client contacts and bug reports, updates, dreadful long meetings etc etc. The entire lifecycle is still there.
jugalator t1_is04b9z wrote
Reply to "New antibiotic hiding in diseased potatoes thwarts fungal infections in plants and humans" by tonymmorley
Now watch us overexploit this one so that we breed resistant fungi and lose our potatoes too, except for a specific bioengineered kind from Nestlé...
jugalator t1_jeh12mh wrote
Reply to When do you guys think chatgpt 5 is gonna come out ? by Klaud-Boi
GPT-3 was released three years ago and it took another three years to get GPT-4, so maybe yet another three years. It feels like advancements have come mere months apart, but that's not true. They just happened to launch the ChatGPT site with its conversation tuning soon before GPT-4, but GPT-3 is not "new".
I don't expect some sort of exponential speed here. They're already running into hardware roadblocks with GPT-4 and currently probably have their hands full trying to pull off a GPT-4 Turbo, since that's a quite desperate situation. And as for exponentials, resource demands look like they increase exponentially too...
Then there's the political situation as AI awareness takes hold. For any progress there need to be very real financial motives (preferably without overly high running costs) and low political risks. Is that what the horizon looks like today?
Also, there's the question of when diminishing returns hit LLMs of this kind. If we're looking at another 10x in cost for a 20% improvement, it's probably not going to be deemed justified; instead the field will try to innovate in exactly how much you can do at a given parameter size. The Stanford dudes kind of opened some eyes there.
My guess is that the next major advancement will be roughly GPT-4-sized.