dat_cosmo_cat
dat_cosmo_cat t1_j0f0fgz wrote
Reply to comment by fin_quant in Neural networks and machine learning for data science in business [D] by lordgriefter
what the fuck
dat_cosmo_cat t1_izyz5hj wrote
Several of our internal teams have arrived at similar conclusions when comparing AWS models to pre-trained open source models. Specifically, zero-shot CLIP and a fine-tuned ResNet (ImageNet) outperformed Rekognition on various classification tasks (both on internal data sourced from 9 e-commerce catalogs and on Google Open Images V6), and zero-shot DETIC outperforms it on image tagging. We even collaborated with a technical team at AWS to ensure these comparisons were as favorable as possible (truncating some classes from our data, combining others, etc.).
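For anyone wondering what "zero-shot CLIP" means in practice, it boils down to something like this minimal sketch (using OpenAI's open source clip package; the class names and image path are just placeholders, not our actual catalog labels):

```python
# Minimal zero-shot CLIP classification sketch.
# Uses the open source package from https://github.com/openai/CLIP;
# class names and image path are placeholders for illustration.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate labels become text prompts; no task-specific training needed.
class_names = ["sneaker", "handbag", "wristwatch"]
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

image = preprocess(Image.open("product.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(class_names[probs.argmax().item()])  # highest-scoring label
```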
dat_cosmo_cat t1_ixratyz wrote
Reply to [D] First time NeurIPS by innocentgilbertsmith
In past years there's been an app where people set up events / group meetups. Also hit up the industry hall and get invites to the after parties.
dat_cosmo_cat t1_iwteguv wrote
Reply to comment by ---AI--- in [R] The Near Future of AI is Action-Driven by hardmaru
You and I are literally saying the same things. These models have been in prod on every major software platform since BERT.
We don't even need to look at offline eval metrics anymore. If you're an actual MLE / data scientist, you likely have pipelines set up that directly measure the engagement / attributable-sales differences and report the real business impact across millions of users each time a new model is released.
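A hypothetical sketch of what that kind of readout reduces to (the file and column names are made up; real pipelines are obviously more involved):

```python
# Hypothetical sketch of an online A/B readout for a new model release:
# compare attributable sales per user between control and treatment.
# File and column names are made up for illustration.
import pandas as pd
from scipy import stats

df = pd.read_csv("ab_test_results.csv")  # one row per user: variant, sales

control = df.loc[df["variant"] == "control", "sales"]
treatment = df.loc[df["variant"] == "treatment", "sales"]

lift = treatment.mean() / control.mean() - 1.0  # relative lift over control
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test

print(f"lift: {lift:+.2%} (p = {p_value:.4f})")
```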
I work on a team that has made millions of dollars building applications on top of LLMs since 2018, so when I see the claim "LLMs finally got good this year," it's hard not to laugh. That's what I'm getting at.
Edit: did you read the article?
dat_cosmo_cat t1_iwt6yam wrote
Reply to comment by ---AI--- in [R] The Near Future of AI is Action-Driven by hardmaru
Because LLMs get incrementally better each year, not fundamentally better. The claim that they've finally become useful is a cliché within the field of ML.
dat_cosmo_cat t1_iwqnbt1 wrote
Reply to comment by Dankmemexplorer in [R] The Near Future of AI is Action-Driven by hardmaru
The ubiquity of pretrained BERT + ResNet models in commercial software applications (and the measurable lift they deliver) is proof that they've been "good enough" for years. Sometimes these articles come off a bit naive about the impact the technology has already had, and about how widely it's used beyond the specific applications most observable / accessible to the author.
dat_cosmo_cat t1_iwpav2u wrote
Reply to [R] The Near Future of AI is Action-Driven by hardmaru
> [this year], large language models (LLMs) finally got good.
Every year since 2018 it's been déjà vu with this shit.
dat_cosmo_cat t1_iuz49g9 wrote
Reply to comment by nomadiclizard in [D] DALLĀ·E to be made available as API, OpenAI to give users full ownership rights to generated images by TiredOldCrow
It's easy to read it as an ad for NFTs; we've seen so much bullshit out of that community that I don't blame anyone for getting triggered. The implication here seems different, though: it's advertising an opportunity to profit off of free use rather than scarcity.
dat_cosmo_cat t1_j2a3t0o wrote
Reply to comment by pridkett in [P]Run CLIP on your iPhone to Search Photos offline. by RingoCatKeeper
> the data stay secure in iCloud
Lmao. Dude really missed the entire point of the project.