Franck_Dernoncourt
Franck_Dernoncourt t1_jclpll3 wrote
Thanks for sharing! How does it compare against other models (e.g., Alpaca or GPT-3.5/4)?
Franck_Dernoncourt t1_jc60wue wrote
Reply to comment by f10101 in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Research done in industry, e.g., FAIR or MSR.
Franck_Dernoncourt t1_jc4tdft wrote
Reply to [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
https://crfm.stanford.edu/2023/03/13/alpaca.html:
> We emphasize that Alpaca is intended only for academic research and any commercial use is prohibited. There are three factors in this decision: First, Alpaca is based on LLaMA, which has a non-commercial license, so we necessarily inherit this decision. Second, the instruction data is based on OpenAI’s text-davinci-003, whose terms of use prohibit developing models that compete with OpenAI. Finally, we have not designed adequate safety measures, so Alpaca is not ready to be deployed for general use.
Why only academic research and not industry research? I don't see where that limitation comes from in their 3 factors.
Franck_Dernoncourt t1_jc0sacm wrote
Reply to comment by Taenk in [R] Introducing Ursa from Speechmatics | 25% improvement over Whisper by jplhughes
Pretty sure it's a commercial product only. Speechmatics has never open-sourced any of their models.
Franck_Dernoncourt t1_jc0s4fi wrote
Reply to comment by rshah4 in [R] Introducing Ursa from Speechmatics | 25% improvement over Whisper by jplhughes
No
Franck_Dernoncourt t1_j9v5wwh wrote
Why SOTA? Did they compare against GPT-3.5? The only comparison against GPT-3.5 I found in the LLaMA paper was:
> Despite the simplicity of the instruction finetuning approach used here, we reach 68.9% on MMLU. LLaMA-I (65B) outperforms on MMLU existing instruction finetuned models of moderate sizes, but are still far from the state-of-the-art, that is 77.4 for GPT code-davinci-002 on MMLU (numbers taken from Iyer et al. (2022)).
Franck_Dernoncourt t1_j8v206a wrote
Reply to [D] Compare open source LLMs by President_Xi_
For summarization: Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, Tatsunori B. Hashimoto. Benchmarking Large Language Models for News Summarization. arXiv:2301.13848.
Franck_Dernoncourt OP t1_j8ouo28 wrote
Reply to comment by Delicious-Adeptness5 in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
> If you want to use the exclusion then I would keep the policy with that date.
Got it, thanks
>You can always have multiple policies.
No thanks, my goal is minimizing expenses.
Franck_Dernoncourt OP t1_j8n6vps wrote
Reply to comment by beerbeerbeerbeerbee in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
Not my fault if policymakers are publishing stupid laws.
Franck_Dernoncourt OP t1_j8kq0yy wrote
Reply to comment by SereneDreams03 in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
Yeah, I wish we had a clear table of prices for each qualifying LTC insurance policy. This law and its implementation are very poorly thought out. Policymakers were either bribed by insurers or profoundly stupid.
Franck_Dernoncourt OP t1_j8ke7ig wrote
Reply to comment by Delicious-Adeptness5 in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
Thanks!
> then hold onto it.
So does that mean one can't change one's policy?
Franck_Dernoncourt OP t1_j8kb91p wrote
Reply to comment by SereneDreams03 in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
One can get insurance for less than 25 USD/month, e.g., https://www.reddit.com/r/Washington/comments/112ejat/whats_the_cheapest_longterm_care_insurance_that/j8kayl0
Franck_Dernoncourt OP t1_j8kb2iu wrote
Reply to comment by jenlb930 in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
Nice, thanks!
Franck_Dernoncourt OP t1_j8k6er0 wrote
Reply to comment by krisztinastar in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
Thanks, how much is it?
Franck_Dernoncourt OP t1_j8k6bzr wrote
Reply to comment by AnonymityIsForChumps in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
Got it, thanks!
Franck_Dernoncourt OP t1_j8k41s7 wrote
Reply to comment by atlantic_pacific in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
> ongoing proof that you have an active policy.
Can one change one's LTC insurance and still be tax exempt? I'm guessing so, but I couldn't find that information in the law.
Franck_Dernoncourt OP t1_j8jzfan wrote
Reply to comment by Scared_Calligrapher in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
Thanks. They are still planning to implement that tax, starting July 2023 the last time I heard, but the state legislators keep changing their minds.
Franck_Dernoncourt OP t1_j8jz9ex wrote
Reply to comment by AnonymityIsForChumps in What's the cheapest long-term care insurance that qualifies for the Washington state long-term care payroll tax exemption? by Franck_Dernoncourt
Got it, thanks. Given the high price I'm currently paying for LTC insurance, I'd like to switch ASAP to some cheap policy and just cancel it if the law change confirms one can drop it.
Franck_Dernoncourt t1_j86wojs wrote
Reply to comment by Sola_Maratha in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Why not impressive?
Franck_Dernoncourt t1_j6ydkiu wrote
> I was surprised at how much better GPT3 davinci 003 performed compared to AI21's 178B model. AI21's Jurassic 178B seems to be comparable to GPT3 davinci 001.
On which tasks?
> Of course, I didn't expect the smaller models to be on par with GPT-3
You could read Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, Tatsunori B. Hashimoto. Benchmarking Large Language Models for News Summarization. arXiv:2301.13848:
> we find instruction tuning, and not model size, is the key to the LLM’s zero-shot summarization capability
Franck_Dernoncourt t1_j4scxq2 wrote
Reply to comment by gdpoc in [D] Unlocking the Potential of ChatGPT: A Community Discussion by North-Ad6756
Wouldn't ChatGPT inaccuracies be an issue if used for education?
Franck_Dernoncourt t1_j4sbqee wrote
Reply to comment by gdpoc in [D] Unlocking the Potential of ChatGPT: A Community Discussion by North-Ad6756
For which downstream application?
Franck_Dernoncourt t1_j4sbgvu wrote
Reply to comment by Haunting-Ad-5191 in [D] Unlocking the Potential of ChatGPT: A Community Discussion by North-Ad6756
> I mean some kind of home assistant that integrates CHATGPT is obvious right?
How do you handle the fact that some answers are inaccurate?
Franck_Dernoncourt t1_jdpfgpn wrote
Reply to [D] An Instruct Version Of GPT-J Using Stanford Alpaca's Dataset by juliensalinas
Another similar project: https://github.com/databrickslabs/dolly
> This fine-tunes the GPT-J 6B model on the Alpaca dataset using a Databricks notebook. Please note that while GPT-J 6B is Apache 2.0 licensed, the Alpaca dataset is licensed under Creative Commons NonCommercial (CC BY-NC 4.0).
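For reference, a minimal sketch of what such a fine-tune could look like with the Hugging Face `transformers` Trainer. This is not the actual Dolly notebook; the dataset ID (`tatsu-lab/alpaca`), sequence length, and hyperparameters are assumptions for illustration, and a 6B full fine-tune in practice needs multi-GPU hardware with something like DeepSpeed.

```python
# Minimal sketch (NOT the Dolly notebook): fine-tune GPT-J 6B on Alpaca-style
# instruction data with Hugging Face transformers.
# Assumptions: the "tatsu-lab/alpaca" mirror with a pre-formatted "text" column,
# and enough GPU memory for a 6B-parameter causal LM.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize the instruction/response prompts (truncated for simplicity).
dataset = load_dataset("tatsu-lab/alpaca", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

# Causal-LM collator: labels are a copy of input_ids (no masked LM).
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gptj-alpaca",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=1e-5,
        fp16=True,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```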