Submitted by fintechSGNYC t3_1095os9 in MachineLearning
SaucyLoggins t1_j3wlim6 wrote
Reply to comment by Lawjarp2 in [D] Microsoft ChatGPT investment isn't about Bing but about Cortana by fintechSGNYC
Yeah, why limit it to one area. They'll probably incorporate it into Visual Studio.
SwitchOrganic t1_j3wpejh wrote
I could see GitHub Copilot getting a significant overhaul.
--algo t1_j3x812s wrote
GitHub Copilot and ChatGPT are built on the EXACT same APIs. What would be different?
SwitchOrganic t1_j3x9e5w wrote
While both are modified GPT-3 models, GitHub Copilot is designed specifically to produce code, while ChatGPT is a more general chatbot.
I could see them combining outputs, with ChatGPT generating a description/explanation while Copilot generates the code itself. ChatGPT can also parse a wider variety of inputs than Github Copilot. For example, you can ask ChatGPT "Can you find the error in this code?" while I'm pretty sure you can't ask Github Copilot that; but I haven't used Copilot since it left beta.
londons_explorer t1_j3xa6n2 wrote
> while I'm pretty sure you can't ask Github Copilot that
You can comment out the code, then write underneath:
"# Version above not working due to TypeError. Fixed version below:"
Then use Copilot completion. It will fix whatever the bug was.
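As a concrete sketch of that pattern (the buggy function here is hypothetical, and the fixed version is the kind of completion Copilot might produce):

```python
# Buggy version, commented out: concatenating str and int raises TypeError.
# def total(items):
#     return "Total: " + sum(items)

# Version above not working due to TypeError. Fixed version below:
def total(items):
    return "Total: " + str(sum(items))
```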
SwitchOrganic t1_j3xah1u wrote
Oh interesting, that's a pretty clever solution.
Thanks for sharing!
Top_Lime1820 t1_j41sr9w wrote
Also, you can ask Copilot questions. Type your question in a comment after q:, then create a new comment that starts with a: and it'll answer your question:
# q: Which are the most popular R packages for plotting?
# a:
satireplusplus t1_j3xkvn2 wrote
What ChatGPT does really well is dialog, and it's useful for programming as well. You ask it to write a bash script, but it messes up a line. You tell it line 9 didn't work and ask it to fix it, and it comes up with a fixed solution that runs. Really cool.
visarga t1_j414x9n wrote
Copilot is not instruction-tuned, so ChatGPT would understand new tasks much more easily.
GeoLyinX t1_j404nv8 wrote
No, they are not. They are two different APIs and even two distinct AI models. It's not just a different API that uses the same model differently; it's an entirely different model, with different output-layer parameters and likely different input layers as well. Both models are just originally based on GPT-3 for most of their hidden layers.
--algo t1_j40j84w wrote
We are both right and wrong. To be pedantic, it's this paper for both https://arxiv.org/abs/2203.02155 but with different training data
Hyper1on t1_j43crwx wrote
That's the InstructGPT paper, which is right for ChatGPT, but Copilot is based on Codex, which does not use RLHF.
--algo t1_j43rpre wrote
Are you sure? This implies otherwise: https://openai.com/blog/instruction-following/
But maybe it's only for the non-codex models
Hyper1on t1_j43wyf3 wrote
You can see the full details here: https://beta.openai.com/docs/model-index-for-researchers
Copilot itself is the 12B Codex model, with further refinements.
GPT-5entient t1_j4rop7m wrote
Nope. CoPilot is Codex and ChatGPT is Da Vinci.
Deeviant t1_j42d8cr wrote
Honestly, I don't need AI to write the code for me (If it can, cool, but that seems way further out), but if it could write tests for me, I'd give my left <insert_body_part> for it.
sockcman t1_j3wupbb wrote
Already a plugin for it
RandomCandor t1_j3wwj98 wrote
From my experience with its incredible coding abilities, I expect ChatGPT to explode in this area first and foremost.
Agreeable-Tomatillo2 t1_j3z2xc4 wrote
You clearly don't write any type of complex code, nor anything that deals with basic numbers. ChatGPT couldn't even tell me the correct biggest exponent of 2 in a list of 10 items lmfao
RandomCandor t1_j3z608k wrote
> ChatGPT couldn't even tell me the correct biggest exponent of 2 in a list of 10 items lmfao
You're confusing mathematics and software engineering. It's a very typical junior mistake, nothing to be embarrassed by. Once you've been doing this professionally for 3 decades like I have, you will (probably) not make that kind of dumb mistake.
visarga t1_j4157w3 wrote
Of course the code fails at first run. My code fails at first run, too. But I can iterate. If MS allows feedback from the debugger, the model could fix most of its errors.
And when you want to solve a quantitative question the best way is to ask for a Python script that would print the answer when executed.
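For instance, for a question like the "biggest power of 2 in a list of 10 items" mentioned above, you'd ask for a script along these lines (the list here is made up for illustration):

```python
# Instead of asking the model for the answer directly, ask it to write
# a script that prints the largest power of 2 found in a list.
nums = [3, 8, 10, 64, 7, 33, 4, 100, 2, 50]

def is_power_of_two(n):
    # A positive integer is a power of 2 iff it has exactly one bit set.
    return n > 0 and (n & (n - 1)) == 0

powers = [n for n in nums if is_power_of_two(n)]
print(max(powers))  # -> 64
```

Running the script gives the answer deterministically, instead of relying on the model's arithmetic.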