OpenAI spends about $700,000 a day just to keep ChatGPT going. That cost does not include other AI products like GPT-4 and DALL-E 2. Right now, it is pulling through only because of Microsoft's $10 billion in funding.
I don’t think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.
What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also readily tell you how to murder someone and get away with it, outline a plausible-sounding weapon of mass destruction, coerce you into weird relationships, and do basically anything else it wasn’t supposed to do.
I’ve noticed it has become worse at rubber-ducking non-trivial coding prompts. I’ve also noticed that my juniors have a hell of a time functioning without access to it, and they’d rather ask seniors questions than try to find information or solutions themselves, essentially using senior devs as a replacement for the chatbot.
A good tool for getting people on-ramped if they’ve never coded before, and maybe for rubber ducking, in my experience. But far too volatile for consistent work, especially with a black box of a company constantly hampering its outputs.
As a Sr. Dev, I’m always floored by stories of people trying to integrate ChatGPT into their development workflow.
It’s not a truth machine. It has no conception of correctness. It’s designed to produce responses that look correct.
Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?
ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.
Search engines aren’t truth machines either. StackOverflow reputation isn’t a truth machine either. These are all tools, and blind trust in any of them is a mistake. I get your point, I really do, but it’s just as foolish as believing that everyone using StackOverflow copies and pastes the top-rated answer into their code, commits it without testing, and calls it a day. Part of mentoring junior devs is enabling them to be good problem solvers, not just solving their problems. Showing them how to properly use these tools and how to validate the results is what you should be doing, not just handing them a solution.
I agree with everything you just said, but I think that without greater context it’s maybe still unclear to some why I place ChatGPT in a league of its own.
I guess I’m maybe some kind of relic from a bygone era, because tbh I just can’t relate to the “I copied and pasted this from Stack Overflow and it just worked” memes. Maybe I underestimate how many people in the industry work that fundamentally differently from us.
Google is not for obtaining code snippets. It’s for finding docs, for troubleshooting error messages, etc.
If you have, like… design or patterning questions, bring them to the team. We’ll run through it together with the benefit of contextual knowledge of our problem domain, internal code references, and our deployment architecture. We’ll all come out of the conversation smarter, and we’re less likely to end up needing to make avoidable pivots later on.
The additional time required to validate a piece of ChatGPT-generated code could instead be invested in the dev, so they do it right and fit it properly within our context the first time. The dev will be smarter for it, and that investment will pay out every moment forward.
I guess I see your point. I haven’t asked ChatGPT to generate code and tried to use it except once, ages ago, and even then I didn’t really check it; it was a niche piece of software without many examples online.
Don’t underestimate C-levels who read a Bloomberg article about AI, try to run their entire company off of it… then wonder why everything is on fire.
Not me, but my boss would… wait a minute…
Honestly, once ChatGPT started giving answers that consistently didn’t work, I just started googling stuff again, because it was quicker and easier than getting the AI to regurgitate Stack Overflow answers.
Copilot is pretty amazing for day-to-day coding, although I wonder if a junior dev might get led astray by some of its bad ideas, or become too dependent on it in general.
Edit: shit, maybe I’m too dependent on it.
I’m also having a good time with Copilot.
I’m considering asking my company to pay for the subscription, as I can justify that it’s worth it.
Yes, it’s often wrong, but even if it’s only 80% correct, at least I get a suggestion for how to solve an issue. Many times it suggests a function and the code snippet has something missing, but I can easily fix or improve it. Without it, I probably wouldn’t know about that function at all.
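To make that concrete, here’s a hypothetical example of the pattern (written by hand, not actual Copilot output): a suggested snippet introduces a function I didn’t know about, `urlparse`, but misses the import and an edge case, both easy to fix once I see the idea.

```python
# Hypothetical illustration of an ~80%-correct suggestion.
# The suggested snippet used urlparse() but omitted the import and
# broke on scheme-less inputs like "example.com/path".

from urllib.parse import urlparse  # missing from the original suggestion

def get_hostname(url: str) -> str:
    """Return the hostname portion of a URL."""
    # urlparse() puts "example.com/path" entirely in .path unless the
    # input has a scheme or a leading "//", so normalize first.
    if "//" not in url:
        url = "//" + url
    return urlparse(url).hostname or ""

print(get_hostname("https://example.com/path"))  # example.com
print(get_hostname("example.com/path"))          # example.com
```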
I also want to start using it for documentation and unit tests. I think that’s where it will really be useful.
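On the unit-test side, here’s a sketch of the kind of pytest cases I’d hope to get generated for the hypothetical `get_hostname()` helper above; the module name is an assumption.

```python
# Sketch only: the sort of parametrized test I'd ask it to draft.
import pytest

from url_utils import get_hostname  # assumed module name

@pytest.mark.parametrize("url, expected", [
    ("https://example.com/path", "example.com"),
    ("example.com/path", "example.com"),
    ("http://sub.example.com:8080", "sub.example.com"),
    ("", ""),  # degenerate input should not raise
])
def test_get_hostname(url, expected):
    assert get_hostname(url) == expected
```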
Btw, if you aren’t in the chat beta, I really recommend it.
Just started using it for documentation, and I’m really impressed so far. It produced better docstrings for my functions than I ever do, in a fraction of the time. So far they’re all valid, thorough, and on point. I’m looking forward to asking it to help write unit tests.
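For flavor, this is the style of docstring I mean; the example is hand-written as an illustration, not pasted from the model:

```python
def normalize_scores(scores: list[float]) -> list[float]:
    """Scale scores into the range [0.0, 1.0] via min-max normalization.

    Args:
        scores: Raw numeric scores; may be empty.

    Returns:
        The scores linearly rescaled so the minimum maps to 0.0 and the
        maximum to 1.0. If the list is empty or all values are equal,
        returns zeros of the same length to avoid dividing by zero.
    """
    if not scores:
        return []
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]
```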