I trialed GitHub Copilot and used ChatGPT for a bit, but recently I found myself using them less and less.
I’ve found them valuable when doing something new (at least to me), but for most of my day-to-day work they seem to have lost their luster.
They’re very useful for the boilerplate stuff and it’s somewhat rewarding to type out 3-4 letters, hit tab and wind up with half a dozen lines in a bash script or config file.
They tend to get in the way more for complicated tasks, but I have learned to use them as a psychology trick: if I have writer’s block, I just let them pump out something wrong since it’s easier to critique a blob of text than a blank page.
Yeah, I mentioned this before while talking to a friend about it. Humans are much better at editing than at coming up with stuff from scratch, so seeing the suggestion is sometimes helpful even if it’s wrong.
I just don’t trust these tools to write code as efficiently as I can, knowing they are just backed by LLMs. If I have to spend my time vetting what they spit out to ensure correctness, efficiency, security, etc, then I might as well just do it myself from the beginning. I’m sure some find these tools useful and timesaving, but they’re not for me.
I was skeptical at first, but after using phind.com it partially changed my opinion on using AI for development assistance.
It massively helps me to filter out information and leads me to the right answer.
Like the other day, I searched for how to write some LaTeX symbols and how to use the Java Stream API, and it spat out the result immediately, which saved me precious time searching the Internet.
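For example, the kind of LaTeX lookup I mean (a minimal example of my own, not Phind’s output):

```latex
\documentclass{article}
\usepackage{amssymb} % for \mathbb
\begin{document}
% Quick symbol lookups of the sort it answers instantly:
$\nabla$       % gradient
$\partial$     % partial derivative
$\mathbb{R}$   % the real numbers
$\sum_{i=1}^{n} x_i^2$
\end{document}
```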
I don’t use Copilot, though.
Is it entirely free? Could not find an answer on their website.
Yes. It’s free and doesn’t have any usage limits for now (unless you use the “Expert” model).
But I guess you’re sending your query data to Phind, so it’s not totally “free”: you’re paying with your query data.
I’ve been using GPT-4 through the phind.com web site, because it lets you include web links; phind.com pulls information from those pages and includes it in the context delivered to GPT-4. This has proved invaluable when trying to figure out new libraries: I just include a link to their documentation and start asking my specific integration/usage questions.
I’ve also been learning how to write my own Stable Diffusion implementation, and Phind.com’s context-packing functionality has been extremely helpful for explaining how the components work, how they integrate, and the aspects of the underlying papers I’m not confident I completely understand. It’s a tireless explainer that never gets bored and always responds with a chipper attitude.
Oh wow! That is really cool. I used Google Bard for a bit and liked that it included some web links, but I found the answers not as good as ChatGPT’s (especially with GPT-4). This looks like the best of both worlds.
I use ChatGPT (with GPT-4) all the time for coding. I’ve developed a feel for the maximum complexity it can handle and I break down bigger problems into smaller subtasks and ask it to write code for them (usually one function at a time, after a detailed explanation of the context in the beginning). I need to review and test everything it produces thoroughly but it’s worth it. Sometimes it helps me complete tasks that would have otherwise taken a day to complete in 1-2 hours.
I also have Copilot installed but it isn’t as useful as ChatGPT. It’s nice to get a smart completion sometimes. I’m even in the Copilot Chat beta which uses GPT-4 and I find it inferior to ChatGPT with GPT-4.
I never touch GPT-3.5 anymore. It hallucinates too much and the quality of the output is very unpredictable. I guess most people who say AI is useless for coding haven’t tried GPT-4 yet.
Oh, and something else. In my experience, the quality of the output depends a LOT on the prompt. If you give a clear, detailed description of the problem, and try to express your intent instead of the specifics of the implementation, it usually performs very well. I’ve seen some people at work write the worst, sloppiest prompts and then complain how useless ChatGPT was.
This is really useful info. Can you recommend a tutorial that you feel shows how to use these tools effectively alongside traditional-style coding? Or would you say it’s just a try-and-see, learn-as-you-go approach? Personally, I think your comment best demonstrates where we are right now with AI-assisted development.
Unfortunately the tutorials out there are mostly terrible. I’ve learnt it by experimenting a lot and seeing what worked for me. Some general advice:
- Subscribe to both Copilot and ChatGPT Plus and try using them as much as possible for at least a month. Some people prefer the former, others the latter, and you can’t know in advance which.
- Always use the GPT-4 model in ChatGPT, but keep in mind that there is a rate limit of 25 answers per 3 hours. So try to squeeze as many questions and as much information into each message as possible. GPT-4 is miles ahead of any other publicly available LLM, including GPT-3.5.
- Tips for ChatGPT:
- Give detailed, well-written prompts. Try to describe the problem the same way you would to a coworker.
- After describing the problem, ask ChatGPT if it needs any additional information to implement the code well. It usually asks very insightful questions.
- Answer the questions and then ask it to break down the problem into individual functions and then, in separate messages, ask it to implement them one by one.
- Remember that the context window is limited; after some time it won’t remember the beginning of the conversation, so it’s worth repeating parts of the specification later.
- Tips for Copilot:
- Write the method signature and have Copilot implement it for you (see the sketch after this list)
- Write a comment and have Copilot implement the corresponding code
- Paste code written in one language ($lang1) as a comment, write “the same logic in $lang2” in a comment below it, and Copilot will translate it from $lang1 into $lang2.
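To make the first two tips concrete, here’s roughly what they look like in Python (the functions and the “suggested” bodies are illustrative; Copilot’s actual output varies with context):

```python
# Tip 1: write the signature and docstring, let Copilot propose the body.
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Tip 2: describe the code you want in a comment, let Copilot write it.
# Parse "KEY=VALUE" lines into a dict, skipping blank lines and '#' comments.
def parse_config(text: str) -> dict[str, str]:
    config: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config
```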
Thanks! Again this is really helpful
I am an avid user of Copilot. I don’t have statistics, but I’d say it writes about 10-50% of my code. It’s not providing great ideas about what the code should do, it mostly just automates away the obvious stuff.
It works especially great if your code is well documented and everything has sensible naming. Which is a state you want to be in anyway.
On the other hand, it helps you document the code and create useful error messages, since writing verbose text is much easier with it, and it can often generate useful text from just a few characters, given the surrounding context.
I also use it as an ad-hoc documentation search when working with libraries I’m not very familiar with. What’s the argument called for turning the label text green? Just add a “green text” comment on the line and use the suggestion Copilot spits out. This works very well with popular Python libraries, which usually don’t have great type hints.
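For example, with matplotlib (just to illustrate the pattern; the actual suggestion depends on the surrounding code):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
# green text  <- a nudge like this is usually enough for Copilot to surface the right keyword:
ax.set_xlabel("time (s)", color="green")
plt.show()
```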
Another thing I find it useful for is math. Did you know that Copilot can even generate derivatives of math functions? It’s not 100% correct every time, but when I have to do some “quick maths”, like coordinate transformations or interpolating keyframes in an animation, I get the correct formula for the variables I have about 90% of the time, autocompleted in about a second.
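To give an idea of the kind of “quick maths” I mean, these are formulas Copilot typically autocompletes from the names alone (my own reconstruction; as I said, it still needs checking):

```python
import math

def rotate(x: float, y: float, angle: float) -> tuple[float, float]:
    """Rotate the point (x, y) around the origin by `angle` radians."""
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

def lerp_keyframes(t: float, t0: float, v0: float, t1: float, v1: float) -> float:
    """Linearly interpolate a value between keyframes (t0, v0) and (t1, v1)."""
    u = (t - t0) / (t1 - t0)
    return v0 + u * (v1 - v0)
```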
All in all, Copilot saves me a bunch of time and keystrokes. If you write your code in an explicit, explanatory way, it just does what’s obvious, leaving you to think and elaborate instead of typing it all out.
As for ChatGPT, it is sometimes useful to figure out what API I might need in a specific situation, or explain error messages. I don’t use it as often, especially not to generate a bunch of code I want to use in my project. That’s Copilot’s job. But ChatGPT can explain a lot of things, and you can describe your problem much broader than with Copilot.
Also, GPT-4 is much better. But at twice the price of Copilot (in my country), it doesn’t bring as much bang for the buck, at least for my kind of usage.
I was thinking that one effect Copilot-like tools will have on projects is more comments describing the code, because Copilot both works better when the code is well documented and is good at writing that documentation (descriptions and their parameters) itself.
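A sketch of what I mean, with a made-up function: in practice, typing the opening triple quotes is often enough for Copilot to draft a docstring close to this one from the signature and body alone.

```python
import time

def retry(func, attempts: int = 3, delay: float = 1.0):
    """Call `func`, retrying up to `attempts` times with `delay` seconds between tries.

    Returns the first successful result; re-raises the last exception if every attempt fails.
    """
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            # Give up only once the final attempt has failed.
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```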
I don’t use it that much for programming in the project directly, but sometimes to ask for input about ideas I have and pros/cons and follow up questions.
But just this week I used ChatGPT to help me write some git hooks I didn’t know were possible.
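For the curious, here’s a minimal sketch of the kind of hook I mean (not the exact ones from that week). A hook is just any executable at a path like .git/hooks/pre-commit, so it doesn’t have to be a shell script:

```python
#!/usr/bin/env python3
# .git/hooks/pre-commit -- block commits whose staged changes add TODO markers.
# Install with: chmod +x .git/hooks/pre-commit
import subprocess
import sys

# `git diff --cached` shows exactly what is staged for this commit.
diff = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

added_todos = [line for line in diff.splitlines()
               if line.startswith("+") and not line.startswith("+++")
               and "TODO" in line]
if added_todos:
    print("Commit blocked; these staged lines add TODOs:")
    print("\n".join(added_todos))
    sys.exit(1)  # any non-zero exit status aborts the commit
```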
I feel a similar way - I just use text generation AIs to find out new approaches or different ways to accomplish something.
My use case is primarily to list and briefly explain several pros/cons of an approach, which can provide a good starting point for further reading, or where to look in the docs.
I find ChatGPT more accessible and usable than GitHub Copilot. I did initially use Copilot when it was first released, but I found myself being interrupted by the suggestions. There were times when it was useful, but it got in the way more often than not. I’ll concede that Copilot is really good at suggesting dummy data.
With ChatGPT I tend to explore a problem and a solution - it’s more of a purposeful back-and-forth. I will often ask for example code, but even if I use it, it will most often be re-written. The key thing here is that I am stepping out of my editor to interact with ChatGPT, and that works really well for me - I’m in a different thinking state, and I find this a very productive way to work.
Doing web-dev it’s not uncommon for me to run into libraries with poor documentation or missing examples. I could spend a lot of time trying to find the official docs, read through pages, not really find what I’m looking for, go to stackoverflow, maybe find something better maybe not. Now my first step is just asking ChatGPT my question. More often than not it gives me a working example, I get an answer faster than it would take to navigate official docs, and I can immediately get answers to followup questions or ask it to modify the example to be closer to what I actually need for my application.
I still have copilot on but I find it not really useful beyond very simple things. It is a smarter autocomplete, so it’s nice. But you always need to have your brain turned on because it definitely invents things.
It’s also sometimes entertaining when it makes things up. I especially enjoy when it makes up entries in the changelog.
As for ChatGPT, I use it occasionally, mostly for tedious things I don’t want to spend time on. But I’ve definitely used it less lately. The hype has faded.
- ChatGPT-4: helps me write better code, come up with content ideas, and brainstorm.
- GitHub Copilot: speeds up my writing, since it can take the surrounding context and give suggestions. I also have the beta version with chat, but it’s only about as good as GPT-3.5, so I don’t use it.
- Stable Diffusion: I can’t draw, so I use a lot of AI images, mostly as placeholders.
- Stable Dreamfusion: tried it, didn’t get good results; I don’t have enough VRAM.
- NeRF: tried it, but haven’t used it beyond a few test runs; I’ll use it more in the future when I make a 3D game.
I don’t have Copilot and never plan to use it. I tried Tabnine back when it was early and found it broke my flow more than it helped. And I don’t have the patience to try either again… I’m too reliant on muscle memory and key bindings to have these tools get in the way.
As for ChatGPT, my company has a decently restrictive policy. We can use it, and are encouraged to, but no IP can be shared, so everything has to stay really generic. That said, I find it most useful in two areas. First, getting me unstuck: I basically ask it whether it’s possible to solve a problem, and if it spits out code, I’ve got a general idea of how to get there. In short, I skip Stack Overflow.
Second, I like it for a second perspective. For example, I recently had a task to remove “duplicate rows” from a database table. We’d used the same algorithm two or three times before. Something wasn’t sitting quite right with a reviewer of my changes for this particular case, but even after a group discussion, no one could put a finger on it.
So I asked ChatGPT how to do the same thing, and its solution was actually sound. We still kept my solution… and eventually figured out what was wrong, after the migrator failed to do its job properly. That resulted in a good, solid day of hand-fixing databases. I really should have listened to my gut when ChatGPT gave a totally different, probably more bulletproof answer. It may not always be correct, but that doesn’t mean it isn’t helpful.
I use GPT4 regularly. I find it really helps with brainstorming or thinking through a problem. The more I use it the more I learn about how it can help. Copilot is convenient sometimes but I wouldn’t be upset if I couldn’t use it anymore.
I use ChatGPT with GPT-4 as a search engine when a Google search doesn’t immediately turn up the answer I’m looking for. ChatGPT won’t necessarily give me the right answer either (though sometimes it does), but reading its answers almost always causes me to think of a better search query to plug into Google. That doesn’t sound like much but it can save a lot of time compared to stumbling around trying to figure out the right keywords.
Occasionally I ask ChatGPT to write code samples, but (though they’re way better than GPT-3.5) they still hallucinate a bit too much, e.g., inventing library functions that don’t exist or, worse, inventing plausible-sounding but wrong facts about the problem domain. For example, I recently asked it to write some sample code to work with geographic data where the coordinate system could be customized, and it invented a heuristic about coordinate system identifiers that is true most of the time but has a ton of exceptions. If I didn’t already know better, I might have tried it out, seen that it appeared to work on a simple happy-path example, and accepted it without knowing it was going to break on a bunch of real-world data.
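For context, this is roughly the kind of code I was asking for, sketched here with pyproj (my choice purely for illustration). The hallucinated part was a heuristic about coordinate system identifiers, not these basic mechanics:

```python
from pyproj import CRS, Transformer

# The coordinate system is user-supplied, e.g. an EPSG code or a WKT string.
source = CRS.from_user_input("EPSG:4326")  # WGS 84 lon/lat
target = CRS.from_user_input("EPSG:3857")  # Web Mercator

# always_xy=True forces (lon, lat) order regardless of the CRS's declared axis order.
transformer = Transformer.from_crs(source, target, always_xy=True)
x, y = transformer.transform(13.4050, 52.5200)  # Berlin
print(x, y)
```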
Every once in a while I give Copilot another shot, but so far, I’ve always ended up turning it off after realizing that I’m spending more time double-checking and fixing its code than it would have taken me to write the code by hand. To be fair, I’m usually working on backend code in a language that doesn’t have nearly as much training data as some other languages. Maybe if I were writing, say, Node.js code, it would do better.