Quality of output depends a lot on how common the code is in its training data. I would guess it’d be best at something like Python, with its wealth of teaching materials and examples out there.
Interpretive pole dancing to the death? Now I’m starting to see the appeal of Star Trek
You’ve never read the book in question… Because you think it’s filled with gut feelings and anecdotes… Which you know, because of gut feelings and anecdotes…
+1 for PorkBun, I’ve never had a bad experience with it.
I was using a nice firefighter duck named Cleo, but he was underperforming so he had to be let go. Now I have Rufus:
I was afraid a mouse wouldn’t be able to do a duck’s job, but he threatened to sue so I had to give him a shot. Glad I did, he’s proven as capable as any duck I’ve known.
It really doesn’t work as a replacement for Google/docs/forums. It’s another tool in your belt, though, once you get a good feel for its limitations and use cases; I think of it more like upgraded rubber duck debugging. Bad for getting specific information and fixes, but great for getting new perspectives and/or directions to research further.
I don’t know about others’ experiences, but I’ve been completely stuck on problems I only figured out how to solve with ChatGPT. It’s very forgiving when I don’t know the name of something I’m trying to do or don’t know how to phrase it well, so even if the actual answer is wrong, it gives me somewhere to start and clues me in to the terminology to use.
Error 404: Costume Not Found is a classic.