Hi, I recently started using ChatGPT and found it very interesting because of how easy it is to code with it. I am not sure if I am right or wrong, but I just want to know: can I start my Python journey with it?
What I do is test the code that the AI spat out, and if it is wrong I simply reply back saying it's wrong. However, I first look at the code to see if it looks right (a good way to see if you are actually learning the language), and if I spot something that looks really off, I simply reply back saying that can't be right.

When it comes to writing secure code, it's still best to get another human to look at it rather than the AI, as the AI doesn't always use the latest coding practices, or the security knowledge it needs isn't in its training data yet. You can still ask it to spit out secure code, but I would still have a human look at it. Even with regular code it sometimes goes off on a tangent, and that is why forums like these are still very valuable to use.
You might find this course on Sitepoint Premium of help.
ChatGPT is getting much attention, but there are other AI tools too, though I am not familiar with any of them at the coding level. IBM was among the first to explore AI. Are you familiar with IBM's Watson? Watson was able to out-perform two of the top Jeopardy champions. (Jeopardy is a TV game show.)
If you are developing for commercial purposes then there are alternatives.
Starting your Python journey with ChatGPT is a great idea. Python is commonly recommended for those who are new to coding. :) Good luck!
I frankly worry for programmers in the world of ChatGPT and other AIs.
And if you’ve relied on ChatGPT to generate the code in the first place, you know it looks right because…
Right. Here's the point I'm trying to get to, and which Pepster is alluding to throughout his post: using an AI to generate code is fast, but it cannot replace knowledge of what you're doing. The tool is there to be used, but you have to use it well. If you throw a coding assignment at an AI, take what it gives you verbatim, and spit it out…
- you’ve learned nothing.
- you could be spitting out bad code.
- you could be spitting out someone else’s code.
- AIs will lie about the veracity of their results. Ask these guys. (https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt)
If you let someone else create your code (it doesn't matter whether that's an AI or a cheap hired programmer, as many companies do), you need to
- be able to understand the code, to spot the probable issues in it
- have someone write test functions for the code that find probable issues
- test all possible combinations of using this code and check the results

In the end you spend more time validating your code than you saved by not writing it.
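To illustrate the test-function point with a tiny, made-up example (the `average` helper here is hypothetical, not from any real AI output): generated code can look perfectly fine and still hide an edge case that a couple of small tests expose immediately.

```python
# A helper as an AI might plausibly generate it: looks correct at a glance.
def average(numbers):
    return sum(numbers) / len(numbers)  # crashes on an empty list

# Simple test functions are what surface the probable issues.
def test_average_basic():
    assert average([2, 4, 6]) == 4

def test_average_empty():
    # An empty list divides by zero; a human reviewer has to decide
    # whether raising here is acceptable or the function needs fixing.
    try:
        average([])
    except ZeroDivisionError:
        return
    raise AssertionError("expected ZeroDivisionError on empty input")

test_average_basic()
test_average_empty()
```

Writing the tests forces you to understand what the code should do, which is exactly the knowledge the AI cannot replace.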
If by “start” you mean to get the ball rolling with actually learning how to write code yourself, then possibly, though I could not say, as I don't have experience of learning that way.
If by “start” you mean to take a short-cut by missing out the process of learning how to code and using a machine to write your code for you, then I suspect you will be creating very poor code.
Also as we have seen many times here in the forums, taking short-cuts by refusing to learn, actually takes longer, a lot longer, to yield results, like infinity longer.
Mentioning no names.
I have used ChatGPT to do some coding, with very mixed success.
I have found it helpful for creating simple scripts. For applications it can be a useful starting point, but for several uses I tried it with, it made multiple mistakes; it would apologise and give a fixed script which usually needed further work.
I’m not sure it’s much use if you know nothing or little about the programming language you’re asking it to use, and you’re still going to need to thoroughly test the script.
I’m not sure it would be much use to teach programming to a beginner.
I very much have my reservations about using ChatGPT as a beginner's tool. I certainly wouldn't rely on it as a sole source of tutoring.
I think if it's used in combination with the more traditional routes (courses, books, forums, etc.) then it may be of more value.
I think using it as an aid is where it comes into its own. The key thing, though, is that you need to have knowledge to get the most out of it.
In the middle of that nice bit of generated code will be a deprecated function or a flawed bit of logic. That's where you have to give it a bit of a nudge in the right direction.
Certainly worth looking into though, just don’t rely too heavily on it and do your own research.
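As one concrete example of the kind of nudge that's needed (my own example, not from the thread): generated Python code still often reaches for `datetime.utcnow()`, which is deprecated as of Python 3.12 and also returns a naive datetime, when the timezone-aware call is what you usually want.

```python
from datetime import datetime, timezone

# What generated code often produces: deprecated since Python 3.12,
# and the result carries no timezone information at all.
naive = datetime.utcnow()

# The nudge: ask for the timezone-aware replacement instead.
aware = datetime.now(timezone.utc)

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC
```

Both lines run, so nothing "looks wrong" in the output; only someone who knows the language notices the difference and pushes back.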
I have tried to use GPT-3 to generate a program in Rust and a shell script. The Rust program it generated used libraries/concepts incompatible with WASI, even though I specifically instructed the AI to generate a Rust program for WASI. The second time, I tried to instruct the AI to generate a shell script to update package.json files. The bash scripts it generated over several rounds of refinement never worked. Others claim entire programs can be written using GPT-3, but I have yet to see it generate anything that works for my use cases.

With that said, I have found it useful for specific questions, especially questions about libraries and frameworks. GPT-3 is far more effective at answering SPECIFIC questions about languages, libraries, and frameworks than it is at generating complete programs or scripts.
Here is an example of a question, from a long conversation, that generated completely incompatible code:
Create me a wasmcloud actor that implements HttpServer serving static files inside the static directory embedded directly into the wasi via objectc using the rust programming language without using standard library, wasi, and wasmtime.
My first experience using GPT-3 to generate a complete program.
The other conversation I’ve had with GPT-3 to generate code began with the following question.
How to automate update of Angular libraries package.json files dependencies and peer dependency versions to match project package.json.
After a series of responses to its incorrect answers, I believe the AI got pretty close. However, nothing it generated worked out of the box. The last script generated in the conversation cleared all the dependencies.
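For comparison, the core of that task is small enough to sketch by hand. This is my own minimal Python take on it, not the AI's output; the field names come from the standard package.json schema, and the sync rule (library versions are overwritten by whatever the project pins) is an assumption about the intended behaviour.

```python
import json

def sync_versions(project: dict, library: dict) -> dict:
    """Overwrite the library's dependency and peerDependency versions with
    the versions pinned in the project's package.json; entries the project
    doesn't pin are left untouched."""
    project_deps = {**project.get("devDependencies", {}),
                    **project.get("dependencies", {})}
    for field in ("dependencies", "peerDependencies"):
        for name in library.get(field, {}):
            if name in project_deps:
                library[field][name] = project_deps[name]
    return library

# Hypothetical parsed package.json contents for a project and one library.
project = {"dependencies": {"@angular/core": "^16.2.0", "rxjs": "~7.8.0"}}
library = {"peerDependencies": {"@angular/core": "^15.0.0"},
           "dependencies": {"rxjs": "~7.5.0"}}

print(json.dumps(sync_versions(project, library), indent=2))
```

Crucially, nothing here can "clear all the dependencies": versions are only ever reassigned, never deleted, which is exactly the kind of invariant you have to check for yourself in a generated script.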
GPT-3 can generate code. However, generating anything meaningful requires the input to be specific. I don't see how anyone without some level of programming knowledge would be able to instruct the AI to generate meaningful, working code. Syntax questions, sure, but asking the AI to generate a functional program is not a trivial task. Generating complete programs requires domain knowledge combined with specific programmatic instructions.
Perhaps GPT-4 will be more effective.
I have nothing against using AI to generate code. However, I do feel that doing so requires a combination of project-manager and technical skills. You need to speak to the AI not just like a project manager, but like a project manager capable of writing the program themselves.