Sanity check
2024-04-05
I've helped a few student groups with coding projects over the past few months. For some reason, when they do bring an error to me, it's almost always because they got the code from somewhere else (probably some LLM) and can't debug what is otherwise an incredibly simple problem.
Just yesterday, an ITE group asked about uploading images from a Django HTML form to S3. The first thing I do in a debugging call is to ask the group what their code is supposed to do, then I ask them to trigger it. We then observe what actually happens. In this group's case, after they filled out the form, the image was supposed to be transmitted to their S3 bucket, but when they clicked the submit button, nothing happened. We dug in and discovered that there was no opening <form> tag in their template, only a closing </form> tag. This pretty clearly won't work. I took the group through the basics of setting up an HTML form and explained what the form action and method are. We eventually wired up their form to the view function, where we got a different error, but at least one that they could deal with on their own.
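For anyone curious, the basic wiring is small. Here's a minimal sketch, not their actual code; the template name, the "image" field, and the redirect target are placeholders I made up, and the required form markup is spelled out in the comments:

```python
# views.py -- a minimal sketch, not the group's actual code. The template
# name, the "image" field, and the redirect target are placeholders.
#
# The template needs a complete form element, roughly:
#   <form method="post" action="{% url 'upload' %}" enctype="multipart/form-data">
#     {% csrf_token %}
#     <input type="file" name="image">
#     <button type="submit">Upload</button>
#   </form>
from django.http import HttpResponseRedirect
from django.shortcuts import render


def upload(request):
    if request.method == "POST" and "image" in request.FILES:
        image = request.FILES["image"]  # an UploadedFile, ready to hand off
        # ... send the file to S3 here (see the sketch further down) ...
        return HttpResponseRedirect("/done/")
    return render(request, "upload.html")
```

The method and action on the form tag are what make the browser send a POST to that view in the first place, and the enctype is what makes the file show up in request.FILES.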
What was particularly jarring to me about that session was that the group had a lot of code dedicated to retrieving the file from the form and to uploading the file to S3. I asked them about it, and they openly told me that most of the groups in ITE rely on ChatGPT to help them code. This group in particular had used ChatGPT to generate the form code and the boto3 code, then they just grafted it onto their codebase.
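The irony is that the S3 half barely needs any code. Here's a hedged sketch of what it usually amounts to; the bucket and key names are made up, and it assumes AWS credentials are already configured in the environment:

```python
# A sketch of the S3 upload. The bucket and key are made-up names, and this
# assumes AWS credentials are already configured (environment variables,
# ~/.aws/credentials, or an instance role).
import boto3


def upload_to_s3(uploaded_file, bucket="my-example-bucket", key="uploads/example.png"):
    """Stream a Django UploadedFile straight to S3 without touching disk."""
    s3 = boto3.client("s3")
    # upload_fileobj accepts any file-like object, which includes the
    # entries of request.FILES.
    s3.upload_fileobj(uploaded_file, bucket, key)
```

From the view, that's a single call: upload_to_s3(request.FILES["image"]).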
I don't think it's a hot take to have problems with this. Sure, the ITE groups are under a lot of pressure to build their products. Since they aren't all programmers, it's understandable that a lot of them would gravitate towards AI. What's becoming clear to me now, though, is that they end up not understanding how things work at all. I learned to program before AI, and the way I learned was through incremental feedback: I would write out one line or a few lines of code, then run them to check that they did what I expected. It's a nice way to understand what every line of your code does (or is supposed to do). The ITE groups are getting none of this because they now rely on magic.
If you, the reader, are an ITE student, I ask you to learn to check your basic assumptions about your code. I am serious. If you paste in a large section of AI code and find that it doesn't work, strip it back to the very basics of what it's supposed to do and get it working at that level. If you want to upload an image to S3, check first that your HTML web page is actually sending a POST request to your controller. If you get an inscrutable error trying to load an API key into OpenAI's library, try doing OpenAI's hello world in a separate project. What you want to do is build a solid foundation of truth about your system. Once you know how your system behaves and why, you can build on it pretty easily.
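Checking that POST assumption can be as small as a throwaway view that reports what actually arrived. This is only a sketch with a made-up name, but pointing your form's action at something like it tells you immediately whether the request is even reaching the server:

```python
# A throwaway probe view (hypothetical name) for checking what the browser
# actually sends. csrf_exempt is only acceptable because this is a local
# debugging aid, not production code.
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt


@csrf_exempt
def probe(request):
    # Report only the HTTP method and the names of any uploaded files.
    # If this says GET, or the file list is empty, the problem is in the
    # form itself -- long before S3 or boto3 enter the picture.
    files = ", ".join(request.FILES.keys()) or "none"
    return HttpResponse(f"method={request.method}, files={files}")
```

Once the probe shows a POST with your file in it, you can swap the real code back in one piece at a time.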