Last updated on 11/03/2024
I use Generative AI tools because, as a developer, I can’t ignore them. My clients and peers expect me to be at least conversant, if not an expert, in AI technologies. Maintaining that expertise means engaging with new tools as they come out. I’ve made a point to use them where I could for the last six months. Here’s how I’ve used them and what I’ve learned.
Generative AI & Professional Writing
The Blank Page Problem
I would never publish unaltered machine-generated text. First, there are too many IP concerns: I don't know exactly what these models were trained on. That has implications both for the ownership of the output and for whether I'd even agree with the points in the text.
I don’t love the way ChatGPT and similar tools write. The text they emit makes me angry. That’s great though, because as soon as I see something there, I know what’s wrong with it. I know what I want to change, and what I would write instead.
As such, while I struggle to get started with a blank page, I don’t need to anymore. I can ask a tool to present some text for me to start from. I always end up throwing it all away, but it gets me started, and that’s half of doing anything.
Asking for Summaries of My Own Work
As much as I pride myself on the clarity and directness of my writing, I’ve been told perhaps I shouldn’t. Something I like to do after I’ve written an article is ask a generative tool for a summary of it. If I agree with the summary produced, that’s good. If the AI thinks the point of my article is something different, that’s cause for concern. This often leads me to give more prominence to ideas I thought were central to my article, but the AI didn’t.
Office Use Cases
Email and Slack
I will never use generative tools to respond to a colleague or client. I feel like it violates the social contract. There’s an expectation that when I send an email, I wrote it.
Where I do use generated emails is in dealing with salespeople. I don’t have form letters for every interaction I have. Usually I need to convey that I’m curious about their product, but not in a position to purchase it myself. I’ll ask an AI tool to help me generate text for a situation like that, but again, only when someone cold-emails me.
Summarizing & Question Answering
I read a lot for work. Articles and white papers and books and technical manuals. What I’m reading depends on my workload, but it’s always quite a bit of reading. Generative AI tools can provide a rough summary of a document. They can also do some rudimentary question answering from a document.
Now, I never take these things as gospel. GenAI gets things wrong more often than is acceptable. Remember, it’s meant to generate plausible text. That’s different from either correct or verbatim text. Even so, it gives me an inkling of what’s likely to be true, which is extremely useful for searching the document for the specific information I need.
My Love/Hate Relationship With Code Gen
No article on generative AI is going to be complete without touching on its ability to write source code. So, a few things:
- The bottleneck to software development has never been typing
- Bad code is way more expensive than good code
- Nearly-correct code is harder to detect than clearly wrong code
In my personal experience (with GitHub’s Copilot), code gen tools are fairly capable. Still, when asked questions, they’ll generate material falsehoods often enough that it’s noticeable. For example, I got into an argument with Copilot about switch statements in Python 3. It was convinced they didn’t exist; I knew better.
Worse, they don’t always produce correct code. Now, no programmer writes perfect code, but we tend to fail loudly: we do the wrong thing, we write stuff that doesn’t compile. Copilot made things that looked right, but had subtle bugs. As any programmer will tell you, subtle bugs are hard to catch.
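To illustrate the kind of "looks right, is wrong" code I mean, here is a classic Python pitfall that passes a casual review; the functions are hypothetical, chosen only because the bug is subtle rather than loud:

```python
# Looks reasonable, but the default list is created ONCE, at function
# definition time, and then shared across every call -- state leaks.
def append_item_buggy(item, items=[]):
    items.append(item)
    return items

# The conventional fix: use None as a sentinel and build a fresh list.
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_buggy("a"))  # ['a']
print(append_item_buggy("b"))  # ['a', 'b']  <- surprise: 'a' is still there
print(append_item_fixed("a"))  # ['a']
print(append_item_fixed("b"))  # ['b']
```

Both versions compile, both return plausible values, and only the second is correct, which is exactly why this class of bug slips through.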
Test case generation is another area where the outputs look good, but aren’t. The test cases I’ve had the tool generate don’t test interesting things, and they don’t provide meaningful coverage. This may sound odd, but I’d rather have no tests than lousy tests. No tests indicates a problem in a clear way. Bad tests can hide a problem until it becomes an emergency.
My General Stance on Generative AI
I find generative AI techniques somewhat useful. This might be surprising, given my negative bent above. The thing is, I find the enthusiasm for these tools has far outpaced their utility. The only way I can reconcile people’s claims of time saved with my experience is to assume others blindly accept LLM output.
The tools that exist today are helpful. However, they have to be used judiciously. That means we have to inspect the output of generative techniques with a critical eye. How much diligence is due depends on the use-case, but there’s always some. No one wants to commit bad code, make false statements, or be an also-ran in a business context. If you’re not careful, that’s exactly where generative AI will take you.
On the flip side, no one wants to be a Luddite. The technology that exists is helpful when used responsibly. If you’d like to talk further about the responsible use of AI, generative or otherwise, please reach out to us.