Last updated on 06/03/2024
The world is incredibly excited about AI. We’re told that it can make our employees superhuman and solve our business problems overnight. People selling a service have a tendency to exaggerate its quality and impact. Most of us aren’t AI experts, and that makes it hard to tell what’s real and what’s snake oil. I can’t make you an AI expert in the space of an article. However, you don’t need to be one to recognize AI snake oil. Here are the tell-tale signs:
Inscrutable Approaches
The Wizard of Oz was impressive because the thing he produced was a spectacle. Once you peered behind the curtain, it was clear that he was a confidence man. Quite literally, he helped the characters believe in themselves. Similarly, if an AI offering doesn’t describe how it works, you should be skeptical. You’re looking for:
- Diagrams showing how data moves through the system
- Names of techniques for processing information
- Examples of the system working
- Demos you can control the input to
- Related academic publications from the lead engineers or scientists, preferably backed by a long-established research record
Alone in their Field
When was the last time you bought a product or service from the only provider in existence? I don’t mean “I adopted the technology on day one” purchases. Nor do I mean “They had elements that clearly differentiated them and forced the decision.” I mean “This is established and literally no one else sells it.” If no one else is doing it at all, that’s a good sign it can’t actually be done. When you see this, dig deeper. Look for academic papers describing the techniques.
Magical Thinking
Sometimes it’s not clear how a technology will deliver on its promises. The marketing copy is almost always clear about what the product will do. What I often can’t see is how their technology produces the glorious future they depict. Usually there’s some logical leap they’re asking me to make, and often that leap doesn’t hold up to scrutiny. Here are two timely examples:
ChatGPT Will Improve Office Communication & Efficiency
ChatGPT is good at producing text in a particular vein that looks reasonable. The leap we’re being asked to make is “If my workers spend no time writing emails, they’ll be more efficient.” It’s true, it does take me time to write an email. However, the time I spend writing is tiny compared to the time handling the email takes:
- Scheduling the meeting
- Waiting hours or days for it to happen
- Having the conversation
- Doing the things we said we’d do
Most applications of LLMs in this context are penny-wise and pound-foolish. Even if they write the perfect email every time (and they don’t), they can only save me a tiny fraction of the total time. Yes, technically an improvement, but far from the glorious future that they promise.
Code Co-pilots Will Make All My Developers 10x Developers
Let me tell you a secret about software development. Usually, programming is not the hard part. Some esoteric systems are painful to write, it’s true, but they aren’t the norm. You know what is hard? Building the right thing. Finding ways to deliver value to the user at every step in an agile fashion. Working with humans who might be busy, or having a bad day, and may not show up as their best selves. Co-pilots can’t do anything about the hard part of the work. They can help us with the easiest problem, and even there they aren’t flawless. This limits their utility.
Bringing It Home
When you see one of these features, you’ll likely see several in concert. While these signs aren’t proof-positive of snake oil, they should give you pause. In that pause, ask further questions. It’s uncommon that a product is all smoke and mirrors. More often, it delivers something useful, but far less than was promised.
If you need help vetting an AI product’s claims, please reach out to us.