These days, one of the questions we hear most often is: “Do you use AI?”
The short answer is yes.
The longer, more nuanced answer is also yes — but probably not in the way you might expect.
So what does that mean?
The average person seems more than willing to hand all kinds of tasks over to AI, trusting it to make quick work of a daunting to-do list. For many, AI feels almost magical: an expert in every field, instantly producing complex breakdowns and polished output that make the user look like a genius.
But the reality is more complicated.
AI is very good at confidently repeating questionable information, with no real understanding of whether what it is saying is correct. Large language models are the clearest example of the idea that if something is said confidently enough, people will believe it. This becomes obvious very quickly when you ask AI about something you actually know well — in our case, building web systems.
AI is excellent at chewing through tedious, repetitive, or well-defined work. But when it is allowed to make decisions on its own, problems start to appear. We think of an LLM as a naive but eager intern: enthusiastic, fast, and highly motivated to impress, but not always aware of what it does not know. And that matters, because someone who does not know their own limits can go very far down the wrong path very quickly.
Incorrect research, flawed assumptions, and logic implemented without understanding can easily paint a team into a corner. Backtracking — or worse, starting over after heading too far in the wrong direction — helps no one. Our view is simple: it is better to avoid even the smallest unnecessary risk of that happening in the first place.
Does that mean AI is all bad? Of course not.
As we mentioned earlier, LLMs are incredibly useful when applied to well-defined tasks with clear guardrails. Just as you would give a human intern thoughtful instructions, context, and checks for success, we use AI the same way. We provide detailed prompts that explain not only what to do, but what tools to use and how to verify that the task was completed correctly.
Used properly, LLMs can be a major time-saver, and we are glad to have access to that technology.
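To make the idea of guardrails concrete, here is a minimal sketch of handing an LLM a well-defined task together with human-defined checks for success. Every name in it is hypothetical: `llm()` is a canned stand-in rather than a real model call, and the verification rules are illustrative, not our actual tooling.

```python
import json

def llm(prompt: str) -> str:
    # Placeholder for a real model call; returns canned output for the sketch.
    return '{"slug": "summer-sale", "title": "Summer Sale"}'

def verify_page(result: dict) -> list:
    """Human-defined checks for success, spelled out before the task runs."""
    problems = []
    if not result.get("slug", "").islower():
        problems.append("slug must be lowercase")
    if not result.get("title"):
        problems.append("title is required")
    return problems

def run_with_guardrails(prompt: str, verify) -> dict:
    """Run the task, then refuse to accept output that fails verification."""
    result = json.loads(llm(prompt))   # output must at least be valid JSON
    problems = verify(result)
    if problems:
        raise ValueError(f"output rejected: {problems}")
    return result

page = run_with_guardrails(
    "Generate a landing-page slug and title for the summer sale.",
    verify_page,
)
```

The point of the shape is that acceptance criteria are written by a human before the model ever runs, and output that fails them is rejected rather than quietly trusted.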
Three Simple Principles
Our use of AI is grounded in three simple principles:
- Do not ask an LLM to do something you cannot do yourself.
- Do not blindly trust the output of an LLM.
- Do not give an LLM data that could be harmful if it became public.
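The third principle lends itself to automation. As a hypothetical sketch (the patterns here are illustrative, not an exhaustive secret scanner), outgoing prompt text can be screened for secret-shaped strings before it ever leaves our machines:

```python
import re

# Illustrative patterns only; a real scanner would cover far more shapes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)password\s*[:=]\s*\S+"),           # inline passwords
]

def safe_to_send(prompt: str) -> bool:
    """Return True only if no secret-shaped pattern appears in the text."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)
```

A check like this is a backstop, not a substitute for judgment: anything that would be harmful if public simply never goes into a prompt.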
Beyond those principles, there are several areas where we make a clear distinction between human work product and artificially generated output.
Planning
All projects are planned by humans. This includes entities and their properties, user flows, business rules, interfaces, and the selection of languages, frameworks, and technologies. We determine the who, what, when, where, and why. Our plans are built on real knowledge and real experience.
Branching and Deployment
Humans decide how work is grouped, organized, and saved. Deployments are never automated in a way that places them in the hands of an AI agent.
Media and Resources
In most of our projects, clients provide the copy, images, video, audio, and other resources used in the final product. During development, it is important to test with the actual assets that will live in the project. Unfortunately, those resources are often not available during the testing phase, so we may sometimes use generated assets as temporary stand-ins.
Any resources we create for a project should be considered development-only and discarded before launch, as they may have been AI-generated.
Communication
We do not automate client communication with AI products or AI-assisted communication features. When you speak with us, you are speaking with humans. That applies to email, chat, phone calls, and text messages. We would not think to treat our clients any other way, and we would hope for the same courtesy in return.
Testing
Testing and quality assurance are foundational to building good software. Traditionally, that job belonged to detail-oriented humans whose level of scrutiny could occasionally be described as grating — but necessary. We still test our work as humans, but we also supplement that process with AI-assisted testing routines.
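What "supplementing" looks like in practice can be sketched simply. In this hypothetical example (the function and cases are invented for illustration), hand-written tests remain the baseline, and model-suggested cases are added only after a human has reviewed each one:

```python
def normalize_email(address: str) -> str:
    """Example function under test: trim whitespace and lowercase an address."""
    return address.strip().lower()

# Hand-written cases: the human baseline.
HUMAN_CASES = [
    ("User@Example.com", "user@example.com"),
    ("  bob@site.org ", "bob@site.org"),
]

# Cases suggested by an LLM, each reviewed by a human before being committed.
REVIEWED_AI_CASES = [
    ("MIXED@Case.IO", "mixed@case.io"),
    ("\ttabbed@host.net\n", "tabbed@host.net"),
]

for raw, expected in HUMAN_CASES + REVIEWED_AI_CASES:
    assert normalize_email(raw) == expected
```

The generated cases widen coverage, but a human still owns every assertion that ships.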
Now that you have read our official policy on the use of AI in our work, we hope our approach is clear: it is deliberate, practical, and careful. Every effort is made to ensure that the products we deliver are safe, correct, and functional.
We believe AI is a tool — a very powerful one — that can accelerate a workflow when used with discipline and respect. As professionals, our responsibility is to use every useful tool available to us, but to use it the right way.
Rest assured, our use of AI never looks anything like: “Hey Chat, build this client’s project and tell me when you’re done.”