
Your Company Bought AI Tools, But Forgot to Update the Workflows



I’ve watched companies hand out AI coding tools like GitHub Copilot with zero guidance, while others set arbitrary targets like “30% of code must be AI-generated.” Both approaches lead to terrible outcomes. AI tools need thoughtful integration into how teams work, not just mandates or wishful thinking.

How It’s Going

I’ve been watching the same scenario play out at company after company lately, both ones I’ve worked with and many I’ve only seen online. The C-suite discovers AI coding tools, gets excited about the productivity promises, and announces a new “AI initiative” for the development teams. The assumption driving all of this? That AI will automatically make developers faster and more productive without anything else changing.

These rollouts generally fall into two categories:

Throw Tools at Them

The first approach is to pick a tool (usually a big name like Microsoft) and hand out licenses to part of the dev team or the entire team; or to give the team ALL the tools and let them figure it out (see Shopify).

The tools generally come with zero additional guidance (aside from legal guidance), plus a hope and a prayer that productivity will increase.

No real thought goes into which part of the development process the tools help with, what their impact might be, or which team members will benefit most (or least).

Mandated Usage

Usually, once the tools have been okayed by legal and the executive team doesn’t immediately see a gain in productivity, they roll out a mandate around the only real metric they can grab onto with these tools: the number of lines written by the tool.

What Happens

This inevitably leads to a few developers getting great results, a few resisting the tooling entirely, and others fighting with the tools or simply turning off their brains and letting the tool do their job. It’s a bad outcome: not only are there no real productivity gains, but team dynamics shift and resentment builds. Those who find the tools useful see the holdouts as luddites, while others see those using the tools without engaging their brains as lazy or simply bad developers.

What Is a Developer Workflow?

Before getting into how to fix these problems and run a better rollout, let’s first define what a developer’s workflow actually is. The way I generally see this workflow play out from a developer’s perspective is:

  1. Gather/confirm requirements
  2. Explore the problem
  3. Determine a solution (or path to a solution)
  4. Write the code
  5. Test the code
  6. Submit for review
  7. Apply feedback
  8. Submit for business review
  9. Apply further feedback

(Note: there is sometimes a step, or several, for integrating with external systems; that isn’t covered in this article.)

With the above workflow, developers only really interact with AI tools during the coding step, and it’s generally left to each developer to decide what that means.

Why Does It Fail?

So why does this approach fail? If you give developers these amazing tools, why aren’t they immediately faster?

In my opinion, what the above workflow mostly misses is the developer’s knowledge; it isn’t written down anywhere. Your team, even when relatively fresh on a project, knows and understands the codebase, and more importantly the business, better than the LLM can given the context it receives for the problem. None of that knowledge is passed to the LLM when you ask it “Can you add a new column to the table for the status?” And yet, despite that lack of context, many developers will tell you the LLM is still doing a good job!

But you could get much better performance from your LLM, while also getting what you, as a developer, really want out of these tools: for them to do the tedious work so you can work on the real problems!

And it all comes down to a lack of proper context for the LLM, or giving the LLM problems it’s not equipped to solve.
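To make that gap concrete, here’s a minimal sketch of the difference between the bare one-line prompt most developers type and one that carries the context the developer already has in their head. It assumes the OpenAI Python client; the model name, schema details, and business rules are hypothetical placeholders, not anything from a real project.

```python
# A minimal sketch of the context gap, assuming the OpenAI Python client.
# The model name, schema excerpt, and business rules are hypothetical
# placeholders -- substitute whatever your team actually uses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# What most developers type: no codebase or business context at all.
bare_prompt = "Can you add a new column to the table for the status?"

# What the developer actually knows, written down for the LLM.
contextual_prompt = """
Add a `status` column to the `orders` table.

Codebase context:
- Migrations live in db/migrations and use our migration tool.
- The Order model is in app/models/order.py; enums live in app/enums.py.

Business context:
- Status must be one of: pending, paid, shipped, cancelled.
- Existing rows should default to 'pending'.
- Reporting queries filter on status, so the column needs an index.
"""

def ask(prompt: str) -> str:
    """Send a single prompt to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The second reply is the one that has a chance of fitting your codebase
# and business rules; the first is a guess.
print(ask(bare_prompt))
print(ask(contextual_prompt))
```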

Defining a New Process

So what does a new process look like, one that separates AI-able problems from those that require a human? How do we get the most out of these tools? The biggest problem is one that exists even without an LLM: bad requirements. This is a classic garbage-in/garbage-out problem. Improve your requirements!

A process I am working on with one of my teams is the following:

  1. Gather/confirm requirements
  2. Explore the problem
  3. Enhance the requirements (and confirm)
  4. Determine a solution (or path to a solution)
  5. Enhance the requirements again (and confirm)
  6. Attempt to have the LLM write the code
  7. Review the LLM code
  8. Modify (slightly) or discard it and write the code yourself
  9. Test the code
  10. Submit for review
  11. Apply feedback

The key change here is that as we explore the problem, we keep enhancing the requirements until they are almost as detailed as the code that needs to be written from them. And, critically, we’ve added a step for giving up on the LLM and just writing the code yourself.

With this process I have found that the documentation in the ticket is nearly 5x more detailed than what we previously wrote, and the chances of someone coming back with “that’s not what I meant” after the ticket is completed have dropped to nearly zero; low enough for me to drop that review step from our process.

Critically, this has also allowed us to integrate AI into the process further and speed the team up even more.

Adding More AI

After modifying our process, the team has found even more uses for LLMs now that we have this wealth of documentation and requirements that are simply better than they have ever been. We have been able to have ChatGPT review the requirements not only for completeness but with a focus on any complications it can anticipate, or contradictions in the requirements (we have some standard template questions to ask about our tickets at various stages). We moved to Linear just for the MCP server so we can reference the ticket in our chats. We have also been able to feed these requirements into CodeRabbit so that the AI can do a better code review. And we can now have the AI update our business documentation based on the code changes made.
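As a rough illustration, here’s a minimal sketch of what that requirements-review step could look like if you scripted it instead of pasting tickets into ChatGPT by hand. It assumes the OpenAI Python client; the template questions and ticket text are hypothetical stand-ins, and a real checklist would be longer and tailored to your team.

```python
# A minimal sketch of an automated requirements review, assuming the OpenAI
# Python client. The template questions and ticket text are hypothetical
# stand-ins for the checklist described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Standard template questions asked of every ticket (abridged, hypothetical).
TEMPLATE_QUESTIONS = [
    "Are the requirements complete, or is any acceptance criterion missing?",
    "Do any of the requirements contradict each other?",
    "What complications or edge cases can you anticipate from these requirements?",
]

def review_requirements(ticket_text: str) -> str:
    """Ask the model to review a ticket against the standard template questions."""
    questions = "\n".join(f"- {q}" for q in TEMPLATE_QUESTIONS)
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical; use whichever model your team has access to
        messages=[
            {"role": "system",
             "content": "You review software tickets for completeness, contradictions, and likely complications."},
            {"role": "user",
             "content": f"Ticket:\n{ticket_text}\n\nAnswer each question:\n{questions}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_requirements("Add a `status` column to the `orders` table ..."))
```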

We are also now exploring tools to help write requirements and to cross-reference them against previous ones.

Distilling It Down

The way I often like to frame Copilot, Cursor, Claude Code, etc. to people, including the business, is that they are (at the time of this writing) the equivalent of a second-year university student: incredibly fast, but over-confident in their own abilities. So treat them like one: don’t give them overly complicated tasks, make sure the tasks you do give them are incredibly specific, and review their code extra closely!

Make your processes revolve around writing requirements for fresh grads who joined your company yesterday but aren’t allowed to talk to anyone or ask questions; they need to know what to do based on the ticket alone.

Final Thoughts

I am of the opinion that most projects and teams can benefit greatly from adding AI to their workflow, if it’s done correctly; especially for corporate apps. With these tools I think just about any team can get a 25% or greater performance boost if you modify your workflow to work with the tools. Your team needs training, and your workflow should also be reviewed.

The world of AI tools is changing constantly, and you’ll need to review your processes at least every six months, even if you don’t change tools; the models are constantly being updated, and your team will keep finding better ways to use them. You need to stay current if you want to keep receiving the productivity gains.

These tools can make the developer experience better or worse. They can be a force multiplier, or they can be a crutch your team relies on to turn off their brains. Make sure your team understands the tools, and especially their weaknesses.