
Adopting LLMs in a startup

written by Graham Knapp on 2025-09-14

I have been gradually increasing my use of LLM tools since 2022, when I started experimenting with the preview version of GitHub Copilot before encouraging my team to try it out. But adopting AI tools effectively requires more than just signing up for ChatGPT or GitHub Copilot. From my own experience testing and deploying AI across different workflows, both alone and in a team, here are some practical lessons worth sharing.

1. Be Clear About Context and “Why”

AI works best when you don’t just tell it what to do, but also why. For example, instead of asking “write a sales pitch,” tell it what audience you’re targeting, what problem they have, and why your product solves it. This mirrors how we delegate tasks to humans: explaining the reasoning improves the output.

2. Keep a Library of Prompts and Context

Rather than reinventing the wheel each time, save prompts that work well. These can be stored in text files, docs, or even as custom GPTs that your team can share. The memory function in modern tools aims to provide this automatically, but I like to have as much control as possible over what is in the LLM context, so I prefer to manage this myself for important tasks.

For coding tasks, provide a standardized context file describing your project’s architecture, frameworks, and conventions. This helps tools like GitHub Copilot work with higher accuracy and fewer hallucinations. Over time, expand this file with lessons learned from past errors so the model doesn’t repeat them. In particular, I try to start each new piece of coding work by reviewing and updating the relevant sections of my agent instructions and repo context files.
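As a rough sketch rather than a prescription, a repo context file might look something like the example below. The stack, paths and rules shown are invented placeholders, and tools differ in where they look for such a file (GitHub Copilot, for example, can read repository instructions from .github/copilot-instructions.md).

```markdown
# Project context for AI assistants

## Architecture
- Django backend, Vue 3 frontend, PostgreSQL database
- Background jobs run through Celery; do not add ad hoc threads

## Conventions
- Python: type hints everywhere, black formatting, pytest for tests
- API endpoints live in api/views/, one module per resource

## Lessons learned
- Never use naive datetimes; all timestamps are stored in UTC
- Prefer extending existing serializers over creating near duplicates
```

The exact format matters less than keeping the file short, current and specific to your project.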

3. Know When to Start Fresh

Trying to “correct” a messy AI conversation often leads to worse results. If a chat goes off track, don’t force it—restart the conversation from the last good point with better input. It’s often faster and more reliable than patching mistakes.

4. Use Multiple Models for Perspective

Sometimes it helps to cross-check outputs. If one model gives a poor response, hand it over to another (e.g., Claude, GPT, Gemini) and see how it compares. Putting “AI subcontractors” in competition can surface better results and highlight blind spots. Using weaker local models via Ollama also helps me understand the weak spots of LLMs: regularly seeing extreme failures like this makes it easier to spot where stronger models may fail more subtly or infrequently, and I can then update my prompts to avoid those errors.
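As a minimal sketch of this kind of cross-check, the script below sends the same prompt to two locally pulled Ollama models and prints the answers side by side. It assumes a local Ollama server on its default port, and the model names are placeholders for whatever you have installed.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
MODELS = ["llama3.2", "qwen2.5"]  # placeholders: use models you have pulled locally

PROMPT = (
    "Review this function name for clarity and suggest one improvement: "
    "def calc_stuff(data): ..."
)

def ask(model: str, prompt: str) -> str:
    """Send one non-streaming generation request to a local Ollama model."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    for model in MODELS:
        print(f"--- {model} ---")
        print(ask(model, PROMPT))
        print()
```

Reading a weak and a strong answer to the same prompt next to each other makes the failure modes much easier to spot.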

5. Build Custom AI Tools for Repeated Tasks

If your startup has recurring workflows—parsing documents, extracting structured data, or analysing standard formats—it’s worth creating a custom GPT or fine-tuned workflow. For example, I built a “scan to Excel” tool that turns messy drawings into clean, structured tables. This kind of internal utility saves hours of repetitive work and opens up opportunities for whole new workflows.

The How I AI podcast has a nice episode on building custom GPTs.
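The real scan-to-Excel tool works on scanned drawings, but the underlying pattern is easy to sketch: ask the model for strictly structured output, parse it, and write it into a spreadsheet. The snippet below is a simplified, hypothetical version of that pattern; the model name, prompt, input text and file names are all assumptions, not the actual tool.

```python
import json
from openai import OpenAI      # pip install openai
from openpyxl import Workbook  # pip install openpyxl

client = OpenAI()  # expects OPENAI_API_KEY in the environment

MESSY_TEXT = """Valve V-102, DN50, stainless, qty 2
Pump P-3 1.5kW 3phase x1"""  # placeholder for OCR output or pasted notes

# Ask for strict JSON so the reply can be parsed and written to Excel.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model will do
    messages=[
        {
            "role": "system",
            "content": "Extract equipment items as a JSON array of objects "
                       'with keys "tag", "description", "quantity". '
                       "Return only the JSON array, no other text.",
        },
        {"role": "user", "content": MESSY_TEXT},
    ],
)

items = json.loads(response.choices[0].message.content)

# Write the structured rows to a spreadsheet.
wb = Workbook()
ws = wb.active
ws.append(["tag", "description", "quantity"])
for item in items:
    ws.append([item["tag"], item["description"], item["quantity"]])
wb.save("equipment_list.xlsx")
```

Wrapped in a small script or a custom GPT, this turns a one-off prompt into a repeatable tool that anyone on the team can run.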

6. Protect Sensitive Information

Don’t paste credit card numbers or shareholder agreements into an AI tool. Even if you use a paid account with stricter data policies, it’s best practice to avoid exposing sensitive data. Treat AI tools like external contractors—share enough context to do the job, but keep confidential material secure.

7. Ask for Criticism, Not Just Praise

Most AI tools lean toward being agreeable. To avoid shallow validation, explicitly ask the model to critique, roast, or challenge your ideas. Negative prompting can surface weaknesses in your thinking or code that would otherwise go unnoticed.

Final Thoughts

By treating AI like a junior team member, with clear instructions and careful checking, you can unlock real value without falling into the traps of vague prompts and drowning in AI-generated slop. By regularly reviewing the results, alone and as a team, you can iterate towards a more efficient workflow with LLMs at its heart.

The key is to combine experimentation, transparency and discipline: play with the tools, but also put guardrails in place so your team scales their productivity safely.

This blog post was seeded from a ChatGPT rewrite of my personal contributions to a 1-hour discussion about LLMs at work. Thank you to all my colleagues for their participation.
