Graham Knapp


Adopting LLMs in a startup 🔗

written by Graham Knapp on 2025-09-14

I have been gradually increasing my use of LLM tools since 2022, when I started experimenting with the preview version of GitHub Copilot before encouraging my team to try it out. But adopting AI tools effectively requires more than just signing up for ChatGPT or GitHub Copilot. From my own experience testing and deploying AI across different workflows, both alone and in a team, here are some practical lessons worth sharing.

1. Be Clear About Context and "Why"

AI works best when you tell it not just what to do, but also why. For example, instead of asking "write a sales pitch," tell it what audience you're targeting, what problem they have, and why your product solves it. This mirrors how we delegate tasks to humans: explaining the reasoning improves the output.
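A minimal sketch of the difference (the product and audience are made up for illustration):

```python
# Two ways to ask for the same thing. The second spells out the audience,
# their problem, and why the product helps, as recommended above.
# The product and audience are hypothetical.
vague_prompt = "Write a sales pitch."

contextual_prompt = (
    "Write a short sales pitch aimed at engineering managers at small "
    "consultancies who lose hours each week to manual report formatting. "
    "Our tool automates that formatting, so emphasise time saved and "
    "fewer copy-paste errors."
)
```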

Read Full Post


Always say "Please", never say "Thank you" to your LLM 🔗

written by Graham Knapp on 2025-09-10

We've all been there – that moment when you catch yourself thanking your AI assistant. But could that simple 'thank you' have more environmental impact than you think?

LLMs run on tokens, and one token is roughly equivalent to one word, so when you say 'please' you're feeding one extra token into your request. In an average 3-turn conversation the LLM will re-read that token 3 times, so that's a cost of 3 extra tokens in an exchange which probably includes many hundreds or thousands of tokens.

But when you say 'thank you,' the AI has to reprocess the entire conversation from start to finish, plus the extra "thank you", taking up significantly more resources. Where "please" costs 3 extra tokens, "thank you" costs an extra API call that re-reads many hundreds of tokens.
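To put rough numbers on this, here is a small sketch using the tiktoken package, assuming an OpenAI-style tokenizer (counts vary from model to model):

```python
# Rough token accounting, assuming an OpenAI-style tokenizer via tiktoken
# (counts differ between models).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

print(len(enc.encode("please")))      # typically a single token
print(len(enc.encode("thank you")))   # also only a couple of tokens...

# ...but a trailing "thank you" triggers a whole extra API call, in which
# the model re-reads the entire conversation just to reply "you're welcome".
conversation = (
    "user: please summarise this report\n"
    "assistant: ...a long summary, hundreds of tokens...\n"
    "user: thank you"
)
print(len(enc.encode(conversation)))  # the real cost of being thanked
```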

Let's avoid "thank you". But with the new generation of AI tools having memory, saying 'please' can help the AI understand your personality and communication style better over time. Yin et al. (2024) show that it may give better results - but don't overdo it! Overly polite requests can degrade results.

OK so mind your Qs, please and thank you for reading!


Bookmark: System Design Interview: An Insider's Guide by Alex Xu 🔗

written by Graham Knapp on 2025-09-10

A structured guide to approaching system design interview questions using real-world case studies and a repeatable framework, from scaling basics to designing complex systems.

Key Ideas / Takeaways

Alex Xu introduces a 4-step framework for tackling system design questions:

  1. Understand the problem and establish the scope
  2. Propose a high-level design and get buy-in from the interviewer
  3. Dive deep into chosen components
  4. Wrap up with optimizations, bottlenecks, and improvements

I found this really useful for demystifying the process and giving some structure to this kind of interview. The repetition from one case study to the next helps to reinforce the framework.

Read Full Post


Testing GitHub Copilot agent mode 🔗

written by Graham Knapp on 2025-08-23

I tested GitHub Copilot agent mode in July 2025, setting Copilot to work online on different-sized features - the workflow looks like this:

  1. Chat with Copilot online - ask it to open a PR to work on a specific feature. Copilot starts working in its own virtual machine on GitHub.
  2. 30 minutes later I get an email saying the PR is ready for review - I read it online and ask for any corrections via github.com
  3. If and when I am happy with it I pull the branch to my PC, review, modify, fix, change.
  4. I push from my machine and merge to trunk

Some stats:

  • 19 Pull requests opened against our main monorepo in 5 weeks.
  • 7 merged
  • 8 still open on the 31st of July (3 of those created on the final day)
  • 3 closed unmerged because they clearly didn't work or were not worth finishing
  • 1 closed because I reimplemented it more successfully on my dev PC

For example, I tasked Copilot with refactoring 3 instances of near-duplicate code into a common service and making some improvements to error handling on the refactored service. My experience of code review from the last 4 years definitely helps with this workflow - reviewing code from an agent is similar to reviewing colleagues' code, except that I don't feel guilty about leaving a PR unread for more than a day. Those PRs still become stale, however, and merge conflicts are a pain if the agent's changes overlap with other PRs.

One challenge is that this makes it very easy to set Copilot working on easy-to-define, low-impact work, but that work still takes time to review. It would be easy to get into the habit of doing lots of unimportant busy work with this workflow. I now want to explore how to use coding agents to achieve more ambitious changes, perhaps changes I would not take on individually because they lie near the limits of my current knowledge.


Playing "In my bag..." with LLM agent templates ๐Ÿ”—

written by Graham Knapp on 2025-08-10

There's a memory game "In my bag..." where you pretend you have a list of things in your bag: "In my bag I have a comb and a cat". The next person in the circle has to list all the same things and add one more at the end: "In my bag I have a comb, a cat and a clock". The game ends when someone makes a mistake or gives up.

It occurred to me that this is a great analogy for the templates used in LLM chatbots! Every time you chat with an LLM, it re-reads the whole history of the conversation before responding. This is why they get slower over time, particularly if they are making web searches, viewing images, or using MCPs or other tools which add a lot of hidden tokens to the chat history.
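A rough sketch of what that re-reading looks like (the template format is simplified for illustration, not the exact one from the ollama docs):

```python
# Every turn, the whole history is rendered back into one prompt and
# re-read by the model, so the "bag" gets heavier each round.
# The template format here is simplified for illustration.
def render_prompt(history, user_message, system="You are a helpful assistant."):
    lines = [system]                        # the first item in the bag
    for role, text in history:              # everything said so far, replayed
        lines.append(f"{role}: {text}")
    lines.append(f"user: {user_message}")
    lines.append("assistant:")              # the model continues from here
    return "\n".join(lines)

history = [
    ("user", "In my bag I have a comb"),
    ("assistant", "Noted: a comb"),
]
print(render_prompt(history, "In my bag I have a comb and a cat"))
```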

The ollama docs describe a simple template here


Feature flags Pt 3: Deploy to some of the people all of the time, and all of the people some of the time! 🔗

written by Graham Knapp on 2025-04-25

My talk from DjangoCon Europe 2025 - I discuss:

  1. What are Feature Flags?
  2. Why use Feature Flags?
  3. How Acernis uses Feature Flags
  4. Getting started with Feature Flags in Python and Django (see the sketch below)
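As a taste of point 4, here is a minimal flag-guarded Django view using django-waffle, one common choice (the flag name is illustrative):

```python
# A minimal Django view guarded by a feature flag, using django-waffle.
# The flag name "new_dashboard" is illustrative.
from django.http import HttpResponse
from waffle import flag_is_active

def dashboard(request):
    if flag_is_active(request, "new_dashboard"):
        return HttpResponse("New dashboard, for users in the rollout")
    return HttpResponse("Current dashboard, for everyone else")
```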

Preview image from the talk

Read Full Post


AI coding - stone soup 🔗

written by Graham Knapp on 2025-03-01

There's an old European folk tale of a man going into a village and asking for a pot and some water so he can make stone soup. Intrigued, the villagers give him a pot and some water and he boils it up. After a while he tastes it and says "not bad, but it could use some onion - do you have some leftovers?". The villagers give him a bit of onion and a few herbs whilst they are at it. It carries on like this until he has a delicious soup, and the villagers are amazed that he did all that with just a pot of water and a stone.

I love Copilot, Claude and all the good things, but the iterations of GitHub issues, suggested actions, debugging and fixing in the newer tool previews often taste suspiciously like stone soup 😅 🪨🍲.


TIL: Django caching doesn't cross versions 🔗

written by Graham Knapp on 2025-02-11

Reading into this issue on django-waffle, I learned that Django uses pickle to store objects in its cache but does not guarantee that these objects will remain valid between versions, so it raises a RuntimeWarning when you access potentially stale objects.

The Django docs recommend clearing the cache on upgrade, and this Stack Overflow post discusses ways of doing that.
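A sketch of the failure mode and the fix (the cache key and the waffle Flag model are illustrative of the linked issue, not copied from it):

```python
# Sketch: a model instance pickled into the cache under one Django version,
# then read back after an upgrade. Key and model are illustrative.
from django.core.cache import cache
from waffle.models import Flag

flag = Flag.objects.get(name="new_dashboard")
cache.set("waffle:flag:new_dashboard", flag)   # stored via pickle

# After upgrading Django, fetching this entry emits something like:
#   RuntimeWarning: Pickled model instance's Django version 4.2 does not
#   match the current version 5.0.
stale = cache.get("waffle:flag:new_dashboard")

# The fix recommended in the docs: flush the cache when deploying a new
# Django version.
cache.clear()
```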


AI coding patterns: Language bridging 🔗

written by Graham Knapp on 2025-02-10

A pattern I enjoy with Copilot or other AI coding tools is something I'm calling "Language bridging":

Language bridging: Write code to solve a problem in a language you know well, then use an AI to translate the code into the language you want or need to use.
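For example, I might write the reference version in Python, then hand it to the assistant with a translation prompt (the function and the prompt wording here are just an illustration):

```python
# Reference implementation written in a language I know well (Python),
# ready to hand to an AI assistant for translation into another language.
def moving_average(values, window):
    """Trailing moving average; shorter windows at the start of the list."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Prompt I would then give the assistant (target language is up to you):
#   "Translate moving_average into idiomatic TypeScript, keeping the same
#    behaviour for partial windows at the start of the list."
```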

Read Full Post


Real developers 🔗

written by Graham Knapp on 2025-02-08

There's this myth we tell ourselves about "Real developers" and especially "this is why I am not a Real developer". It got me thinking - is there actually a useful definition out there?

Here is my best try:

A 'Real Developer' is someone who writes code or other instructions for a computer which run successfully and get something useful or delightful done in the real world.

Read Full Post