I have been gradually increasing my use of LLM tools since 2022, when I started experimenting with the preview version of GitHub Copilot before encouraging my team to try it out. But adopting AI tools effectively requires more than just signing up for ChatGPT or GitHub Copilot. From my own experience testing and deploying AI across different workflows, both alone and as part of a team, here are some practical lessons worth sharing.
AI works best when you don't just tell it what to do, but also why. For example, instead of asking "write a sales pitch," tell it what audience you're targeting, what problem they have, and why your product solves it. This mirrors how we delegate tasks to humans: explaining the reasoning improves the output.
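As a rough sketch of the difference (the client usage is the standard OpenAI Python SDK, but the model name and prompt text are placeholders I've invented for illustration):

```python
# A minimal sketch using the OpenAI Python client; the model name and
# prompts are illustrative placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

# Bare instruction: the model has to guess the audience, problem and angle.
bare = "Write a sales pitch."

# Instruction plus the 'why': the audience, their problem, and how you solve it.
contextual = (
    "Write a sales pitch aimed at small accounting firms that lose hours "
    "each week to manual invoice entry. Our product automates invoice "
    "capture, so lead with the time saved and the reduced error rate."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": contextual}],
)
print(response.choices[0].message.content)
```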
Read Full Post
We've all been there - that moment when you catch yourself thanking your AI assistant. But could that simple 'thank you' have more environmental impact than you think?
LLMs run on tokens - one token is roughly equivalent to one word, so when you say 'please,' you're feeding one extra token into your request. In an average 3-turn conversation, the LLM will re-read that token 3 times, so that's a cost of 3 extra tokens in an exchange that probably includes many hundreds or thousands of tokens.
But when you say 'thank you,' the AI has to reprocess the entire conversation from start to finish, plus the extra 'thank you', taking up significantly more resources. Where 'please' costs 3 extra tokens, 'thank you' costs an entire extra API call spanning many hundreds of tokens.
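To make the arithmetic concrete, here's a back-of-the-envelope sketch - the per-turn token count is an assumption picked purely for illustration:

```python
# Back-of-the-envelope costs for the politeness tokens described above.
# All numbers are illustrative assumptions, not measurements.
turns = 3              # user/assistant exchanges in the conversation
tokens_per_turn = 300  # assumed average size of one exchange

# A 'please' in the first message is one extra token, re-read on every
# turn because the model reprocesses the whole history each time.
please_cost = 1 * turns
print(f"'please' costs about {please_cost} extra tokens")        # ~3

# A trailing 'thank you' triggers one more API call that re-reads the
# entire conversation plus the new message (~2 tokens for 'thank you').
thank_you_cost = turns * tokens_per_turn + 2
print(f"'thank you' costs about {thank_you_cost} extra tokens")  # ~902
```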
Let's avoid 'thank you', then. But with the new generation of AI tools having memory, saying 'please' can help the AI understand your personality and communication style better over time. Yin et al. (2024) show that it may even give better results - but don't overdo it! Overly polite requests can degrade results.
OK, so mind your Qs - please, and thank you for reading!
A structured guide to approaching system design interview questions using real-world case studies and a repeatable framework, from scaling basics to designing complex systems.
Alex introduces a 4-step framework to tackle system design questions:
I found this really useful for demystifying the process and giving some structure to help tackle this kind of interview. The repetition helps to reinforce the framework.
Read Full Post
I tested GitHub Copilot agent mode in July 2025, setting Copilot to work online on different-sized features - the workflow looks like this:
Some stats:
For example, I tasked Copilot with refactoring 3 instances of near-duplicate code into a common service and making some improvements to error handling on the refactored service. My experience of code review over the last 4 years definitely helps with this workflow - reviewing code from an agent is similar to reviewing colleagues' code, except that I don't feel guilty about leaving a PR unread for more than a day. Those PRs still go stale, however, and merge conflicts are a pain if the agent's changes overlap with other PRs.
One challenge is that this makes it very easy to set Copilot working on easy-to-define, low-impact work, but that work still takes time to review. It would be easy to get into the habit of doing lots of unimportant busywork with this workflow. I now want to explore how to use coding agents to achieve more ambitious changes, perhaps changes I would not take on individually because they lie near the limits of my current knowledge.
There's a memory game, "In my bag...", where you pretend you have a list of things in your bag: "In my bag I have a comb and a cat." The next person in the circle has to list all the same things and add one more at the end: "In my bag I have a comb, a cat and a clock." The game ends when someone makes a mistake or gives up.
It occurred to me that this is a great analogy for the templates used in LLM chatbots! Every time you chat with an LLM, it re-reads the whole history of the conversation before responding. This is why they get slower over time, particularly if they are making web searches, viewing images, or using MCPs or other tools which add a lot of hidden tokens to the chat history.
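A minimal sketch of that loop, assuming the OpenAI Python client (the model name is a placeholder):

```python
# Each request re-sends the whole history, so the 'bag' the model must
# recite grows by two messages every turn.
from openai import OpenAI

client = OpenAI()
history = []

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,     # the full conversation, re-read on every call
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```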
My talk from DjangoCon Europe 2025 - I discuss:
There's an old European folk tale of a man going into a village and asking for a pot and some water so he can make stone soup. Intrigued, the villagers give him a pot and some water and he boils it up. After a while he tastes it and says, "Not bad, but it could use some onion - do you have any leftovers?" The villagers give him a bit of onion, and a few herbs whilst they are at it. It carries on like this until he has a delicious soup, and the villagers are amazed that he did all that with just a pot of water and a stone.
I love Copilot, Claude and all the good things, but the iterations of GitHub issues, suggested actions, debugging and fixing in the newer tool previews often taste suspiciously like stone soup 🪨🍲.
Reading into this issue on Django-waffle, I learned that Django uses pickle to store objects in its cache but does not guarantee that these objects will remain valid between versions, so it raises a RuntimeWarning when you access potentially stale objects. The Django docs recommend clearing the cache on upgrade, and this Stack Overflow post discusses ways of doing that.
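For what it's worth, one way to follow that advice is a small management command run as part of each deploy - a sketch, with a file path and command name I've made up:

```python
# myapp/management/commands/clear_cache.py (path and name are my own choice)
from django.core.cache import cache
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Clear the default cache, e.g. after a Django upgrade."

    def handle(self, *args, **options):
        cache.clear()  # drops pickled objects that may no longer unpickle cleanly
        self.stdout.write(self.style.SUCCESS("Cache cleared."))
```

Run it with `python manage.py clear_cache` in your upgrade or deploy script.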
A pattern I enjoy with Copilot or other AI coding tools is something I'm calling "Language bridging":
Language bridging: Write code to solve a problem in a language you know well, then use an AI to translate the code into the language you want or need to use.
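A small illustration of the pattern (the function and the prompt wording are my own, not from the original post):

```python
import inspect

# Step 1: solve the problem in a language you know well (Python here).
def dedupe_preserving_order(items):
    """Remove duplicates while keeping the first occurrence of each item."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Step 2: hand the working code to the AI with a translation prompt.
PROMPT = (
    "Translate this Python function into idiomatic Rust, keeping the "
    "behaviour identical:\n\n" + inspect.getsource(dedupe_preserving_order)
)
```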
Read Full Post
There's this myth we tell ourselves about "Real developers" and especially "this is why I am not a Real developer". It got me thinking - is there actually a useful definition out there?
Here is my best try:
*A 'Real Developer' is someone who writes code or other instructions for a computer which run successfully and get something useful or delightful done in the real world.*
Read Full Post