An elected official from the Nantes area recently told me, "Nantes has never been in Brittany", a claim I have heard several times since moving to the region. What if I told you that this statement is both perfectly true and completely false?
Yes, it's true - Nantes has never been in "Brittany", but only since the creation of the modern administrative regions, including the Région Bretagne, in 1959. Before that, "Brittany" meant exclusively the cultural, geographical and political area of western France.
Read Full Post
Most of the research on large language models (LLMs) suggests a familiar pattern: experts, already skilled in their field, benefit most from AI assistance. So I was surprised to come across a study in The Quarterly Journal of Economics that seems to show the opposite:
"Less skilled and less experienced workers improve significantly across all productivity measures, including a 30% increase in the number of issues resolved per hour... AI has little effect on the productivity of higher-skilled or more experienced workers"
Note that this finding comes from a very specific context: customer support for business software. Here, the AI was trained on the full archive of support calls, tasks are relatively uniform, and staff turnover is high. In that environment, AI not only boosted productivity but also reduced customer complaints and helped retain new employees. Still, the result raises a broader question: where might we benefit from this effect? Perhaps in any domain where we are thrown into unfamiliar work and the AI has access to rich, relevant training data.
The study also suggests a way forward: when novices follow AI recommendations closely, they not only become more productive in the moment but also retain those improvements when the AI support is removed. That’s encouraging — it hints that AI can be more than a crutch. Used well, it can help us build lasting skills and confidence, rather than leaving us permanently dependent on the machine.
Following my article on adopting LLMs in a startup, I was asked about the GitHub Copilot instructions file I maintain for one of our core projects. I’d like to explain what’s in that file, why it’s structured the way it is, and how it differs from standard Python/Django agent-guideline templates.
My team uses GitHub Copilot in 3 main ways in VSCode and JetBrains PyCharm:
This file is used in Copilot Chat/Edit and in agentic coding sessions — not for raw autocomplete.
My Copilot instructions Markdown file is a project overview — a map of the terrain rather than a rulebook. It gives LLMs, and human developers too, the essential context to work productively without overloading the context window. It covers:
Read Full Post
A surprisingly engaging read: the authors set out a strong argument that Silicon Valley doesn't stand for much of any importance, followed by some rather fragmented arguments and opinions on what should be done about it.
Part 1 describes the current state of Silicon Valley as the authors see it, highlighting the reluctance of many tech firms to engage in military, policing or surveillance contracts. For me, this part presents the firms' refusal to engage with no real opposing view or alternative considered. The authors seem to hanker for a stronger pro-American attitude from their big tech colleagues.
Read Full Post
I have been gradually increasing my use of LLM tools since 2022 when I started experimenting with the preview version of GitHub Copilot before encouraging my team to try it out. But adopting AI tools effectively requires more than just signing up for ChatGPT or GitHub Copilot. From my own experience testing and deploying AI across different workflows alone and in a team, here are some practical lessons worth sharing.
AI works best when you don’t just tell it what to do, but also why. For example, instead of asking “write a sales pitch,” tell it what audience you’re targeting, what problem they have, and why your product solves it. This mirrors how we delegate tasks to humans: explaining the reasoning improves the output.
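As a quick sketch of the difference (Acme Scheduler and its audience are invented here purely for illustration):

```python
# A vague request: the model has to guess the audience, problem and angle.
weak_prompt = "Write a sales pitch for our product."

# A context-rich request: the what, the who and the why are all spelled out.
# (Acme Scheduler and its details are hypothetical, just for illustration.)
better_prompt = (
    "Write a sales pitch for Acme Scheduler, a shift-planning tool. "
    "Audience: operations managers at mid-sized logistics firms. "
    "Their problem: driver rotas live in spreadsheets and change daily. "
    "Why we solve it: automated re-planning turns hours of manual "
    "rework into a few clicks."
)
```

The second prompt is longer to write, but exactly like briefing a colleague, the reasoning you supply up front saves rounds of correction afterwards.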
Read Full Post
We've all been there – that moment when you catch yourself thanking your AI assistant. But could that simple 'thank you' have more environmental impact than you think?
LLMs run on tokens - one token is roughly equivalent to one word - so when you say 'please', you're feeding one extra token into your request. In an average 3-turn conversation, the LLM will re-read that token 3 times, so the cost is 3 extra tokens in an exchange that probably includes many hundreds or thousands of tokens.
But a closing 'thank you' is usually a whole extra message, so the AI has to reprocess the entire conversation from start to finish, plus the extra "thank you", taking up significantly more resources. Where "please" costs 3 tokens, "thank you" costs an extra API call covering many hundreds of tokens.
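To see the token costs for yourself, here's a rough sketch using OpenAI's tiktoken library (assuming `pip install tiktoken`; exact counts vary by tokenizer):

```python
# Count how many tokens a few polite phrases actually cost.
# Exact numbers depend on the tokenizer, so run it rather than trust comments.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models

for text in ["please", "thank you", "Could you please review this code?"]:
    print(f"{text!r} -> {len(enc.encode(text))} tokens")
```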
So let's avoid "thank you". Saying 'please', though, may pay off: with the new generation of AI tools having memory, it can help the AI understand your personality and communication style better over time. Yin et al. (2024) show that it may give better results - but don't overdo it! Overly polite requests can degrade results.
OK so mind your Qs, please and thank you for reading!
A structured guide to approaching system design interview questions using real-world case studies and a repeatable framework—from scaling basics to designing complex systems.
Alex introduces a 4-step framework to tackle system design questions:
I found this really useful for demystifying the process and giving some structure to help tackle this kind of interview. The repetition helps to reinforce the process.
Read Full Post
I tested GitHub Copilot agent mode in July 2025, setting Copilot to work online on different-sized features - the workflow looks like this:
Some stats:
For example, I tasked Copilot with refactoring 3 instances of near-duplicate code into a common service and making some improvements to error handling on the refactored service. My experience of code review from the last 4 years definitely helps with this workflow - reviewing code from an agent is similar to reviewing colleagues' code, except that I don't feel guilty about leaving a PR unread for more than a day. Those PRs still become stale, however, and merge conflicts are a pain if the agent's changes overlap with other PRs.
One challenge is that this makes it very easy to set Copilot working on easy-to-define, low-impact work, but that work still takes time to review. It would be easy to get into the habit of doing lots of unimportant busy work with this workflow. I now want to explore how to use coding agents to achieve more ambitious changes, perhaps changes I would not take on individually because they lie near the limits of my current knowledge.
There's a memory game, "In my bag...", where you pretend you have a list of things in your bag: "In my bag I have a comb and a cat." The next person in the circle has to list all the same things and add one more at the end: "In my bag I have a comb, a cat and a clock." The game ends when someone makes a mistake or gives up.
It occurred to me that this is a great analogy for the chat templates used in LLM chatbots! Every time you chat with an LLM, it re-reads the whole history of the conversation before responding. This is why chats get slower over time, particularly if they involve web searches, viewing images, using MCPs or other tools which add a lot of hidden tokens to the chat history.
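Here is a minimal sketch of that loop, assuming the OpenAI Python SDK (the model name is just an example); note how the full `messages` list is re-sent on every turn:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{"role": "system", "content": "You are a helpful assistant."}]

for user_input in ["In my bag I have a comb.", "What's in my bag so far?"]:
    messages.append({"role": "user", "content": user_input})
    # The whole history goes over the wire on every call, so each turn
    # the model re-reads everything said so far - just like the game.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(reply)
```

Tool calls, search results and image descriptions all get appended to that same list, which is why those conversations slow down fastest.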
My talk from DjangoCon Europe 2025 - I discuss: