Our successful request for Peter Kyle’s ChatGPT logs surprised observers
When I fired off an email at the beginning of 2025, I hadn’t meant to set a legal precedent for how the UK government handles its interactions with AI chatbots, but that’s exactly what happened.
All of it started in January once I learn an interview with the then-UK tech secretary Peter Kyle in Politics Residence. Making an attempt to recommend he used first-hand the know-how his division was set as much as regulate, Kyle stated that he would usually have conversations with ChatGPT.
That got me wondering: could I obtain his chat history? Freedom of information (FOI) laws are often deployed to obtain emails and other documents produced by public bodies, but past precedent has suggested that some personal data – such as search queries – isn’t eligible for release in this way. I was curious to see which way the chatbot conversations would be classified.
It turned out to be both: while many of Kyle’s interactions with ChatGPT were considered to be personal, and so ineligible for release under FOI laws, those where he interacted with the AI chatbot in an official capacity were eligible.
So it was that in March, the Department for Science, Innovation and Technology (DSIT) provided a handful of conversations that Kyle had had with the chatbot – which became the basis for our exclusive story revealing his conversations.
The release of the chat interactions was a surprise to data protection and FOI experts. “I’m surprised that you got them,” said Tim Turner, a data protection expert based in Manchester, UK, at the time. Others were less diplomatic in their language: they were stunned.
When publishing the story, we explained how the release was a world first – and gaining access to AI chatbot conversations went on to attract international interest.
Researchers in various countries, including Canada and Australia, got in touch with me to ask for tips on how to craft their own requests to government ministers to try to obtain the same information. For example, a subsequent FOI request in April found that Feryal Clark, then the UK minister for artificial intelligence, hadn’t used ChatGPT at all in her official capacity, despite professing its benefits. But many requests proved unsuccessful, as governments began to rely more on legal exemptions to the free release of information.
I’ve personally found that the UK government has become much cagier around the idea of FOI, especially concerning AI use, since my story for New Scientist. A subsequent request I made via FOI legislation for the response within DSIT to the story – including any emails or Microsoft Teams messages mentioning the story, plus how DSIT arrived at its official response to the article – was rejected.
The reason why? It was deemed vexatious, and sorting out valid information that should be included from the rest would take too long. I was tempted to ask the government to use ChatGPT to summarise everything relevant, given how much the then tech secretary had waxed lyrical about its prowess, but decided against it.
Overall, the release mattered because governments are adopting AI at pace. The UK government has already admitted that the civil service is using ChatGPT-like tools in day-to-day processes, claiming to save up to two weeks a year through improved efficiency. Yet AI doesn’t impartially summarise information, nor is it perfect: hallucinations exist. That’s why it is important to have transparency over how it is used – for good or ill.

