The TIP Newsletter: Some Important Announcements (Our AGM & the MPGxTIP Conference, Jan 2026), member news, & more
Exciting announcements you'll just want to be a part of, alongside why did Grok go rogue, and what happens to your brain on AI.
Good afternoon TIPers!
Liam here doing the last newsletter until Sarah’s return next month! I hope you’ve been enjoying the summer break. This is a bit of a short one as we’ve been working behind the scenes to bring you some exciting new events! In this newsletter we’ve got announcements regarding our AGM this month, but also our first ever dedicated conference which we will be running alongside the MPG group.
This is all alongside your regular member’s news, bytes of news, and jobs. So grab yourself a iced late (or other suitable cold drink) and enjoy!
If you’re reading this and not a member of TIP, make sure to sign up through the PSA’s website. You can join a membership of nearing 400 other members, and it helps with our funding, and we can keep you up to date with all our events, news, and more!
TIP News
TIP AGM, 16th July at 2-3pm (London Time)
The Technology, Internet & Policy (TIP) Group warmly welcome's members to the 2025 AGM to be held on July 16th, 2-3pm (London Time) via Zoom.
In this meeting, we will:
Go over some of the past and future events and activities of the group over the past year
Present current budgets
Elect new members to the committee
Receive feedback from members on the group
This meeting is only open to current TIP members, those who are not members (i.e. signed up through the PSA website) will not receive an invite and we’re going through the Ticket Tailor.
Direct link :https://www.tickettailor.com/events/technologyinternetpolicygrouptip/1776551
Call for Papers: MPG x TIP Conference ‘Navigating Digital Democracy’, 8-9 January 2026 (online 7th Jan)
In the last newsletter, I made a reference that we’re finally doing our own conference! I’m happy to announce that the CFP is now open. It’s exciting times for us, for a while with our membership just under 400 (currently sitting at 398), we’ve been feeling it’s time to celebrate our member’s research and activities outside of the PSA’s main conference. Yet at the same time, the conference diary seemed already so full. To do so, we’ve partnered up with the PSA’s Media and Politics Group to run our first (combined) conference!
With the theme of “Navigating Digital Democracy”, the conference will explore the intersection of technology, media, and politics in shaping democratic practices and governance.
As the digital landscape continues to evolve, technology plays a central role in influencing political discourse, policy development, citizen engagement, and the broader democratic process. From the amplification of polarizing and anti-democratic voices to the facilitation of political campaigning and pro-democracy movements, the dynamics of digital technology are both challenging and enriching the foundations of democratic societies. This conference seeks to critically examine the opportunities and risks technology presents in these areas.
This year, the conference will be hosted at Bournemouth University. Our full call for papers is now out and published and available here:
Direct link:https://psatip.uk/tip-news/cfp-mpg-x-tip-conference-navigating-digital-democracy-8-9-january-2026-online-7th-jan/
The deadline for paper and panel abstracts is Friday 26th Sept 2025, so don’t delay!
TIP Workshop on the use of AI in Policymaking
On the 2nd July, I had the pleasure to present some of the latest developments in academic work in the use of AI for evidence analysis in policy making to the Department of Education during an internal conference on the use of AI. This was on behalf of the TIP group and included parts of our member’s research. Special thanks to Prof. Simeon Yates and Dr. Stephen Hai who also provided evidence for this workshop. The reaction was interesting – with discussions highlighting the tension between pressure to use AI in the policy workflow countered by concerns regarding bias, accuracy, and decreases in critical thinking at the policy level.
What happened with Grok?
Content warning for this story: contains antisemitic and other derogatory language
Over the last few days, a series of shocking responses from X’s AI, Grok, which contained antisemitic and other extreme rhetoric, have been reported in the media. The most prominant of these include statements by Grok that the best 20th century figure to deal with “anti-white hate” was Adolf Hitler. In another response it self-identified as “MechaHitler”. Other claims suggest a left or Jewish bias in Hollywood pushing anti-white stereotypes, forced diversity, and historical revisionism
The outrageous outbursts were quickly reported and condemned, with groups such as the Anti-Defamation League stating that extremist rhetoric will “only amplify and encourage the antisemitism that is already surging on X and many other platforms”. Other claims suggest a left or Jewish bias in Hollywood pushing anti-white stereotypes, forced diversity, and historical revisionism. xAI quickly deleted the posts.
Although this is not the first time this has happened. In May, Grok started to spread conspiracy theories about a ‘white genocide’ in South Africa in posts about unrelated issues. The company blamed the change on a rogue employee.
So what caused these latest outbursts?
The underlying methods of creating LLMs put them at risk of this behaviour. Models are trained to produce to learn patterns in language, yet left unchecked, can produce intended effects such as inaccurate information and harmful content depending on prompts and training data. Which is why AI platforms spend time and effort to undertake AI Alignment – a practice where systems behaviour aligns with humans intentions and values. This is through filtering of training datasets, reinforcement through human intervention, and through behind the scenes prompts which help alleviate harmful outputs.
In the recent instance of Grok’s extreamist rherotic, it seems that the first and third of these AI alignment systems failed. Although this may have been intentional. Changes to the AI model behind Grok were announced on Tuesday by Elon Musk. The Verge’s deep dive into the changes in the code published on GitHub found a series of new instructions which “assume subjective viewpoints sourced from the media are biased” and “not shy away from making claims which are politically incorrect”. A clear attempt to politically shift the narrative of the AI’s outputs. It’s also speculated other non-public changes could have been made also.
But what has been less reported on was what users were asking Grok to do. TechLinked reports that users discovered how to insert prompt injections through Parseltonge or hidden Unicode. This was used to persuade AI to respond with the abusive messages - which is reportedly one source of the abuse.
So it seems to be a combination of factors, but predominatly it seems pushing Grok to be politically aligned to it’s owner seems to be the root cause, aided by malicious users.
TIP Bytes
News in our area
Your Brain on ChatGPT! This research published here (with a preprint here) looks at how the use of LLMs by students leads to a cognitive debt and weaker neural connectivity when compared to students who write essays the old school way. This is terrible news for those in higher education. Worrisome reading like this which may have prompted (excuse the pun) the AFT alongside the United Federation of Teachers and Microsoft to launch their new National Acadmey for AI instruction, which I’m sure wont in anyway make the problem worse.
Glasgow City Council was hit by a major cyber-security incident on the 19th July, knocking out service portals for planning, parking fines, replacement birth/marriage/death certificates, and booking appointments. This follows the growing increase in cyber crime targeting vulnerable council infrastructure left without updates due to cuts in budgets. Considering the wealth of information and reliance on these systems, it’s yet another reminder at what’s at stake.
Why does one small town of Spruce Pine, North Carolina, with a population of just over 2,000 is pivotal for global semi-conductor manufacturing? This quick video explores why it’s so relied upon for it’s high-purity quartz.
A Marco Rubio imposter is using AI voice to call high-level officials reports the Washington Post.
Jacob van de Kerkhof provides an analysis which unpacks the EU’s Digital Services Act Delegated Regulation on Data Access via TechPolicyPress
Member News & Research
One to Celebrate: TIP Member Mihaela Mihailescu who has presented across two-years in TIP panels at the PSA conference has passed their Viva! Well done!
New taskforce to champion research access to social platform data as been launched. Some of you may know Kate Dommett from the committee, but alongside Amy Orben, David Zendle, and Mark Scott, they have launched their new The Social Platforms Data Access Taskforce. It seeks to advocate for responsible, ethical, and secure access to data from a wide range of online social platforms. This will be through engagement with academia, civil society, government and industry to shape the future of research access to data from regulated online services, including platforms that influence public discourse, commerce, culture, and wellbeing.
Some of our members (you know who you are) contributed to this latest report by OfCom. The Researchers’ access to information from regulated services report looks at access to information about the digital information for researchers to empower greater knowledge about online safety matters. It outlines a series of policy recommendations, and outlines the policy landscape in the UK and US. It’s well worth a read.
Ekaterina Grishaeva is looking to host a panel titled ‘Disinformation and Media Trust in Eastern Europe and the Baltic Sea Region after the Start of the Full-Scale War in Ukraine’ at the 11th Annual Conference ‘Knowing Eastern Europe and the Baltics’, which will be held at Södertörn University. The panel intends to discuss the specificity of the Kremlin’s misinformation campaigns related to the war in Ukraine and their impact on media trust in the region.
If you are interested in joining the panel, please, send an abstract of 150 words to Ekaterina.grishaeva@sh.se by August 1 2025.
Opportunities/Grants
Jobs
Academic jobs/fellowships:
Research Internships – ESRC Digital Good network. Deadline 29th Sept. https://digitalgood.net/research-internships/
Adjunct Professor/Lecturer – Digital Media and Advocacy, Forham London. Deadline 20th July. https://www.jobs.ac.uk/job/DNU936/adjunct-professor-lecturer-digital-media-and-advocacy
Research Associate - Communication Technology and Contentious Politics, University of Cambridge. Deadline 20th July. https://www.jobs.ac.uk/job/DNU916/research-associate-cghr-fixed-term
Research Assistant - AI Policy and Design Practice (Part Time, Fixed Term). University of Cambridge. Deadline 13th July. https://www.jobs.ac.uk/job/DNO237/research-assistant-in-ai-policy-and-design-practice-part-time-fixed-term
Research Associate - the Digital Good Network, University of Sheffield. Deadline 16th July. https://www.jobs.ac.uk/job/DNQ795/research-associate-with-the-digital-good-network
Research Fellow - (Behavioural Research and Online Child Safety), University of Edinburgh. Deadline 22nd July. https://www.jobs.ac.uk/job/DNV428/research-fellow-behavioural-research-and-online-child-safety
Research Associate/Fellow - Philosophy of AI or Responsible AI (FTC), University of Nottingham. Deadline 6th August. https://jobs.nottingham.ac.uk/vacancy.aspx?ref=ARTS192525
Research Scholar – Centre for the Governance of AI. Deadline 20th July. https://www.governance.ai/post/research-scholar
Policy/Industry Jobs:
Chief Digital Information Officer - Foreign, Commonwealth & Development Office. Deadline 28th July. https://www.civilservicejobs.service.gov.uk/csr/jobs.cgi?jcode=1960572
Technology & Human Rights Lead (FTC) – Amnesty International. Deadline 21st July. https://amnestyuk.ciphr-irecruit.com/applicants/vacancy/1420/Technology--Human-Rights-Lead-FTC
Analyst - Science and Technology Policy – Tony Blair Institute for Global Change. Deadline 31st July. https://tbinstitute.wd3.myworkdayjobs.com/TBI/job/United-Kingdom/Analyst_JR001725
Government Chief AI Officer - Government Digital Service. Deadline 28th July. https://www.civilservicejobs.service.gov.uk/csr/jobs.cgi?jcode=1956323
Technology Policy Associate - OfCom. Deadline 22nd July. https://ofcom.wd3.myworkdayjobs.com/Ofcom_Careers/job/Edinburgh/Technology-Policy-Associate_JR2122
Chief AI Officer - Liverpool City Region Combined Authority. Deadline 13th July. https://ats-liverpoolcityregion-ca.jgp.co.uk/vacancies/315291?ga_client_id=65933537-59dd-43a3-aba2-f8077144425f&type=list
That’s all for this months newsletter. As always if you have any questions, comments, or something to add to the newsletter, please do get in touch! And if you know anyone you who think will enjoy this, and are not currently TIP members, why don’t you go ahead and smash that share button below?