The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved in the conversations surrounding the rules and regulations that should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence" or "AI".
For better-formatted show notes, additional resources, and more, go to https://kairos.fm/intoaisafety/
Andres Sepulveda Morales joins me to discuss his journey from three tech layoffs to founding Red Mage Creative and leading the Fort Collins chapter of the Rocky Mountain AI Interest Group (RMAIIG). W…
Will Petillo, onboarding team lead at PauseAI, joins me to discuss the grassroots movement advocating for a pause on frontier AI model development. We explore PauseAI's strategy, talk about common mi…
I am joined by Tristan Williams and Felix de Simone to discuss their work on the potential of constituent communication, specifically in the context of AI legislation. These two worked as part of an …
The almost Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more?
If you're int…
As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max …
Join me for round 2 with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Dr. Park was a cofounder of StakeOut.AI, a non-profit focused on making AI g…
UPDATE: Contrary to what I say in this episode, I won't be removing any episodes that are already published from the podcast RSS feed.
After getting some advice and reflecting more on my own personal …
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. In conjunction with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focuse…
Take a trip with me through the paper Large Language Models: A Survey, published on February 9th, 2024. All figures and tables mentioned throughout the episode can be found on the Into AI Safety po…
Esben reviews an application that I would soon submit for Open Philanthropy's Career Transition Funding opportunity. Although I didn't end up receiving the funding, I do think that this episode can…
Before I begin with the paper-distillation-based minisodes, I figured we would go over best practices for reading research papers. I go through the anatomy of typical papers, and some generally appli…
Join our hackathon group for the second episode in the Evals November 2023 Hackathon subseries. In this episode, we solidify our goals for the hackathon after some preliminary experimentation and ide…
I provide my thoughts and recommendations regarding personal professional portfolios.
00:35 - Intro to portfolios
01:42 - Modern portfolios
02:27 - What to include
04:38 - Importance of visual
05:50 - The…
Darryl and I discuss his background, how he became interested in machine learning, and a project we are currently working on investigating the penalization of polysemanticity during the training of n…
A summary and reflections on the path I have taken to get this podcast started, including some resource recommendations for others who want to do something similar.
Links to all articles/papers which…
This episode kicks off our first subseries, which will consist of recordings taken during my team's meetings for the AlignmentJams Evals Hackathon in November of 2023. Our team won first place, so yo…
In this minisode I give some tips for staying up-to-date in the ever-changing landscape of AI. I would like to point out that I am constantly iterating on these strategies, tools, and sources, so it i…
Alice Rigg, a mechanistic interpretability researcher from Ottawa, Canada, joins me to discuss their path and the applications process for research/mentorship programs.
Join the Mech Interp Discord se…
We're back after a month-long hiatus with a podcast refactor and advice on the applications process for research/mentorship programs.
Check out the About page on the Into AI Safety website for a summa…
This episode is a brief overview of the major takeaways I had from attending EAG Boston 2023, and an update on my plans for the podcast moving forward.
TL;DL