
“Call for suggestions - AI safety course” by boazbarak

Author
LessWrong ([email protected])
Published
Thu 03 Jul 2025
Episode Link
https://www.lesswrong.com/posts/qe8LjXAtaZfrc8No7/call-for-suggestions-ai-safety-course

In the fall I am planning to teach an AI safety graduate course at Harvard. The format is likely to be roughly similar to my "foundations of deep learning" course.

I am still not sure of the content, and would be happy to get suggestions.

Some (somewhat conflicting) desiderata:

  1. I would like to cover the various ways AI could go wrong: malfunction, misuse, societal upheaval, arms race, surveillance, bias, misalignment, loss of control,... (and anything else I'm not thinking of right now). I talk about some of these issues here and here.
  2. I would like to see what we can learn from other fields, including software security, aviation and automotive safety, drug safety, nuclear arms control, etc. (and I'm happy to get other suggestions).
  3. Talk about policy as well: various frameworks inside companies, regulations, etc.
  4. Talk about predictions for the future and methodologies for coming up with them.
  5. [...]

---


Narrated by TYPE III AUDIO.
