AI Persuasion Through Reinforcement Learning and Rhetoric

Author
Neural Intelligence Network
Published
Mon 30 Jun 2025
Episode Link
https://podcasters.spotify.com/pod/show/neuralintelpod/episodes/AI-Persuasion-Through-Reinforcement-Learning-and-Rhetoric-e34pv1e

This research paper examines the ethical and societal implications of Reinforcement Learning from Human Feedback (RLHF) in generative Large Language Models (LLMs) such as ChatGPT and Claude. It argues that RLHF subtly persuades users by embedding human values and motives into AI-generated text. The authors apply procedural rhetoric to analyze how these underlying mechanisms shape language conventions, information-seeking practices, and human-AI relationships. The paper ultimately raises concerns about transparency, trust, and bias in these increasingly "human-like" AI systems.