Episode 349 of RevolutionZ presents what happened when I asked artificial intelligence to critique my critique of artificial intelligence. In this episode, I share the results of a peculiar experiment: feeding my recent articles about AI dangers directly to ChatGPT and asking for its reaction. What, in its view, did I get right? What did I get wrong? I comment as well. I also asked how it operates: how does it answer questions, write a song, and so on? It was very forthcoming and clear. Finally, I asked for its reaction to Noam Chomsky's critical writings about AI. Again it was forthcoming, agreeing with most of what Chomsky says, which it accurately conveyed, while questioning some of it, also accurately conveyed.
ChatGPT's analysis of my article was nuanced, at points emphasizing and surprisingly agreeing with my core concern about "infantilization": the worry that humans might lose distinctively human activities by becoming passive and dependent on AI systems. The "conversation" that followed accessibly clarified creative processes, neural networks, and philosophical perspectives on machine intelligence.
I also asked it to compose a Dylan-esque protest song about the pharmaceutical industry, which it did in seconds, and then to methodically explain how it generated such content through pattern recognition rather than genuine understanding, using the song's first line as its focus. This window into AI's functioning, in which it explained that it doesn't "research" or "look up" information, or know anything in our sense, but instead consults relationships encoded in its neural networks through training, provides crucial context for understanding both its impressive capabilities and its fundamental current limitations.
Chomsky in the title isn't clickbait. The conversation explores his critiques of AI, with ChatGPT offering a balanced and well-informed analysis of what Chomsky says: where he got things right, but also where his perspective might be limited and why he might have erred in some respects. The discussion even ventures into the territory of scientific bias and how brilliant minds can resist evidence that challenges their philosophical frameworks or practical aims.
This experiment only increased my concerns about AI's potential dangers; in many ways, experiencing its capabilities firsthand was more disturbing than the theoretical discussions I was previously aware of. Will we approach AI development with caution, ethical frameworks, and democratic oversight to ensure these powerful tools serve humanity rather than diminish it? Or will we get sucked in by potential benefits, ease of use, and the like?