In this piece, AI entrepreneur Kareem Saleh connects the broader social milieu to the field of Explainable AI in a compelling and enjoyable Forbes read. Borrowing Stephen Colbert's neologism "truthiness," coined during his run on The Colbert Report, Saleh draws a line between conspiracy thinking and explainability: both construct a narrative that "sounds accurate and makes people comfortable but in reality can be far from the truth." In effect, explainability becomes a form of algorithmic storytelling, accessible only to domain experts (the developers). Saleh also notes a downstream psychological effect on the scientist: the mere existence of an explanation fosters a false sense of trust in the decisions an AI makes, regardless of their validity. The right mindset starts instead with a healthy dose of skepticism toward any monolithic approach to explainability across models, building on themes we covered in previous issues.