The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.
The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.
It *should* be obvious why it's problematic to create software that excels at persuasion without concern for accuracy, honesty, or ethics. But apparently it's not.
@intelwire I think most of those products are aimed at making accurate or true claims though, correct? It's possible the human beings who make them are flawed and have a subjective understanding of what's "true," but that doesn't mean they aren't placing priority on trying to provide the truth?
@intelwire I guess I'd have to see an example where the attempt is to deliberately place persuasion above truth-seeking.