The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.
The result? Generative AI text products designed to "appear real" rather than to produce accurate or ethical outputs.
It *should* be obvious why it's problematic to create software that excels at persuasion without concern for accuracy, honesty, or ethics. But apparently it's not.
@intelwire I guess I'd have to see an example where the attempt seems to deliberately place persuasion above truth-seeking.