The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.

Big shoutout to all the Turing fans out there (and there are many of you). I am not blaming Turing for any of this. I am commenting on the modern assimilation of the broad concept as a cultural product. You can stand down now.


@intelwire I think most of those products are aimed at accurate or true claims though, correct? It's possible the human beings that make them are flawed and have a subjective understanding of what's "true," but that doesn't mean they aren't placing priority on trying to provide the truth?

@intelwire I guess I'd have to see an example where the attempt seems to deliberately place persuasion above truth-seeking.
