Summary: "Across 16 experiments with over 27,000 participants, we show that people tend to evaluate creative writing
less favorably when they believe it was written by artificial intelligence (AI), because they see it as less
authentic than if a human author had produced the exact same work without AI. This bias is persistent and
difficult to reduce, even when using techniques that mitigate aversion to AI in other contexts."
It would be more accurate to say that humans are biased toward valuing "human" activity.
Logically, text is text, and its value is independent of the creator. News flash: people are racist/bigoted; they love to create arbitrary in/out groups and believe in patterns that don't exist.
Odd that sociologists have known this for hundreds (thousands?) of years, yet they still do pretend science, arbitrarily chopping people up into Men/Women/Children/Blacks/Whites and what have you. Now we are chopping text into Human/AI/Spam/Ham. Where does it end? I'll give you a hint: the most efficient arrangement, for all points in time, is null. Material harm is much harder to fake than categorical harm.
The claim is always that this is somehow useful (I have yet to see a lick of evidence), but the reality is always oppression, inefficiency, and abuse. The cure is worse than the disease.
Like OOP, this ends with someone working full time to enforce segregation and do the required bookkeeping. Just the other day on Hacker News there was a friend-or-foe extension. Somebody thought that was worthwhile and spent their time creating it, because segregation creates valueless work. Strong typing is just a human forcing the compiler to throw a tantrum because a purely arbitrary category was violated.
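To make the strong-typing complaint concrete, here is a minimal sketch (names are hypothetical, chosen only for this illustration): two nominal wrappers around the exact same underlying string, which a static checker treats as incompatible categories even though nothing differs at runtime.

```python
from typing import NewType

# Two "categories" over the identical payload type.
HumanText = NewType("HumanText", str)
AIText = NewType("AIText", str)

def publish(text: HumanText) -> str:
    # Declared to accept only HumanText.
    return text.upper()

essay = AIText("the exact same words")

# A static checker such as mypy flags publish(essay) as a type error:
# the category was violated. At runtime, however, both wrappers are
# plain str, and the call works identically.
print(publish(essay))  # prints: THE EXACT SAME WORDS
```

Whether that rejection is "a tantrum over an arbitrary category" or a useful guardrail is exactly the disagreement here; the point of the sketch is only that the distinction lives in the checker, not in the data.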