> Cape Breton fiddler Ashley MacIsaac says he may have been defamed by Google after it recently produced an AI-generated summary falsely identifying him as a sex offender.
> The Juno Award-winning musician said he learned of the online misinformation last week after a First Nation north of Halifax confronted him with the summary and cancelled a concert planned for Dec. 19.
Cool, so we’ve reached the brain-rot stage where people are not only taking AI summaries as fact (we’ve been here for a while now, even before LLMs, with Google’s quick-answer stuff) but are _citing_ them as proof. Fuck me. I know that’s a little much for HN, but still, it’s just insane that at no point did anyone think to get a more primary source before cancelling the show.
The complete abdication of thinking, of even the most minor research, is depressing. I use LLMs daily but always make sure to check the sources and verify the claims. They are great for surfacing info, but that's just the first step. I've lost track of how many times an LLM has confidently stated something, citing sources, only for me to check those sources and find they say nothing of the sort.
The legal implications of AI are staggering, and just now starting to break into the news cycle.
This won't end well in my judgment.
> The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name.
"AI" makes for a clickier story, but you don't need it to have that kinda screw-up.
Actually, you don't even need the web. Back in the '90s, a young coworker of mine was denied a mortgage. He requested his credit report - and learned that he'd already bought a house. In another city. At age 5. Based on income from the full-time job at Ford Motor he'd held since age 4. And several other laughable-in-retrospect hallucinations.
The difference is that "AI can have errors" absolves everybody of consequences for this sort of thing.
If, instead of the AI summary, the First Nation had come across some little on-line forum where angry users were denouncing "Ashley MacIsaac" as a sex offender, and (just as with the AI) the First Nation had neglected to verify which Ashley MacIsaac it was - then who would be facing consequences for that?
OTOH - yes, I get that "the AI said" is the new "the dog ate my homework" excuse for ignoble humans trying to dodge any responsibility for their own lazy incompetence.
> then who would be facing consequences for that?
Your analogy is bad.
"Some little on-line forum" with a few angry users is not really comparable to a mega-corp with billions of users.
Lawyers could, but are unlikely to, go after a few misguided individual users for defamation. As they say, you can't get blood out of a rock. A mega-corp is a much more tempting target.
Legal liability for bad AI is just getting started, but I expect lawyers are giddy with anticipation.
Okay, let's say the misguided individual users are posting on Facebook - a mega-corp with billions of users. And 13-digit market cap, to tempt the lawyers.
How does that play out? IANAL, but I'm thinking Facebook says "Sorry, but Section 230 covers our ass" - and that's about it. Still no consequences.