just wow. read this, from Techdirt:
Wired, Business Insider Editors Duped By Completely Bogus ‘AI’ Using ‘Journalist’ Who Made Up Towns, People That Don’t Exist
Here's Wired's mea culpa:
https://www.wired.com/story/how-wired-got-rolled-by-an-ai-freelancer/
Techdirt's Karl Bode correctly concludes: "This country has taken an absolute hatchet to quality journalism, which in turn has done irreparable harm to any effort to reach reality-based consensus or have an informed electorate. The rushed integration of “AI,” usually by media owners who largely only see it as a way to cut corners and undermine labor, certainly isn’t helping. Add in the twisted financial incentives of an ad-based engagement infotainment economy, and you get exactly the sort of journalistic outcomes academics long predicted."
@briankrebs Well, it's hardly surprising, given that even DEFCON was duped by AI slop.
@briankrebs Q: Do they even have fact checkers?
I worked for a publishing company in the late '80s and early '90s. The fact-checking role was relegated to the authors (who were promoted to 'field editors').
The fact-checking done in-house was handled by the lowest-paid, newest employees. It amounted to checking spelling against old yellowed dictionaries and dialing the phone numbers in every magazine ad to make sure they actually reached the companies.
@briankrebs You can follow the TechDirt story to the source at https://pressgazette.co.uk/publishers/digital-journalism/wired-and-business-insider-remove-ai-written-freelance-articles/
@briankrebs Just think, in 1998 this was an Onion headline.
https://theonion.com/u-s-ambassador-to-bulungi-suspected-of-making-country-1819564610/
@briankrebs Adjacent to this: when I'm researching an article, I constantly have to dodge fake AI-generated articles, and sometimes even whole websites. It's infuriating.
I've seen websites that were likely entirely AI-generated, complete with fake journalist personas, names, and pictures.
We have to be so vigilant now, double- and triple-checking every source of information. This pollution is everywhere, producing disinformation at a completely unprecedented rate. I don't think most people are as careful about this as I am, and I fear what this means for the future.
Are there no blocklists of slop domains for uBlock?
Does it need crowdsourcing?
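For what it's worth, uBlock Origin's static filter syntax could express such a list easily; a minimal sketch, with made-up .example domains standing in for real slop sites:

! Title: Hypothetical slop-domain blocklist (example entries only)
||ai-slop-site.example^$document
||content-farm.example^$document

The $document option strict-blocks visits to the site itself, not just resources loaded from it. The filter format is the easy part; curating the domains would be the hard part.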
@joriki @briankrebs I think it would be very complex to keep up to date and could backfire with false positives, so I'm not convinced an automated tool would be a good solution for this.
What I would rather see is some sort of vetting system for organizations and independent journalists who take a pledge to never use generative-AI tools in their material (text and image). But even that isn't perfect, and as Brian's post above discusses, even famous publications can get duped.
@Em0nM4stodon @joriki It's becoming pretty difficult to block a lot of the scraping being done by LLMs, because it mostly comes through residential proxies, a major and growing problem that everyone wants to just ignore or wish away.
@briankrebs "Residential proxies" is just the marketing term for botnets. The way these services now openly offer to use the network connections of unsuspecting users, who merely happened to install an unrelated app whose developer decided to add the proxy library for some extra money, is shameful in my opinion. I wrote about this shady market at https://jan.wildeboer.net/2025/04/Web-is-Broken-Botnet-Part-2/ @Em0nM4stodon @joriki
@corbet Le sigh. And #WhereIsMySurprisedFace :( @briankrebs @Em0nM4stodon @joriki
As @LukaszOlejnik demonstrates in a new paper, these types of operations are easy and cheap to build at scale.
@briankrebs You'd know better than I, but transparently publishing their process for taking pitches might help. Why is "a proper fact-check process" an optional step at Wired for an article from a new contributor, rather than a requirement? Why publish before paying the writer (the thing that raised suspicions)? This wasn't a hot breaking story. Process could fix this, but the mea culpa makes no mention of changes to prevent it in the future, just that the problems "should generally" have been caught. AI isn't the root problem here; a hand-crafted version would have worked just as well.
To be clear, I was talking about blocking finished sites at the retail end of things, not about blocking scrapers.