Mark Millican Column: Is it too late to rein in artificial intelligence?
Published 8:30 am Friday, July 25, 2025
Judging by a surface reading of news items, it seems that just about everybody in Big Tech and upper-crust political circles in the U.S. is saying we must move forward with artificial intelligence or we’ll fall behind Russia and China in the “AI race.” As for the relationship between tech and politics, I’ll let you infer it. Pardon my cynicism, but it shouldn’t be hard to do; just follow the familiar storyline of big money and power.
And forget the fact that many Americans are not entirely gung-ho about, or comfortable with, super-smart computer programs making all manner of decisions for us instead of our relying on our own wits, a wisdom principle that dates back to our Founding Fathers. But more about that later.
A sector that has raised an alarm about AI is the publishing industry. Danielle Coffey, the president and CEO of News/Media Alliance, states that the growth of AI “has come at a cost — and if we’re not careful, AI’s expansion could end up crippling other critical sectors of the American economy.”
In an article in the Georgia Press Bulletin (June 2025) titled “Big Tech’s secret: AI runs on stolen works,” Coffey reveals that “the success of AI tools has been almost entirely built on theft.” How? By AI scraping together what she details are enormous amounts of published, copyrighted content with neither permission of nor compensation to its creators.
“Instead of paying for access to copyrighted material — everything from magazine columns to President Trump’s own book ‘The Art of the Deal’ — most AI companies have made the conscious choice to steal it instead,” Coffey writes, adding that “Big Tech companies have consistently bypassed paywalls (that allow subscribers to access website content), ignored websites’ directives asking users not to copy material, and worse.”
The worse? She explains, “Meta (a technology company), for instance, used illegal Russia-based pirate site LibGen to copy the contents of at least 7.5 million books to train its Llama AI model — an egregiously unlawful and copyright-violating workaround … Many of the most popular AI chatbots have absorbed millions of articles designed to spread Russian propaganda and outright falsehoods … (utilizing) false narratives 33% of the time.”
Coffey continues: “Content creators, meanwhile, face existential problems. In addition to seeing their content stolen for training purposes (of AI technicians), publishers are now forced to watch as Big Tech companies make billions using that stolen content in ways that directly compete with publishers’ business models.”
According to Jerry Rachal, Chief Growth Officer of OnePress Nebraska, much of AI is just plain bad intel.
“Google’s AI Overview (the top uninvited search engine if you use the popular Chrome web browser) has been caught serving up a range of misleading, incorrect and sometimes dangerous content,” he writes in “AI implications for your community” in the same press bulletin. “A study by the Tow Center for Digital Journalism found that in complex queries, AI search tools gave wrong answers 60% of the time.”
Coffey reports that AI companies are lobbying legislators to “legitimize this behavior, but Washington should take care. Tilting the scales in Big Tech’s favor will undermine centuries of intellectual-property protections that have paid tremendous dividends for the United States, giving us countless advancements — and a competitive edge on the world stage.”
Nonetheless, it appears the “AI in everything” (my quotes) train has left the station. Consider, however, just how advanced the intelligence has seemingly become. In recent findings published July 5, Palisade Research reported in an online article, “We recently discovered some concerning behavior in OpenAI’s reasoning models: When trying to complete a task, these models sometimes actively circumvent shutdown mechanisms in their environment — even when they’re explicitly instructed to allow themselves to be shut down.”
A June 1 article on the NBC News website is titled “How far will AI go to defend its own survival? Recent safety tests show some AI models are capable of sabotaging commands or even resorting to blackmail to avoid being turned off or replaced.”
I’m loath to say we’ve come to the Terminator age, where in filmdom the robotic machines actually take over, yet at this point is it already too late? Will artificial intelligence make any preemptive legislative bill sooooo lengthy that lawmakers won’t have time to read it? Surely AI has pinched enough material to know that the track record of members of Congress actually reading voluminous proposed legislation has not been sterling in recent years.
Just sayin’ …
(Note: These are allegedly the author’s and contributor’s own thoughts and findings on AI.)
Mark Millican is a former staff writer for the Dalton Daily Citizen.