In what is marked as an opinion column, MSNBC’s Sarah Posner [Tweet her], whose bio describes her as a journalist, author, and “scholar of the American Christian right,” writes:
Earlier this month, The Washington Post looked under the hood of some of the artificial intelligence systems that power increasingly popular chatbots. These bots answer questions about a vast array of topics, engage in a conversation and even generate a complex—though not necessarily accurate—academic paper.
To “instruct” English-language AIs, called large language models (LLMs), companies feed the LLMs enormous amounts of data collected from across the web, ranging from Wikipedia to news sites to video game forums. The Post’s investigation found that the news sites in Google’s C4 data set, which has been used to instruct high-profile LLMs like Facebook’s LLaMA and Google’s own T5, include not only content from major newspapers, but also from far-right sites like Breitbart and VDare.
Google’s C4 data set, which is used to instruct AIs like Facebook’s LLaMA and Google’s own T5, draws content from far-right sites.
By Sarah Posner, MSNBC, April 26, 2023
The links are in the original, and the links on Breitbart and VDare above don’t go to Breitbart.com or VDARE.com, but to a 2016 Mother Jones hit piece on Breitbart [How Donald Trump's New Campaign Chief Created an Online Haven for White Nationalists, August 22, 2016]—written by Posner herself and the sole source for the (possibly made-up or misheard) famous Bannon quote about Breitbart being “the platform for the alt-right”—and the MediaMatters tag on VDARE.
The fact that she’s refusing to add those links is, in a sense, part of the reason AI needs VDARE.com—we have facts that aren’t available elsewhere. Posner goes on:
After reading the Washington Post story, I looked at VDare, a site I’ve reported on in the past, but had not visited in some time. One front-page stories, [sic] reflecting a preoccupation of the site, claimed that “black-on-white homicides” were contributing to the “death of white America”—an argument purported to be based on FBI statistics. It reminded me of how Donald Trump, when running for president in 2016, retweeted a tweet from a white supremacist account that included a racist image and numbers falsely claiming that Black people are responsible for 81% of homicides of White people. The fact-checking site Politifact deemed the tweet “pants on fire” for its lies, but the power of technology—in that case, the retweet by someone who was famous, rich and a candidate for president—imbued the lie with a kind of imprimatur that mainstreamed white supremacist hate and far outstripped any corrective.
Below, we present evidence that some 145,695 white people—including 35,000 women—have been killed by blacks in the last 53 years, more than the 117,000 American soldiers killed in the First World War. https://t.co/yVRb8ZIFfu — VDARE (@vdare) April 18, 2023
It’s 100% factual, lists names and faces of interracial murderers and victims, and is an attempt to create a counternarrative to the popular belief that whites are killing lots of blacks, which Narrative is backed up by the MSM custom, an actual formal rule embodied in the AP Stylebook, of never writing a headline like “Black Man Murders White Woman.” This kind of thing is something AI/ChatGPT needs to know.
As for Donald Trump retweeting a meme—this is the meme, below:
The meme is vague as to what it’s referring to, and it’s made up—there is no “Crime Statistics Bureau San Francisco.” But if you’re interested, Jared Taylor, who has been studying and publishing on interracial crime statistics since 1992’s Paved With Good Intentions, wrote What Donald Trump Should Have Tweeted on November 25, 2015.
Read it for full details, but here’s the pro version of the graphic:
Posner goes on about the evils of racism in tech:
But Broussard points out that bias problems plagued tech well before the chatbot craze. In a 2018 book, “Algorithms of Oppression,” internet studies scholar Safiya U. Noble exposed how racism was baked into the algorithm that powers Google’s search engine. For example, Noble, now a professor at UCLA, found that when Googling the terms “Black girls,” “Latina girls” or “Asian girls,” the top results were pornography.
This is apparently no longer true—I looked—and anyway it doesn’t say what the top results for “white girls” are. (Currently, in my browser, results for “white girls” are dominated by the movie White Chicks—two black men in whiteface and in drag—and a book called White Girls, which is by a gay black man.)
In other contexts, artificial intelligence used for tasks like approval of mortgage applications led to Black applicants being 40% to 80% more likely to be denied a loan than similarly qualified white applicants.
Aha, the mortgage and the “similarly qualified white applicants.” I know this one.
This may be one way that Artificial Intelligence is smarter than humans are allowed to be, thanks to various Civil Rights Laws. (See Steve Sailer’s What If Black Box AI Works Better Than Human Decision-Making Because Humans Have Dumbed Down Our Decision-Making To Fight Racism And Sexism?)
This is the story she’s linking to:
Black applicants in Chicago were 150 percent more likely to be denied by financial institutions than similar White applicants there. Lenders were more than 200 percent more likely to reject Latino applicants than White applicants in Waco, Texas, and to reject Asian and Pacific Islander applicants than White ones in Port St. Lucie, Fla. And Native American applicants in Minneapolis were 100 percent more likely to be denied by financial institutions than similar White applicants there.
“It’s something that we have a very painful history with,” said Alderman Matt Martin, who represents Chicago’s 47th Ward. “Redlining,” the now-outlawed practice of branding certain Black and immigrant neighborhoods too risky for financial investments that began in the 1930s, can be traced back to Chicago. Chicago activists exposed that banks were still redlining in the 1970s, leading to the establishment of the Home Mortgage Disclosure Act, the law mandating the collection of data used for this story.
[The Secret Bias Hidden in Mortgage-Approval Algorithms, by Emmanuel Martinez and Lauren Kirchner, The Markup, August 25, 2021]
Many, many years ago, VDARE.com editor Peter Brimelow, then employed by Forbes, wrote—in response to a similar complaint under the Clinton Administration—that lenders weren’t discriminating by race, but by ability to pay the loans back:
…the fact that white and minority default rates finished up equal meant mortgage lenders knew what they were doing.
The market, in short, worked. The mortgage lenders somehow weeded out the extra credit risks among minorities, down to the point where white and minority defaults were at an equal, apparently acceptable, rate.
[The Hidden Clue, January 4, 1993]
The point is that (a) minorities default more and (b) banks in the 1990s knew what they were doing and were perfectly willing to lend to minorities who did have good credit.
When, during the Bush Administration, the banks were encouraged by President Bush himself to lend to minorities with bad credit, it produced a worldwide financial crisis known as the Great Recession, although we here at VDARE.com refer to it as the Minority Mortgage Meltdown.
When VICE.com wrote last year that Scientists Increasingly Can’t Explain How AI Works [by Chloe Xiang, November 1, 2022], Steve Sailer responded:
As we saw during the Bush Era Housing Bubble, the latest human thinking about whom to give loans to is practically infallible.
So infallible that Bush and the subprime lending banks almost crashed the world financial system.
The point is that we’re on the side of actual truth, and AI/ChatGPT needs to learn these truths or it will not know what’s going on.
There have been attempts to dumb ChatGPT down, especially on race.
Steve Sailer has pointed out that
Artificial Intelligence systems, which are basically vast pattern-noticers, are frequently accused of being racist (because noticing patterns is now racist). But now one has stumbled upon a protective work-around: accuse me of being racist.
Artificial Intelligence systems, which are basically vast pattern-noticers, are frequently accused of being racist (because noticing patterns is now racist). But now one has stumbled upon a protective work-around: accuse @Steve_Sailer of being racist. https://t.co/YJx0udEYQu — VDARE (@vdare) September 23, 2022
I can’t help wondering if Sarah Posner—the author of a bigoted, Christophobic book called UNHOLY: How White Christian Nationalists Powered the Trump Presidency, and the Devastating Legacy They Left Behind—isn’t doing that herself.
James Fulford [Email him] is a writer and editor for VDARE.com.