By Lucia Walinchus for Eye on Ohio.
Broadcast version by Nadia Ramlagan for Ohio News Connection, reporting for the Ohio Center for Investigative Journalism-Public News Service Collaboration.
On May 12, 2021, Chris and Angela Powers submitted a $4,000 bid for a Lawrence County Land Bank property.
It was above the Auditor's appraised value of $3,340. But the Powers really loved the riverfront site. The owner of an adjoining property had let them camp right next to it for years.
So they bid for the wooded parcel, on the northern bank of the Ohio River, as well as an adjoining parcel - not on the river, but just behind it.
The Powers attended a May 18, 2021 land bank meeting. The agenda noted that another bid for the riverfront property had been submitted by Ed Gillespie, who owns more than 30 nearby properties and has close ties to the Mayor.
One official moved to accept the Powers' bid, but then Mayor Sam Cramblit asked the Powers to leave the public meeting. They did. Upon their return, they learned that officials had "tabled" their request.
Knowing there was now someone else interested in the property, the Powers submitted another bid for the riverfront parcel, this time for $11,100. Chris Powers said that on June 9 he got a phone call from Land Bank Administrator Tom Schneider, who encouraged him to lower his bid.
The call struck Powers as fishy. He asked how much Gillespie had bid; Schneider said the bids were not public. To be on the safe side, Powers raised his bid and offered $11,150 on July 14. Powers says he called Schneider the day before a June 17 meeting to make sure everything was in order.
Schneider later advised Powers that the board had approved giving the parcel to Gillespie for $11,101.50 - $48.50 lower than the Powers' latest bid, and coincidentally $1.50 higher than their supposedly secret earlier bid. Schneider explained in an email that "due to an inadvertent oversight" the board did not have their new bid.
When the Powers complained, Lawrence County said there was nothing it could do. Officials also announced a new policy: all bids would now be submitted by phone, not email. The Land Bank conveyed the property to Ed Gillespie and his wife in September 2021.
The Land Bank sent the Powers a notice stating that they should come to the January 2022 Land Bank meeting, where the board would consider the non-riverfront property. The mailed letter, dated December 23, 2021, asked them to come to a meeting on Jan. 5, 2022. The postmark on the envelope is Jan. 12.
On its website, the organization's mission statement reads, "The Lawrence County Land Reutilization Corporation (The "Land Bank") revitalizes and strengthens Lawrence County Communities and its rural areas by preserving the current property values through the strategic demolition of abandoned, blighted and tax delinquent properties. Rehabilitating abandoned properties of potentially significant value but that are otherwise none [sic] productive due to abandonment and returns or recirculates these properties into productive use through a transparent, public and community oriented process."
A year-long investigation by Eye on Ohio paints a different picture. By combing through hundreds of property and tax records, most not available online, and applying machine learning methods, reporters found that a property's size, taxes owed, length of abandonment, and distance to the river could together predict which properties were remediated 89% of the time.
Proximity to the river was among the most important contributors to the model, excluding properties certified delinquent in the past two years (when COVID-19 temporarily altered the foreclosure process). Being closer to the river and owing less money actually made properties more likely to enter the land bank. For example, the model gave a tax-delinquent property within 0.27 miles of the Ohio River a 24% chance of remediation, versus only a 9% chance for properties farther away.
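For readers curious about the mechanics, a minimal sketch of that kind of analysis appears below. It is not the reporters' actual pipeline: the file name, column names, and choice of a random-forest classifier are hypothetical stand-ins built around the four predictors the story names.

```python
# A minimal sketch, NOT the investigation's actual code. The CSV file and
# its column names are hypothetical; only the four predictors and the
# remediation label come from the story.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("delinquent_parcels.csv")  # hypothetical records file
X = df[["acreage", "taxes_owed", "years_delinquent", "miles_to_river"]]
y = df["remediated"]  # 1 if the parcel entered the land bank, else 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Held-out accuracy; the reporters' model reportedly reached 89%.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Feature importances hint at which predictors drive the model, such as
# the story's finding that distance to the river mattered most.
for name, score in zip(X.columns, model.feature_importances_):
    print(f"{name}: {score:.2f}")
```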
In hard-hit Lawrence County, there is no shortage of blighted properties. But some land-bank parcels did not need rehabilitation - such as the undeveloped riverfront lot the Powers tried to buy.
Lawrence County has approximately 5,000 properties on its certified delinquent list. Almost 400 properties have gone through the land bank.
About two-thirds of delinquent properties were in arrears for less than $1,000, and most are not behind by more than a couple of years. Among those owing more, however, owners had failed to pay taxes for 9.6 years on average. One property had carried unpaid taxes since 1981.
And remediated properties were not among the most egregious debts: nearly 90% of land bank properties owed less than $5,000.
Local officials provided responses to public records requests, such as the county's acquisition policy, but did not respond to several follow-up questions. The Gillespies declined to comment by phone. An email address they used to request the property is no longer in service.
Ohio passed special legislation for county land banks in 2006, and the mortgage crisis of 2008 made many Ohio counties enthusiastic adopters of the model. Fast-track foreclosure policies have been key to stabilizing housing prices in some areas, but critics argue the model gives county officials too much leeway in deciding which properties get rehabilitated, since there will never be enough funding to remediate every decrepit property in the Rust Belt.
Last year, Eye on Ohio published its first look at land banks using machine-learning methods. In six counties, certain factors, such as a parcel's proximity to valuable properties or its school district, made it more likely to be picked for remediation. In some places, officials responsible for economic recovery are also in charge of property remediation.
When Al Jenkins moved north from Georgia, he remembers the date well: It was November 22, 1963, the day President John F. Kennedy was shot. The country was in an uproar, and he was panicking as well, fleeing the escalating racial violence that then gripped the South.
He remembers, with a twinge of regret, a friend who owned a gas station. When Jenkins visited home two years later, the friend asked him to come back.
"They found his body in that station," Jenkins said, his voice low.
Upon arriving in Cleveland, Jenkins spent a year as a hospital janitor, then found full-time work at a factory. He bought his first property downtown in 1985 and fixed it up. He eventually invested in several more properties in the area and rented them out. In Cleveland, Jenkins could control his own destiny. Or at least he could until recently.
The one property Jenkins does not own is the side lot next to his house that he cleaned up, landscaped, and fenced. Even a year after the initial story surfaced, the city still will not sell him that side lot.
According to a city official: "We do sell side yards and we have a fairly robust program. That said, there are areas in the city where we aggregate land for future development. The Upper Chester area (between Chester and Hough) is one such area where there has been a lot of recent interest. This is the reason Mr. Jenkins' request was denied in April 2020 - because this side yard is in a location proposed for redevelopment, specifically in-fill housing with some interested parties looking to develop low-income housing."
The city did not answer further questions about the lot, such as who, exactly, is interested in it and why Jenkins cannot be among them. Officials offered to lease him the property until development begins. He declined.
"I think it's a lost cause," Jenkins said. "I'm done, I don't know what to do. I'm baffled by it."
This project was supported by the Data-Driven Reporting Project.
Jim Crowley, Ron Calhoun, Sara Stoudt, and Rich Weiss contributed to this project.
Lucia Walinchus wrote this article for Eye on Ohio.
Reporting by Ohio News Connection in association with Media in the Public Interest and funded in part by the George Gund Foundation.
By M. Mitchell Waldrop for the National Academy of Sciences.
Broadcast version by Eric Tegethoff for Washington News Service reporting for the Pulitzer Center-Public News Service Collaboration.
The problem has gone by many names since the 2016 US presidential election first brought it to widespread attention - from "fake news," to "infodemic" during the COVID-19 pandemic, to "misinformation" in a slew of academic studies. Regardless, this toxic mix of confusion, misdirection, manipulation, and outright lies now threatens both public health and our collective faith in democratic elections - not to mention any common understanding of truth.
When it comes to solutions, though, "that's a very hard problem," says Jevin West, a biologist-turned-information scientist at the University of Washington in Seattle and co-creator of the widely used data literacy course "Calling Bullshit" (1). It's especially hard when you consider that much of the fight against misinformation has to take place on social media platforms, such as Facebook, YouTube, and X (formerly known as Twitter), which are in private hands.
Typically, these companies attack the problem with fact-checking on the most widely shared posts. But it's extremely difficult to police billions of posts per day when they're up against bad actors who've become skilled at high-tech fakery of all sorts. The toolkit ranges from software "bots" that can generate millions of fake user profiles and phony news posts to the latest artificial intelligence (AI)-powered chatbots that can fool people into thinking they're talking to another human.
And, of course, the platforms have to do their policing in today's culture-war environment, where any attempt to fact-check or moderate the discussion is instantly condemned as a plot to impose some ideological agenda.
The good news is that there has been an enormous outpouring of research into the misinformation problem in recent years. "It's a very fast-moving space," says New York University psychologist Jay Van Bavel. "Every week or two, there's another big paper." And, as researchers understand more about why people believe and share misinformation, they've begun to come up with next-generation solutions that the platforms could deploy at scale - and that don't tell people what to think (3, 4). Examples include crowdsourced accuracy ratings, nudges that encourage users to think more about accuracy before they share something, and even a kind of mental inoculation against common deception techniques.
Thus far, no one's claiming to have a complete solution. But researchers have begun to experiment with combinations of these techniques, allowing the interventions to reinforce each other and become more effective than any one of them alone. And in that work, we can begin to glimpse the outlines of a scalable, evidence-based "public health" system that could keep the misinformation plague from running rampant.
Your Brain on Social Media
At first blush, one might wonder, "Why bother policing?" Why not just let everyone see everything so they can make up their own minds?
A common retort: No one has the time, expertise, or stomach to sort through an endless flow of toxicity, harassment, and nonsense. If a platform doesn't make some effort to moderate the discussion - as virtually all the reputable ones still do - reasonable users will leave.
But another piece of the answer is that today's social media environment is rarely a neutral forum for debate. Often, the platforms actively reward the spread of misinformation - so strongly that some users routinely share posts that actually conflict with their political leanings.
In one recent laboratory experiment, for example, self-described habitual users of social media were found to share as much as eight times more misinformation than casual users (5). According to team leader Gizem Ceylan, a postdoc at the Yale School of Management, the experiment (which included four studies in all) then took a closer look at these habitual users and found that all the "likes," "reshares," and comments their posts were getting seemed to give them a rush of emotional validation akin to a high-sugar snack. And their craving for these psychic rewards had long since taught them to share the kinds of posts that got the strongest response - which were probably not carefully fact-checked articles from The New York Times. Instead, they had learned to post items that are surprising, emotionally charged, eye-catching, or zingers against people they and their friends despise: "content that exactly traces the characteristics of false information," Ceylan says, and that other researchers have shown will spread further and faster than truth (6).
Social media platforms do attempt to thwart at least some of the grifters, conspiracy theorists, and demagogues who supply those oh-so-clickable fakes. According to Indiana University computer scientist Filippo Menczer, who has spent the past decade developing tools to detect and map the activity of bots and other online deception technologies, big platforms such as Facebook, YouTube, and (pre-Elon Musk) Twitter have routinely disrupted hundreds of the secretive but highly coordinated "information operations" that are often used by politicians, advocacy groups, and foreign actors to sow discord and misinformation.
Unfortunately, Menczer adds, the platforms haven't been nearly as tough with "superspreaders," who aren't even trying to hide. "A lot of harm and a lot of misinformation is spread not by unknown bots, but by well-known, even verified users," he explains. Often, these are influencers, elites, politicians, or celebrities who are making money, pushing a political agenda, crusading for a cause, or opposing vaccines. "And platforms are very reluctant to take them down because these people have lots of followers, which means they generate a lot of engagement and the platforms make a lot of ad money from them," Menczer says.
Instead, most of the big social media platforms rely heavily on fact-checking, which has become a global industry complete with professional standards for nonpartisanship, as well as for transparency about sources, methodology, and funding. Facebook, for example, will send its most popular and widely shared posts to be evaluated by one of its 90 international fact-checking partners. Posts that these firms judge to be false or misleading are then labeled as such and downgraded in Facebook's news feed, so that fewer users see them.
But fact-checking has its limits and must be done carefully, Menczer notes. If you don't stress what's true instead of repeating the false claim, he says, "then people will remember the claim instead of that it was false," and efforts will backfire. Still, independent researchers have confirmed that carefully crafted debunking labels have a real and lasting effect (7). "Just putting that 'not true' label on something is a profound thing," says David Rand, a cognitive scientist at MIT in Cambridge, MA. "It really reduces belief and sharing of that post." In one recent fact-checking experiment carried out with users in Argentina, Nigeria, South Africa, and the United Kingdom, this belief reduction amounted to at least 0.59 points on a 5-point scale - even after 2 weeks or more (8).
Even so, there aren't enough professional fact-checkers in the world to vet billions of posts per day. And even if there were, notes Gordon Pennycook, a psychologist at the University of Regina in Saskatchewan, Canada, "you're always putting a Band-Aid on things: You can only debunk something once it's already spread and out there."
Cognitive Inoculation
To limit that spread, some researchers are trying to help users recognize misinformation on their own. At a minimum, says Sander van der Linden, a psychologist at the University of Cambridge in the United Kingdom, "we need to empower people to spot manipulation as it happens, so that they can make genuinely informed decisions" about what to share, what to buy, or even what to believe.
Although digital literacy courses like the University of Washington's "Calling Bullshit" do this well, van der Linden says, he and his colleagues have been working on faster, more broadly applicable techniques inspired by decades of research into a psychological form of inoculation.
Also known as preemptive debunking, or "prebunking," this cognitive inoculation works like a real vaccine, he says: "You expose people to a weakened dose of a falsehood, or the techniques that are used to produce falsehoods, so that people can build up mental antibodies to neutralize misinformation in the future" (9). And because it's impossible to prebunk every lie coming out about, say, climate change or immigration, and since the (reputable) social media platforms are loath to act as arbiters of truth, the best strategy, he argues, is to go for a broad-spectrum vaccine - prebunking that focuses on manipulation techniques. We've known about the bad guys' favorite tricks since the time of Aristotle, van der Linden says. Fake experts, false dilemmas, ad hominem attacks, phony tugs at the heartstrings, fearmongering against convenient scapegoats - "these are things that both the left and right can agree are misleading."
In 2019, van der Linden and Cambridge psychologist Jon Roozenbeek put prebunking to the test with Bad News, a role-playing game in which players were introduced to these manipulation techniques and then were awarded points for crafting a convincing misinformation campaign and attracting followers (10). Before-and-after tests showed that the game gave players a 20 to 25% boost in their ability to recognize real-life misinformation, van der Linden says, with the effect lasting for at least 7 to 10 days. Once their experiments were done, he says, the team made the game free and publicly available online, "and it went viral on Reddit. There were so many users coming to it that we had to buy more cloud space."
When the pandemic hit, the team launched a similar game, Go Viral, designed to inoculate players against COVID-19 misinformation specifically, van der Linden says. Backed by the UK government, the World Health Organization, and the United Nations, the game received more than 200 million views on social media, according to the UK government. As gratifying as that response was, though, the games took at least 5 to 20 minutes to complete - an eternity on social media - and once the games were out of the lab, they mostly attracted people who were already open to their message.
So Roozenbeek, van der Linden, and others devised an intervention with wider reach (11). Working with Google, they produced a series of 2-minute videos that use humor and pop-culture references to illustrate six of the most common manipulation techniques. In one example, they depict the sort of false dichotomy often used to push people toward extremism. In a clip from Star Wars: Episode III: Revenge of the Sith, Anakin says, "If you're not with me, then you're my enemy!" Obi-Wan responds that "only a Sith deals in absolutes!" The video's narrator concludes by intoning, "Nobody wants to be a Sith."
After extensive refinement in the lab, the videos were field-tested by dropping them into time slots normally used for ads on Google's YouTube platform, van der Linden says. This campaign went out to millions of real YouTube users, he says, "and we were able to randomly select about 30% who would either be exposed to our treatment video or to a control video about freezer burn." Then, in a follow-up about 18 hours later, the people who had seen the treatment video were 5 to 10% better than the controls at recognizing that manipulation technique in a test headline.
That isn't a transformative change. But it's good enough that YouTube has implemented the videos widely in parts of Europe and elsewhere. And while it's true that the effect fades over time, van der Linden says, the team has confirmed that giving people a "booster shot" in the form of a short reminder video can keep the cognitive immunity going for at least 3 months (the longest time period they've tested).
Think More, Share Less
Still, inoculation will only have a limited impact on the misinformation problem by itself. If nothing else, Ceylan says, "it's putting a lot of responsibility on the individual," as opposed to the social media algorithms that deliver psychic rewards when users share misinformation. "I don't think user behavior will really change unless we have structural changes in that reward mechanism," she says.
This is why another strand of misinformation research tries to nudge social media users into focusing less on engagement when they share something, and more on accuracy - or, as West puts it in a sticker he gives his students: "Think More, Share Less."
When you ask users to stop and think, as Rand and Pennycook did in 2019, they are surprisingly good at judging a post's accuracy - regardless of political party affiliation. Says Rand: "If you ask 15 to 20 random Facebook users, 'How accurate does this seem?' and then average their responses, they do a good job of reproducing what a professional fact checker would come up with" (12). This is consistent with the extensive literature on the wisdom of crowds, Rand says, and suggests that crowdsourcing could be a reliable way for platforms to label inaccurate content and overcome the scaling challenge to professional fact-checking.
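As a toy illustration of that idea (every headline, score, and threshold below is invented), averaging a handful of lay ratings per post and comparing the result with an expert's score takes only a few lines:

```python
# Toy illustration of crowdsourced accuracy ratings; all data are invented.
from statistics import mean

# 1-5 accuracy scores from ~15 randomly chosen users per headline
crowd_ratings = {
    "Miracle cure doctors won't tell you about": [1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2],
    "City council approves new bridge budget":   [4, 5, 4, 4, 5, 4, 5, 4, 4, 5, 4, 4, 5, 5, 4],
}
expert_scores = {  # what a professional fact-checker might assign
    "Miracle cure doctors won't tell you about": 1,
    "City council approves new bridge budget": 5,
}

for headline, scores in crowd_ratings.items():
    crowd_avg = mean(scores)
    print(f"{headline!r}: crowd={crowd_avg:.2f}, expert={expert_scores[headline]}")
    if crowd_avg < 2.5:  # a platform could down-rank posts below some cutoff
        print("  -> down-rank as likely inaccurate")
```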
In 2021, Rand, Pennycook, and their colleagues extended this line of thinking to nudges they call "accuracy prompts" (13). Again, Rand says, the idea is consistent with previous work in psychology on the priming effect: "If you can just get people to think about accuracy, then they're less likely to share bad content." And it's not even hard, he says: All you need is a pop-up window asking users to, say, rate the accuracy of a random headline or state whether accuracy is important to them - anything that makes them think about truth.
The group verified their lab results with a real-world trial that asked more than 5,300 users of (pre-Musk) Twitter to rate the accuracy of a single nonpolitical headline. In the 24 hours that followed, tweets shared by these users showed a definite skew toward sources that independent fact-checkers had already rated as more reliable, such as CNN and The New York Times, and away from those deemed less reliable, such as Breitbart and The Daily Caller. Subsequent experiments have confirmed that accuracy prompts reduce bad-content sharing by roughly 10% - not dramatic, Rand says, but worthwhile, and easy for platforms to implement routinely (14).
At the Hospital del Mar Research Institute in Barcelona, meanwhile, neuroscientist Clara Pretus has joined with NYU's Van Bavel and others on preliminary tests of a more direct nudge toward accuracy: giving users a button to tag posts as "misleading" (15). In a lab experiment with some 2,500 US and UK users, Pretus says, 25% reported that a high misleading count would make them less likely to share a post - with a critical caveat.
In an earlier study of more than 1,600 social media users in the United States and Spain (16), Pretus says, she and her coauthors found that these highly politicized individuals were some of the most prolific sharers of misinformation. "It's an identity thing," she says. "They not only use it as a way to campaign for their cause, but to signal that they are part of their group." But this also makes them the most profoundly mistrustful of nonbelievers. "So an important aspect of our current intervention is that users only see 'misleading' tags from other people in their group," she says. And with that restriction, she says, the tags seem to be quite effective. She and her colleagues are now working on the next step: testing the technique with real users in a real social media environment.
Defense in Depth
It's widely agreed that none of these interventions by itself is a silver bullet against misinformation. The way forward is to find methods of combining debunking, prebunking, accuracy nudges, and more.
One preliminary example can be found in unpublished research cofunded by Google, van der Linden says: It suggests that when the inoculation videos are alternated with accuracy nudges, the effects of both interventions are enhanced. In another example, West recently joined with colleagues at the University of Washington to develop a computer simulation that treats the spread of misinformation as a kind of contagion and that's calibrated with real-world data on 10.5 million misinformation tweets during the 2020 election. "It's a very simple model," West says. "But it does a great job of capturing the dynamics that we see in the real data," such as surges of activity when a post goes viral. From there, he and his colleagues used the simulation to model the effect of potential interventions such as fact-checking, accuracy nudges, or outright suspension of the most active superspreaders - and found that, when used in combination, these interventions could reinforce one another and cut the total volume of misinformation by 50% or more (17).
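The UW model itself isn't reproduced here, but a stripped-down contagion sketch, with every parameter invented for illustration, conveys the core dynamic: each intervention shaves a fraction off the resharing probability, and the cuts compound until the cascade can no longer sustain itself.

```python
# A toy misinformation-contagion model, loosely in the spirit of the UW
# simulation described above. Every number here is an invented assumption,
# not a figure from the study.
import random

random.seed(0)
USERS = 10_000       # size of the simulated network
REACH = 10           # feeds each share lands in (assumed)
BASE_SHARE = 0.12    # base probability an exposed user reshares (assumed)
FACT_CHECK = 0.30    # assumed fractional cut in resharing from debunk labels
NUDGE = 0.10         # assumed fractional cut from accuracy prompts

def total_spread(interventions=()):
    p = BASE_SHARE
    for cut in interventions:
        p *= 1 - cut                 # interventions compound multiplicatively
    sharing, total = 10, 10          # ten seed accounts post the false claim
    for _ in range(30):              # thirty time steps
        exposures = sharing * REACH
        new = sum(random.random() < p for _ in range(exposures))
        new = min(new, USERS - total)
        total += new
        sharing = new
        if sharing == 0:
            break
    return total

print("no intervention:    ", total_spread())
print("fact-checking only: ", total_spread((FACT_CHECK,)))
print("fact-check + nudge: ", total_spread((FACT_CHECK, NUDGE)))
```

In this toy run, the baseline cascade saturates the simulated network, while the combined interventions push the effective reproduction number below one and the spread fizzles out - a qualitative echo of the combination effect the study describes, not its actual numbers.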
But this coordinated defense is not something that individuals can do by themselves, West says: The need for some sort of coordinated action from governments, companies, and other institutions has long since become unavoidable.
That's especially urgent, given the startling advances in generative AI. This category of algorithms includes incredibly lifelike "deepfakes" - images, videos, and audio files that are almost impossible to tell from the real thing - as well as equally realistic text and conversations powered by the likes of ChatGPT. "AI is now democratizing the creation of propaganda," van der Linden says. "Anyone with access to the internet can have ChatGPT automate a campaign to deceive people at scale," he says - and even do mass microtargeting, by feeding readily available browsing data to the AI and telling it to tailor a manipulative message for each individual.
The big social media platforms are moving cautiously, reluctant to do anything that might hurt their engagement numbers (and thus their advertising income), Ceylan says. But the reputable ones still have a strong incentive to keep things under control, she points out. According to her research, most people say they're hesitant to engage or reshare posts and that it's increasingly hard to know which information is true or false. "If we can create an environment where people can trust that the information they see is going to be accurate," Ceylan says, "I think it's going to impact the engagement positively rather than negatively."
M. Mitchell Waldrop wrote this article for the National Academy of Sciences.
Car dealers in California regularly advertise cars with unrepaired safety recall defects, even though it is illegal for dealers to sell, or even offer for sale, vehicles that fail to comply with any of the hundreds of Federal Motor Vehicle Safety Standards.
For example, a recent check of CarMax.com revealed a 2020 Lincoln Aviator for sale: a car under recall for intermittent backup-camera failures, a defect that has no permanent fix.
Janette Fennell, president of the nonprofit advocacy group KidsandCars.org, is calling on the Department of Motor Vehicles, which licenses car dealers, to crack down.
"They can issue fines. They can suspend or revoke licenses. They could refer the cases to law enforcement agencies," Fennell outlined. "But the DMV has failed to exercise that authority. By allowing those blatant violations, they've endangered lives."
The rule on backup cameras came about after drivers unwittingly backed up and hit children. The DMV said it investigates credible allegations made via its online complaint form. CarMax, in a statement, said it shares "vehicle-specific open recall information in-store and online to ensure our customers know about open recalls prior to purchase" and said the current recall repair system is geared toward requiring manufacturers to pay for repairs at their own dealerships.
CarMax guarantees its vehicles pass a 125-point inspection.
Rosemary Shahan, president of Consumers for Auto Reliability and Safety, urged people to enter the vehicle identification number into the National Highway Traffic Safety Administration database before they buy, to see if the car is under recall.
"Check the VIN, even if the company said that the vehicle passed an inspection, it may have a lethal safety defect that has been killing people," Shahan stressed. "And always insist that they fix it before you buy it. But with some of these recalls, there is no fix available yet."
Consumers can also look up a car's history using the National Motor Vehicle Title Information System.
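Readers comfortable with a little scripting can automate that two-step lookup. The sketch below chains two public NHTSA web services, one to decode the VIN and one to list recall campaigns. The endpoint paths and response fields reflect NHTSA's public documentation at the time of writing and should be verified before use; the example VIN is made up.

```python
# A sketch of automating a recall check; NOT an official tool. Endpoint
# paths and field names follow NHTSA's public docs at the time of writing,
# so verify them, since APIs change. The example VIN is fabricated.
import requests

def recalls_for_vin(vin: str):
    # Step 1: decode the VIN into make/model/year via NHTSA's vPIC service.
    decoded = requests.get(
        f"https://vpic.nhtsa.dot.gov/api/vehicles/DecodeVinValues/{vin}",
        params={"format": "json"}, timeout=10,
    ).json()["Results"][0]

    # Step 2: list recall campaigns for that make/model/year.
    resp = requests.get(
        "https://api.nhtsa.gov/recalls/recallsByVehicle",
        params={"make": decoded["Make"], "model": decoded["Model"],
                "modelYear": decoded["ModelYear"]}, timeout=10,
    ).json()
    return [(r.get("NHTSACampaignNumber"), r.get("Component"))
            for r in resp.get("results", [])]

# Campaign lists are per model, not per VIN, so confirm whether a specific
# car was already repaired using NHTSA's own VIN lookup at nhtsa.gov/recalls.
for campaign, component in recalls_for_vin("5LM5J7XC0LGL00000"):
    print(campaign, component)
```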
Disclosure: Consumers for Auto Reliability and Safety Foundation contributes to our fund for reporting on Consumer Issues, Environmental Justice, and Social Justice.
Ohioans seeking help from companies promising to reduce or eliminate debt or fix their credit scores are getting duped, and often are left financially worse off, according to a new report.
Judy Dollison, president of the Better Business Bureau of Central Ohio, said the rise in student loan, medical and credit card debt has increased demand for such services. The problem, she said, is not all companies are legitimate.
"BBB has had more than 12,000 complaints and negative reviews combined about credit and debit assistance companies," Dollison reported. "That just shows you that this is a problem."
The Better Business Bureau advised using a credit report service such as AnnualCreditReport.com, and calling the debt holders yourself to attempt to negotiate a lower payment or interest rate.
Dollison pointed out that sketchy debt relief, debt consolidation and credit repair companies offer quick and extensive financial fixes, and use high-pressure tactics to get consumers to quickly pay upfront fees.
"What we find is, if it sounds too good to be true, it usually is," Dollison explained. "Credit and debit repair actually take months, if not years, to solve."
If a legitimate company has bad business practices, Dollison advised, people can file a complaint or write a customer review. But if they believe the organization is a scam, she said they should use the Better Business Bureau's Scam Tracker site to report it.
"You cannot only report when you've been approached by a scammer," Dollison noted. "You can also look up the situation that you're in, to see if others have reported that as a scam."
She added that if you are seeking out a company's services, you should not be rushed, and should avoid giving away any personal banking information until you have evidence the company is operating legitimately.
Reporting by Ohio News Connection in association with Media in the Public Interest and funded in part by the George Gund Foundation.