May 27, 2025
What the Walters v. OpenAI Ruling - and Emerging State & Federal AI Laws - Mean for Affordable Housing
Generative AI holds enormous potential to transform affordable housing. It promises faster workflows, smarter decision-making, and more responsive resident services - all in a space that operates with thin margins and overstretched staff. But with that potential comes real risk.
A recent legal decision - Walters v. OpenAI - has brought these risks into sharp focus. The case centered on a false statement generated by ChatGPT, and the court’s ruling signaled a growing legal reality: if AI makes a mistake, and you were warned it might, that mistake is now your responsibility.
For affordable housing organizations navigating tight regulatory environments and making high-stakes decisions, that shift in liability is not just a legal nuance: it’s a serious operational concern.
And the stakes are rising. Across the US, states are moving forward with AI and data privacy regulations that go beyond simple transparency requirements. Meanwhile, a sweeping federal proposal—nicknamed “the Big Beautiful Bill”—could override state-level efforts entirely, centralizing AI governance in Washington for the next decade. The result? A fragmented, rapidly evolving legal landscape with real consequences for organizations adopting AI.
Let's unpack what Walters v. OpenAI actually means, why the “reasonable user” standard doesn’t work for our sector, and how pending regulation - at both state and federal levels - should shape how we deploy AI in affordable housing.
The Legal Backdrop: What Happened in Walters v. OpenAI
In Walters v. OpenAI, journalist Fred Riehl asked ChatGPT to summarize a real court filing. The AI responded with a fabricated claim that public figure Mark Walters had been sued for embezzling funds. This was entirely false - the real lawsuit involved neither Walters nor any embezzlement allegations.
When Walters sued OpenAI for defamation, the court ruled in favor of OpenAI. Why? Because:
ChatGPT’s disclaimers were clear. OpenAI’s terms warned that ChatGPT “may produce incorrect outputs” and should not be relied upon without human verification.
The user had prior knowledge. Riehl admitted he knew ChatGPT could hallucinate.
The false information was quickly debunked. Riehl checked the real case within 90 minutes and found the AI-generated summary was completely fabricated.
No actual damages were proven. Walters testified that he suffered no real harm.
The court concluded that under these circumstances, a “reasonable user” wouldn’t treat the AI’s output as fact - and OpenAI couldn’t be held liable.
The Rise of the "Reasonable User" Standard
At first glance, Walters v. OpenAI might seem like a case about journalism and tech companies. But it sets a powerful precedent: AI developers may be shielded from liability if they provide adequate disclaimers, shifting the burden of accountability, accuracy, and verification onto users.
In many industries, that might be manageable. But in affordable housing?
The risks are entirely different.
1. We Handle Sensitive, Regulated Data
Affordable housing professionals manage highly sensitive personal information - resident incomes, criminal background data, medical needs, investor information, and more. A hallucinated number, a misclassification of eligibility, or a fabricated waterfall calculation doesn’t just create confusion; it could lead to:
Unjust lease terminations
Fair housing violations
Compliance misreporting
Legal action, financial loss, and reputational harm
AI-generated inaccuracies in this space don’t just misinform - they harm.
2. Our Operations Don’t Allow for Constant Verification
In Walters, the user had the time and resources to verify the output. But in affordable housing, where staff juggle multiple tasks and processes at once, there isn’t time to fact-check every line of AI-generated output.
And when AI is integrated into internal tools - resident portals, CRM systems, property management platforms, fund management platforms - the line between “AI-generated” and “system output” blurs fast. Staff may not even realize they’re dealing with AI content.
3. Our “Reasonable User” Is Different
The legal idea of a “reasonable user” presumes a level of technical understanding, context awareness, and attentiveness that simply isn’t universal - especially not in frontline housing roles. Expecting every staff member to know the limitations of generative AI and act accordingly is both unrealistic and unsafe.
State-Level Regulation Is Here - But It's a Patchwork
The Walters case is just one piece of the puzzle. What’s happening in state legislatures may be even more consequential for how AI is adopted - and governed - in affordable housing.
California: The Data Privacy Trailblazer
The California Privacy Rights Act (CPRA) already gives consumers substantial control over how their personal data is used, and new regulations focused on automated decision-making are in development. These rules would require:
Disclosures when AI tools are used in decision-making
Rights to opt out of automated decisions
Audits for bias and disparate impact
For affordable housing providers using AI in eligibility screening, communications, or even maintenance triage, this could mean new compliance obligations - especially around transparency and resident rights.
Colorado: A Comprehensive AI Consumer Protection Law on the Books
Colorado recently passed SB24-205, one of the most sweeping AI governance laws in the country. Going into effect in 2026, it includes:
Developer and deployer responsibilities for “high-risk” AI systems
Documentation and impact assessments
Consumer protections, including the right to human review
Housing-related decisions - especially those tied to credit, housing eligibility, or services - would likely fall under “high-risk.” If you use AI to determine leasing priority, screen applications, or allocate resident services, you will likely be subject to these rules.
Other States Are Not Far Behind
Connecticut, Washington, and Illinois are exploring use-case-based or outcome-based regulations - laws that focus not just on how AI is built, but on what it does and whether it harms people. These frameworks align more closely with fair housing concerns and civil rights protections.
The trend is clear: AI governance is shifting toward sector-specific, harm-based oversight. Affordable housing is squarely in the crosshairs - not because it’s doing something wrong, but because the stakes of getting AI wrong are too high to ignore.
The Federal Wild Card: Preemption or Protection?
Enter the so-called “One Big Beautiful Bill Act” - the 2025 proposed federal budget legislation, weighing in at 1,110+ pages, that quietly lays the groundwork for a national AI governance framework. A key (and controversial) provision: federal preemption.
If passed, the bill would strip states of the power to enforce their own AI laws for the next 10 years (assuming it survives 10th Amendment scrutiny). That would create a single regulatory standard and delay the stricter protections some states are already preparing to enforce.
This could simplify compliance for large tech firms—but for sectors like affordable housing, it raises real concerns:
Would a general-purpose federal law account for our sector-specific risks?
Would it dilute protections for residents whose data is at the heart of every housing transaction?
Would housing providers be left in regulatory limbo—required to use tools they can’t fully trust, with limited recourse when things go wrong?
Would there be an AI backlash in the vacuum of an unregulated environment?
Preemption might save developers headaches, but it could remove critical levers for resident protection and ethical AI deployment in housing - or sow fear, uncertainty, and doubt in the shadow of existing state and federal regulations.
What Affordable Housing Providers Should Do Now
While we wait for courts, regulators, and Congress to hash it all out, housing organizations can’t afford to sit still. AI is already being adopted in:
Maintenance request triage
Resident FAQs and chatbot support
Policy summarization and document generation
Applicant communication and follow-up
Budget and operating report analysis
The use cases are growing. But without guardrails, so is the risk.
Here’s what we recommend:
1. Develop an Internal AI Governance Framework
Don’t wait for a federal mandate. Build a clear internal policy that covers:
Approved tools and vendors
Use case review procedures
Human-in-the-loop requirements (sketched in the example below)
Error monitoring and incident response
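To make the human-in-the-loop requirement concrete, here is a minimal sketch of the pattern in Python. Everything in it is hypothetical - AIDraft, ReviewQueue, and the field names are illustrative, not a real product API - but the core idea holds: AI-generated content is quarantined by default, and only a named staff member can release it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Illustrative sketch only - AIDraft and ReviewQueue are hypothetical
# names, not a real library. The point is the shape: AI output is
# quarantined until a named human reviewer signs off.

@dataclass
class AIDraft:
    content: str                  # text produced by the AI tool
    source_tool: str              # which approved tool generated it
    use_case: str                 # e.g. "resident_faq", "policy_summary"
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    approved_by: Optional[str] = None  # stays None until a human signs off

class ReviewQueue:
    """Holds AI drafts until a staff member verifies them."""

    def __init__(self) -> None:
        self._pending: list[AIDraft] = []

    def submit(self, draft: AIDraft) -> None:
        # AI output enters the queue; nothing is sent automatically.
        self._pending.append(draft)

    def approve(self, draft: AIDraft, reviewer: str) -> AIDraft:
        # A named staff member takes responsibility for the content.
        draft.approved_by = reviewer
        self._pending.remove(draft)
        return draft

def release(draft: AIDraft) -> str:
    """Refuse to release anything a human has not verified."""
    if draft.approved_by is None:
        raise PermissionError("AI draft has not been human-verified")
    return draft.content

# Usage: the chatbot's answer only reaches the resident after sign-off.
queue = ReviewQueue()
draft = AIDraft("Your recertification packet is due June 1.",
                source_tool="resident-chatbot", use_case="resident_faq")
queue.submit(draft)
release(queue.approve(draft, reviewer="j.smith"))
```

The same gate doubles as an error-monitoring hook: every draft a reviewer rejects or corrects is a data point about where your tools fail.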
2. Vet Vendors and Tools Like Any Other Critical System
Ask hard questions:
What’s the training data?
How does the tool handle errors?
What disclaimers are shown to users—and are they sufficient?
Can you audit decisions or outputs? (See the sketch below.)
If you're using a tool in a regulated context, your due diligence needs to match that level of scrutiny.
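One way to turn “Can you audit decisions or outputs?” from a vendor question into an internal control is to log every AI call yourself. Below is a minimal sketch under stated assumptions: your vendor exposes a callable client, and `call_model` and the log file name are placeholders, not a real API.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative sketch only. `call_model` is a placeholder for whatever
# vendor SDK you actually use; the wrapper's job is to leave an
# append-only, line-per-call audit trail you can review later.

audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))

def audited_completion(call_model, prompt: str, use_case: str) -> str:
    """Call the AI tool, recording exactly what was asked and answered."""
    output = call_model(prompt)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,   # e.g. "maintenance_triage"
        "prompt": prompt,       # what staff actually asked
        "output": output,       # what the tool actually returned
    }))
    return output
```

If a resident disputes a decision months later, a log like this is the difference between reconstructing what the tool actually said and guessing.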
3. Train Your Teams for Risk-Aware Use
Don’t assume that your staff knows what generative AI is—or how to spot when it’s wrong. Provide clear training and simple guidelines:
What AI tools are being used
What those tools shouldn’t be used for
How to verify information
When to escalate concerns
4. Advocate for Regulation That Reflects Our Realities
Join the conversation. Housing professionals need to be part of AI policymaking at the state and federal level. That means pushing for:
Sector-specific rules that recognize the stakes of our work
Protections for data and fair housing compliance
Funding and technical assistance for ethical AI adoption
Final Thought: AI in Housing Is Inevitable. Let’s Shape It, Not Survive It.
The Walters ruling tells us where courts are leaning: if you were warned that AI might be wrong, and you didn’t double-check, the blame is on you. That may fly in journalism, but it doesn’t work in affordable housing.
State regulators are stepping up—but a federal law could silence them.
Meanwhile, the tech is moving faster than the rules can keep up.
In this environment, the housing sector needs to lead - not wait. That means asking tough questions, setting our own standards, and pushing for regulation that reflects the complexity, sensitivity, and humanity of the work we do.
AI will shape the future of affordable housing.
Let’s make sure we’re the ones steering the change.