Podcast episode

Who’s accountable when algorithms define the housing market?

30 min
March 4, 2026

Featuring

Host
Maiclaire Bolton Smith
Vice President, Product Marketing
Cotality
Speakers
Amy Gromowski
Head of Data Science 
Cotality

Overview

While AI is transforming real estate by recalibrating markets and pricing risk faster than ever, it faces a significant trust gap.

  • Professionals must be able to see where data comes from, its diversity, and how it is governed.
  • Quality control remains a human responsibility. Cotality’s From House to Home survey reveals homebuyers still overwhelmingly prefer human expertise over AI-only tools.
  • Without human intervention and new data infusion, AI risks becoming a "closed loop".

A conversation with Amy Gromowski and Maiclaire Bolton Smith

For decades, homeownership has been a human-led paper trail. Today, AI is stepping in, recalibrating markets and pricing risk faster than ever. But as algorithms take over the heavy lifting, a critical gap is forming: while the industry chases speed, the public is searching for confidence.

If models begin to define reality rather than reflect it, we risk losing the human thread of transparency that keeps the market functioning. This episode explores how to build trust by securing high-quality data, designing ethical models, and using constant human oversight to keep AI anchored in reality.

Beyond the Buildings host Maiclaire Bolton Smith welcomes Cotality's Head of Data Science, Amy Gromowski, to unpack AI from a human perspective in the homebuying process.

In this episode:

02:25 – How do we balance transparency in AI technologies, especially AVMs?  

04:40 – What does the intersection of confidence and speed look like for people to adopt digital tools?

09:33 – How can we help encourage trust in the market as it changes?

12:02 – Is there a risk that models will stop reflecting reality and instead define it?

17:27 – How can we ensure people trust they’re seeing an accurate view of today’s market?

20:45 – Are regulators ready to incorporate predictive models into their policy decisions?

24:34 – Allie Barefoot breaks down the latest property market numbers in The Sip.

25:18 – Would you trust AI-driven valuation without human review to guide your own homebuying decisions?

Transcript

Amy Gromowski:  

I think in order for us to really transform our markets, we need to go through a journey where we build this kind of trust with AI. And so when we think about running our businesses with it, there's going to be a period of time where you're not just going to turn it over to AI and say, "Oh, I trust this," right? You're going to look for the supporting evidence. As a service provider and data solutions provider in the property ecosystem, providing that kind of transparency, where the user can see the reasoning and the data, is going to be the difference maker, I think, for adoption.

Maiclaire Bolton Smith:

Welcome to Beyond the Buildings by Cotality. I am your host, Maiclaire Bolton Smith, and I’m just as curious as you are about everything that happens in the property industry. On this podcast, we satisfy our collective curiosity, explore questions from every angle, and look beyond the obvious. With every conversation, we illuminate what is possible.

AI is changing how property is priced, financed, and insured. It's recalibrating markets faster than human confidence can follow. As a result, each gain in efficiency exposes big questions about fairness, transparency, and trust. As ownership becomes increasingly digital, the stakes for accountability have never been higher. Because at the end of the day, housing may be powered by data, but it's fundamentally lived by people. And if we lose that human thread of transparency, we risk losing the very thing that makes the market work: trust. So, to talk about AI from a human perspective in the home buying process, we have Amy Gromowski, Cotality’s head of data science, here today. Amy, welcome back to Beyond the Buildings.

Amy Gromowski:

Hi Maiclaire, thanks for having me again.

Allie Barefoot:

Before we get too far into this episode, here's a friendly reminder about how to see what's coming up next in the property market. To make it easy, we curate the latest insight and analysis for you online. Find us using the handle @Cotality on all of our social media channels. But now, let's get back to the show.

Maiclaire Bolton Smith:

Okay, well let's get right into it. Let's start with the new housing dynamic. Most people don't know why a model has decided what a home’s worth or how much insurance costs, but what they do know is they want to understand the process better. We've seen in our own survey analysis that people across the U.S. say that they want human guidance along with the convenience of AI-powered systems. So, Amy, how do you see this balancing act of transparency in AI technologies unfolding?

Amy Gromowski:

Transparency is key. It really comes down to three core components. It's about your data: what is your data, how is it designed? Your model: what technologies are you using in it? And then the results that you're getting: how are you testing and surveilling them? I think these are the biggest categories when it comes to transparency and trust with AI.

So if you just think about AI in the property ecosystem where Cotality sits, right? We should be, everybody using AI should be really looking for solutions that can articulate the characteristics of their data. What are the sources you’re using? How diverse are those sources? Can we trust them? Are they reliable? Do they check and balance each other? Do you have this ability to kind of see holistically across multiple sources in order to really trust that the data that you have is representative and accurate? Is there a robustness in its completeness and accuracy, and how well-governed is it? So that's just the data aspect.

And then on the model design piece, it's what are the features, what are the techniques and the technologies used? You know, are you using a Claude? Are you using a Gemini? There should be a level of transparency into what's going into that, what data, how it's designed, and what's coming out of it, and how you are measuring that quality and accuracy.

Maiclaire Bolton Smith:

Gotcha. Okay. Well, moving on, Amy, now let's talk about balancing speed versus stability. You know, ultimately we feel like everybody wants things done faster: faster listings, quicker loans, instant quotes. But maybe that's not actually the case. Our respondents in our From House to Home survey told us a very different story, which was a little bit surprising. It turns out that what people really want, more than speed, is confidence. So Amy, that intersection between confidence and speed is really the key to helping people adopt digital tools and make more timely decisions, but how do we get there? What is that balancing act between speed and stability?

Amy Gromowski:

Yeah, you know, this is a big topic that we’re tackling at Cotality, both for our customers and for our own employees. I mean, this gets back to that adoption and, you know, just building that trust with AI. And that starts with bringing AI into everything we do. You know, I’ve been reaching into Gemini and using it: "help me draft a thank you note," "help me do this competitive research," "hey, analyze my calendar, my emails, and the work intake that we have here at Cotality and build me a dashboard that every day can tell me the five things I need to make sure I get done or the meetings I need to prep for tomorrow." Whatever that is, right? And it can keep going. You can build Gems and all sorts of great things. That’s where this confidence starts. It's building your own relationship with the AI at an individual level, I think. That's how I see it.

Also, when I first started playing with AI, I realized, oh, it’s very easy to lead the witness here. It tells you what you're already positioning in your question if you’re not very objective in how you ask it. So it's that kind of learning, right? We all just need to be on our own journey with that. And we believe very strongly in providing those opportunities here at Cotality and really pushing that. Because after you get comfortable, it’s the people, right? It's people who need to understand the strengths and weaknesses. And then we can take a step back, and this is also something we're doing here at Cotality, and rethink current processes.

The way we do stuff today, you know, is because we've stitched together technology solutions based on the technology that was available to solve the problem. But we’re largely solving it in the same way; we’ve just tried to speed it up. With AI and generative AI, it's really about this: if we were to design a process or a solution today to tackle a problem, we would design it completely differently with what we now have with generative AI. So it looks much different. And we just need to take the time to do that. And I would love to walk through an example of how to think about that, if you think that would be helpful. But I really think it's this redesign and then a very thoughtful plan for how we move from current state to future state.

Maiclaire Bolton Smith:

I totally agree. Um, yeah, let's hear your example and see how we can put this in action.

Amy Gromowski:

Sure. Great. In the property ecosystem today, you know, you find an agent, you’re listing, and you’re searching. Maybe you're doing your own search, maybe they’re helping you with one. You then work with a lender for pre-approval, right? The consumer, the home shopper, is in the middle of all of these different property-based entities, if you will: the lender, the agents, the appraisers, the inspectors. I mean, we've talked about that in the past. So imagine a world where you can just push a button and it’s: tell me how much I can afford. It’s analyzing my income and my expenses, sending out to lenders for pre-approval letters that just come back and say, here are the lenders that have pre-approved you. Showing, by the way, here are all the houses now on the market that you can afford, in that range, and then setting up appointments for you based on your calendar and a listing agent's calendar, and now you’re set to go. All in one button. Just reimagine that, right? You’re no longer in the middle of all of this process.

On the lender side, imagine a similar button that’s: I just need to validate this borrower. Because that's what matters: what’s the risk, right, around the borrower? What’s everything I need to know about them? And another button that’s: tell me everything about the collateral. Because that's it. I mean, it really comes down to those two things. Today we have a ton of people and processes and solutions in the middle of those two questions, but if the AI can all start talking to each other, it really gets much more simplified. That's someday in the future.

Maiclaire Bolton Smith:

Yeah, that’s really helpful, Amy. You know, we talk about how technology has changed, but generations are changing too. And I want to get into that new generation coming up buying houses, Gen Z, and I know you've got Gen Z children, who you generally reference when we ask questions like this. So how can we help encourage trust in these new generations? Because in this From House to Home survey we found that, you know, millennials were far more comfortable with technology, especially when it came to buying a home, than the Gen Z generation. So how do we adapt the way we’re thinking for maybe this new generation coming up in the market?

Amy Gromowski:

Yeah, you know, I found this to be a really interesting finding of the survey. I would have thought it to be maybe a little bit the opposite. But if you think about the generations, I think for myself, Gen X, we’ve had to learn how to adopt and adapt, right? And change how we think about things on a very consistent basis, as you think back to how things looked in the 80s, let's say, to where they are today. We've had to go through a tremendous amount of transformation, and we fully understand how important that adoption is.

Perhaps for Gen Z it's the way that they get their information, their news. I know when I think about my kids and other kids now, my kids are at the end of Gen Z, so they're almost Alpha, which I think is the next one. So we'll caveat that, because for the older part of Gen Z this might look a little different. But ultimately, they’ve grown up with a massive amount of technology and the world just at their fingertips, right? Just think about what social media has done to expose people. Siloed maybe sometimes, but you certainly can get into the middle of the living room of people all over the world. So I think there's something to that in terms of where they're getting their information, and will we meet them where they’re at in order for them to have a sense of trust, and what do they want from a process? They’re used to AI. It’s not this wow, amazing thing. It was for a minute, and then they were like, yeah, okay. You know, when ChatGPT first came out, my daughter said, "What's that fancy Google thing again?" They still just think about it as an engine, right? A search engine. I’m sure that will evolve, but they’re just, I think, going to want to inform that process, and they’re going to dictate to us what that looks like, and we need to be listening. That’s where the trust will get built. That’s what I think.

Maiclaire Bolton Smith:

Yeah, well it'll be interesting to see how it unfolds. Amy, we’re going to move on now to talk about when the model becomes the market. We can't be everywhere at once; that’s why we have models. They simulate everything from natural hazard scenarios that calibrate risk against potential claims to tracking how market changes affect the value of a home. But what sits underneath these automated feedback loops? So Amy, can we talk a little bit about whether there is a risk that models will stop reflecting reality and instead just define what reality is?

Amy Gromowski:

Yes. I mean, the simple answer is yes, I think there is always that risk. You know, I'm a trained statistician and predictive modeler, and holdout samples, having random samples that you never treat, right? Like the scientific method, that was always an absolute, because you can find yourself in selection bias. You’re only selecting maybe, you know, certain populations based on what your model says. Now that's all you're actually observing, and you're never going back and checking a larger population, for example.
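The holdout discipline described here can be sketched in a few lines of Python. The data below is synthetic and the numbers purely illustrative, not any production methodology: a model only ever "sees" the records its own rule selects, while a random holdout keeps the full population visible.

```python
import random

# Synthetic population: "value" is correlated with the model's "score",
# so a score-based selection rule skews what the model observes.
random.seed(42)
population = [
    {"score": (s := random.random()), "value": 100 + 50 * s + random.gauss(0, 5)}
    for _ in range(10_000)
]

# The model-driven process only observes what it selects...
selected = [p for p in population if p["score"] > 0.7]
# ...while a random holdout, never filtered by the model, tracks reality.
holdout = random.sample(population, 1_000)

mean = lambda ps: sum(p["value"] for p in ps) / len(ps)
print(f"model's view of the market: {mean(selected):.1f}")
print(f"holdout (ground truth):     {mean(holdout):.1f}")
# The selected mean sits well above the holdout mean: the model's own
# selection rule, not the market, is driving what it sees.
```

Comparing the two means is exactly the "checking a larger population" step: if the model's view drifts away from the holdout, selection bias is creeping in.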

So the ground truth matters, right? Really understanding the actual reality and dynamics is very, very important to ensure that your models aren't starting to define the reality but are rather reflecting it, like the question says, right? So really knowing what is occurring in the real world versus what the model is saying is important. Let me give you an example: wildfire risk. If we have hazard risk scores that tell us the amount of risk, that's an observable, scientific-based type of problem to solve.

Now, we use a lot of fancy science and a lot of data about vegetation and rainfall and, you know, elevation and slope and aspect. I mean, you're a scientist, Maiclaire, right? So you know more than I do about how all that great science comes together and all the data that we have to build those models. But at the end of the day, you can go to a spot on earth and you can validate: what is the vegetation, right? What is the distance of this vegetation to a property? Is it dry? What’s the risk based on what we know about the science?

So I think that’s really important. You can say the same thing about marketing models. You know, it’s really important that you test for disparate impact. You can have nothing in your model that has anything to do with, you know, protected classes, but certainly the way the model performs could still have some unintended disparate impact. And so analyzing that data, analyzing the outcomes of those models and looking at them again against demographic-type data, ensuring that you’re not inadvertently creating disparate impact, you need to do those types of analysis. If you don't have a system and a process like that around your models, you absolutely run the risk of the model defining the reality.
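The outcome check described here can be sketched in Python. The groups, decisions, and the 0.8 "four-fifths" threshold below are illustrative placeholders, not any specific model's data or a statement of what the law requires in a given case: the idea is simply to compare a model's favorable-outcome rates across groups.

```python
def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest group rate divided by the highest; a common rule of thumb
    (the 'four-fifths rule') flags ratios below 0.8 for review."""
    return min(rates.values()) / max(rates.values())

# Synthetic decisions: group A approved 80/100 times, group B 50/100 times.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50

rates = approval_rates(decisions)
print(rates)                         # {'A': 0.8, 'B': 0.5}
print(adverse_impact_ratio(rates))   # 0.625, below 0.8, so worth investigating
```

Note that nothing in the model's inputs needs to mention group membership for this ratio to come out low, which is exactly why the outcomes themselves have to be analyzed.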

So one of the things I'm interested in watching with gen AI is, you know, you hear about, hey, I used AI to draft up this letter, maybe it’s an email communication or a research paper, and then on the other side AI is decomposing that into some simple bullets, just give me the highlights. At some point, when the AI is just talking to the AI and we're never infusing, if we got to that point, and I’m not saying we are, but if we got to the point where new thought leadership, new observations, the way the world is changing were never infused, we're going to become very, very narrow in how we think and how the AI can think. And so that human oversight, the validation, the surveillance, I keep saying those terms, that’s all going to be really key: ensuring that new data is always coming in, that we're always capturing it, and not just making decisions based off of what a model told us but continuing to validate that the model is performing.

Maiclaire Bolton Smith:

Yeah, those are, you know, very important concerns, and I think the other thing, Amy, that you didn't touch on that I question is: are there long-term financial implications of the model becoming the market?

Amy Gromowski:

Yeah, I think so. I mean, there are absolutely financial implications in getting it wrong. And that’s essentially what will happen if we're not treating these models and this amazing power in a responsible way, right? For example, with wildfire and insurance, if you don't get that right, you don't have the right diversity in your portfolio of business, right? You can be overexposed in terms of being able to be financially solvent to pay out claims and make people whole and get them back into their homes. That's a huge financial implication if you get that wrong.

Maiclaire Bolton Smith:

That really leads to the next question, Amy: what about the guardrails for trust? AI technology, when done right, should presumably help better decision-making, not create hesitation. But fewer than one in three people are comfortable with businesses’ use of AI, according to Edelman’s 2025 Trust Barometer, which is a really interesting metric. So what do we do about this apparent disconnect? As automation accelerates in the property market, how can we ensure that people trust what they’re seeing is an accurate view of today’s market?

Amy Gromowski:

Well, you know, this goes back to a lot of the principles I touched on. I think it's really understanding what’s going into the AI and what’s coming out of the AI, that level of transparency that generative AI can offer. In more traditional-type solutions, it's really understanding the data. I mean, data is key, right? Off-the-shelf LLMs are trained on, you know, the universe of internet data, just a massive amount. But for specific problems, there's going to be specific data that the AI is reading and generating insight around, and so you really want to understand: can I trust these sources? What are the sources? How do I validate them against each other?

And who's the provider? And what kind of sources do they have? For example, at Cotality, when I think about all the different assets we have: we have real estate listing data, we have appraisal data coming in through appraisers, home inspectors now, the lending process, right? Mortgage underwriting, claims. That's a lot of data that we can use to assess and validate the sources against each other and really come up with a holistic picture of where I have high-quality data and where I potentially have gaps in my data.

For example, you know, we have a partnership now and we have aerial imagery coming in. We can use that to truly understand where every structure is, its exact location on earth. You can overlay that with things like vegetation, coming back to wildfire, and distance to water. And you can now actually evaluate your other sources: how accurate do I think my appraiser data is, my inspector data? So we do this, right? All day long. This is just part of our core competency. And that really allows you to understand gaps and quality and sources and ensure, you know, complete and accurate coverage. And then you want to make sure that whatever your solutions are providing can be pressure tested against other solutions, to really understand: do I have gaps in an area where I don't, you know, do as well? That's okay, and we want to know that. We want to have two-way communication with our customers and get that kind of feedback. You should always be open to this feedback so you can really have this trust, so that all of our customers and everybody in the property ecosystem can say that we stand on solid ground of governance and strong data sources and quality, that we train our models on that data, that we surveil our models, and that we're constantly assessing them against some ground truth.

Maiclaire Bolton Smith:

I think the thing that this triggers from all of that, Amy, is you can't say trust and confidence without thinking about policy. So if we think about the regulatory world, do you think regulators are ready to incorporate predictive models into policy decisions? I mean, I know in the hazard risk space this is a big topic of conversation, they are used already, but across the board, for different kinds of policymakers, not just in the insurance space, do you think this is the way of the future?

Amy Gromowski:

I think so. I mean, when I think about this, I think about regulators and incorporating models. They have been, right? For some time. Actuarial science introduced predictive-type modeling some time ago. And so our DOIs, the regulators within the insurance industry, have to have those same skill sets and people in order to really understand those models, assess them, evaluate them, approve them, for all of the insurance industry, in every state. You have others like our consumer protection agencies; there we have FICO and models that are used today, right? Those are all predictive models. And so those agencies have to understand those models, they have to have the resources that oversee the use of those models in decision-making, really ensuring that there's a prevention of unfair discrimination and that they're protecting consumer rights.

So there's a strong base, I would say, and foundation within the regulators and regulatory environment for predictive models already. So the question really then becomes about generative AI. I think that's the big thing, right? It’s just moving so fast and it’s so much more complex. You know, I'm obviously not a regulator myself, but being concerned about bias, privacy violations, even job displacement, the misuse of data, right? The list goes on and on. But when you think about it, it's still all the same concerns as pre-gen AI. Probably cybersecurity and that kind of risk has really increased significantly. But I think what it comes down to is how those governing bodies equip themselves with the right type of people and the right tech within the entities themselves, so that they can help the industries and the consumers that they serve evolve quickly and safely. I think that’s really going to be the key: fully embracing it. We need to all just embrace it and say, this is where it's going. We don't want anybody to go around us, so we need to be ready, right? And get that kind of competence built up and really start to think it through, but probably with some flexibility and agility in the policymaking, because I think there's a lot of unknown yet. So how do we structure that in a way that, as we learn, we can go, hey, we need to make a different decision here? That's going to be, I think, the biggest challenge. But to me, it still starts with the human, right? We started this with the human; it's like, get the right humans into those entities.

Maiclaire Bolton Smith:

Yeah, I think the one thing that's clear is that you can never remove the human from any of this. So okay, Amy, before we close today, I just want to remind our listeners that we've only scratched the surface of this topic, and we go much deeper into the AI discussion in a four-part thought leadership piece that we'll be posting to the insights page. Also, to make sure you get it by email, sign up for our newsletter.

Allie Barefoot:

It's that time again. Cotality just dropped new numbers about what's happening in the housing market. Here's what you need to know. Economic uncertainty and a persistent gap between buyer power and ownership costs are fundamentally redrawing the map. The problem is no longer just mortgage rates and home prices. The Cotality Housing Affordability Index reveals a secondary squeeze: rising insurance premiums and escalating property taxes have become the critical variables eroding your ability to buy. In many markets, the escrow burden now exceeds 40% of the total monthly obligation. Find out more about the escrow burden on our insights page. The link is in the show notes. And that's a sip. See you next time.

Maiclaire Bolton Smith:

Okay Amy, let's do a bit of a fun close today and do a lightning round. And I'd love your thoughts and maybe I'll throw some in of my own as well. Let's just do two questions here. Would you personally trust AI to guide your home buying experience?

Amy Gromowski:

Yes, the key word being guide. I would still very much want to be, and will be, in the middle of it. And, you know, trust but verify. I think that’s good advice for trusting our teenagers, trust but verify, and the same for AI.

Maiclaire Bolton Smith:

I think I would agree with that. Having just purchased a home myself, two and a half years ago when we did the upgrade to this house, I would have loved something to help guide me through it, but ultimately I'm going to make those decisions myself. So trust but verify, I totally agree with you. I also definitely needed a lot more control in the situation myself, so I think I'm with you on that one. Okay, final question: if more data means more accuracy, but it also would mean more decisions along the way, how would we think about that?

Amy Gromowski:

I actually would just challenge this question altogether, right? Because I don't know that more data necessarily means more accuracy. More high-quality data would maybe mean more accuracy. And I think underlying this question is also that there are more problems the AI can tackle, right? And solutions that it can provide, and so does that mean more decisions? Or perhaps it's more transparency into the process it goes through in its reasoning, and the ability to execute in a very autonomous way. And so that actually could remove some of the decisions but require more oversight. So, I'm not really sure I answered the question, other than to challenge it: we want better data, good data, and also to design the right process and provide the right transparency around that process, so that ultimately you don't have to make more decisions.

Maiclaire Bolton Smith:

Yeah, I'm going to challenge that as well and say more data doesn't necessarily mean more accuracy. I think to me it means more questions. So yeah, I'm with you on that one. I think ultimately it comes down to what we've talked about today: transparency and trust. All right, Amy, thank you so much for joining me today on Beyond the Buildings by Cotality. And thank you for listening; I hope you've enjoyed our latest episode. Please remember to leave us a review and let us know your thoughts, and subscribe wherever you get your podcasts to be notified when new episodes are released. And thanks to the team for helping bring this podcast to life: producer Jessie Devenyns, editor and sound engineer Romie Aromin, our facts guru Allie Barefoot, and social media duo Sarah Buck and Mikaila Brooks. Tune in next time for another conversation that illuminates the ideas that will define the future.

Allie Barefoot:

You still there? Well, thanks for sticking around. Are you curious to learn more about our guest today? Amy Gromowski is the head of data science at Cotality, leading teams of data scientists and machine learning scientists in developing artificial intelligence and machine learning solutions, including computer vision and generative AI, for property-related solutions in the real estate, mortgage, and insurance markets. Over the course of her career, Amy has held various AI-related roles, including data scientist, client executive, analytics product manager, and most recently leader of AI/ML business development. With 25 years of experience, Amy enjoys working with C-suite leaders, technology leaders, product leaders, and clients on AI/ML strategy and innovation in the property ecosystem.
