Choosing a public AI system for your business?
Because of high data exposure risk, businesses must make a concerted effort to prioritize private AI, over public (and unsecure) AI. Cotality expert, Ethan Bailey, weighs in on the mitigation efforts that are required such as strict device security and system hardening to maintain compliance and avoid liability.

Ethan Bailey is part of an expert team at Cotality, which is focused on AI enablement and innovation. Through our work, we’ve developed and refined several good practice guidelines regarding responsible AI use and data security. Before making the AI leap, we suggest that real estate industry leaders and practitioners carefully assess the risks and benefits of both private and public AI environments. This assessment should specifically address how proprietary and licensed MLS data — and critical information housed in systems like CRM, AMS, and transaction platforms — are protected in this rapidly evolving, AI-enabled world.
Public AI systems, such as those powering general AI models, offer accessibility and rapid innovation at a lower initial cost. However, a significant drawback is the risk your data could accidentally be used or seen by others. User-entered or sensitive data may be swept into the model's ongoing training, potentially breaking your data agreements and exposing your information in other users' queries. Conversely, a private, controlled AI environment provides greater privacy and control, helping to ensure your licensed or sensitive data remains siloed and used only for its intended purpose. At Cotality, we chose to use private AI environments to secure our in-house capabilities — a decision that demanded a larger investment but significantly improved data security and compliance.
The New Risk: AI Browsers and Assistants
This delicate balance of data security is further complicated by the emergence of new AI browsers and in-browser AI assistants. These tools are designed to read and interpret content the user is viewing, even if it’s secured behind a login. A major risk is that an AI assistant, often operating without clear corporate oversight, could be tricked by malicious code or simple user mistakes, causing it to summarize, upload, or externally transmit your sensitive or licensed information.
This risk is heightened because the AI operates with your active login privileges. An attacker can hide commands on a webpage that trick the AI into performing unauthorized actions on your logged-in accounts (e.g., banking, email, or corporate systems) or to automatically steal your private data. The "browser" acts as an unmonitored bridge that can compromise data intentionally kept secure within your private ecosystem.
This evolving risk needs to be addressed in two critical areas:
- User Device Security: You should establish a firm policy to only permit company-vetted secure browsers and AI tools. This involves blocking unapproved personal AI tools from running on corporate devices, limiting the point of access.
- Sensitive System Hardening: Your critical internal applications should be hardened to resist manipulation. This means ensuring high-impact actions (like major data changes) require secondary, human-validated confirmation—a mandatory second-step approval that cannot be automated by an AI assistant.
As AI tools become universal, organizations should urgently establish stringent data governance policies to mitigate the combined risk from both the public AI training models and the AI browsing interface. The perceived convenience of public-facing tools must be weighed against the legal and competitive liability posed by the potential, even accidental, loss of licensed data, making a private AI environment a necessity for protecting your data and staying compliant.