AI Fraud Defense Playbook
Build the agent's and the brokerage's inbound-facing defense against AI-enabled fraud: voice-cloned "urgent" calls from the seller, deepfake video messages from a title officer, AI-written spoofed emails from "the lender," AI-generated identity documents on a buyer pre-qualification, and AI-impersonated chat threads pretending to be an MLS admin or a cooperating agent.

In one pass, produce a tailored defense package for a given transaction or brokerage:

- a transaction-level threat map
- a four-tier verification ladder
- a set of original live-challenge prompts a deepfake is unlikely to pass
- a passphrase protocol
- a wire-instruction-change standard operating procedure
- a client-facing one-pager
- a brokerage training cadence
- if a suspected-fraud incident has already started, a time-boxed incident response checklist

The skill exists because the compliance stack (`ai-marketing-compliance-audit.md`) governs *outbound* AI content; nothing in the repo covers *inbound* AI impersonation. April 2026 industry coverage crystallized the gap: the FBI reported $275M in real-estate-related cybercrime losses in 2025, Business Email Compromise ranked #2 at $3.04B, voice-clone and deepfake-video scams are now routinely reported against closings, and more than 60% of deepfake attempts cluster in the 72 hours before wire instructions are sent. The playbook treats inbound fraud as a predictable operational risk with predictable choke points, not an exotic event.
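The wire-instruction-change SOP can be sketched as a hard gate: a change request is rejected unless every out-of-band check passes. This is a minimal illustrative sketch, not the playbook's actual procedure; the `WireChangeRequest` fields, check names, and `approve_wire_change` function are all assumptions chosen for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of a wire-instruction-change request. Field names
# are illustrative assumptions, not part of the playbook itself.
@dataclass
class WireChangeRequest:
    transaction_id: str
    requested_via: str             # channel the change arrived on (email, call, chat)
    callback_verified: bool        # called back on a number from the original file
    passphrase_matched: bool       # shared passphrase confirmed on that callback
    second_approver: Optional[str] # independent second person who signed off

def approve_wire_change(req: WireChangeRequest) -> tuple[bool, list[str]]:
    """Return (approved, failed_checks). Any single failure blocks the change."""
    failures = []
    if not req.callback_verified:
        failures.append("no out-of-band callback to a known-good number")
    if not req.passphrase_matched:
        failures.append("passphrase not confirmed on the callback")
    if req.second_approver is None:
        failures.append("no independent second approver")
    return (len(failures) == 0, failures)
```

The design point is that the inbound channel (`requested_via`) never counts as verification: approval depends only on checks performed over channels the fraudster does not control.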