At a conference hosted by the Case Western Reserve Journal of Law, Technology, and the Internet, Cory Scott—Executive Director of the Center for Cybersecurity and Privacy Protection at CSU Law and a former CISO at companies including LinkedIn, Google, and Confluent—delivered a focused presentation on data privacy risks emerging in the AI “infrastructure” layer.
Scott distinguished between frontier model developers and the “pipes” that route, host, or broker AI traffic, arguing that infrastructure providers increasingly seek to access and analyze user content in ways that erode trust and depart from established enterprise expectations. Using OpenRouter as a case study, he walked the audience through how changing terms of service and claims of “anonymized” categorization can mask meaningful data use, highlighting the practical risk of re-identification through metadata and the absence of technical disclosure about how anonymization is actually performed. He noted similar trends among other providers, such as RunPod.
In the discussion, Scott emphasized that organizations need clearer policies and stronger oversight of third-party AI services, including prosumer tools purchased on corporate cards. He outlined risk-management approaches ranging from negotiating commercial agreements with providers to building internal AI routing infrastructure that preserves control over sensitive data.
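As a rough illustration of the internal-routing idea, the sketch below shows a thin proxy that inspects prompts and keeps obviously sensitive content away from external providers. All names, patterns, and routing rules here are hypothetical assumptions for illustration, not details from Scott's talk; a production system would use far more robust classification.

```python
import re

# Illustrative patterns only; real deployments would use proper
# PII/sensitivity classifiers, not two regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(prompt: str) -> str:
    """Mask obvious identifiers before a prompt leaves the network."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    return SSN.sub("[SSN]", prompt)

def route(prompt: str) -> tuple[str, str]:
    """Return (destination, prompt): sensitive prompts stay on an
    internal model; everything else is redacted defensively before
    going to an external broker."""
    if EMAIL.search(prompt) or SSN.search(prompt):
        return ("internal-model", prompt)        # never leaves the org
    return ("external-provider", redact(prompt)) # defense in depth
```

The design choice this illustrates is the one Scott described: rather than trusting an infrastructure provider's anonymization claims, the organization decides at its own boundary what data a third party ever sees.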