The Blind Spots Business Technology Leaders Are Missing About AI and AI Security
By Zack Huhn, Enterprise Technology Association
AI adoption is accelerating across industries, and for good reason. Business technology leaders are turning to AI to drive innovation, efficiency, and competitive advantage. But beneath the enthusiasm lies a growing risk: organizations are overlooking critical blind spots in AI security and governance that could undermine their investments.
Below are some of the most common gaps I see in boardrooms and technology teams today.
AI Is Not Just a Tool. It’s a New Attack Surface.
Too often, AI is seen purely as a business enabler. In reality, every model, system, and AI-powered process expands the organization’s attack surface. From model inversion attacks to data poisoning and adversarial manipulation, AI introduces vulnerabilities that traditional cybersecurity strategies may not cover.
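To make one of these risks concrete, here is a minimal data-poisoning sketch using a toy scikit-learn classifier. The dataset, poisoning rates, and model choice are illustrative assumptions, not a description of any real incident:

```python
# Minimal data-poisoning sketch (illustrative only): flipping a small
# fraction of training labels degrades a model that otherwise performs well.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset standing in for an internal training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train on labels where a given fraction have been maliciously flipped."""
    rng = np.random.default_rng(1)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1 labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} of labels poisoned -> test accuracy {accuracy_after_poisoning(frac):.3f}")
```

Even at toy scale, corrupting a modest share of training labels visibly erodes accuracy, which is why data provenance and integrity checks belong in the security program, not just the data science workflow.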
Shadow AI Is Already in Your Organization
Just as shadow IT grew when teams adopted unsanctioned apps, shadow AI is taking root as employees use generative AI tools, APIs, and external models without oversight. This can lead to unmanaged risks related to data privacy, intellectual property, and compliance.
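One low-effort way to start surfacing shadow AI is to scan egress or proxy logs for traffic to well-known AI service endpoints. The sketch below assumes a simple one-hostname-per-line log file; both the log format and the domain list are assumptions to adapt to your environment:

```python
# Sketch: flag outbound requests to common generative-AI endpoints in a
# proxy log. The log format (one hostname per line) and the domain list
# are illustrative assumptions; adapt both to your environment.
from collections import Counter

# Example domains associated with public AI services (not exhaustive).
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def shadow_ai_hits(log_path: str) -> Counter:
    """Count requests per AI domain seen in a hostname-per-line log."""
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            host = line.strip().lower()
            if host in AI_DOMAINS:
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for domain, count in shadow_ai_hits("egress_hosts.log").most_common():
        print(f"{domain}: {count} requests")
```

Detection is only the first step; the findings should feed an inventory and a sanctioned-tools policy rather than a blanket ban that drives usage further underground.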
Explainability Tools Are Not the Same as Accountability
Many AI solutions offer explainability dashboards, but these often provide surface-level transparency without addressing deeper questions of fairness, accountability, or bias. Leaders may feel reassured by these features without realizing their limitations.
AI Supply Chain Risks Are Being Overlooked
Businesses often focus on their own AI models and pipelines while neglecting the risks hidden in third-party data sets, pre-trained models, and open-source components. A vulnerability in any part of the AI supply chain can introduce significant exposure.
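One concrete control here is pinning and verifying cryptographic hashes of third-party artifacts, such as pre-trained weights or datasets, before they enter a pipeline. A minimal sketch, assuming a locally downloaded weights file and a hash recorded from a trusted source (both placeholders here):

```python
# Sketch: verify a downloaded model artifact against a pinned SHA-256 hash
# before loading it. The file path and expected hash are placeholders.
import hashlib
import sys

# Pin this value from a trusted source (placeholder shown here).
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: str) -> str:
    """Stream the file so large model weights need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of("pretrained_model.bin")
    if actual != EXPECTED_SHA256:
        sys.exit(f"Hash mismatch: refusing to load artifact ({actual})")
    print("Artifact hash verified; safe to proceed to loading.")
```

Hash pinning does not make a model trustworthy by itself, but it does ensure the artifact you vetted is the artifact you actually run.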
Security Teams Are Not Yet AI-Ready
AI security requires new skills, frameworks, and mindsets. Many security teams are still catching up on securing cloud and IoT environments, leaving them underprepared for AI-specific threats. Without upskilling, these gaps will widen as AI adoption grows.
Overconfidence in Vendor Solutions Can Be Risky
Vendors often promise turnkey AI and security solutions. But no external tool can fully address an organization’s unique risks without internal validation, continuous monitoring, and clear accountability. Leaders need to ask hard questions and maintain oversight.
AI Governance Is Treated Like a Checkbox
AI governance should not be reduced to a compliance exercise. Done right, governance is a strategic driver that builds trust, supports regulatory alignment, and ensures AI aligns with organizational values. Too often, it is addressed late in the process or not at all.
What Leaders Can Do Now
Leaders can take immediate steps to address these blind spots:
Inventory your AI systems, dependencies, and shadow AI use cases (a sketch of one possible inventory record follows this list)
Train security and compliance teams on AI-specific risks
Develop and operationalize an AI security and governance framework
Monitor and assess AI systems continuously
Foster a culture of responsible and intentional AI adoption
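As a starting point for the inventory step above, the sketch below shows one possible record structure. Every field name is an illustrative assumption to align with your own risk and compliance taxonomy:

```python
# Sketch of an AI system inventory record. Field names are illustrative
# assumptions; align them with your own risk and compliance taxonomy.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                     # internal system or use-case name
    owner: str                    # accountable business owner
    model_source: str             # "in-house", "vendor", or "open-source"
    data_sensitivity: str         # e.g. "public", "internal", "regulated"
    third_party_components: list[str] = field(default_factory=list)
    sanctioned: bool = True       # False marks discovered shadow AI

inventory = [
    AISystemRecord(
        name="support-ticket-summarizer",
        owner="customer-ops",
        model_source="vendor",
        data_sensitivity="regulated",
        third_party_components=["hosted LLM API"],
        sanctioned=False,  # found in use without a security review
    ),
]

for record in inventory:
    status = "sanctioned" if record.sanctioned else "SHADOW AI"
    print(f"{record.name} ({record.owner}): {status}, data={record.data_sensitivity}")
```

Even a spreadsheet with these fields is enough to start; the point is that ownership, provenance, and data sensitivity are recorded somewhere security and compliance teams can see them.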
At ETA, we’re working with business, technology, and government leaders across the country to help close these gaps and build more secure, resilient AI strategies.
If you’re interested in collaborating or learning more, join us at joineta.org.