AI Adoption Surges, but Security Measures Lag Behind
As organizations race to leverage the capabilities of artificial intelligence (AI), a concerning security gap is emerging. A recent survey conducted by Wiz finds that while nearly 90% of businesses are using AI services, only 13% have put AI-specific security protections in place. This disconnect poses significant risks as enterprises navigate the complexities of hybrid and multi-cloud architectures.
The Skills Gap: A Key Concern
The findings from the report, AI Security Readiness: Insights from Cloud Professionals, reveal that over 31% of respondents identified a lack of AI security expertise as their primary concern, a stark reminder of how quickly the adoption of AI technologies can outpace the development of corresponding security measures. “Security teams often must safeguard systems they don’t fully comprehend, which increases the risk profile,” the report notes. The result is a widening skills and tooling gap.
Traditional Security Tools Fall Short
Despite the widespread integration of AI services such as OpenAI and Amazon Bedrock, security teams still lean predominantly on traditional controls. The report finds that most organizations rely on conventional, general-purpose measures, including:
- Secure Development Practices: 53%
- Tenant Isolation: 41%
- Audits for Shadow AI: 35%
While these strategies bolster general security, they fall short in addressing AI’s unique vulnerabilities, including threats like lateral model access and poisoned training data.
Complex Cloud Environments Compound Risks
As organizations increasingly adopt hybrid and multi-cloud structures—45% operate in hybrid environments and 33% in multi-cloud settings—security becomes even more difficult to manage. Alarmingly, around 70% of respondents still depend on endpoint detection tools that are better suited for traditional, centralized architectures, leaving them ill-equipped to handle the intricacies of AI-driven operations. A surprising 25% admitted they are unaware of which AI services are currently active in their environments, illustrating significant visibility challenges.
Looking Ahead: Bridging the Security Divide
To address these security vulnerabilities, the Wiz report emphasizes the importance of proactive strategies. Key recommendations for IT and security teams include:
- Adopt Continuous Discovery Tools: Continuously monitor for AI models and shadow AI services rather than relying on point-in-time audits (a minimal discovery sketch follows this list).
- Integrate Security Early: Incorporate security measures into the software development lifecycle (SDLC).
- Align Policies Across Environments: Ensure security policies apply consistently across multi-cloud and hybrid systems.
- Train Security Teams in AI: Specialized training is vital for equipping security professionals to face these new challenges.
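The report stops at recommendations and does not prescribe specific tooling, so the following is only a minimal sketch of what continuous discovery of shadow AI might look like in practice: a scheduled scan of application code for signals that AI SDKs or LLM endpoints are in use. The pattern list, file types, and function names here are illustrative assumptions, not anything drawn from the Wiz report or a particular product.

```python
"""Minimal shadow-AI discovery sketch (illustrative, not from the Wiz report).

Scans a code repository for imports and endpoints that suggest AI/LLM
services are in use. Patterns and paths are assumptions for demonstration.
"""
import re
import sys
from pathlib import Path

# Signals that commonly indicate AI/LLM usage in application code.
AI_SIGNALS = {
    "openai-sdk": re.compile(r"\bimport\s+openai\b|\bfrom\s+openai\b"),
    "anthropic-sdk": re.compile(r"\bimport\s+anthropic\b|\bfrom\s+anthropic\b"),
    "bedrock-client": re.compile(r"boto3\.client\(\s*['\"]bedrock"),
    "openai-endpoint": re.compile(r"api\.openai\.com"),
    "huggingface-endpoint": re.compile(r"api-inference\.huggingface\.co"),
}

def scan(repo_root: str) -> list[tuple[str, str]]:
    """Return (file, signal) pairs for every AI-usage signal found."""
    findings = []
    for path in Path(repo_root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in AI_SIGNALS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for file_path, signal in scan(root):
        print(f"{signal:22} {file_path}")
```

Run on a schedule (for example from CI or cron) and extended to infrastructure-as-code templates and cloud inventories, a scan like this turns a one-off shadow-AI audit into the kind of continuous discovery the report recommends.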
A Maturing Landscape
Mapping out a security maturity model, Wiz outlines four stages of AI security readiness, ranging from minimal visibility at the “Experimental AI” stage to the fully proactive “Proactive AI SecOps” stage. Most organizations currently sit in the earlier phases, indicating there is still a long way to go before AI can be used securely and effectively.
The overarching sentiment of the report is clear: as AI continues to evolve, so too must our approaches to security. The call to action is urgent—security cannot afford to be a reactive process, especially in an era where the landscape is continually shifting. Proactive and continuous efforts must be the foundation of any organization’s strategy to safely harness the power of AI.
