The Tech Edge: Creating Trust in AI — Security Factors

Post Date: 05/16/2025

Working in tech for the past decade(s), I’ve seen countless innovations touted as revolutionary, only to watch organizations stumble during implementation. AI feels different, though: both more promising and more perilous. It’s not just another tool; it’s fundamentally changing how we interact with our data.

I remember an episode of the Tech Edge with Dana Simberkoff, chief risk, privacy and information security officer at AvePoint. Dana always gives me the best phrases to remember things by, and this one is no exception. She perfectly captured the current anxiety around AI implementation when she said, “AI can either be your best friend or worst enemy” — and isn’t that the truth?

The Double-Edged Sword of AI

What struck me most during Dana’s interview was her observation that AI essentially removes “security by obscurity.” Think about it: how many sensitive documents are lurking in your organization’s systems that nobody remembers exist? In the pre-AI world, these forgotten files posed minimal risk because nobody was looking for them, or even knew they existed.

Enter AI — suddenly that forgotten strategy document from 2018 with confidential projections is “served on a silver platter” (Dana’s words, not mine) to anyone with access permissions. The AI doesn’t discriminate between what should and shouldn’t be shared; it simply finds what you ask for with frightening efficiency.

It’s like having a super-efficient assistant who doesn’t understand when to stop talking. Sure, they’ll find exactly what you need in record time, but they might also blurt out your salary details during a team meeting because technically, that file wasn’t marked confidential.

Building Your AI Landing Field

Implementing AI isn’t about launching a rocket and hoping for the best. You need to consider everything involved in getting that rocket into space: staying safe while you’re up there and, most importantly, getting back down again. It’s about creating “a good landing field” where AI can safely touch down and operate within your organization.

Just like you wouldn’t land a plane on a runway littered with debris, you shouldn’t deploy AI into an environment cluttered with unsecured data. But what does this landing field look like? From my perspective (and echoing Dana’s expertise), it has several key elements:

Clear Runway Markings: Data Classification

Every piece of information needs appropriate tags and classifications. This isn’t just good practice for AI — it’s Security 101 (i.e., you should do this regardless of an AI implementation!). Your organization should have clear visibility into what types of data exist, where they’re stored, and who should have access.

I’ve seen companies rush AI implementation only to pull back in horror when confidential information started appearing in AI responses. The problem wasn’t the AI — it was the lack of proper data classification. It was the AI that shone the light on poor data management practices.
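To make the idea concrete, here is a minimal sketch of rule-based data classification in Python. The labels and patterns are invented for illustration only; real classification engines (such as the sensitivity-labelling features in information governance platforms) are far more sophisticated than a handful of regular expressions.

```python
import re

# Hypothetical classification rules: map a label to patterns that
# suggest a document belongs in that category. These are illustrative
# stand-ins, not a real product's rule set.
RULES = {
    "Highly Confidential": [r"\bsalary\b", r"\bSSN\b", r"\bacquisition\b"],
    "Confidential":        [r"\bforecast\b", r"\broadmap\b"],
}

def classify(text: str) -> str:
    """Return the most restrictive label whose patterns match the text."""
    for label, patterns in RULES.items():  # ordered most -> least restrictive
        if any(re.search(p, text, re.IGNORECASE) for p in patterns):
            return label
    return "General"

print(classify("FY2018 revenue forecast - internal only"))  # Confidential
print(classify("Team lunch menu"))                          # General
```

The point is not the code itself but the discipline: every document gets a label before the AI ever sees it, so the AI inherits a map of what is and is not shareable.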

Traffic Control: Access Management

Dana repeatedly emphasized this: AI will leverage whatever access permissions exist. If your file-sharing practices are loose, AI will inadvertently amplify those security gaps.

I’ve always thought of access control like invitations to different rooms in your house. Some people get access to the living room (general company information), others to the kitchen (departmental data), but very few should have the key to your safe (highly confidential information). AI needs exactly the same boundaries.
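The analogy maps directly onto a simple permission model. The sketch below is a toy with made-up roles and “rooms”; in a real deployment the AI assistant would inherit whatever permissions your identity provider already grants the user, which is exactly why those permissions need to be tight.

```python
# Toy access-control model mirroring the "rooms in a house" analogy.
# Room names and roles are illustrative, not a real product's API.
ROOM_ACCESS = {
    "living_room": {"employee", "manager", "executive"},  # general info
    "kitchen":     {"manager", "executive"},              # departmental data
    "safe":        {"executive"},                         # highly confidential
}

def can_enter(role: str, room: str) -> bool:
    """An AI assistant acting on a user's behalf should pass the
    same check the user would -- no more, no less."""
    return role in ROOM_ACCESS.get(room, set())

print(can_enter("employee", "living_room"))  # True
print(can_enter("employee", "safe"))         # False
```

If the `ROOM_ACCESS` table is wrong (say, everyone has a key to the safe), the AI will faithfully amplify that mistake, which is Dana’s warning in a nutshell.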

Stopgap: Encryption

Perhaps Dana’s most actionable advice was about encryption. Encrypted documents won’t be ingested by AI systems like Microsoft 365 Copilot, creating a straightforward mechanism to protect sensitive information — if you don’t want AI touching it, encrypt it.

It’s like putting documents in a safe that AI can see but can’t open. Simple, effective, and doesn’t require complicated configurations.
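To illustrate the binary property Dana describes, here is a toy keystream cipher in Python: without the key, the bytes are opaque; with it, the content returns intact. This construction is purely illustrative and is not secure cryptography; production systems should rely on vetted tooling such as sensitivity labels or a proven encryption library.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key + nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)               # fresh nonce per message
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

key = secrets.token_bytes(32)
blob = encrypt(key, b"2026 strategic plan - do not share")
# An AI crawler indexing `blob` sees noise; only a key-holder recovers it.
assert decrypt(key, blob) == b"2026 strategic plan - do not share"
```

That all-or-nothing quality is what makes encryption such an attractive control here: there is no “partially readable” middle ground for the AI to stumble into.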

Starting Small: The Pilot Approach

What reassured me the most about Dana’s perspective was her advocacy for pilot programs. Technology implementation often falls victim to all-or-nothing thinking, but AI doesn’t have to follow that pattern.

I’ve seen organizations freeze in place, torn between embracing AI’s potential and fearing its risks. Dana’s recommendation to “test the waters with a small group” provides a balanced middle path.

A pilot program is like sending a scout team ahead before moving your entire company across unknown terrain. They’ll discover the pitfalls, identify unexpected benefits, and help map the journey for everyone else.

Recommendations for Safe AI Implementation

After reflecting on all of the above considerations, here are my three concrete recommendations for organizations considering AI implementation:

1. Conduct a Comprehensive Data Discovery Exercise

Before you even consider implementing AI tools, invest time and resources in understanding what data you currently have. This means scanning your environment for sensitive information, identifying classification gaps, and documenting where your high-risk information resides.

Why this matters: As Dana emphasized, AI will surface any information it has access to, regardless of whether that information should be widely shared. Without thorough data discovery, you’re essentially inviting AI to rummage through your digital junk drawer — with potentially serious consequences. Set aside at least four to six weeks for this exercise, focusing particularly on shared drives, collaboration platforms, and email archives where sensitive information often lurks unprotected.
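A first pass at discovery can be as simple as pattern scanning. The detectors below are illustrative stand-ins (an SSN-like shape, a card-number-like shape, and a few keywords); a dedicated discovery tool covers far more data types and file formats, but the mechanics are the same.

```python
import re
from pathlib import Path

# Illustrative detectors for common sensitive-data shapes. A real
# discovery exercise would use a dedicated scanning product.
DETECTORS = {
    "ssn_like":    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_like":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "keyword":     re.compile(r"\b(confidential|internal only|do not distribute)\b", re.I),
}

def findings(text: str) -> set[str]:
    """Return the names of the detectors that fire on a piece of text."""
    return {name for name, rx in DETECTORS.items() if rx.search(text)}

def scan_tree(root: str) -> dict[str, set[str]]:
    """Walk a directory and report which files trip which detectors."""
    report = {}
    for path in Path(root).rglob("*.txt"):
        hits = findings(path.read_text(errors="ignore"))
        if hits:
            report[str(path)] = hits
    return report

print(sorted(findings("Payroll export, SSN 123-45-6789, CONFIDENTIAL")))
```

Even a crude scan like this surfaces the classification gaps you need to close before any AI tool starts indexing those same drives.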

2. Implement Encryption Strategically 

Rather than trying to build complex rules and permissions, use encryption as your first line of defense against inappropriate AI access. Identify your most sensitive data categories (i.e., financial forecasts, HR information, strategic planning documents) and implement encryption solutions for these specific categories.

Why this matters: Encryption creates a hard barrier that AI cannot cross without explicit decryption permissions. It’s a binary solution (either the AI can read it or not) in a world of nuanced risks. Start with your crown jewels — the 20% of information that would cause 80% of potential damage if inappropriately shared — and expand your encryption strategy from there.

3. Launch a Controlled Pilot with Clear Success Metrics

Select a department or team with moderate (not high) sensitivity data and good technology adoption practices. Define specific success metrics beyond user satisfaction — include security incident tracking, productivity improvements, and content quality assessments. Run this pilot for at least 60 days before expanding.

Why this matters: As Dana suggested, pilot programs allow you to identify risks in a controlled environment. By selecting the right group and measuring the proper outcomes, you learn not just whether AI works technically, but whether it enhances or compromises your security posture. The pilot should include regular check-ins with participants and real-time adjustments to access controls and permissions as needed.

Conclusion

Remember, AI implementation isn’t a race — it’s a journey toward more intelligent, efficient operations. Taking these measured steps doesn’t mean you’re falling behind; it means you’re building a foundation for sustainable, secure AI adoption that won’t leave you scrambling to contain preventable data exposure incidents down the road.

What steps has your organization taken to prepare for AI implementation? 

Check out more episodes here: The Tech Edge — Ticker

Alyssa Blackburn is the Director of Records & Information Strategy at AvePoint, where she helps organisations achieve business value from their information. In her role, Alyssa provides records and information consulting services as well as system implementations, allowing customers to optimise the structure of their information to maximise business benefits while meeting data governance and compliance objectives. With 20 years of experience in the information management industry, Alyssa has worked with both public and private sector organisations to deliver guidance for information management success in the digital age. She is responsible for the development of AvePoint’s information management solution and has been involved with implementing our records management solution with government agencies and commercial clients. Alyssa is actively involved in the information management industry and has spoken at a number of events, including Inforum 2016 in Perth. She has been published in RIMPA’s IQ magazine and won its 2016 Article of the Year award for her article, "Why you need to think differently about information management."
