I’ll admit it — I recently had one of those “seemed like a good idea at the time” moments with AI. When someone told me I needed new headshots, my immediate thought was: “Perfect! I’ll use AI to generate them.” After all, I hate having my photo taken with the passion of a thousand suns, and the technology is there, right? Well, the results were interesting to say the least.
This little experiment captures the double-edged nature of AI, which my recent conversation with Steve Odegaard, chief information security officer (CISO) at NEAT, brought into sharp focus. We’re living in an era where AI is both incredibly powerful and subtly dangerous, and businesses need to navigate this landscape with their eyes wide open.
AI and Productivity: We’re Not Even Scratching the Surface
Steve painted a picture that should both excite and concern every business leader: 63% of Australian businesses are using some form of generative AI, “thanks to tools like Microsoft 365 Copilot and Gemini that are extremely user-friendly and accessible — almost too accessible for my liking,” he told me. That last part should make your ears perk up.
Think of AI adoption like giving everyone in your office a sports car. Sure, it’s incredibly powerful and can get you places fast, but without proper training and guardrails, you’re bound to have some crashes. AI technology has become so accessible that people are diving in without fully understanding the implications, much like my headshot experiment.
From a security perspective, Steve was clear: “There’s absolutely no way you can do security, especially detection and response to cyberthreats, without something like AI. It’s just not humanly possible.” This technology isn’t just a nice-to-have anymore — it’s become as essential as having locks on your office doors.
The Reality of Deepfake Threats: When Seeing Isn’t Believing
Here’s where things get really interesting — and scary. Steve shared a sobering example: “Earlier this year, we heard about cybercriminals posing as a CFO, and they stole $25 million.” Let that sink in. Someone used AI to convincingly mimic a known executive’s voice and appearance. The problem? We’re still using the same old “trust what you see” habits from an era when impersonation required serious acting skills and expensive equipment.
Steve’s advice hit home:
Rather than waiting, companies should invest in existing technologies like secure identity platforms that can help provide more assurance that users are who they are, especially when using real-time video collaboration.
In other words, don’t wait for the perfect deepfake detection tool — start building better authentication systems now.

The Hybrid Work Challenge: Still Figuring It Out
What struck me was Steve’s honesty about hybrid work: “I think this is still a challenge with many companies, and it continues to be a challenge.” Even as occupancy levels in Australian central business districts (CBDs) reached 76%, up from 67% last year, we’re still figuring out how to make this arrangement work effectively.
He adds: “It starts with guidelines and policy. You’ve got to establish the rules of engagement,” emphasizing the importance of identifying which roles are eligible for hybrid work, setting communication expectations, and, crucially, investing in technology that’s actually easy to use. “If it’s too complex, forget about it. You’re not going to get user adoption.”
Building Trust in an AI-Powered World
One of the most important insights from our conversation was about transparency. Steve said his organization wants to be upfront about how it uses AI in its products: AI powers their high-quality video and audio capabilities, but the team sits down with customers to provide assurance that their data and privacy aren’t at risk. By all means, use AI — just make sure you’re transparent about it.
Transparency matters all the more given where workplace AI is headed: Gartner predicts that 40% of large enterprises will use AI to measure employee behavior and sentiment by 2028, performing sentiment analysis on workplace interactions and communications. That’s like having a digital supervisor constantly watching over your shoulder, analyzing your mood and productivity. Companies need to communicate the purpose of any such AI clearly to maintain trust between employees and employers.

The Future of Collaboration: Practical AI Action Plan
The AI transformation is here, and it’s accelerating. Here are the concrete steps every business should take:
- Establish clear AI governance. Don’t wait for the perfect policy — start with basic guidelines about what AI tools employees can use and how. Create a simple approval process for new AI-powered software, even if it’s just adding AI features to existing tools.
- Invest in identity verification systems. With deepfakes becoming increasingly sophisticated, traditional authentication methods aren’t enough. Implement multi-factor authentication (MFA), secure identity platforms, and consider biometric verification for sensitive operations. Think of it as upgrading from a standard lock to an intelligent security system.
- Be transparent about AI usage. Tell your customers and employees exactly how you use AI in your products and operations. Create clear documentation about data privacy, AI decision-making processes, and the purpose behind any AI monitoring. Transparency builds trust, and trust is your competitive advantage.
- Strengthen third-party risk management. Implement a robust process for vetting AI-powered solutions, especially updates to existing software. Just because you approved a tool last year doesn’t mean this year’s AI-enhanced version is automatically safe. Regular security assessments should be as routine as financial audits.
- Design for hybrid success. Invest in collaboration technology that’s user-friendly, not just feature-rich. Set clear expectations about availability, communication tools, and when face-to-face interaction is necessary. Remember, if it’s too complex, people won’t use it — unused technology is worthless.
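To make the MFA step in the list above a little more concrete: most authenticator apps implement time-based one-time passwords (TOTP, RFC 6238), which layer a 30-second time window on top of the HMAC-based HOTP algorithm (RFC 4226). The sketch below is illustrative only — the helper names and the demo secret are my own, not anything Steve or NEAT uses — and real deployments should rely on a vetted identity platform or library rather than hand-rolled crypto.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # Pack the counter as an 8-byte big-endian integer and HMAC it.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(secret: bytes, for_time: float, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    return hotp(secret, int(for_time // step), digits)


# RFC test vectors use the ASCII secret "12345678901234567890".
secret = b"12345678901234567890"
print(hotp(secret, 0))                      # → 755224 (RFC 4226 vector)
print(totp(secret, 59, digits=8))           # → 94287082 (RFC 6238 vector)
print(totp(secret, time.time()))            # what an authenticator app shows now
```

The point of the example isn't that you should write this yourself, but that a verified second factor is cheap, standardized, and resistant to the "I saw their face on the call" failure mode deepfakes exploit.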
The businesses that thrive will be those that embrace AI’s productivity benefits while building robust safeguards against its risks. As Steve reminded me, we haven’t even discovered all the possibilities with AI yet, which means we need to be prepared for whatever comes next.
As for my AI headshot experiment, only one image out of several attempts was even usable, and it wasn’t exactly business professional. That’s exactly why AI needs human oversight, clear guidelines, and backup plans before it can deliver remarkable results.
The future is now, and it’s both exciting and challenging. The key is navigating it with proper guardrails and a healthy respect for AI’s power and potential pitfalls.
Check out this episode and more here: The Tech Edge — Ticker.