AI and machine learning expert Dr. Josh Harguess appears in our event series Delivered to discuss how to successfully navigate the post-hype AI landscape.
Two years ago, the launch of ChatGPT kicked off a generative AI frenzy that had everyone in awe. Now, the initial excitement has worn off, and AI tools have become the new normal.
In other words, the honeymoon is over. AI has shown great potential, but now we’re at the stage where the real work begins — managing both the risks tied to AI systems and our expectations of how the technology can improve our businesses and society.
Currently the AI Security Chief at Cranium, an AI security and trust platform spun out of KPMG, Dr. Josh Harguess has been on the frontlines of AI safety and effectiveness for over a decade. With 60+ publications, three best paper awards, and five issued patents, his contribution to the field is invaluable.
Appearing as a guest on our show Delivered, Josh discussed AI security, the shifting expectations around AI, and its potential to address humanity’s major challenges in the future.
Securing AI vs. securing traditional software
AI security is a new and rapidly evolving field, and Josh is among the few experts worldwide with the title of AI Security Chief. But what makes securing AI so different from more conventional software systems?
In traditional software development, security primarily focuses on identifying and addressing code vulnerabilities. However, AI systems can be compromised in ways that older software paradigms cannot.
Manipulating the data AI models process or modifying the inputs given to them can cause the models to behave in unintended or harmful ways. These actions open the door to new types of attacks like data poisoning and prompt injection.
Each type of AI also has unique vulnerabilities, which makes securing these systems even more complex. Unlike traditional software, which has a well-documented set of vulnerabilities and established methods for testing and mitigation, AI is still a relatively new and evolving technology. Many potential threats remain unknown or not fully understood. This is where AI red-teaming comes into play – a practice that involves adversarial testing of AI models where experts intentionally try to expose weaknesses in AI models. It’s an emerging field in which Josh is one of the few bona fide experts.
Before joining Cranium, Josh led the AI Red Team at MITRE Labs, where he contributed to the development of MITRE ATLAS (Adversarial Threat Landscape for Artificial Intelligence Systems). This global, living resource is a knowledge base that catalogs adversary tactics, techniques, and real-world demonstrations of AI system vulnerabilities based on actual attacks and red-team exercises.
From RAG to riches
Deploying secure AI products isn’t just a regulatory requirement or a technical challenge. It’s a strategic advantage. By prioritizing AI security, organizations not only protect themselves against breaches and adversarial attacks but also help preserve their reputation. So, what are the most common security oversights when building AI-enabled products, and how can they be avoided?
According to Josh, data is key. Companies need to closely evaluate the data they use for training and control who has access to it. Poor data handling can lead to data leakage, where sensitive information unintentionally appears in AI model outputs.
To prevent data leaks, Josh recommends using retrieval-augmented generation (RAG).
“Instead of training the model directly with sensitive data, you use an intermediate retrieval system – a search tool that pulls the needed information from secure documents or databases,” Josh explains.
Another key point is having a solid AI policy – clear guidelines that govern how AI is used within the organization. An incident response plan is also essential, outlining procedures for handling security incidents involving AI systems.
Finally, Josh believes that AI security should be a shared responsibility across the company. “At Cranium, we have an AI council with representatives from every department. We recommend this approach for other companies too, and it’s gaining traction,” he says. This ensures a holistic, cross-functional approach to AI security.
Will AI save the world, or at least Q4?
The popularity of artificial intelligence, especially Large Language Models (LLMs), has reached a peak like no other technological wave before it. However, the initial hype is beginning to subside as people become more aware of AI’s capabilities and limitations.
Just as our fears around AI shifted from science-fiction scenarios (think AI taking over the world) to more practical concerns like job displacement and misuse, our expectations are evolving, too.
Businesses rushed to implement AI, hoping for quick wins and high ROI, but many are now struggling because their approach lacked clear strategy and alignment with real business needs.
Some businesses aren’t seeing the expected ROI because of misaligned strategies and unrealistic expectations. They thought they could just plug in this chatbot to what they were doing, and it would just magically solve their problems.
AI implementation is an ongoing effort, not a one-time deployment. AI systems require regular updates and strategy adjustments to continue delivering value. This makes AI adoption a continuous, iterative process—far removed from the traditional, linear product development cycle.
“Ultimately, successful AI adoption still comes down to hard engineering challenges, clearly defining your problem, decomposing that, and figuring out where AI fits into the solution,” Josh concludes.
He defines a three-step process that can guide a successful AI journey:
1
Identify a clear use case: define how the technology will be used within a specific system and where it fits your broader business strategy.
2
Build a working prototype: test the AI with sample data to demonstrate its viability in a controlled environment.
3
Ensure security: implement safeguards to control access, protect sensitive data, and prevent misuse of AI.
While AI might not solve all of humanity’s problems or every business challenge, Josh remains optimistic about its future potential. He sees AI as a powerful tool for enhancing human efforts and believes those who adopt it now will have an advantage over those who don’t. We are already living in the “augmented present” of AI, where humans and machines work together to achieve more.
Josh believes AI can significantly accelerate breakthroughs in healthcare, pointing to recent progress in solving a major conundrum in science, the so-called protein folding problem, which is essential for drug development and could lead to faster vaccine and treatment discoveries. In engineering and design, Josh highlights digital twins – virtual models of real-world systems that allow extensive testing before real-world deployment, reducing costs and risks associated with real-world testing.
For more insights on where we stand with AI and where it will take us next, watch or listen to the full conversation with Josh. And if you’re interested in integrating AI into your digital product, check out our AI business solutions page.