AI security leader Jonathan Dambrot appears in our event series Delivered to discuss how to successfully navigate the complexities of the AI landscape in 2024.
At Infinum, we interviewed industry leaders about their experiences with digital transformation and future investment plans, with a special focus on AI. One statistic stood out: 78% of companies plan to invest in AI tools in 2024, yet 73% are underprepared to integrate them into their operations.
Based on his extensive experience, AI security pioneer Jonathan Dambrot estimates that in reality, this figure is even higher, closer to 90%. That’s how the idea for his company, Cranium AI, was born.
With over 20 years of experience in information technology, cybersecurity, and third-party risk management, Jonathan is a visionary and industry leader dedicated to promoting confidence and trust in secure artificial intelligence.
Appearing as a guest on our show Delivered, Jonathan discussed how to tackle the cybersecurity and regulatory challenges that come with AI adoption and shared key business decisions that have made Cranium a leader in AI security.
The brains behind Cranium
Before becoming CEO and co-founder of Cranium AI, Jonathan handled sensitive data for large companies, utilizing AI and ML systems to manage third-party risks more efficiently. This led him to realize there was a huge opportunity to improve security of AI and ML pipelines. At that time, generative AI had not yet become mainstream, and regulatory pressures were minimal, so these issues were not on people’s radars.
“I started recognizing there’s so much innovation happening so quickly, but people aren’t necessarily ready to have conversations about the security of AI pipelines. So, I wrote my business plan and my pitch deck,” says Jonathan.
The rest is history. In 2023, Cranium spun out of KPMG Innovation Studio with the mission of securing the AI revolution.
From the internet to the iPhone to the AI revolution
The AI revolution has prioritized AI in a practical, real-world sense, making its power accessible to individuals on the devices they use daily. This shift is comparable to other tech milestones, such as the dawn of the internet or the introduction of the first iPhone.
When it comes to the latest groundbreaking, world-changing tech revolution, Jonathan is a huge believer that AI is going to change our lives for the better. Already, we’re seeing a rapid transformation happening all around us – it seems every business is now AI-enabled, or at least trying to be. However, Jonathan cautions that this rapid adoption often lacks adequate governance, posing significant security and ethical challenges.
“It’s the collision between large-scale transformation, emerging technology, and the need to get the returns we want. However, if we don’t do this right, we’re going to potentially face real problems in the future,” claims Jonathan.
Who’s afraid of the Big Bad AI?
Pop culture, through movies like The Terminator, has instilled a fear of AI in all of us, often portraying it as a potential threat to humanity. But today, we face a paradox where individuals and companies are simultaneously too afraid and too trusting of AI, according to Jonathan.
This is evident in stories from the early days of machine learning, when people blindly followed their GPS systems and drove into lakes, or in a study where a crowd followed a malfunctioning robot during a simulated fire, ignoring clearly marked exits. More recently in the US, a Chevy dealership’s AI-driven chatbot agreed to sell a car for a dollar, and Air Canada had to honor a refund policy its chatbot made up.
However, what we should be really afraid of is the fact that we’re putting sensitive data into AI models without thinking twice about what happens with that data and the threats it’s exposed to.
“We need to think about things like poisoning. We need to think about back doors that are getting put into AI systems. In addition to all the standard cyber risks that we’ve been talking about for a long time, we now have unique threats,” explains Jonathan.
In some cases, AI systems are open and we know how they’re trained. But in a lot of cases, especially with organizations that depend on third-party AI services, it’s more of a black box and we need different techniques to open them up and understand them.
On the one hand, companies are feeling immense pressure to jump on the AI train before their competitors do. On the other hand, there’s a growing recognition that doing this incorrectly or with an architecture that leaves you open to threats and risks is almost as bad as getting left behind.
“How do you take an AI system, understand what its unique vulnerabilities are, check if it can be penetrated, and report on that? You need a special skillset. You need people who understand cybersecurity, people who understand the AI, and then you need tools that can bridge the gap there,” concludes Jonathan.
The world doesn’t know how to act around the AI Act
To keep up with the pace of technological advancement, regulation in the AI space needs to be just as fast-moving.
In March 2024, the European Parliament adopted the Artificial Intelligence Act, the first comprehensive legal framework for AI worldwide. For comparison’s sake, GDPR has had a massive impact on data privacy globally, and organizations continue to struggle with its obligations. While GDPR fines are capped at 4% of the violating company’s annual turnover, the AI Act imposes even stiffer penalties: fines for non-compliance reach up to 35 million EUR or 7% of a company’s global annual turnover, whichever is higher.
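To make the penalty structure concrete, the cap for the most serious infringements works as a whichever-is-higher rule. The sketch below is a minimal illustration of that rule, not legal advice; actual fines depend on the infringement tier and other factors in the Act:

```python
def ai_act_max_fine(annual_turnover_eur: float) -> float:
    """Illustrative cap for the most serious AI Act infringements:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# For a company with EUR 1 billion in turnover, the 7% figure dominates;
# for a EUR 100 million company, the EUR 35 million floor applies.
print(ai_act_max_fine(1_000_000_000))  # 70000000.0
print(ai_act_max_fine(100_000_000))    # 35000000.0
```

In other words, large enterprises face exposure well beyond the flat 35 million EUR figure, which is one reason compliance teams are paying close attention.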
Now, every major country is developing its own version of the AI Act. Governments are increasingly concerned about where AI is built, how it crosses their borders, and how it affects their citizens’ data. As a result, companies are under immense pressure to understand how AI works within their environments, balancing the demands of compliance with the drive for innovation.
Building a culture of innovation
To stay at the forefront of an innovative niche, a business needs to foster a culture of innovation, something Jonathan has considerable experience with.
While some companies focus on perks like custom-made coffee machines, ping pong tables, and lavish team-building events, Jonathan believes the key to building a culture of innovation is providing an environment where smart, talented people can solve meaningful problems together.
“I’ve always found that we want to create an environment where people are looking to solve big problems. When we see something we want to solve and have the capacity to solve it, we say yes, and we go after it,” says Jonathan.
Creating a culture of innovation involves setting high standards and finding people whose values align with the company’s and who can thrive in that environment. That’s why Cranium has a ‘no jerks’ policy, which translates to employing respectful people and promoting diversity and inclusivity in the workplace.
For more advice on successful leadership, innovation, and navigating the security and regulatory challenges that come with AI adoption — watch or listen to the full conversation with Jonathan. And if you’re interested in integrating AI into your digital product, check out our AI business solutions page.