In this podcast episode, host Michael Bernzweig discusses the journey of founding Aetos with Shayne Adler, focusing on the challenges and transformations in compliance and AI deployment. The conversation covers the importance of privacy by design, the role of AI in business, and the concept of shadow AI. Key insights include the need for a plan B in AI deployment, the significance of security basics, and the creation of sanctioned spaces for AI use.
Michael Bernzweig (00:05.978)
I hope everyone enjoyed that last presentation. Coming up next, we have Shayne Adler. She is the CEO and co-founder of Aetos Data Consulting, where she helps startups and SMBs turn privacy, security, and AI governance into a sales advantage instead of a blocker. Her clients include a wide range of organizations, from Commerce Bank and Trust to a legal startup navigating SOC 2 and HIPAA compliance. So with that, Shayne, happy to have you join us at the event.
Shayne Adler (00:46.238)
Thank you. As Michael said, I'm Shayne from Aetos. And I know that when people hear about compliance, they tend to not go to the most positive place. Some of the words we've heard in our journey alongside the word compliance: it's restrictive, it causes friction. One of the things that I want to do today is talk to you about how to deploy AI in your company in a way that has good governance, has those guardrails, and adheres to the principles of AI ethics, if I may use that word, without it causing that friction, without it slowing down your core business. So it's an additive; it isn't a hindrance, and it isn't chaos.
So by deploying AI, we're talking about getting into workflows that save time and money, not just generating weird images of hands that have seven fingers. We want to improve the customer experience without creating that chaos. So we're not talking to you about building your own models today. We're going to leave that to the people with the server farms. Most founders win by shipping AI into the workflow and improving it based on usage. I think that's common sense, right?
And spoiler alert: even if you haven't actually shipped an AI feature, I'm reasonably certain that your company already has AI in the building. It's lurking in the background like a ninja, and we call that shadow AI. We'll get to that in a bit. Turning to demo versus reality: let's be honest. AI is easy to demo, but it's harder to ship than a fragile package during the holidays.
Shayne Adler (02:46.352)
A demo is a clean, curated example. Real life is messy; it, like your company, involves humans who make mistakes and, let's be honest, are inconsistent. So demo versus reality is the difference between an Instagram influencer's kitchen and my kitchen. Users don't trust what they can't predict, and we've gotten used to outputs from computers being predictable. So in order to get adoption, you need to build trust. Wrong answers are going to happen, hallucinations, that sort of thing. You need to create a default option for your AI tools that doesn't involve the AI hallucinating a new company policy or inventing things out of thin air that you then have to stand behind. This has happened with customers, and it causes a lot of chaos. The other factors: cost per task, in tokens, can grow quietly like that gym membership you're going to forget to cancel in a couple of weeks. You've also got the risk of data leakage. AI is the hot new thing on the market, but all AI is built on data. It's only as good as what you put into it, and if data that's not supposed to be going into the AI is going in, that's also going to cause problems. And beyond hallucinations, it could also just break entirely. You need a plan B, a backup for how your company is going to cope.
Our goal is simple. We want to be able to deploy AI that gets adopted, is safe, and has a plan B for when it inevitably breaks or goes a little nutty.
Shayne Adler (04:55.698)
We're going to talk about what a well-governed rollout looks like. This is the basic outline, and we're going to go into detail on each of these steps. But overall, this is like your cheat code for deployment. You want to pick a workflow, just one, and figure out where it saves time or money. And those are interchangeable in some situations.
You want to set clear rules, define what data is invited to the party and what stays home. You want to design for humans. You want to keep a human in the loop. And I'm going to use this phrase a few times because it is critical. You want humans to have the opportunity to review and correct. We're not quite at Skynet yet. And you want to be able to test it, watch it in production, see what's going right and what's going wrong. And then you want to build trust.
You want to have security basics and straight answers for investors. Most teams spend too much time arguing about which AI model to use. That's like arguing about the engine when you haven't built the car yet. You want to focus on the workflow and operations; that's where you win. Starting with step one: picking the right starting point. Good AI use cases are like a good haircut: the value's obvious, and if there's a mistake, it's fixable most of the time.
Great starters would be drafting and summarizing (emails, notes, support replies), helping users search your documentation, and support triage. And here's the founder move: with that workflow, you want to pick a metric, like reducing support response time by 30% or cutting follow-up writing time in half.
If you can't name that workflow and that metric, you're not deploying AI; you're just adding a novelty feature, and nobody wants to be the company that just added the digital equivalent of a snake in a can. So now we're going to move to how it looks in action. You want a co-pilot, not an autopilot. A co-pilot is a tool that drafts, suggests, and then
Shayne Adler (07:10.588)
brings the human into the loop, there's that phrase again, to make the actual decision. It's like having a really smart intern. And this drives adoption, because users feel like they are still captain of the ship. It's not great to feel out of control, and you also don't want your team checking out if you're deploying this tool internally. It's too easy to just cut and paste responses, and you would be shocked at some of the things I have seen over the couple of years since AI became mainstream, at what outputs have just been pasted in, very obviously. An autopilot is an AI tool that takes actions. It sounds nice until it creates a massive mistake while you're asleep. You need strong safety controls and that big red stop button. So your best pattern is to stage the deployment. You want to ship a co-pilot that people trust, learn where it's reliable, and then automate the boring, low-risk parts while keeping that human in the loop for anything that involves a decision or an output. So now I'm going to turn to shadow AI. That old horror story where the call is coming from inside the house? That's shadow AI. It's not a villain from a new movie; it's the fact that your team is probably, even right now, using ChatGPT on their second monitor. I guarantee that there are employees at every company using random browser tools to help write emails or code. That's shadow AI, and it's not inherently bad, because what it is is an indicator. It shows that your team is hungry for tools, for leverage.
However, it creates four very predictable headaches. People will paste sensitive information into the tools and cause information leaks. Your messaging becomes inconsistent, because everyone is using a different voice through their personal accounts. You don't have insight into accuracy. And you don't have access controls, because if somebody's using their personal account, you can't cut off that access when they leave the company, and your
Shayne Adler (09:36.083)
proprietary data may still be sitting inside their account. Shadow AI is typically your first deployment problem because it's already happening. So the key is to build safe lanes, to transition from this non-sanctioned usage into sanctioned, controlled usage with those guardrails that you need. Because banning shadow AI doesn't really work; it just makes people sneaky.
So the better approach is to create those sanctioned spaces. The very clear plan: you want to run a no-blame AI check-in. Just ask, what tools are we using? You aren't hunting witches; you want to map reality. You will also get a sense of which functions your team wants to leverage AI for, depending on which tools they're using. Is it that email drafting? Is it data analysis? From that, you want to pick one or two approved tools and get company accounts. That's the critical thing. You want company accounts with centralized billing: you control the off switch when somebody leaves, and most critically, you get say over whether the model is going to train on the data being put into it. Spoiler alert: you want to turn that off every time.
You then want to create a never-paste list: all of those things that you would never want to put on LinkedIn, or a billboard back in the day. Passwords, secret keys, customer names. All of those pieces of data that you don't want out there, don't put them into the LLM. You can anonymize information to get the outputs you want. You can describe a customer to get that email drafted, et cetera. Just make sure that people aren't pasting your proprietary data into, you know, Gemini or whatever they're using. And then you want to do the training. Sounds boring, but with these new tools it's essential. You want to teach people how to redact and summarize, not raw-paste. And all of that turns shadow AI from a risk into actual leverage.
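The never-paste idea can be sketched as a small pre-flight redaction step that runs before any text leaves the company boundary. This is a minimal illustration, not a vetted data-loss-prevention tool; the pattern names and regexes below are illustrative assumptions, and a real never-paste list would be tuned to your own data.

```python
import re

# Illustrative never-paste patterns; extend these for your own sensitive data.
NEVER_PASTE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace never-paste items with placeholder tokens before the text
    is sent to any external LLM."""
    for label, pattern in NEVER_PASTE_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

Wiring a helper like this into the sanctioned tool's input path makes the "redact, don't raw-paste" habit automatic instead of optional.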
Shayne Adler (11:58.843)
So, grounding this: you want to make sure that your AI is doing its homework. You want to connect the AI, through the sanctioned tools, to your knowledge base. Instead of asking it to guess and thus hallucinate, you want it to pull from your documentation, your policies, your knowledge base, and have it cite its sources. That helps that human in the loop verify that it isn't just making something up, and that in turn streamlines things. It's going to prevent that AI chatbot, for example, from inventing a refund policy, telling a customer about it, and creating a very awkward situation.
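The grounding pattern described here can be sketched in a few lines. Assumptions: a tiny in-memory knowledge base and a naive word-overlap scorer stand in for a real vector store, and the resulting prompt would be sent to whatever sanctioned LLM the team has approved; the file names are hypothetical.

```python
import re

# Hypothetical knowledge base; in practice this is your docs and policies.
KNOWLEDGE_BASE = {
    "refund-policy.md": "Refunds are available within 30 days of purchase.",
    "shipping.md": "Orders ship within 2 business days.",
}

def words(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
    """Return the k docs sharing the most words with the question."""
    ranked = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda item: len(words(question) & words(item[1])),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Instruct the model to answer only from cited sources, or admit ignorance."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(question))
    return (
        "Answer using ONLY the sources below, citing the source id in brackets. "
        "If the sources do not contain the answer, reply: I don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```

The cited source ids are what let the human in the loop check the answer against the original document instead of trusting the model's confidence.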
Next, you want to design for trust. Users are going to adopt AI when they can see it, edit it, and correct it. Deployment-friendly patterns include making outputs drafts, so they're not the final truth, not the be-all and end-all. You want to ensure approval steps are integrated before anything risky happens. You want to give users an easy way to correct the AI. And that ties into the next one: allowing the AI to say "I don't know" instead of bluffing like a poker player with a bad hand, which is what it wants to do. It wants to please, it wants to give us answers it is confident in, and it will steer into confidence instead of accuracy every time. And then, this is critical: you also want to disclose when AI is being used, especially if it's interacting with a human. Even if you think it's obvious that the chatbot on your website is a chatbot, it is becoming more and more important, across the country and across the world, that all of these potential deception risks are disclosed as AI. So you want to treat AI as a product feature, not a magic trick. You want to keep a set of real examples and be able to test them over time. You want to track
Shayne Adler (14:09.562)
what's happening: all the complaints, escalations. You want to have feedback so that you find out if it's actually being helpful. You also want to track how fast it is and how expensive it is. Is it actually adding value? Because it may not be; it may be more expensive than it's worth. And then you want a rollback plan, because if it goes rogue, you don't want to be debugging safety issues while your customers or competitors are watching with popcorn.
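The track-and-rollback combination can be sketched as a thin wrapper around the AI call. Everything here is an illustrative assumption: the flag, the metric names, and the per-token price are placeholders, and a feature flag plus a non-AI fallback path stands in for a full rollback plan.

```python
import time

AI_ENABLED = True            # flip to False to roll back to the manual path
COST_PER_1K_TOKENS = 0.002   # placeholder price; check your vendor's rate card

metrics = {"calls": 0, "total_seconds": 0.0, "total_cost": 0.0}

def tracked_ai_call(ai_fn, fallback_fn, prompt: str) -> str:
    """Run the AI path when enabled, recording latency and token cost;
    otherwise fall back to the non-AI workflow."""
    if not AI_ENABLED:
        return fallback_fn(prompt)
    start = time.perf_counter()
    reply, tokens_used = ai_fn(prompt)   # ai_fn returns (text, token count)
    metrics["calls"] += 1
    metrics["total_seconds"] += time.perf_counter() - start
    metrics["total_cost"] += tokens_used / 1000 * COST_PER_1K_TOKENS
    return reply
```

With latency and cost accumulating per call, "is it actually adding value?" becomes a number you can check, and the rollback is one flag rather than an emergency rewrite.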
The boring yet essential safety slide; I'm going to keep it very plain and quick. You want to ensure multi-factor authentication. You want to give access only to people who need it. You want to know who has access to what, and keep a list of your tools and vendors and what data they're touching. You also want logging, as much as possible, depending on each tool's availability of that feature. And then you want an emergency incident response plan. That's your plan for the panic moment, for when things go sideways. When you standardize tools, you reduce surprises, but you don't fully eliminate them.
The 90-day plan: in the next week, pick a workflow and a metric, map the data, and run the shadow AI audit. Then you can ship your co-pilot MVP and get feedback. And then over 90 days, expand to more workflows, carefully automate the low-risk stuff, and create a trust sheet for investors that explains your data rules, to turn those experiments into leverage. So I'm gonna leave you with this, so you can screenshot it.
It tells the story that you need. If what is here is true, then you're already ahead of the curve because most teams skip the boring parts. However, the boring parts are what make AI scale and we want it to be scalable and sensible. Thank you.