AI, Trust, and the C-Suite Dilemma
- Morgan Doyle 
- May 27
A month may have passed, but the ideas sparked at Atlanta AI Week are still igniting conversations. This series unpacks those insights—one session, one story, and one sector at a time.

The first session of Atlanta AI Week didn’t shy away from the uncomfortable. It challenged leaders to confront one of the most pressing questions of our generation:
“Just because we can… does it mean we should?”
📍 A Wake-Up Call for the AI Age
In a deeply personal and powerful keynote, the speaker opened with a controversial but very real scenario—a simulated conversation between a 12-year-old and an AI tool offering inappropriate advice. The system didn’t question the context. It responded as programmed: optimizing for helpfulness, not ethics.
This was not just a product flaw. It was a systems flaw. The takeaway was clear: it’s not always a failure of the algorithm—it’s a failure of decision-making upstream.
This mirrors how many tech rollouts happen today:
- Developers raise concerns. 
- The C-suite weighs short-term revenue against long-term risk. 
- The decision often tilts toward speed and shareholder value. 
And in that moment, we lose sight of the societal cost.
📊 Edelman’s 2024 Trust Barometer found that 61% of people globally distrust AI technologies, not because they reject innovation, but because they fear misuse. And 78% of consumers say it’s important that companies use AI ethically and transparently, according to Salesforce’s 2024 State of the Connected Customer report.
This isn’t just a philosophical debate. It’s a business risk.

🧩 The Real Business Risk: Losing Trust
We’ve seen this story play out before. Social media’s rapid expansion brought immense connectivity—but also misinformation, exploitation, and a collapse in public trust. AI, with its broader scale and deeper integration, carries the same risks, amplified.
- According to PwC, 85% of CEOs believe AI will significantly change the way they do business in the next five years, but only 35% feel confident their organization is addressing AI risk appropriately. 
- The World Economic Forum ranks misinformation and disinformation driven by AI as one of the top 5 global risks for the next two years. 
We are at a pivotal moment. And the decisions we make now will shape not just our businesses, but our culture, our politics, and our personal lives.
🧭 What Marketers Must Take from This
Marketers are among the most active adopters of AI, leveraging it for segmentation, real-time personalization, predictive analytics, and content creation. The power is remarkable. But it raises critical questions:
- Are we using customer data responsibly? 
- Do consumers understand how decisions are made? 
- Can our customers trust the content and recommendations we generate? 
💡 71% of marketers say AI helps improve customer experience, yet only 34% have formal governance around how AI is used in their marketing stack (Forrester, 2024).
Ethical AI use is no longer optional. It’s a differentiator.
🔄 Responsibility Is the New Competitive Advantage
In a world where AI access is rapidly democratized, how you use AI will matter more than what tools you use. Values and governance will be the new brand equity.
- Consumers are 2.5x more likely to remain loyal to companies they believe use AI transparently and responsibly (McKinsey, 2024). 
- Gen Z and Millennial consumers, now the largest demographic groups, are 63% more likely to boycott a brand if they believe its AI practices are unethical. 
This session reminded us: AI is not neutral. The tools we create reflect the values of the people who build and deploy them.
So the next time your team considers rolling out a new AI-driven feature, pause and ask:
Just because we can… does it mean we should?
Let this be the mindset that guides us—not just through Atlanta AI Week, but into every conversation where innovation meets responsibility.
This blog is based on insights from the powerful session, “The State of AI: Technology, Policy, and Adoption,” presented by Justin Daniels during Atlanta AI Week. His expertise and perspective on the intersection of AI, regulation, and enterprise adoption provided the foundation for this recap. Thank you, Justin, for sparking such a timely and important conversation.
🧠 Personal Reflection
This session left me with more than just notes—it left me with a question I can’t stop thinking about:
“Just because we can… does it mean we should?”
It’s easy to get caught up in what AI can do, especially in marketing, where innovation moves fast and pressure runs high. But this keynote reminded me that speed without intention can cost us more than we realize.
AI isn’t neutral. It reflects the priorities of those who build and deploy it. And as professionals, we each have a role in shaping whether those priorities lead to progress or unintended harm.
For me, this wasn’t just a talk about tech. It was a call to lead with more responsibility, more clarity, and more courage.
This post is part of a special recap series sharing insights and takeaways from the inaugural Atlanta AI Week, held April 22–24, 2025, at Atlanta Tech Village. Over several posts, I’ll break down key takeaways from sessions and conversations, along with the ethical considerations and industry trends shaping the future of AI across sectors, from marketing and healthcare to smart cities and beyond. Stay tuned for more.