
Ethical and Responsible AI: Building a Future Rooted in Humanity

  • Writer: Morgan Doyle
  • Jul 1
  • 3 min read

Insights from a thought-provoking panel at Atlanta Tech Week 2025


At a time when AI tools are being integrated into everything from banking to education, conversations around ethics, equity, and fairness have never been more important. During Atlanta Tech Week 2025, a timely panel titled “Ethical and Responsible AI: A Fairytale or a Future We Can Build?” took these questions head-on.

Moderated by Grant Wainscott, Principal Partner at ABF Consulting, the panel featured Dr. Richmond Wong, Associate Professor at Georgia Tech and founder of the Creating Ethics Infrastructure Lab, and Alexandria De Aranzeta, lawyer, startup advisor, and host of Culture of Machines, a responsible AI podcast. Together, they guided attendees through the moral, technical, and policy-oriented landscape of AI in today’s world.


Key Themes and Takeaways


1. AI Isn’t Conscious, But It’s Not Neutral

AI systems lack human consciousness, but that doesn’t mean they’re impartial. Like automatic doors that open based on proximity, AI systems respond to their inputs, and those inputs are often laced with human bias, historical inequities, and incomplete data. As Dr. Wong noted, these systems are “not conscious, but not neutral,” underscoring the importance of intentional design.


2. Ownership Is Shared, But Accountability Must Be Clear

Responsibility for ethical AI doesn’t fall on any one person or role. From developers to designers, legal teams to product managers, ethical AI demands a cross-functional approach. Governance frameworks, internal ethics boards, and inclusive design teams are critical to minimizing harm and maximizing fairness.


3. AI Literacy Is Foundational

If people aren't equipped to understand how AI systems work, they can't effectively participate in shaping them. Both panelists emphasized the need for AI literacy, not just for technical teams but for policymakers, marketers, and everyday users. Empowered teams create safer systems.


4. Bias Is Human, and Machine Bias Starts There

Bias in AI starts with bias in data, which starts with bias in human history. From facial recognition to credit scoring, biased training data can reinforce harmful stereotypes or discriminatory outcomes. A USC study cited by Alex revealed that nearly 38% of large language data sets included semantic bias, linking ethnic and gender identity to slurs or stereotypes. The ripple effects of that are global and severe.


5. Critical Thinking Is a Non-Negotiable Skill

Ethical AI requires strategic foresight. Alex outlined a framework for responsible deployment:

  • Align with Principles: Tie AI decisions to company and societal values.

  • Plan for Outcomes: Consider both intended and unintended consequences.

  • Test and Repeat: Use iterative experimentation to reduce harm and validate utility.


6. Partnerships Between Academia, Industry, and Government Are Key

Developing ethical AI is not possible in silos. The panel emphasized creating "flywheels" of collaboration between universities, startups, global regulators, and non-profits. These partnerships can bridge gaps in funding, scale, and insight, especially in international contexts where laws, cultures, and data norms vary widely.


7. The Role of Policy and Regulation Is Evolving, But Incomplete

Panelists acknowledged the fragmented regulatory landscape, especially in the U.S., where AI laws differ by state. Yet there's optimism: more states are adopting privacy and children's design codes, and agencies like the FTC are hiring technologists to guide enforcement. Still, the group agreed: regulation must catch up quickly.


8. Ethics Boards Must Be More Than PR

Asked how to build authentic AI ethics boards, the panel suggested:

  • Including diverse voices across functions (product, marketing, legal, UX)

  • Assigning clear accountability, ideally via a dedicated role like Chief AI Officer

  • Avoiding dual roles that dilute ethical oversight

  • Ensuring community-led, independent, or citizen-based advisory models are part of the mix


Final Reflections & Tools for Evaluation

Attendees left with more than just ideas; they left with tools. Dr. Wong shared a framework called Timelines, a creative method to imagine the social impact of AI through simulated news headlines and stakeholder perspectives. Alex urged builders to go “back to basics,” even recommending classical texts like On Duties by Cicero to reflect on how values inform decisions.


Other suggested tools and takeaways:

  • Use stakeholder models in early design phases

  • Leverage existing governance structures (privacy, security)

  • Include marketing and comms in ethical conversations

  • Ask: What problem are we solving? Is AI the right tool?


💬 My Closing Reflection

Every time I attend panels like this, I walk away energized, grounded by the thought leadership and inspired by the pioneers at the forefront of building a better future. Listening to this panel and participating in the surrounding conversations reaffirmed my belief that rooms like these are where change is made. Real, measurable, values-driven change. These aren’t just conversations. They’re blueprints for a future we all have a hand in shaping.

