where responsible AI meets culture
The Culture of Machines is your next must-listen AI podcast, exploring key questions about responsible AI and its impact on, and intersections with, people, products, and communities.
S1|E1
Can AI be built to benefit society and the individual?
Alex and Sharon Zhang, CTO and Co-Founder of Personal AI, discuss building a consumer AI company before the ChatGPT tsunami, explainability, AI applications for good, and impact on society and individuals.
Guest favorite(s): “Originals: How Non-Conformists Move the World” by Adam Grant
S1|E2
Is there a human cost to [mitigating] AI bias?
Alex chats with Gerald Carter, CEO and Founder of Destined AI, about the importance of de-biasing data, the power of language, and what it means to generate diverse, consented data at scale.
Guest favorite(s): Coded Bias documentary and "Weapons of Math Destruction" by Cathy O'Neil
S1|E3
Do data governance and responsible AI intersect?
Alex and Sabrina Palme, CEO and Co-Founder of Palqee, discuss the relationship between AI and privacy compliance, Sabrina's unique approach to data governance, and navigating responsible AI foundations in light of recent legislation, and they share predictions about the future of work.
S1|E4
Is trust the currency of AI?
Alex explores the intersections of ethics, privacy, and AI with Josh Schwartz, CEO and Founder of Phaselab. In this episode, Josh shares the challenges and risks around operationalizing ethics in a for-profit business, prioritizing responsible AI in early stage companies, and responsibility in the context of privacy x AI.
S1|E5
Is it enough for AI to "do no harm"?
Alex speaks with Nana Nwachukwu, AI Governance and Ethics Consultant at Saidot, about evaluating and operationalizing AI governance, the call for AI to "do good," and the challenges to transparency in AI systems, particularly in the creator economy and healthcare.
S1|E6
How can AI augment civic spaces?
Alex chats with Sarah Lawrence, Design Director at Design Emporium and Founder of Tallymade, about driving civic engagement through AI-powered collective art, preserving creativity in the age of AI art, and designing for everyday utility, like generating recipes to reduce food waste or mapping errands efficiently.
S1|E7
Does truly ethical social AI exist?
Alex chats with Emmanuel Matthews, AI Product Leader (ex-DeepMind, Google, Course Hero, Spring Health) about the opportunities and challenges of leveraging AI in mental health products, go-to-market and consumer perception of AI, and predictions for future society considering the impact of AI.
Emmanuel also shares the biggest question he believes is facing AI: can we have social AI that is truly ethical?
Guest favorite(s): "The Innovator's Dilemma" by Clayton Christensen; "Automating Inequality" by Virginia Eubanks; and "The Alignment Problem" by Brian Christian
about the show
The world's first responsible AI x culture podcast
Welcome to The Culture of Machines, the podcast where we unpack the biggest questions facing responsible AI today. Hosted by Alex de Aranzeta, the show explores the implications of AI across business, privacy, ethics, art, industry, and human rights. We also uncover the fascinating stories of people and organizations building, using, and advancing AI for social good and driving positive change.
Explore our library of thought-provoking episodes, each framed as a question that reveals the biggest opportunities facing responsible AI today. Whether you're a founder, product leader, culture maven, or simply AI-curious, you'll find a question worth exploring.
Meet the host
Alex de Aranzeta, MA, JD (she/her) is a former Civil Rights practitioner and award-winning Language Access policy builder. With a background in regulatory enforcement and anti-discrimination policy, she's built and led civil rights and diversity compliance and governance across government, higher ed, and enterprise, for teams of 50 to 50,000. For the past five years, Alex has advised venture-backed startups and CEOs on GTM and storytelling strategy and on operationalizing equitable practices across teams, products, and communities.
join the conversation
Want to be a TCOM insider?
Sign up to receive occasional updates about launches, virtual / IRL events, and more. No spam or sales mail. Only responsible emails.
© 2023-2024 Accessity LLC. All rights reserved.
COMMITMENT TO RESPONSIBLE AI USE
Updated 9/2024

As a responsible AI speaker, startup advisor, and professional in AI bias and discrimination, Alex is committed to modeling transparency by explaining the core AI tools she uses, and how she uses them, in producing The Culture of Machines podcast.

Understanding that AI tools are rapidly maturing, and that she is continuously learning about and testing new tools, Alex takes an iterative approach to using AI in marketing and producing this podcast. A tool she used in the past, or uses at present, may not be used tomorrow, next week, or in the future. From time to time, Alex may use ChatGPT, Gemini, or Claude for brainstorming or reference. She also uses Riverside to record the podcast and to generate summaries and show notes, which she adapts in her own voice. Alex performs extensive due diligence and leverages her investigative skills by cross-referencing outputs with verifiable sources, as needed.

At no time does Alex use AI to draft or co-draft guest scripts. Further, Alex does not input personal or proprietary information into ChatGPT or other LLMs.

Listen to The Culture of Machines episodes
Georgia Tech MBA TechForward Conference
"Creating Your Culture of Ethical and Responsible AI" Workshop
October 25, 2024 | 2:45 - 3:45 p.m. ET | Scheller 203
Facilitator: Alex de Aranzeta
WORKSHOP DESCRIPTION
The integration of generative AI into business, work, and life is transforming how humans and machines interact. This shift creates both significant disruptions and unique opportunities across industries. As future business leaders, you'll need to understand ethical and responsible AI to navigate these changes effectively.

In this interactive workshop, we'll explore the core principles of responsible AI and examine real-world examples that offer practical insights into the impact of AI on products, customers, and teams. You'll also have the opportunity to practice a framework designed to help you apply ethical and responsible AI strategies.
✨For attendees✨ WORKSHOP ACTIVITIES AND MATERIALS
Below are links to the workshop activity, a brief feedback survey, and a list of references and resources for further reading.
📂 Group Activity: WellTrace AI
💬 Feedback Survey
💡 Workshop Re/Sources
ABOUT THE FACILITATOR
Alex de Aranzeta, MA, JD is a startup advisor, founder coach, and responsible AI podcast host specializing in responsible AI, culture, and strategy for early-stage and venture-backed startups.

Prior to tech, Alex held key roles in government, higher ed, and enterprise, guiding over 250 workforces in equity and compliance. With nearly a decade of experience in civil rights compliance and regulatory enforcement, Alex has led over 300 investigations, trained thousands of professionals, contributed to policy and regulation, and built an award-winning Language Access Program.

Alex's work now focuses on operationalizing ethical practices in AI and advising leaders on communication and strategy. As the former founder of a compliance-tech SaaS and an operator within a generative AI startup, she combines her expertise in policy and equity with practical insights from the tech industry to help startups scale effectively across teams and products. She's also a trusted voice on the intersection of AI, society, and culture, hosting "The Culture of Machines" podcast with top technologists and ethicists, and she has spoken on equity, policy, and communication at the Kapor Center, the U.S. Department of Labor Women's Bureau, the World Congress of Bioethics, and the Women in Product Conference.