Student Spotlight: Lindsay Gross Builds the Technical Muscle Behind Responsible AI 

3/16/26 · AI for Product Innovation · 4 min read

With a background in public policy and AI ethics, Gross came to Duke’s Master of Engineering in Artificial Intelligence for Product Innovation to understand how models are trained, where bias emerges and how product decisions shape real-world outcomes.

Duke AIPI MEng student Lindsay Gross

When Lindsay Gross talks about responsible AI, she doesn’t start with abstract principles; she starts with the systems underneath them. As AI products move faster from research to everyday use, she sees a growing need for professionals who can connect the technical realities of model development with the governance questions that determine how those systems affect people at scale. That conviction brought her back to Duke, this time to deepen her engineering toolkit and sharpen the way she helps teams build, test and deploy AI more thoughtfully. 

Gross arrived at Duke’s AI for Product Innovation (AIPI) Master of Engineering program with a strong grounding in public policy and ethics, shaped during her undergraduate years at Duke studying how AI should be governed and where safeguards are most needed. But she also recognized what was missing. “I spent a lot of time studying how AI should be governed,” she said, “but I felt like I was missing the technical component of the societal impact: how models are trained, why certain product decisions are made, and where bias shows up.” She came to the Duke AIPI program looking to become someone who can move comfortably between technical and non-technical teams, bringing clarity and rigor to decisions that often live at the intersection of engineering, risk and public trust. 

“I wanted to build a stronger technical foundation to complement my background in public policy and AI ethics. AIPI allowed me to better understand how models are trained, why product decisions get made, and where bias shows up.”

Before and during the program, Gross built experience across trust and safety, ethical AI and applied research. At Tremau, a trust and safety consultancy, she worked on risk assessments and mitigation strategies for large technology platforms navigating regulations such as the EU Digital Services Act. The work required a close look at internal processes, where she identified gaps in AI risk classification and helped teams put proactive safety measures in place before issues reached users. She also worked at Metaphysic.ai, a generative AI visual effects company, where she translated ethical principles into product workflows, collaborating with engineers, legal teams and leadership to embed responsible AI practices into real client-facing contexts. 

Throughout the program, courses such as AIPI 590 Explainable AI and AIPI 520 Modeling Processes & Algorithms reshaped how she approaches AI systems in practice. Rather than treating models as black boxes, she learned to evaluate why they behave the way they do, where they fail and how design choices can introduce or reduce risk. That technical grounding strengthened her ability to weigh tradeoffs across data quality, model selection, interpretability and bias. Just as importantly, she said, AIPI strengthened the way she communicates about complex AI systems to different audiences, a skill she has found essential in product-facing work. 

Gross also sought out opportunities that combined hands-on building with deeper research questions. At the 2025 Duke AI Hackathon, her team created Alba, a browser extension designed to surface the environmental impact of AI usage in real time. The extension turned broad conversations about AI sustainability into a concrete, user-facing signal that could shape everyday behavior. In parallel, she conducts research through Duke’s Deep Tech Lab in partnership with OpenAI, focusing on AI agents and failure modes related to memory, autonomy and misalignment. Her work examines how poorly defined guardrails can lead to unpredictable agent behavior, and specifically what that means for trust, safety and product design. 

Across both experiences, she said, the core skills were the same: navigating ambiguity, thinking in systems and connecting technical choices to downstream impact. 

Looking ahead, Gross plans to work on trust and safety or responsible AI teams where she can help shape how AI systems are designed, evaluated and deployed in real products. She intends to pair technical work such as model analysis and risk assessment with product decisions around safety, usability and scale. Over time, she hopes to grow into a product-facing leadership role, helping teams move quickly without sacrificing transparency, safety or user trust. For Gross, AIPI has been less about choosing between policy and engineering and more about learning how to hold both at once, in the moments that matter most.