Lead Compliance Consultant Privacy and Responsible AI
To apply for this job, please use the formal application link here.
ROLE
– Validate that business/tool owners have implemented appropriate monitoring mechanisms.
– Review monitoring dashboards, logs, red-teaming results, and performance trends.
– Assess control effectiveness and identify degradation, drift, or risk creep.
– Ensure incident escalation pathways are clear and tested.
– Aggregate observability findings across AI use cases, identify systemic trends or recurring control weaknesses, and prepare executive-ready reports for senior leadership.
– Partner with Legal, Technology, and Risk leadership to ensure monitoring mechanisms and lifecycle evaluations align with Responsible AI and Privacy-by-Design standards.
– Define and own reporting and presentations for department leadership, compliance committees, and governance bodies to support decision-making, influence process changes, and track Responsible AI and privacy program activities.
– Act as a trusted advisor to business leaders to arm them with the information they need to make informed decisions on AI risk and personal information collection, use, and sharing.
– Facilitate cross-functional governance forums and influence decision-making among senior stakeholders.
– Drive continuous improvements and automation opportunities to enhance monitoring maturity and operational efficiency.
– Engage with external RAI and Privacy networks to stay informed of emerging risks, standards, and best practices.
– Other duties as assigned.
REQUIREMENTS
– Four-year degree or equivalent combination of education and experience in fields such as Law, Computer Science, Data Science, Public Policy, Ethics, or related discipline.
– 10+ years of experience within Data Science/AI, Responsible AI, Privacy, Risk, Governance, or related Compliance functions, including direct exposure to AI governance, model lifecycle management, and privacy regulations.
– Experience mapping RAI controls across the AI lifecycle.
– Deep understanding of model risk, algorithmic risk, and AI failure modes.
– Experience leading cross-functional projects or workstreams at the intersection of compliance, technology, and business strategy.
– Experience evaluating accuracy degradation and retrieval errors; fairness and bias patterns; explainability and traceability gaps; robustness and adversarial failure modes; prompt injection and misuse; and privacy leakage and harmful outputs.
– Familiarity with AI/ML systems, generative AI platforms, and third-party vendor tools, including how compliance and risk frameworks apply to these technologies.
BENEFITS
– Healthy work-life balance
– Opportunity to work autonomously
