Objective
Lead the organization's global Quality Assurance strategy, ensuring that deployed software and AI models meet the highest standards of reliability, security, and ethical compliance.
Responsibilities
- Design and implement scalable test automation frameworks within continuous integration and delivery (CI/CD) environments
- Define testing protocols for non-deterministic systems, focusing on LLM output validation, hallucination detection, and risk mitigation
- Establish AI governance guidelines, ensuring explainability, auditability, and bias mitigation in AI solutions
- Engage early in the system design phase (shift-left approach) to ensure testability of complex architectures
Requirements and Profile
- Minimum of 10 years' experience in quality engineering and test automation
- Deep expertise with tools such as Playwright, Cypress, or Selenium, along with proficiency in Python or TypeScript
- Solid knowledge of AI governance and model evaluation frameworks (e.g., Ragas, watsonx.governance)
- Experience in performance, security, and infrastructure resilience testing
- Fluent English; a mentoring profile with technical leadership skills and a commitment to knowledge sharing