About me
Xiang Xu is a Senior Applied Scientist and Tech Lead at AWS AI Labs, where he leads research and development in multi-modal digital trust and safety, building secure, scalable, and trustworthy AI systems for digital identity and content authenticity. He has led foundational work on reinforcement fine-tuning, multi-modal red-teaming, post-training safety alignment, and the design of specialized expert models for robust, scalable content verification. His research focuses on making large multi-modal language models more robust, aligned, and efficient, especially in high-risk applications such as identity verification and content moderation.

The systems he has designed and deployed at AWS serve millions of users worldwide and set new benchmarks in biometric authentication, liveness detection, and privacy-preserving AI. He builds resilient systems that safeguard digital identities and counter deepfakes, spoofing, and adversarial threats, while upholding user privacy and complying with global security and data protection standards. He is passionate about making AI not just smarter but safer: models that see, read, and reason with security, accountability, and human alignment at their core.
Before joining AWS, he received his Ph.D. from the University of Houston in 2019, advised by Professor Ioannis A. Kakadiaris. With over a decade of experience in biometric research and digital identity systems, he has contributed to advances in face recognition, liveness detection, 3D face reconstruction, and adversarial robustness. His expertise extends to multi-modal computer vision, domain adaptation, and the security challenges of deploying AI systems at scale. He serves as a reviewer for top-tier conferences, including CVPR, ECCV, and ICCV, and has been recognized for his contributions to AI safety and digital trust.
Digital Trust Leadership: He leads cross-functional teams developing next-generation digital identity and trust solutions; his work directly shapes AWS's approach to secure AI deployment and user protection across multiple services.
Prospective interns: If you are interested in internship positions in digital trust, AI safety, or multi-modal systems (Ph.D. students preferred), please email me your CV and a research statement highlighting relevant experience.
Core Expertise
- Post-Training & Safety Alignment: Expert in post-training techniques for multi-modal models, with a focus on safety and trust. Specializes in reinforcement learning and preference optimization to align models with the safety requirements of digital trust and safety applications (see the sketch after this list).
- Digital Identity & Trust: Leading research in digital identity verification, biometric authentication, and liveness detection systems. Expertise in developing robust, multi-modal identity solutions that protect against presentation attacks, deepfakes, and sophisticated spoofing attempts.
- Deepfake Detection & Anti-Spoofing: Pioneering research in deepfake detection, presentation attack detection, and adversarial robustness. Developing production-scale systems that protect digital platforms from synthetic media and sophisticated attack vectors.
- Multi-Modal AI for Security: Developing large-scale vision-language models designed for security and trust applications. Expertise in cross-modal understanding, temporal consistency, and real-time processing for digital identity verification systems.
Recent News
- [2025/05] Two patents granted related to digital identity and trust systems.
- [2025/03] Three papers accepted at CVPR 2025, covering model diagnosis and improvement, source-free adaptation for security patching and customization, and grounding in vision-language models. [Link]
- [2024/07] One patent granted, titled "Evaluating biometric authorization systems with synthesized images".
- [2024/04] Honored to serve as keynote speaker for the 5th Chalearn Face Anti-Spoofing Workshop and Challenge at CVPR 2024. [Link]
- [2024/04] Paper "Sharpness-Aware Optimization for Real-World Adversarial Attacks for Diverse Compute Platforms with Enhanced Transferability" accepted at AdvML@CVPR2024. [Link]