Practical Techniques for Detecting, Preventing, and Managing AI Vulnerabilities
As artificial intelligence becomes embedded in everything from healthcare diagnostics to financial systems and autonomous vehicles, the stakes for AI security have never been higher. Adversarial AI Threat Response and Secure Model Design is your essential guide to understanding, defending against, and designing resilient machine learning systems in the face of growing adversarial threats.
Written by a leading expert in AI security and policy, this book delivers a combination of technical depth, practical implementation, and strategic insight. It begins by mapping the full landscape of adversarial threats—evasion, poisoning, model extraction, backdoors, and more—across diverse data modalities and real-world applications. From there, it equips readers with a robust toolkit of detection and defense techniques, including adversarial training, anomaly detection, and formal robustness certification.
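The evasion attacks and adversarial-training defenses surveyed above can be illustrated with a minimal sketch of one classic evasion technique, the Fast Gradient Sign Method (FGSM), applied to a toy logistic classifier. The weights and inputs below are illustrative assumptions for demonstration, not material from the book.

```python
import numpy as np

# Toy logistic-regression "model" -- a stand-in for any differentiable
# classifier. The weights and bias are illustrative, not from the book.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y, eps=0.25):
    """FGSM evasion: nudge x in the direction that increases the
    binary cross-entropy loss for the true label y (0 or 1)."""
    p = predict(x)
    grad = (p - y) * w          # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.2, -0.1])  # clean input, true label 1
x_adv = fgsm_perturb(x, y=1)
print(predict(x), predict(x_adv))  # the perturbed input drops class-1 confidence
```

Adversarial training, one of the defenses the book covers, builds on exactly this loop: perturbed inputs like `x_adv` are folded back into the training set so the model learns to resist them.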
But this book goes beyond code. It explores the organizational, ethical, and regulatory dimensions of AI security, offering guidance on risk quantification, explainability, and compliance with frameworks like the EU AI Act. With hands-on projects, open-source tools, and case studies in high-stakes domains, readers will learn to design secure-by-default systems that are not only technically sound but socially responsible.
Whether you're an AI engineer deploying models in production, a cybersecurity professional defending intelligent systems, or an educator preparing the next generation of AI talent, this book provides the clarity, rigor, and foresight needed to stay ahead of adversarial threats. It’s not just a reference—it’s a roadmap for building trustworthy AI.
What You Will Learn:
Who This Book Is For:
This book is written for technical professionals and researchers who are building, deploying, or securing machine learning systems in real-world environments. The primary audience includes machine learning engineers, AI developers, cybersecurity professionals, and graduate-level students in computer science, data science, and applied AI programs. It is also relevant for technical leads, architects, and academic instructors designing secure AI curricula or systems in regulated or high-stakes domains.
Ebooks can be downloaded from the Numilog ebookstore or directly from a tablet or smartphone.
PDF: preserves the original layout of the book; reading recommended on computer and tablet.
EPUB: reflowable text format; readable on all devices (computer, tablet, smartphone, e-reader).
This ebook is DRM protected.
The LCP system provides simplified access to ebooks: an activation key associated with your customer account lets you open them immediately.
Ebooks downloaded with the LCP system can be read on:
Adobe DRM associates a file with a personal account (Adobe ID). Once your reading device is activated with your Adobe ID, your ebook can be opened with any compatible reading application.
Ebooks downloaded with Adobe DRM can be read on:
To check compatibility with your devices, see the help page.
Dr. Goran Trajkovski is Director of Data Analytics at Touro University, a Fulbright Scholar, and author of over 300 scholarly works, including 20 books. With over 30 years of experience in artificial intelligence, data analytics, and educational technology, he leads AI curriculum design, assessment innovation, and academic program development. He teaches graduate courses in AI and machine learning, and is a Pluralsight course author focused on adversarial AI and AI ethics. His research and instructional work center on AI model vulnerabilities, human-centered AI design, and practical adversarial defense strategies—making him a leader in the secure implementation of generative and adversarial AI systems.