Strengthening Deep Neural Networks: Making AI Less Susceptible to Adversarial Trickery
By: Katy Warr
- Synopsis
- As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately "fool" them with data that wouldn’t trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs—the algorithms intrinsic to much of AI—are used daily to process image, audio, and video data. Author Katy Warr considers attack motivations, the risks posed by this adversarial input, and methods for increasing AI robustness to these attacks. If you’re a data scientist developing DNN algorithms, a security architect interested in how to make AI systems more resilient to attack, or someone fascinated by the differences between artificial and biological perception, this book is for you.
  - Delve into DNNs and discover how they could be tricked by adversarial input
  - Investigate methods used to generate adversarial input capable of fooling DNNs
  - Explore real-world scenarios and model the adversarial threat
  - Evaluate neural network robustness; learn methods to increase resilience of AI systems to adversarial data
  - Examine some ways in which AI might become better at mimicking human perception in years to come
- Copyright: 2019
Book Details
- Book Quality: Publisher Quality
- Book Size: 246 Pages
- ISBN-13: 9781492044901
- Related ISBNs: 9781492044925, 9781492044956, 9781492044918
- Publisher: O'Reilly Media
- Date of Addition: 02/06/25
- Copyrighted By: Katy Warr
- Adult Content: No
- Language: English
- Has Image Descriptions: No
- Categories: Nonfiction, Computers and Internet, Business and Finance
- Submitted By: Bookshare Staff
- Usage Restrictions: This is a copyrighted book.