
EbookBell.com

Most ebook files are in PDF format, so you can easily read them with software such as Foxit Reader or directly in the Google Chrome browser.
Some ebooks are released by publishers in other formats such as .azw, .mobi, .epub, or .fb2. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.

Please read the tutorial at this link:  https://ebookbell.com/faq 


We offer FREE conversion to the popular format you request; however, this may take some time. Please email us right after payment, and we will provide the converted file as quickly as possible.


If you encounter an unusual file format or a broken link, please do not open a dispute. Email us first, and we will assist you within a maximum of 6 hours.

EbookBell Team

Adversarial Machine Learning Yevgeniy Vorobeychik

  • SKU: BELL-232189404
$31.00 (list price $45.00, 31% off)

Rating: 4.0 (6 reviews)

Adversarial Machine Learning by Yevgeniy Vorobeychik is available for instant download after payment.

Publisher: Morgan & Claypool Publishers
File Extension: EPUB
File size: 6.57 MB
Author: Yevgeniy Vorobeychik
ISBN: 9781681733982, 1681733986
Language: English
Year: 2018

Product description

Adversarial Machine Learning by Yevgeniy Vorobeychik (ISBN 9781681733982, 1681733986) is available for instant download after payment.

This book is a technical overview of the field of adversarial machine learning, which has emerged to study vulnerabilities of machine learning approaches in adversarial settings and to develop techniques that make learning robust to adversarial manipulation.

After reviewing machine learning concepts and approaches, as well as common use cases of these in adversarial settings, we present a general categorization of attacks on machine learning. We then address two major categories of attacks and associated defenses: decision-time attacks, in which an adversary modifies the instances seen by a learned model at prediction time in order to cause errors, and poisoning (training-time) attacks, in which the training dataset itself is maliciously modified. In the final chapter devoted to technical content, we discuss recent techniques for attacks on deep learning, as well as approaches for improving the robustness of deep neural networks. We conclude with a...
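As a rough illustration of what a decision-time (evasion) attack looks like, here is a minimal Python sketch, not taken from the book: it perturbs an input to a hypothetical linear classifier in the direction that lowers its score, in the spirit of the fast gradient sign method. The weights, bias, and perturbation budget are made-up example values, and only NumPy is used.

    # Illustrative sketch of a decision-time (evasion) attack on a linear
    # classifier; hypothetical example values, not code from the book.
    import numpy as np

    # Hypothetical trained linear model: score = w.x + b, predict 1 if score > 0.
    w = np.array([1.5, -2.0, 0.5])
    b = 0.1

    def predict(x):
        return int(w @ x + b > 0)

    # A clean instance that the model currently classifies as 1.
    x_clean = np.array([1.0, -0.5, 0.2])

    # Decision-time attack: within an L-infinity budget eps, move each feature
    # in the direction that decreases the score. For a linear model the gradient
    # of the score with respect to x is just w, so the step is -eps * sign(w).
    eps = 0.8
    x_adv = x_clean - eps * np.sign(w)

    print("clean prediction:", predict(x_clean))      # prints 1
    print("adversarial prediction:", predict(x_adv))  # prints 0 for this eps

For a linear model the input gradient is simply the weight vector; for deep neural networks the same idea requires computing that gradient by backpropagation, which is where the attacks on deep learning discussed in the book come in.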
