
EbookBell.com

Most ebook files are in PDF format, so you can easily read them with software such as Foxit Reader or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, .fb2, etc. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.

Please read the tutorial at this link:  https://ebookbell.com/faq 


We offer FREE conversion to the popular formats you request; however, this may take some time. Please email us right after payment, and we will provide the converted file as quickly as possible.


If you encounter an unusual file format or a broken link, please do not open a dispute. Email us first, and we will try to assist within a maximum of 6 hours.

EbookBell Team

Distributed Machine Learning And Gradient Optimization Jiawei Jiang

  • SKU: BELL-46750040
$31.00 (list price $45.00, -31%)

Rating: 0.0 (0 reviews)

Distributed Machine Learning and Gradient Optimization by Jiawei Jiang is available for instant download after payment.

Publisher: Springer
File Extension: PDF
File size: 4.35 MB
Pages: 178
Author: Jiawei Jiang, Bin Cui, Ce Zhang
ISBN: 9789811634192, 981163419X
Language: English
Year: 2022

Product description

Distributed Machine Learning and Gradient Optimization by Jiawei Jiang, Bin Cui, and Ce Zhang (ISBN 9789811634192, 981163419X) is available for instant download after payment.

This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems. As such, implementing machine learning algorithms in a distributed environment has become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic optimizations and careful system implementations, the book introduces three essential techniques for designing a gradient optimization algorithm that trains a distributed machine learning model: the parallel strategy, data compression, and the synchronization protocol.
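As an illustration only (not taken from the book), the following minimal Python/NumPy sketch shows where the three techniques named above fit in distributed gradient optimization: each simulated worker computes a gradient on its own data shard (parallel strategy), sparsifies it with top-k selection (data compression), and the workers' gradients are averaged in a bulk-synchronous step (synchronization protocol) before the model is updated. All names, sizes, and hyperparameters here are assumptions chosen for the example.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression problem, split across simulated workers (data parallelism).
n_workers, n_samples, n_features = 4, 1000, 20
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.01 * rng.normal(size=n_samples)
shards = np.array_split(np.arange(n_samples), n_workers)

def local_gradient(w, idx):
    # Each worker computes the squared-loss gradient on its own data shard.
    Xi, yi = X[idx], y[idx]
    return 2.0 * Xi.T @ (Xi @ w - yi) / len(idx)

def top_k(g, k):
    # Gradient compression: keep only the k largest-magnitude entries.
    sparse = np.zeros_like(g)
    keep = np.argsort(np.abs(g))[-k:]
    sparse[keep] = g[keep]
    return sparse

w = np.zeros(n_features)
lr, k = 0.05, 5
for step in range(200):
    # Synchronization protocol: bulk-synchronous parallel (BSP) -- collect every
    # worker's compressed gradient, average, then apply one global update.
    grads = [top_k(local_gradient(w, idx), k) for idx in shards]
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - true_w))

In a real deployment the averaging step would run over an all-reduce operation or a parameter server rather than a Python loop, and asynchronous or stale-synchronous protocols could replace the bulk-synchronous barrier; the sketch only indicates where each of the three techniques enters the training loop.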

Written in a tutorial style, it covers a range of topics, from fundamental knowledge to a number of carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.

