
EbookBell.com

Most ebook files are in PDF format, so you can read them easily with software such as Foxit Reader, or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, or .fb2. You may need to install dedicated software, such as Calibre, to read these formats on mobile or PC.

Please read the tutorial at https://ebookbell.com/faq


We offer FREE conversion to the popular format you request; however, this may take some time. Please email us right after payment, and we will provide the converted file as quickly as possible.


If you receive an unusual file format or encounter a broken link, please do not open a dispute. Email us first, and we will assist you within 6 hours at most.

EbookBell Team

Pretrained Transformers for Text Ranking: BERT and Beyond (Synthesis Lectures on Human Language Technologies)

  • SKU: BELL-36429104
$31.00 (list price $45.00, 31% off)

Rating: 4.0 (56 reviews)

Pretrained Transformers for Text Ranking: BERT and Beyond is available for instant download after payment.

Publisher: Morgan & Claypool
File Extension: PDF
File size: 3.5 MB
Pages: 325
Authors: Jimmy Lin, Rodrigo Nogueira, Andrew Yates
ISBN: 9781636392288, 1636392288
Language: English
Year: 2021

Product description


The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications. This book provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in NLP, information retrieval (IR), and beyond.

The book offers a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems, and for researchers who wish to pursue work in this area. It covers a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures, and dense retrieval techniques that perform ranking directly. Two themes pervade the book: techniques for handling long documents, beyond the typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size).

Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, many open research questions remain, and thus in addition to laying out the foundations of pretrained transformers for text ranking, the book also attempts to prognosticate where the field is heading.
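To make the two categories concrete, here is a minimal sketch (not code from the book) of both paradigms using the sentence-transformers library; the checkpoint names and the toy corpus are illustrative assumptions:

```python
# A minimal sketch of the two ranking paradigms described above, using the
# sentence-transformers library. The checkpoint names and the toy corpus are
# illustrative assumptions, not examples taken from the book.
from sentence_transformers import CrossEncoder, SentenceTransformer, util

query = "how do transformers rank text?"
corpus = [
    "Text ranking generates an ordered list of texts in response to a query.",
    "BERT is a pretrained transformer encoder used across NLP tasks.",
    "Dense retrieval embeds queries and documents in a shared vector space.",
]

# Reranking (multi-stage): a cross-encoder reads each (query, text) pair
# jointly and outputs a relevance score; in practice it rescores candidates
# produced by a cheaper first-stage retriever such as BM25.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, text) for text in corpus])
for score, text in sorted(zip(scores, corpus), reverse=True):
    print(f"rerank {score:.3f}  {text}")

# Dense retrieval (single-stage): a bi-encoder embeds queries and texts
# independently, so the corpus can be encoded and indexed offline; ranking
# then reduces to a nearest-neighbor search, here plain cosine similarity.
encoder = SentenceTransformer("sentence-transformers/msmarco-MiniLM-L-6-v3")
query_emb = encoder.encode(query, convert_to_tensor=True)
corpus_emb = encoder.encode(corpus, convert_to_tensor=True)
similarities = util.cos_sim(query_emb, corpus_emb)[0]
for sim, text in sorted(zip(similarities.tolist(), corpus), reverse=True):
    print(f"dense  {sim:.3f}  {text}")
```

The contrast also illustrates the effectiveness/efficiency tradeoff the book highlights: the cross-encoder must run once per query-text pair at query time, while the bi-encoder's corpus embeddings can be precomputed, making ranking fast but typically somewhat less accurate.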
