
EbookBell.com

Most ebook files are in PDF format, so you can easily read them using various software such as Foxit Reader, or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, and .fb2. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.

Please read the tutorial at this link: https://ebookbell.com/faq


We offer FREE conversion to the popular format you request; however, this may take some time. Please email us right after payment, and we will provide the converted file as quickly as possible.


For exceptional file formats or broken links (if any), please refrain from opening a dispute. Email us first, and we will try to assist within a maximum of 6 hours.

EbookBell Team

Apache Flume Distributed Log Collection For Hadoop Steve Hoffman

  • SKU: BELL-4681654
$ 31.00 $ 45.00 (-31%)

4.7 (36 reviews)

Apache Flume: Distributed Log Collection for Hadoop by Steve Hoffman, instant download after payment.

Publisher: Packt Publishing
File Extension: PDF
File size: 3.69 MB
Pages: 108
Author: Steve Hoffman
ISBN: 9781782167914, 1782167919
Language: English
Year: 2013

Product description


Stream data to Hadoop using Apache Flume

Overview

  • Integrate Flume with your data sources
  • Transcode your data en-route in Flume
  • Route and separate your data using regular expression matching
  • Configure failover paths and load-balancing to remove single points of failure
  • Utilize Gzip Compression for files written to HDFS
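As a taste of the last bullet: Flume agents are configured through a Java properties file, and Gzip output from the HDFS sink is a matter of two properties. A minimal sketch, assuming an agent named agent1 with a sink s1 already wired to a channel c1 (all names and the HDFS path are illustrative):

```properties
# Illustrative agent "agent1"; sink "s1" drains channel "c1" into HDFS.
agent1.sinks = s1
agent1.sinks.s1.type = hdfs
agent1.sinks.s1.channel = c1

# Date-escaped output path; %Y/%m/%d is expanded from the event's timestamp header.
agent1.sinks.s1.hdfs.path = hdfs://namenode/flume/events/%Y/%m/%d

# Gzip compression: note the property is spelled "codeC" (capital C),
# and fileType must be CompressedStream for the codec to take effect.
agent1.sinks.s1.hdfs.codeC = gzip
agent1.sinks.s1.hdfs.fileType = CompressedStream
```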

In Detail

Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. Its main goal is to deliver data from applications to Apache Hadoop's HDFS. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with many failover and recovery mechanisms.
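The streaming-data-flow architecture described above can be made concrete with a complete single-agent pipeline. This is a minimal sketch, assuming a netcat source for demo input and a logger sink for inspection (the agent name, port, and capacities are illustrative):

```properties
# One source, one channel, one sink; component names are illustrative.
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = snk1

# Netcat source: each line sent to port 44444 becomes a Flume event.
agent1.sources.src1.type = netcat
agent1.sources.src1.bind = 0.0.0.0
agent1.sources.src1.port = 44444
agent1.sources.src1.channels = ch1

# Memory channel: fast, but events are lost if the agent process dies.
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000
agent1.channels.ch1.transactionCapacity = 100

# Logger sink: writes events to the agent's log, handy for testing.
agent1.sinks.snk1.type = logger
agent1.sinks.snk1.channel = ch1
```

Started with something like flume-ng agent -n agent1 -f agent1.conf, this illustrates the source, channel, sink flow the book builds on; swapping the logger sink for the HDFS sink completes the delivery path to Hadoop.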

Apache Flume: Distributed Log Collection for Hadoop covers the problems inherent in getting streaming data and logs into HDFS, and how Flume resolves them. The book explains the generalized architecture of Flume, including moving data to and from databases and NoSQL data stores, as well as optimizing performance, and includes real-world scenarios of Flume implementation.

Apache Flume: Distributed Log Collection for Hadoop starts with an architectural overview of Flume and then discusses each component in detail. It guides you through the complete installation process and compilation of Flume.

It gives you a heads-up on how to use channels and channel selectors. For each architectural component (Sources, Channels, Sinks, Channel Processors, Sink Groups, and so on), the various implementations are covered in detail along with their configuration options, so you can customize Flume to your specific needs. Pointers on writing custom implementations are also given to help you learn and implement them.
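As a sketch of the channel-selector idea mentioned above: a multiplexing selector routes each event to a channel chosen from a header value. The header name and channel names below are assumptions for illustration, not taken from the book:

```properties
# The source feeds two channels; the selector picks between them.
agent1.sources.src1.channels = ch-east ch-west

# Multiplexing selector: choose a channel from the "datacenter" header.
agent1.sources.src1.selector.type = multiplexing
agent1.sources.src1.selector.header = datacenter
agent1.sources.src1.selector.mapping.us-east = ch-east
agent1.sources.src1.selector.mapping.us-west = ch-west

# Events with a missing or unmapped header fall through to the default.
agent1.sources.src1.selector.default = ch-east
```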

By the end, you should be able to construct a series of Flume agents to transport your streaming data and logs from your systems into Hadoop in near real time.

What you will learn from this book

  • Understand the Flume architecture
  • Download and install open source Flume from Apache
  • Discover when to use a memory or file-backed channel
  • Understand and configure the Hadoop File System (HDFS) sink
  • Learn how to use sink groups to create redundant data flows
  • Configure and use various sources for ingesting data
  • Inspect data records and route to different or multiple destinations based on payload content
  • Transform data en-route to Hadoop
  • Monitor your data flows
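The sink-group point above can be sketched as a failover configuration: two sinks grouped together, with the higher-priority sink used until it fails. The sink names, priorities, and penalty value are illustrative:

```properties
# Group two existing sinks; the processor decides which one drains the channel.
agent1.sinkgroups = g1
agent1.sinkgroups.g1.sinks = snk1 snk2

# Failover processor: the highest-priority live sink wins; a failed sink
# is penalized with an exponential backoff capped at maxpenalty (ms).
agent1.sinkgroups.g1.processor.type = failover
agent1.sinkgroups.g1.processor.priority.snk1 = 10
agent1.sinkgroups.g1.processor.priority.snk2 = 5
agent1.sinkgroups.g1.processor.maxpenalty = 10000
```

Setting processor.type = load_balance instead spreads events across the group, which serves the load-balancing use case from the Overview.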

Approach

A starter guide that covers Apache Flume in detail.

Who this book is written for

Apache Flume: Distributed Log Collection for Hadoop is intended for people who are responsible for moving datasets into Hadoop in a timely and reliable manner, such as software engineers, database administrators, and data warehouse administrators.
