
Apache Sqoop Cookbook

Unlocking Hadoop for Your Relational Database

Specifications
Paperback, 75 pages | English
O'Reilly | 1st edition, 2013
ISBN13: 9781449364625
Classification
Main category: Computers and informatics

Summary

Integrating data from multiple sources is essential in the age of big data, but it can be a challenging and time-consuming task. This handy cookbook provides dozens of ready-to-use recipes for using Apache Sqoop, the command-line interface application that optimizes data transfers between relational databases and Hadoop.

Sqoop is both powerful and bewildering, but with this cookbook's problem-solution-discussion format, you'll quickly learn how to deploy and then apply Sqoop in your environment. The authors provide MySQL, Oracle, and PostgreSQL database examples on GitHub that you can easily adapt for SQL Server, Netezza, Teradata, or other relational systems.

- Transfer data from a single database table into your Hadoop ecosystem
- Keep table data and Hadoop in sync by importing data incrementally
- Import data from more than one database table
- Customize transferred data by calling various database functions
- Export generated, processed, or backed-up data from Hadoop to your database
- Run Sqoop within Oozie, Hadoop's specialized workflow scheduler
- Load data into Hadoop's data warehouse (Hive) or database (HBase)
- Handle installation, connection, and syntax issues common to specific database vendors
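
For a taste of the recipe style, the first of these tasks, moving one table into HDFS, reduces to a single command. The sketch below uses hypothetical host, database, and table names, not examples from the book:

    # Minimal table import; the connection details here are placeholders
    sqoop import \
      --connect jdbc:mysql://mysql.example.com/sqoop \
      --username sqoop \
      --table cities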

Specifications

ISBN13: 9781449364625
Language: English
Binding: paperback
Number of pages: 75
Publisher: O'Reilly
Edition: 1
Publication date: July 20, 2013

Table of Contents

Foreword
Preface

1. Getting Started
- Downloading and Installing Sqoop
- Installing JDBC Drivers
- Installing Specialized Connectors
- Starting Sqoop
- Getting Help with Sqoop
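
A quick way to confirm a working installation, once the sqoop binary is on your PATH, is the built-in help covered by the last two recipes:

    # Print the installed version, the list of available tools,
    # and the usage text for one tool
    sqoop version
    sqoop help
    sqoop help import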

2. Importing Data
- Transferring an Entire Table
- Specifying a Target Directory
- Importing Only a Subset of Data
- Protecting Your Password
- Using a File Format Other Than CSV
- Compressing Imported Data
- Speeding Up Transfers
- Overriding Type Mapping
- Controlling Parallelism
- Encoding NULL Values
- Importing All Your Tables
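
Most of these recipes are individual options on a single import command. A sketch combining several of them, with hypothetical paths and table names:

    # The password comes from a protected HDFS file rather than the
    # command line; four mappers pull a compressed subset of the rows,
    # with NULLs written as \N
    sqoop import \
      --connect jdbc:mysql://mysql.example.com/sqoop \
      --username sqoop \
      --password-file /user/sqoop/.password \
      --table cities \
      --target-dir /etl/input/cities \
      --where "country = 'USA'" \
      --compress \
      --num-mappers 4 \
      --null-string '\\N' \
      --null-non-string '\\N'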

3. Incremental Import
- Importing Only New Data
- Incrementally Importing Mutable Data
- Preserving the Last Imported Value
- Storing Passwords in the Metastore
- Overriding the Arguments to a Saved Job
- Sharing the Metastore Between Sqoop Clients
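
Incremental imports hinge on Sqoop's saved jobs, which persist the last imported value in the metastore between runs. A sketch with a hypothetical job and table name:

    # One-time setup: a saved job that records the last imported id
    sqoop job \
      --create cities-import \
      -- \
      import \
      --connect jdbc:mysql://mysql.example.com/sqoop \
      --username sqoop \
      --table cities \
      --incremental append \
      --check-column id \
      --last-value 0

    # Each run imports only rows whose id exceeds the stored last-value
    sqoop job --exec cities-import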

4. Free-Form Query Import
- Importing Data from Two Tables
- Using Custom Boundary Queries
- Renaming Sqoop Job Instances
- Importing Queries with Duplicated Columns
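
The key detail in free-form imports is the $CONDITIONS placeholder, which each mapper replaces with its own split predicate; single quotes keep the shell from expanding it. A sketch with hypothetical table and column names:

    # Join two tables in the source database and import the result;
    # --target-dir is mandatory when --query is used
    sqoop import \
      --connect jdbc:mysql://mysql.example.com/sqoop \
      --username sqoop \
      --query 'SELECT cities.id, cities.city, countries.country FROM cities JOIN countries USING (country_id) WHERE $CONDITIONS' \
      --split-by cities.id \
      --target-dir /etl/input/cities_with_countries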

5. Export
- Transferring Data from Hadoop
- Inserting Data in Batches
- Exporting with All-or-Nothing Semantics
- Updating an Existing Data Set
- Updating or Inserting at the Same Time
- Using Stored Procedures
- Exporting into a Subset of Columns
- Encoding the NULL Value Differently
- Exporting Corrupted Data
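
The update-or-insert recipe, for example, turns a plain export into an upsert. A sketch with hypothetical table and directory names:

    # Rows whose id already exists in the table are updated;
    # the rest are inserted
    sqoop export \
      --connect jdbc:mysql://mysql.example.com/sqoop \
      --username sqoop \
      --table cities \
      --export-dir /etl/output/cities \
      --update-key id \
      --update-mode allowinsert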

6. Hadoop Ecosystem Integration
- Scheduling Sqoop Jobs with Oozie
- Specifying Commands in Oozie
- Using Property Parameters in Oozie
- Installing JDBC Drivers in Oozie
- Importing Data Directly into Hive
- Using Partitioned Hive Tables
- Replacing Special Delimiters During Hive Import
- Using the Correct NULL String in Hive
- Importing Data into HBase
- Importing All Rows into HBase
- Improving Performance When Importing into HBase
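
The Hive and HBase recipes ride on the same import command. Two sketches with hypothetical names:

    # Land the table directly in Hive, dropping characters that would
    # break its delimited text storage
    sqoop import \
      --connect jdbc:mysql://mysql.example.com/sqoop \
      --username sqoop \
      --table cities \
      --hive-import \
      --hive-drop-import-delims

    # Or land the rows in HBase under one column family
    sqoop import \
      --connect jdbc:mysql://mysql.example.com/sqoop \
      --username sqoop \
      --table cities \
      --hbase-table cities \
      --column-family world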

7. Specialized Connectors
- Overriding Imported boolean Values in PostgreSQL Direct Import
- Importing a Table Stored in Custom Schema in PostgreSQL
- Exporting into PostgreSQL Using pg_bulkload
- Connecting to MySQL
- Using Direct MySQL Import into Hive
- Using the upsert Feature When Exporting into MySQL
- Importing from Oracle
- Using Synonyms in Oracle
- Faster Transfers with Oracle
- Importing into Avro with OraOop
- Choosing the Proper Connector for Oracle
- Exporting into Teradata
- Using the Cloudera Teradata Connector
- Using Long Column Names in Teradata
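
Two connector tricks the chapter builds on, sketched with hypothetical hosts and schema names:

    # --direct bypasses JDBC and delegates to mysqldump for speed
    sqoop import \
      --connect jdbc:mysql://mysql.example.com/sqoop \
      --username sqoop \
      --table cities \
      --direct

    # Arguments after a bare -- are passed to the connector itself,
    # here the PostgreSQL connector's non-default schema option
    sqoop import \
      --connect jdbc:postgresql://postgresql.example.com/database \
      --username sqoop \
      --table cities \
      -- --schema us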
