tHCatalogOutput Standard properties
These properties are used to configure tHCatalogOutput running in the Standard Job framework.
The Standard tHCatalogOutput component belongs to the Big Data family.
The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
Basic settings
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file in which the properties are stored. The fields that follow are completed automatically using the data retrieved. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:
View schema: select this option to view the schema only.
Change to built-in property: select this option to change the schema to Built-in for local changes.
Update repository connection: select this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.
Built-In: You create and store the schema locally for this component only. |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. When the schema to be reused has default values that are integers or functions, ensure that these default values are not enclosed within quotation marks. If they are, you must remove the quotation marks manually. For more information, see Retrieving table schemas. |
Distribution |
Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using; among these options, some require specific configuration. |
HCatalog version |
Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using. |
Use kerberos authentication |
If you are accessing the Hadoop cluster running with Kerberos security, select this check box, then enter the Kerberos principal name for the NameNode in the field displayed. This enables you to use your username to authenticate against the credentials stored in Kerberos. This check box is available depending on the Hadoop distribution you are connecting to. |
Use a keytab to authenticate |
Select the Use a keytab to authenticate check box to log into a Kerberos-enabled system using a given keytab file. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field. This keytab file must be stored on the machine on which your Job actually runs, for example, on a Talend JobServer. Note that the user that executes a keytab-enabled Job is not necessarily the one the principal designates but must have the right to read the keytab file being used. For example, if the username you are using to execute a Job is user1 and the principal to be used is guest, ensure that user1 has the right to read the keytab file to be used. |
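For reference, the Hadoop client API performs this kind of keytab login roughly as sketched below. This is a minimal illustration, not code generated by the component; the principal and keytab path are placeholder values.

    // Minimal sketch of a programmatic keytab login with the Hadoop client API.
    // The principal and keytab path are hypothetical example values.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // Logs in as the given principal using the keys in the keytab file;
            // the OS user running this code only needs read access to that file.
            UserGroupInformation.loginUserFromKeytab(
                    "guest@EXAMPLE.COM", "/etc/security/keytabs/guest.keytab");
            System.out.println("Logged in as " + UserGroupInformation.getLoginUser());
        }
    }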
NameNode URI |
Type in the URI of the Hadoop NameNode, the master node of a Hadoop system. For example, if you have chosen a machine called masternode as the NameNode, the location is hdfs://masternode:portnumber. If you are using WebHDFS, the location should be webhdfs://masternode:portnumber; WebHDFS with SSL is not supported yet. |
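As an illustration of how such a URI is consumed, a Hadoop client opens a file system handle from it as sketched below; masternode and port 8020 are example values, not values prescribed by the component.

    // Sketch: opening a FileSystem handle from a NameNode URI.
    // "masternode" and port 8020 are example values.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class NameNodeUriSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // hdfs:// selects the native protocol; webhdfs:// would select the REST gateway.
            FileSystem fs = FileSystem.get(URI.create("hdfs://masternode:8020"), conf);
            System.out.println("Home directory: " + fs.getHomeDirectory());
            fs.close();
        }
    }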
File name |
Browse to, or enter the location of the file to which you write data. This file is created automatically if it does not exist. |
Action |
Select a DB operation in HDFS:
Create: creates a file with data using the file name defined in the File Name field.
Overwrite: overwrites the data in the file specified in the File Name field.
Append: inserts the data into the file specified in the File Name field. The specified file is created automatically if it does not exist. |
Templeton hostname |
Fill this field with the URL of the Templeton Webservice.
Note: Templeton is a webservice API for HCatalog. It has been renamed WebHCat by the Apache community. This service facilitates access to HCatalog and the related Hadoop elements, such as Pig. For further information about Templeton (WebHCat), see https://cwiki.apache.org/confluence/display/Hive/WebHCat+UsingWebHCat. |
Templeton port |
Fill this field with the port of the Templeton Webservice URL. By default, this value is 50111.
Note: Templeton is a webservice API for HCatalog. It has been renamed WebHCat by the Apache community. This service facilitates access to HCatalog and the related Hadoop elements, such as Pig. For further information about Templeton (WebHCat), see https://cwiki.apache.org/confluence/display/Hive/WebHCat+UsingWebHCat. |
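Because Templeton (WebHCat) is a plain REST service, you can verify the hostname and port outside the Studio with a simple HTTP request. A minimal sketch, assuming the standard /templeton/v1/status endpoint of the WebHCat API; the hostname is a placeholder.

    // Sketch: pinging the WebHCat (Templeton) status endpoint to verify host and port.
    // "webhcat-host" is a placeholder for your Templeton hostname.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WebHcatStatusCheck {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://webhcat-host:50111/templeton/v1/status");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            System.out.println("HTTP " + conn.getResponseCode());
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // a healthy service reports a status of "ok"
                }
            }
            conn.disconnect();
        }
    }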
Database |
Fill this field to specify an existing database in HDFS. |
Table |
Fill this field to specify an existing table in HDFS. |
Partition |
Fill this field to specify one or more partitions for the partition operation on the specified table. When you specify multiple partitions, separate them with commas and enclose the whole partition string in double quotation marks, for example "country='US',year='2024'" (the column names here are examples). If you are writing to a non-partitioned table, leave this field empty. |
Username |
Fill this field with the username for the DB authentication. |
File location |
Fill this field with the path where the source data file is stored. |
Die on error |
This check box is cleared by default, meaning that rows on error are skipped and the process completes for error-free rows. |
Advanced settings
Row separator |
The separator used to identify the end of a row. |
Field separator |
Enter a character, a string, or a regular expression to separate fields for the transferred data. |
Custom encoding |
Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. The supported encodings depend on the JVM that you are using. For more information, see https://docs.oracle.com. |
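Because the supported encodings depend on the JVM, you can check whether a given encoding name is available before entering it as a custom value. A small sketch; the encoding name is an example.

    // Sketch: checking whether the running JVM supports a given encoding name.
    // "ISO-8859-15" is an example value.
    import java.nio.charset.Charset;

    public class EncodingCheck {
        public static void main(String[] args) {
            System.out.println(Charset.isSupported("ISO-8859-15"));
            System.out.println(Charset.forName("ISO-8859-15").displayName());
        }
    }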
Hadoop properties |
Talend Studio uses a default configuration for its engine to perform operations in a Hadoop distribution. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized; at runtime, the customized properties override the default ones. For further information about the properties required by Hadoop and its related systems, such as HDFS and Hive, see the documentation of the Hadoop distribution you are using, or see Apache's Hadoop documentation and select the version of the documentation you want. |
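In Hadoop client terms, each row of this table corresponds to a Configuration.set call that takes precedence over the defaults, as sketched below; dfs.replication and hadoop.tmp.dir are standard Hadoop property names used purely as examples.

    // Sketch: overriding default Hadoop properties at runtime.
    import org.apache.hadoop.conf.Configuration;

    public class HadoopPropertiesSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration(); // loads the default configuration
            // Values set here override the defaults, which is the same effect
            // as filling in the Hadoop properties table.
            conf.set("dfs.replication", "2");
            conf.set("hadoop.tmp.dir", "/tmp/custom-hadoop");
            System.out.println("dfs.replication = " + conf.get("dfs.replication"));
        }
    }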
Retrieve the HCatalog logs | Select this check box to retrieve log files generated during HCatalog operations. |
Standard Output Folder |
Browse to, or enter the directory where the log files are stored.
Note: This field is enabled only when the Retrieve the HCatalog logs check box is selected. |
Error Output Folder |
Browse to, or enter the directory where the error log files are stored.
Note: This field is enabled only when the Retrieve the HCatalog logs check box is selected. |
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at the Job level as well as at each component level. |
Global Variables
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box. A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component. To fill a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it. For more information about variables, see Using contexts and variables. |
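Inside a Job, such variables are read from the globalMap, for example in a tJava component placed after this one. A sketch; the instance name tHCatalogOutput_1 depends on your Job design and is used here as an example.

    // Sketch: reading the ERROR_MESSAGE variable in a Talend Java component,
    // where globalMap is in scope. "tHCatalogOutput_1" is an example instance name.
    String errorMessage = (String) globalMap.get("tHCatalogOutput_1_ERROR_MESSAGE");
    if (errorMessage != null) {
        System.err.println("HCatalog output failed: " + errorMessage);
    }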
Usage
Usage rule |
This component is commonly used together with an input component. HCatalog is built on top of the Hive metastore to provide a read and write interface for Pig and MapReduce, so that these systems can use the metadata of Hive to easily read and write data in HDFS. For further information, see the Apache documentation about HCatalog: https://cwiki.apache.org/confluence/display/Hive/HCatalog. |
Prerequisites |
The Hadoop distribution must be properly installed to guarantee the interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using. |