tJDBCInput properties for Apache Spark Batch
These properties are used to configure tJDBCInput running in the Spark Batch Job framework.
The Spark Batch tJDBCInput component belongs to the Databases family.
This component also allows you to connect to and read data from an RDS MariaDB, an RDS PostgreSQL, or an RDS SQL Server database.
The component in this framework is available in all Talend products with Big Data and Talend Data Fabric.
Basic settings
Property type |
Either Built-In or Repository. |
Built-In: No property data stored centrally. |
Repository: Select the repository file where the properties are stored. |
Use an existing connection |
Select this check box and in the Component List drop-down list, select the desired connection component to reuse the connection details you already defined. |
JDBC URL |
The JDBC URL of the database to be used. For example, the JDBC URL for the Amazon Redshift database is jdbc:redshift://endpoint:port/database. If you are using Spark V1.3, this URL should contain the authentication information, such as:
jdbc:mysql://XX.XX.XX.XX:3306/Talend?user=ychen&password=talend |
Driver JAR |
Complete this table to load the driver JARs needed. To do this, click the [+] button under the table to add as many rows as needed, each row for a driver JAR, then select the cell and click the [...] button at the right side of the cell to open the Module dialog box from which you can select the driver JAR to be used. For example, the driver JAR RedshiftJDBC41-1.1.13.1013.jar for the Redshift database. For more information, see Importing a database driver. |
Class Name |
Enter the class name for the specified driver between double quotation marks. For example, for the RedshiftJDBC41-1.1.13.1013.jar driver, the name to be entered is com.amazon.redshift.jdbc41.Driver. |
Username and Password |
Enter the authentication information for the database you need to connect to. To enter the password, click the [...] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings. Available only for Spark V1.4 and onwards. |
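For reference, the following is a minimal sketch of what the Driver JAR, Class Name, JDBC URL, Username and Password settings correspond to in plain JDBC. It reuses the Redshift driver class given above; the endpoint, database name and credentials are hypothetical placeholders:

// Minimal plain-JDBC sketch of the Basic settings above.
// The endpoint, database name and credentials are hypothetical placeholders.
import java.sql.Connection;
import java.sql.DriverManager;

public class JdbcSettingsSketch {
    public static void main(String[] args) throws Exception {
        // Class Name: the driver class shipped in the Driver JAR
        Class.forName("com.amazon.redshift.jdbc41.Driver");
        // JDBC URL, Username and Password as defined in the Basic settings
        Connection conn = DriverManager.getConnection(
                "jdbc:redshift://myendpoint:5439/mydb", "ychen", "talend");
        conn.close();
    }
}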
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. |
Built-In: You create and store the schema locally for this component only. |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:
View schema: select this option to view the schema only.
Change to built-in property: select this option to change the schema to Built-in for local changes.
Update repository connection: select this option to change the schema stored in the Repository and decide whether to propagate the changes to all the Jobs upon completion. |
Table Name |
Type in the name of the table from which you need to read data. This field is only available when you select Table from the Read from drop-down list. |
Read from |
Select the type of the source of the data to be read: Table or Query. |
Query type and Query |
Specify the database query statement, paying particular attention to the proper sequence of the fields, which must correspond to the schema definition. If you are using Spark V2.0 onwards, Spark SQL no longer recognizes the prefix of a database table. This means that you must enter only the table name, without adding any prefix that indicates, for example, the schema this table belongs to. For example, if you need to perform a query in a table system.mytable, in which the system prefix indicates the schema that the mytable table belongs to, you must enter mytable only in the query. You can use a pushdown predicate in the query to filter the data from the database; Spark supports the standard comparison and logical operators in such predicates.
These fields are only available when you select Query from the Read from drop-down list. |
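For example, assuming a hypothetical table system.mytable with numeric id and name columns, a valid query with a pushdown predicate could read:
select id, name from mytable where id > 1000
Note that the system schema prefix is omitted and that the where clause is pushed down to the database rather than evaluated by Spark.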
Guess Query |
Click the Guess Query button to generate the query which corresponds to your table schema in the Query field. |
Guess schema |
Click the Guess schema button to retrieve the table schema. |
Advanced settings
Additional JDBC parameters |
Specify additional connection properties for the database connection you are creating. The properties are separated by semicolons and each property is a key-value pair, for example, encryption=1;clientname=Talend. This field is not available if the Use an existing connection check box is selected. |
Spark SQL JDBC parameters |
Add the JDBC properties supported by Spark SQL to this table. For a list of the user-configurable properties, see JDBC to other databases. This component automatically sets the url, dbtable and driver properties by using the configuration from the Basic settings tab. |
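As an illustration, here is a minimal sketch, using the Spark Java API, of how these options reach the Spark SQL JDBC data source; the url, dbtable and driver values mirror the examples used earlier in this page, and fetchsize stands for any user-configurable property you might add to this table:

// Minimal sketch of the Spark SQL JDBC data source options.
// The URL (with embedded credentials, as for Spark V1.3), the table name
// and the driver class are hypothetical examples.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkJdbcOptionsSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("sketch").getOrCreate();
        Dataset<Row> df = spark.read().format("jdbc")
                .option("url", "jdbc:mysql://XX.XX.XX.XX:3306/Talend?user=ychen&password=talend") // set automatically by the component
                .option("dbtable", "mytable")              // set automatically by the component
                .option("driver", "com.mysql.jdbc.Driver") // set automatically by the component
                .option("fetchsize", "1000")               // an example user-configurable property
                .load();
        df.show();
    }
}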
Trim all the String/Char columns |
Select this check box to remove leading whitespace and trailing whitespace from all String/Char columns. |
Trim column |
This table is filled automatically with the schema being used. Select the check box(es) corresponding to the column(s) to be trimmed. |
Enable partitioning |
Select this check box to read data in partitions. Define, in double quotation marks, the following parameters to configure the partitioning:
Partition column: the numeric column used to compute the partitions.
Lower bound of the partition stride.
Upper bound of the partition stride.
Number of partitions.
For example, to partition 1000 rows into 4 partitions, if you enter 0 for the lower bound and 1000 for the upper bound, each partition will contain 250 rows and so the partitioning is even. If you enter 250 for the lower bound and 750 for the upper bound, the second and the third partition will each contain 125 rows and the first and the last partitions each 375 rows. With this configuration, the partitioning is skewed. |
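To make the arithmetic above concrete, here is a minimal sketch (not Talend or Spark source code) of how the lower bound, upper bound and number of partitions translate into per-partition WHERE clauses, using the skewed 250/750 example and a hypothetical numeric partition column id:

// Sketch of JDBC-style partitioning: bounds 250..750 over 4 partitions.
// Prints the WHERE clause each partition would use; the first and last
// partitions also catch all rows below and above the bounds, hence the skew.
public class PartitionStrideSketch {
    public static void main(String[] args) {
        long lowerBound = 250, upperBound = 750;
        int numPartitions = 4;
        long stride = (upperBound - lowerBound) / numPartitions; // 125
        long current = lowerBound;
        for (int i = 0; i < numPartitions; i++) {
            String where;
            if (i == 0) {
                where = "id < " + (current + stride);             // rows 0..374: 375 rows
            } else if (i == numPartitions - 1) {
                where = "id >= " + current;                       // rows 625..999: 375 rows
            } else {
                where = "id >= " + current + " AND id < " + (current + stride); // 125 rows
            }
            System.out.println("partition " + i + ": WHERE " + where);
            current += stride;
        }
    }
}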
Usage
Usage rule |
This component is used as a start component and requires an output link. This component should use a tJDBCConfiguration component present in the same Job to connect to a database. You need to drop a tJDBCConfiguration component alongside this component and configure the Basic settings of this component to use tJDBCConfiguration. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |