Double-click the new Map/Reduce Job to open it in the workspace. The Palette of Map/Reduce components opens accordingly, and in the workspace any crossed-out components indicate that they have no Map/Reduce version.
Right-click each of those components and select Delete to remove it from the workspace.
Drop a tHDFSInput component, a tHDFSOutput component and a tJDBCOutput component in the workspace. The tHDFSInput component reads data from the Hadoop distribution to be used, the tHDFSOutput component writes data to that distribution, and the tJDBCOutput component writes data to a given database, for example a MySQL database in this scenario. These two output components replace the two tLogRow components used to output data.
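If it helps to picture the roles of these three components, the following is a minimal, self-contained Java sketch of the same input/output work performed outside the Studio: it reads lines from HDFS, writes them back to HDFS, and inserts them into a MySQL table over JDBC. The NameNode URI, file paths, database URL, credentials and table name are hypothetical placeholders, not values taken from this scenario.

```java
// Sketch only: illustrates the I/O roles of tHDFSInput, tHDFSOutput and
// tJDBCOutput. All paths, URLs, credentials and table names are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class HdfsJdbcSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");   // hypothetical NameNode URI
        FileSystem fs = FileSystem.get(conf);

        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                     fs.open(new Path("/user/talend/in/customers.csv")), StandardCharsets.UTF_8)); // tHDFSInput role
             BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                     fs.create(new Path("/user/talend/out/result.csv"), true), StandardCharsets.UTF_8)); // tHDFSOutput role
             Connection db = DriverManager.getConnection(
                     "jdbc:mysql://dbhost:3306/demo", "user", "password"); // hypothetical MySQL instance
             PreparedStatement insert = db.prepareStatement(
                     "INSERT INTO result_rows (raw_line) VALUES (?)")) {   // hypothetical table
            String line;
            while ((line = in.readLine()) != null) {
                out.write(line);          // write the line back to HDFS
                out.newLine();
                insert.setString(1, line); // insert the same line into MySQL
                insert.executeUpdate();
            }
        }
    }
}
```

In the actual Job, of course, the rows are not duplicated to both targets: the uniques flow goes to HDFS and the duplicates flow goes to the database, as set up in the connection steps below.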
If you are creating this Job from scratch, you also need to drop a tSortRow component and a tUniqRow component.
Connect tHDFSInput to tSortRow using the Row > Main link and accept the prompt to get the schema of tSortRow.
Connect tUniqRow to tHDFSOutput using the Row > Uniques link and to tJDBCOutput using the Row > Duplicates link.
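As a rough illustration of what these two flows carry, the following Java sketch mimics the tSortRow and tUniqRow logic: rows are sorted, then the first occurrence of each value is routed to a "Uniques" list (the flow going to tHDFSOutput) and every repeated occurrence to a "Duplicates" list (the flow going to tJDBCOutput). The sample values, and the use of the whole row as the comparison key, are assumptions made for this sketch only.

```java
// Sketch only: approximates the tSortRow -> tUniqRow routing into the
// Row > Uniques and Row > Duplicates flows. Sample data is hypothetical.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SortUniqSplitSketch {
    public static void main(String[] args) {
        List<String> rows = new ArrayList<>(Arrays.asList(
                "martin", "smith", "martin", "lee", "smith", "smith"));

        rows.sort(String::compareTo);                // what tSortRow does

        Set<String> seen = new HashSet<>();
        List<String> uniques = new ArrayList<>();    // Row > Uniques flow
        List<String> duplicates = new ArrayList<>(); // Row > Duplicates flow
        for (String row : rows) {
            if (seen.add(row)) {
                uniques.add(row);                    // first occurrence of the value
            } else {
                duplicates.add(row);                 // repeated occurrence of the value
            }
        }
        System.out.println("Uniques   : " + uniques);    // would go to tHDFSOutput
        System.out.println("Duplicates: " + duplicates); // would go to tJDBCOutput
    }
}
```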