
Writing and reading data from Azure Data Lake Storage using Spark (Azure Databricks)

In this scenario, you create a Spark Batch Job that uses tAzureFSConfiguration and the Parquet components to write data to Azure Data Lake Storage and then read that data back from Azure.

This scenario applies only to subscription-based Talend products with Big Data.

The sample data reads as follows:
01;ychen

This data contains a user name and the ID number assigned to that user.

Note that the sample data is created for demonstration purposes only.
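To make the record layout concrete, the sample line can be parsed outside Talend with a few lines of plain Python — a minimal sketch assuming only the semicolon-delimited, two-field format shown above (ID, then user name):

```python
import csv
import io

# Sample record from the scenario: an ID number and a user name,
# separated by a semicolon.
sample = "01;ychen"

# Parse the record with the csv module, using ';' as the delimiter,
# mirroring the field separator the Job's schema would use.
reader = csv.reader(io.StringIO(sample), delimiter=";")
user_id, user_name = next(reader)

print(user_id, user_name)  # → 01 ychen
```

In the Job itself, the same two-column schema (id; name) is what the Parquet components write to and read from Azure Data Lake Storage.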
