CSV
Create a data generator for CSV. You will have the ability to generate and validate CSV files via Docker.
Requirements
- 10 minutes
- Git
- Gradle
- Docker
Get Started
First, we will clone the data-caterer-example repo, which already has the base project setup required.
git clone git@github.com:data-catering/data-caterer-example.git
Plan Setup
Create a new Java or Scala class.
- Java:
src/main/java/io/github/datacatering/plan/MyCSVJavaPlan.java
- Scala:
src/main/scala/io/github/datacatering/plan/MyCSVPlan.scala
Make sure your class extends PlanRun.
Java:
import io.github.datacatering.datacaterer.java.api.PlanRun;

public class MyCSVJavaPlan extends PlanRun {
}
Scala:
import io.github.datacatering.datacaterer.api.PlanRun

class MyCSVPlan extends PlanRun {
}
This class is where we define all of our configurations for generating data. There are helper variables and methods available to make it simple and easy to use.
Connection Configuration
Within our class, we can start by defining the connection properties to read/write from/to CSV.
Java:
var accountTask = csv(
    "customer_accounts",                //name
    "/opt/app/data/customer/account",   //path
    Map.of("header", "true")            //additional options
);
Scala:
val accountTask = csv(
    "customer_accounts",                //name
    "/opt/app/data/customer/account",   //path
    Map("header" -> "true")             //additional options
)
Additional options, such as including a header row, can be found here.
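Since the options map is passed through to the underlying CSV writer, it can also carry settings such as the field delimiter or quote character. Below is a rough sketch in Java; the task name and path are hypothetical, and the extra keys ("sep", "quote") are assumptions based on standard Spark CSV options, so check them against the options documentation before relying on them.

var pipeDelimitedTask = csv(
    "customer_accounts_pipe",                 //name (hypothetical example)
    "/opt/app/data/customer/account_pipe",    //path (hypothetical example)
    Map.of(
        "header", "true",                     //write a header row
        "sep", "|",                           //assumed Spark CSV option for the field delimiter
        "quote", "\""                         //assumed Spark CSV option for the quote character
    )
);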
Schema
Depending on how you want to define the schema, follow one of the options below:
- Manual schema guide (a sketch of this approach follows after this list)
- Automatically detect the schema from the data source by enabling configuration.enableGeneratePlanAndTasks(true)
- Automatically detect the schema from a metadata source
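For the manual route, a minimal sketch of what the account schema could look like is shown below in Java. The builder methods (schema, field, regex, expression, type, min, max) are taken from the manual schema guide and may differ between versions, so treat the exact names as assumptions and verify them against that guide.

var accountTask = csv(
    "customer_accounts",
    "/opt/app/data/customer/account",
    Map.of("header", "true")
).schema(
    field().name("account_id").regex("ACC[0-9]{8}"),                      //pattern-based account IDs
    field().name("name").expression("#{Name.name}"),                      //Faker expression for a full name
    field().name("amount").type(DoubleType.instance()).min(1).max(1000),  //DoubleType assumed to come from the api model package
    field().name("created_date").type(DateType.instance())                //DateType assumed to come from the api model package
);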
Additional Configurations
At the end of data generation, a report is generated that summarises the actions performed. We can control the output folder of that report via configuration. We will also enable the unique check to ensure that any unique fields have unique values generated.
Java:
var config = configuration()
    .generatedReportsFolderPath("/opt/app/data/report")
    .enableUniqueCheck(true);

execute(myPlan, config, accountTask, transactionTask);
Scala:
val config = configuration
    .generatedReportsFolderPath("/opt/app/data/report")
    .enableUniqueCheck(true)

execute(myPlan, config, accountTask, transactionTask)
Run
Now we can run the class we just created via the ./run.sh script found in the top-level directory of data-caterer-example.
./run.sh
#input class MyCSVJavaPlan or MyCSVPlan
#after completing, let's pick an account and check the transactions for that account
account=$(tail -1 docker/sample/customer/account/part-00000* | awk -F "," '{print $1 "," $4}')
echo $account
cat docker/sample/customer/transaction/part-00000* | grep $account
It should look something like this.
ACC29117767,Willodean Sauer
ACC29117767,Willodean Sauer,84.99145871948083,2023-05-14T09:55:51.439Z,2023-05-14
ACC29117767,Willodean Sauer,58.89345733567232,2022-11-22T07:38:20.143Z,2022-11-22
Congratulations! You have now made a data generator that has simulated a real-world data scenario. You can also check the CSVJavaPlan.java or CSVPlan.scala files to confirm that your plan is the same.
Validation
If you want to validate data from a CSV source, follow the validation documentation found here to guide you.
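As a rough sketch of what that can look like in Java, validations can be attached to the same CSV connection. The helper names (validations, validation, col, isNotNull, greaterThan) and the enableValidation flag are assumptions drawn from the validation documentation and may differ between versions, so confirm them there before use.

var accountValidationTask = csv(
    "customer_accounts",
    "/opt/app/data/customer/account",
    Map.of("header", "true")
).validations(
    validation().col("account_id").isNotNull(),   //every record should have an account ID
    validation().col("amount").greaterThan(0)     //amounts should be positive
);

var validationConfig = configuration()
    .generatedReportsFolderPath("/opt/app/data/report")
    .enableValidation(true);                      //assumed flag to turn on validation

execute(validationConfig, accountValidationTask);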