

Data testing for Iceberg. You will learn how to generate and validate Iceberg tables.


  • 5 minutes
  • Git
  • Gradle
  • Docker

Get Started

First, we will clone the data-caterer-example repo, which already has the base project setup required.

git clone

Plan Setup

Create a new Java or Scala class, or a YAML file:

  • Java: src/main/java/io/github/datacatering/plan/
  • Scala: src/main/scala/io/github/datacatering/plan/MyIcebergPlan.scala
  • YAML: docker/data/custom/plan/my-iceberg.yaml

Make sure your class extends PlanRun.


Java:

import io.github.datacatering.datacaterer.javaapi.api.PlanRun;

public class MyIcebergJavaPlan extends PlanRun {
}

Scala:

import io.github.datacatering.datacaterer.api.PlanRun

class MyIcebergPlan extends PlanRun {
}

In docker/data/custom/plan/my-iceberg.yaml:

name: "my_iceberg_plan"
description: "Create account data in Iceberg table"
tasks:
  - name: "iceberg_account_table"
    dataSourceName: "customer_accounts"
    enabled: true

Within this class, we define all of our configurations for generating data. Helper variables and methods are provided to make it simple and easy to use.

Connection Configuration

Within our class, we can start by defining the connection properties for reading from and writing to Iceberg.

Java:

var accountTask = iceberg(
        "customer_accounts",              //name
        "account.accounts",               //table name
        "/opt/app/data/customer/iceberg", //warehouse path
        "hadoop",                         //catalog type
        "",                               //catalog uri
        Map.of()                          //additional options
);

Additional options can be found here.

Scala:

val accountTask = iceberg(
  "customer_accounts",              //name
  "account.accounts",               //table name
  "/opt/app/data/customer/iceberg", //warehouse path
  "hadoop",                         //catalog type
  "",                               //catalog uri
  Map()                             //additional options
)


In application.conf:

iceberg {
  customer_accounts {
    path = "/opt/app/data/customer/iceberg"
    catalogType = "hadoop"
    catalogType = ${?ICEBERG_CATALOG_TYPE}
    catalogUri = ""
    catalogUri = ${?ICEBERG_CATALOG_URI}
  }
}

  1. In the UI, go to the Connection tab in the top bar
  2. Select data source as Iceberg
    1. Enter in data source name customer_accounts
    2. Select catalog type hadoop
    3. Enter warehouse path as /opt/app/data/customer/iceberg


Depending on how you want to define the schema, follow the below:
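As one possible starting point, the schema can be defined in a YAML task file referenced by the plan. The sketch below is a hypothetical task file (the file path, field names, and exact keys are illustrative assumptions, not taken from this guide); check the schema documentation for your version of Data Caterer for the precise format:

```yaml
# Hypothetical task file, e.g. docker/data/custom/task/iceberg/iceberg-account-task.yaml
# Field names and option keys below are illustrative assumptions only
name: "iceberg_account_table"
steps:
  - name: "accounts"
    fields:
      - name: "account_id"
        options:
          regex: "ACC[0-9]{8}"
      - name: "balance"
        type: "double"
      - name: "created_date"
        type: "date"
```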

Additional Configurations

At the end of data generation, a report gets generated that summarises the actions it performed. We can control the output folder of that report via configurations. We will also enable the unique check to ensure any unique fields will have unique values generated.

Java:

var config = configuration()
        .generatedReportsFolderPath("/opt/app/data/report")
        .enableUniqueCheck(true);

execute(myPlan, config, accountTask, transactionTask);

Scala:

val config = configuration
  .generatedReportsFolderPath("/opt/app/data/report")
  .enableUniqueCheck(true)

execute(myPlan, config, accountTask, transactionTask)

In application.conf:

flags {
  enableUniqueCheck = true
}
folders {
  generatedReportsFolderPath = "/opt/app/data/report"
}

  1. In the UI, click on Advanced Configuration towards the bottom of the screen
  2. Click on Flag and click on Unique Check
  3. Click on Folder and enter /tmp/data-caterer/report for Generated Reports Folder Path


Now we can run the class we just created via the ./ script found in the top-level directory of data-caterer-example.

./ MyIcebergJavaPlan
./ MyIcebergPlan
./ my-iceberg.yaml
  1. Or, in the UI, click on Execute at the top

Congratulations! You have now made a data generator that simulates a real-world data scenario. You can also check the example Java or Scala plan files (such as IcebergPlan.scala) to verify that your plan matches.


If you want to validate data from an Iceberg source, follow the validation documentation to help guide you.
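As a rough starting point before consulting the validation documentation, a YAML validation definition for the customer_accounts data source might look like the hedged sketch below. The keys and expressions shown are assumptions based on the general validation format and should be adapted to your fields and version:

```yaml
# Hypothetical validation definitions; adapt to your fields and version
name: "account_checks"
description: "Validate account data in Iceberg table"
dataSources:
  customer_accounts:
    - validations:
        - expr: "ISNOTNULL(account_id)"
        - expr: "balance >= 0"
```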