
Your First DataMorf Workflow

Last updated on Jul 25, 2024

7 minutes

This guide will walk you through setting up a simple workflow in DataMorf, designed especially for non-technical users. We’ll break down each step and pair it with visual aids so the process is clear from start to finish. Let’s dive in and transform some data together!

Before diving in, make sure to check out our friendly guide to DataMorf.

Step 1: Navigating to the Workflows Section

First things first, let’s get to where the magic happens.

  1. Login to DataMorf: Start by logging into your DataMorf account. If you’ve forgotten your password, it might be a good time to check if your pet’s name and birth year are still working as your security answers.
  2. Access Workflows: From the left-side menu, click on the “Workflows” section. This is where all your workflows will live.
  3. Create New Workflow: In the top right corner, click on the “Add” button to create a new workflow. It’s like adding a new recipe to your cookbook.
Create new workflow

Step 2: Workflow Setup

Now let’s set up the basic structure of your workflow. Think of this as laying the foundation of your house – we need to make sure it’s sturdy and well-defined.

In the pop-up window, you’ll be asked to fill in the following details (a quick sketch of a filled-in setup follows the list):

  • Name: Enter a user-friendly name for your workflow. Choose something that makes sense at a glance – “Customer Data Sync” is much better than “Workflow 123.”
  • Description: Provide a detailed description of the workflow’s purpose. This helps everyone (including future you) understand what this workflow is supposed to do.
  • Triggered From: List all systems that can trigger this workflow. It’s like leaving a note for your teammates so they know who’s calling whom and why.
  • Endpoint: Define the unique endpoint for the workflow. Make it human-readable and unique – no one likes mysterious URLs.
  • Type: Indicate the type of entity this workflow handles (e.g., contact, company). This is more for organizational purposes.
  • Mode: Select the workflow mode (development, test, production, deprecated). It’s like putting up a “Work in Progress” sign while you’re still tinkering with things.
  • Workflow Active: Toggle to activate or deactivate the workflow. You can keep it inactive while you build it, just remember to flip the switch when you’re ready to go live.
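
To see how these fields fit together, here’s an illustrative sketch of a filled-in setup. The dictionary shape and all the values are examples only, not DataMorf’s internal format:

```python
# Illustrative only: the same fields the pop-up form asks for, shown as a
# plain Python dict. DataMorf's actual storage format may differ.
workflow_settings = {
    "name": "Customer Data Sync",                  # user-friendly name
    "description": "Syncs new sign-ups into the CRM.",
    "triggered_from": ["website-form", "zapier"],  # systems that may call it
    "endpoint": "customer-data-sync",              # human-readable and unique
    "type": "contact",                             # entity this workflow handles
    "mode": "development",                         # development | test | production | deprecated
    "active": False,                               # flip to True when ready to go live
}
```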

Step 3: Configuring Triggers

In DataMorf, triggers are like the alarm clock that starts your day. They wake up your workflow and set everything in motion. Here’s a quick guide to understanding the essential parts of configuring triggers.

  • Unique Endpoint: The Front Door of Your House (1)

The Unique Endpoint is like the front door of your house. This is where data comes knocking. DataMorf provides you with a unique URL – your address. For example, if you're collecting form submissions, this URL is where all that information arrives. It is always formatted as: https://core.datamorf.io/v2/YOUR_WORKSPACE_ID/run/WORKFLOW_ENDPOINT

Think of this as the entry point for your workflow, just like guests arriving at your front door.
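
As a quick illustration, here’s how an external system might knock on that front door. This is a minimal sketch assuming a JSON payload and a plain POST; the payload fields are invented for the example:

```python
import requests

# Minimal sketch: trigger a workflow by POSTing JSON to its unique endpoint.
# Replace YOUR_WORKSPACE_ID and WORKFLOW_ENDPOINT with your own values; the
# payload below is a made-up form submission.
url = "https://core.datamorf.io/v2/YOUR_WORKSPACE_ID/run/WORKFLOW_ENDPOINT"
payload = {"email": "ada@example.com", "full_name": "Ada Lovelace"}

response = requests.post(url, json=payload, timeout=10)
print(response.status_code, response.text)
```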

  • Incoming Payload: Your Mail Delivery (2)

The Incoming Payload is the data that arrives at your unique endpoint, like mail delivered to your door. Each piece of mail represents data being sent to your endpoint. Loading a sample payload (3) helps you see what kind of data your workflow will handle, ensuring you're prepared for what’s inside. It is formatted like: https://core.datamorf.io/v2/YOUR_WORKSPACE_ID/sample/WORKFLOW_ID

This is crucial for understanding and organizing the data your workflow will process.
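
If you’d rather inspect the sample payload programmatically, a sketch like the one below works, assuming the sample endpoint answers a plain GET (verify any authentication requirements in your workspace):

```python
import requests

# Sketch: fetch the stored sample payload so you can see the fields your
# workflow will receive. Replace the placeholders with your own IDs.
url = "https://core.datamorf.io/v2/YOUR_WORKSPACE_ID/sample/WORKFLOW_ID"
sample = requests.get(url, timeout=10).json()

for key, value in sample.items():
    print(f"{key}: {value!r}")
```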

  • Data Fetch: A Quick Run to the Store (4)

Sometimes, the initial data isn’t enough – it’s like starting to bake and realizing you need more sugar. Data Fetch is your quick trip to the store to get what’s missing. It allows your workflow to make additional API calls to fetch more data, ensuring you have everything you need to complete the task.

This step helps gather all the necessary ingredients for your workflow.
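
Conceptually, a Data Fetch is just an extra API call made between the trigger and the computations. The sketch below is illustrative only – the CRM URL and the lookup are hypothetical, and inside DataMorf you configure this in the UI rather than writing code:

```python
import requests

def fetch_missing_details(contact_id: str) -> dict:
    # Hypothetical CRM lookup standing in for a configured Data Fetch.
    url = f"https://crm.example.com/api/contacts/{contact_id}"
    return requests.get(url, timeout=10).json()

incoming = {"contact_id": "12345", "email": "ada@example.com"}
extra = fetch_missing_details(incoming["contact_id"])
merged = {**incoming, **extra}  # everything the computations will need
```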

  • Data Providers: Adding Special Spices (5) 

Data Providers enrich your data, just like adding special spices enhances a recipe. They bring in additional details from services like Apollo or Clearbit, based on identifiers in your incoming payload. This enrichment makes your data more valuable, just like adding vanilla or cinnamon makes your cake more delicious.

These extra details add depth and flavor to your data.
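
The same idea in miniature: an enrichment provider takes an identifier from the payload (here, an email) and returns extra attributes. The provider function and its fields are hypothetical; DataMorf wires up the real providers like Apollo or Clearbit for you:

```python
def enrich_contact(email: str) -> dict:
    # Hypothetical provider response, keyed off the email identifier.
    return {"company": "Analytical Engines Ltd", "role": "Engineer"}

payload = {"email": "ada@example.com"}
payload["provider_data"] = enrich_contact(payload["email"])  # namespaced enrichment
```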

Workflow trigger tab

Step 4: Setting Up Computations

Now we get to the fun part – transforming your data! Computations are the layer where we mold and shape your data into exactly what you need.

Computations are the heart of your workflow. They take the raw data from the fetch layer and process it into something useful. Think of computations as the steps in a recipe that turn raw ingredients into a delicious dish.

  1. Create a Group: Groups help you organize your computations. It’s like setting up different stations in your kitchen for chopping, mixing, and baking. This organization makes it easier to manage and understand your workflow.
    Why Grouping Helps: Groups help you stay organized, especially when you have multiple computations. You can create groups based on data types (e.g., contact data, company data) or processing stages (e.g., raw data, processed data).
     
  2. Add Computations: Each computation is a specific operation performed on your data. Adding a computation is like adding a step to your recipe.
    Types of Computations: There are various built-in computations, such as “clean HTML tags,” “create random ID,” “extract email domain,” etc.
    How to Add a Computation: Click the plus sign next to the group name to add a computation, then specify its details.
     
  3. Define Inputs: Inputs are the sources of data for each computation. You need to tell the computation which data points to use – like specifying the ingredients for each step in your recipe. Select the type of input and specify the path to the data point.
    Types of Inputs:
    • Incoming Payload: Data from the initial payload.
    • Data Fetch: Results from additional API calls.
    • Data Providers: Enriched data from services like Apollo, Clearbit, etc.
    • Previous Computations: Results from other computations within the same workflow.
       
  4. Set Computation Details: Fill in the details like name, description, output path, and any additional configurations. This ensures your computation is set up correctly and clearly.
    Computation Configurations:
    • Name: Give a clear and descriptive name.
    • Description: Provide details about what this computation does.
    • Output Path: Specify where the result of this computation will be stored.
    • Additional Configurations: Set default values, trim whitespaces, conditional computations, and fail-safes.

You can find more details about computations here.
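
To make computations concrete, here’s a plain-Python sketch of what a built-in computation like “extract email domain” does conceptually, including the trim and fail-safe options mentioned above. The function name and defaults are illustrative, not DataMorf’s implementation:

```python
def extract_email_domain(email: str, default: str = "") -> str:
    """Return the part after '@', or a default if the input is malformed."""
    email = email.strip()                    # the "trim whitespaces" option
    if "@" not in email:
        return default                       # fail-safe / default value
    return email.rsplit("@", 1)[1].lower()

payload = {"email": "  Ada@Example.COM "}
print(extract_email_domain(payload["email"]))  # -> "example.com"
```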

Computations group

Step 5: Defining Destinations

Finally, we decide where our beautifully processed data will go. Destinations are the endpoints where the computed data will be sent.

  1. Choose Destination Type: Select from options like responding to a webhook or starting another DataMorf workflow. It’s like deciding whether to serve your cake at a party or send it to a friend.
    Common Destinations:
    • Webhook: Send the computed data back to the HTTP caller.
    • Another DataMorf Workflow: Trigger another DataMorf workflow for further processing.
       
  2. Configure Integrations: Link your integrations and map out computations to the destination data structure. Make sure your data is presented in the way the receiving system likes it.
    Integration Examples: Integrate with CRM systems like HubSpot, Salesforce, or marketing platforms. Map your computations to the required fields in these systems.
     
  3. Additional Settings: Set conditions, retries, delays, and list processing if applicable. This ensures your data gets to where it needs to go, even if there are a few bumps along the way (there’s a sketch of retries and delays right after this list).
    • Conditional Destination: Set conditions based on previous computation results to decide whether or not to activate the destination.
    • Retries: Configure how many times DataMorf should retry sending data if the first attempt fails.
    • Delays: Set a delay before activating the destination, useful for scenarios where the receiving system needs a few seconds to process previous data.
    • List Processing: Automatically iterate over a list property in the payload, executing a separate workflow for each element. Perfect for handling bulk data efficiently.
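
To make the retry and delay settings concrete, here’s an illustrative sketch of the behavior in plain Python. DataMorf handles all of this for you in the destination settings; the function and its defaults are invented for the example:

```python
import time
import requests

def deliver(url: str, data: dict, retries: int = 3, delay_s: float = 2.0) -> bool:
    """Send data to a destination with an initial delay and simple retries."""
    time.sleep(delay_s)  # initial delay, e.g. so the receiver can catch up
    for _ in range(retries):
        try:
            resp = requests.post(url, json=data, timeout=10)
            if resp.ok:
                return True  # delivered successfully
        except requests.RequestException:
            pass  # network hiccup: fall through and retry
        time.sleep(delay_s)  # pause before the next attempt
    return False  # all attempts failed
```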

Data destinations


Playground and Runs

Want to test your setup before going live? The Playground is your sandbox: load sample data, run computations, and see the results in real time. And once your workflow is live, keep an eye on its performance by checking the logs and debugging any issues that pop up.

Data pipeline schematic

Setting up a workflow in DataMorf is like following a recipe – just a bit more high-tech. By following these steps, you can automate your data processes efficiently and keep all your systems in sync. Remember, you can always reach out to our support team if you need any help. Happy Automating!
