Understanding Computations: An In-Depth Guide
Last update on Jul 3, 2024
Today, we’re diving deep into the heart of DataMorf workflows – computations. Whether you’re a tech-savvy individual or someone with a basic understanding of technology, this guide will help you grasp the essentials of computations in DataMorf. By the end of this post, you’ll be well-versed in how to manipulate and transform your data efficiently.
What Are Computations?
Computations are the building blocks of DataMorf workflows. They allow you to transform, manipulate, and enrich your data, turning raw inputs into valuable insights and actionable outputs. Think of computations as the individual steps in a recipe, each one contributing to the final dish. The computation layer is where all these transformations occur, acting as the core engine that processes data in a workflow.
The Anatomy of a Computation
Every computation in DataMorf has several key components, each playing a vital role in ensuring the data is processed correctly. Understanding these components will help you create effective workflows. Let’s break down each part in more detail.
Computation Name and Description
The computation name and description serve as identifiers and explanatory notes. The name should be descriptive enough to identify its purpose at a glance, such as “Extract Email Domain,” which clearly indicates that this computation will pull out the domain part of an email address. The description offers a brief explanation of what the computation does, which is particularly useful for anyone reviewing the workflow later. This helps maintain clarity and makes it easier to manage complex workflows.
Output Path
The output path is where the result of the computation will be stored. Think of it as the address where the computed data will live. The path is defined using dot notation (e.g., contact.emailDomain), which organizes data hierarchically, making it easy to access later. This structured format is essential for keeping your data organized and ensuring that each piece of information can be easily retrieved and used in subsequent steps.
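To make the idea concrete, here is a minimal sketch of how a dot-notation output path could map onto a nested data structure. This is an illustration in Python, not DataMorf's internal implementation; the `set_path` helper and the field names are assumptions.

```python
def set_path(data: dict, path: str, value) -> None:
    """Store a value at a dot-notation path, creating nested dicts as needed."""
    keys = path.split(".")
    node = data
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value

payload = {"contact": {"email": "jane@example.com"}}
set_path(payload, "contact.emailDomain", "example.com")
# payload["contact"] now holds both "email" and "emailDomain"
```

Because the path is hierarchical, later steps can read `contact.emailDomain` the same way they read any other field of the payload.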
Computation Mode
The computation mode indicates the type of operation the computation will perform. Different modes are available for various tasks, such as string manipulation, mathematical calculations, data extraction, and more. For instance, you might use a mode to convert text to lowercase, extract a domain from an email, or calculate the sum of several values. Understanding the mode is crucial because it determines what kind of input the computation will require and how it will process that input.
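As a rough mental model, a mode is a named operation applied to the computation's inputs. The registry below is purely illustrative; the mode names are hypothetical, not DataMorf's actual identifiers.

```python
# Hypothetical mode registry; DataMorf's real mode names will differ.
MODES = {
    "lowercase": lambda s: s.lower(),
    "extract_email_domain": lambda email: email.split("@", 1)[1],
    "sum": lambda values: sum(values),
}

def compute(mode: str, value):
    """Apply the operation a mode stands for to a single input value."""
    return MODES[mode](value)
```

For example, `compute("extract_email_domain", "jane@acme.io")` would return `"acme.io"`, while `compute("sum", [1, 2, 3])` would return `6`.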
Inputs
Inputs are the data points that the computation will operate on, and they can come from various sources. This is where the flexibility of DataMorf shines, as you can draw from a wide array of data inputs to feed into your computations. Let’s explore the types of inputs and how to use them.
- Incoming Payload: The incoming payload consists of data directly from the initial trigger event. This is the raw data that kicks off the workflow. For example, if your workflow is triggered by a form submission on a website, the incoming payload will contain all the form data.
- Data Fetch: Data fetch inputs involve pulling in additional information from external APIs or databases. Sometimes the initial data isn’t enough, and you need to fetch more details from other sources. For instance, if your payload contains a customer email, you might use a data fetch to query your CRM for more information about that customer, such as their purchase history or preferences.
- Data Providers: Data providers are external services that enrich your data. Integrating with services like Apollo, Clearbit, or RocketReach allows you to gather additional details about a contact or company. For example, you can use a data provider to get company size, industry, or additional contact information based on an email address or domain.
- Previous Computations: Previous computations allow you to use the results from earlier computations within the same workflow. This is particularly useful for chaining together multiple operations. For example, you might first extract the domain from an email address and then use that domain to fetch additional company details. By using the result of one computation as the input for another, you can build complex data transformations in a step-by-step manner.
[Image: several computation input types]
Setting Up a Computation
Setting up a computation involves several steps, starting with creating a group to organize your computations. Groups help you keep related computations together, making it easier to manage complex workflows. For instance, you can create groups based on data types (e.g., contact data, company data) or processing stages (e.g., raw, intermediate, and processed data).
To add a computation, click the plus sign next to your group. Name your computation descriptively (e.g., “Extract Email Domain”) and provide a brief description (optional). Next, define the inputs. Specify the source of the data, such as the email field in the incoming payload. Use the autocomplete feature to help you select the correct data paths, making this step as smooth as butter.
After defining the inputs, choose the computation mode that suits your needs and set the output path. This ensures the computed data is stored in the right place. For instance, if you’re extracting the email domain, set the output path to contact.emailDomain (just an example). Finally, configure any additional settings, such as default values, whitespace trimming, or conditional computations. These configurations ensure your data is processed accurately and efficiently.
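Putting the steps together, the “Extract Email Domain” computation could be described with a configuration like the one below. The field names and the tiny interpreter are illustrative assumptions, not DataMorf's actual schema or engine.

```python
computation = {
    "name": "Extract Email Domain",
    "description": "Pulls the domain out of the contact email.",
    "mode": "extract_email_domain",        # hypothetical mode identifier
    "input": "payload.contact.email",      # dot-notation source path
    "output_path": "contact.emailDomain",  # dot-notation destination path
}

def run(comp: dict, payload: dict) -> dict:
    """Tiny interpreter for the sketch above: read input, apply mode, store output."""
    email = payload["contact"]["email"]
    return {comp["output_path"]: email.split("@", 1)[1]}
```

Running this against a payload like `{"contact": {"email": "jane@acme.io"}}` would produce `{"contact.emailDomain": "acme.io"}`, ready for the next step in the workflow.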
[Image: computations layer example]
Additional Configurations
DataMorf offers several additional configurations to fine-tune your computations:
- Default Value: The default value sets a fallback if the computation doesn’t produce a result. This acts as a safety net, ensuring that your workflow continues smoothly even if some data points are missing. For example, if an email address isn’t provided, you can set a default email to avoid errors.
- Trim Whitespaces: Trimming whitespaces removes leading and trailing spaces from strings, ensuring your data is clean and uniform. This is particularly useful when dealing with text data, as it prevents unnecessary spaces from causing issues in subsequent processing steps.
- Conditional Computation: Conditional computations add a layer of logic to your workflow. This feature allows the computation to execute only if a specified condition is met. For example, you might only extract the email domain if the email address is valid. This makes your workflow more dynamic and adaptable to different data scenarios.
- Fail Safe: The fail-safe feature stops the workflow if the computation fails. It acts like an emergency brake, preventing incorrect data from proceeding and ensuring that only valid data moves forward. This is crucial for maintaining data integrity and preventing errors from cascading through your workflow.
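Taken together, the four settings might apply in an order like this. This is a rough sketch under assumptions, not DataMorf's actual evaluation order or API.

```python
def apply_settings(value, *, trim=False, condition=None, default=None, fail_safe=False):
    """Illustrative pass applying the extra configurations around a computed value."""
    if trim and isinstance(value, str):
        value = value.strip()                      # Trim Whitespaces
    if condition is not None and not condition(value):
        value = None                               # Conditional Computation: skip
    if value is None:
        value = default                            # Default Value fallback
    if fail_safe and value is None:
        # Fail Safe: halt instead of passing bad data downstream
        raise ValueError("computation produced no result; stopping workflow")
    return value
```

For example, `apply_settings("  jane@acme.io ", trim=True, condition=lambda v: "@" in v)` cleans the string and lets it through, while an input with no `@` would fall back to the default or, with `fail_safe=True`, stop the workflow.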
Computations are the powerhouse of DataMorf workflows, enabling you to transform raw data into meaningful insights and actions. By understanding the anatomy and types of computations, you can build efficient, flexible, and powerful workflows tailored to your specific needs. Remember, DataMorf is designed to make data transformation as easy as possible, even if you’re not a tech wizard.
Ready to get started? Dive into your DataMorf account and start building those computations. And as always, feel free to reach out to our support team if you need any help!