Changelog

Here you'll find the 2026 changelog for Maia. Just want to read about new features? Read our New Features blog.

For an explanation of minimum agent version, read Agent version tracks.

May 8, 2026

🚀 New in Maia: AI Agent API endpoints, enhanced root cause analysis, and more!

Tags:
New features
Connectors
Maia
API
Streaming

Welcome back to the Maia New Features Blog! This week, we're excited to deliver updates that enhance your observability experience, make troubleshooting more intelligent, and unlock new programmatic possibilities with Maia. Let's explore what's new in our changelog updates.

🤖 Maia AI Agents' public API is here

We're excited to introduce the Agent Tasks API — Maia's first public API that lets you programmatically create, monitor, and manage tasks. This new capability allows you to send instructions to Maia AI Agents and have them executed as background processes. You now have the same level of control and agentic power via the API as you do when chatting to Maia AI Agents in Designer.

The API enables you to integrate Maia AI Agents’ problem-solving capabilities into your own workflows, CI/CD pipelines, or custom scripts—allowing Maia AI Agents to work through complex instructions as a background service.

What's new:

  • From chat to API: Transition from manual 1-to-1 conversations to programmatic task creation. Define an objective for Maia AI Agents and manage the entire work lifecycle via the API.
  • Asynchronous background tasks: Unlike a chat session that requires you to stay in the window, API tasks run independently. Send a request, receive a task ID, and check back whenever you're ready to review the output.
  • Programmatic decision handling: Maia AI Agents still pause for safety when high-impact actions are needed. You can approve tool usage or answer clarifying questions through the dedicated endpoint to keep the task moving.
  • Full Designer parity: You maintain the same level of granular control you have in the Designer—including the ability to manage permissions, track reasoning, and iterate on work—giving you complete command over Maia AI Agents' actions via the API.

How do I use it?

To get started, create a task by providing an initial instruction—this serves as your first prompt to Maia AI Agents. Once the task is created, you will receive a task ID which allows you to monitor the task’s status, handle decisions, and iterate on the task using follow-up messages.
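The create-then-poll flow described above can be sketched as follows. This is a minimal illustration only: the endpoint path, field names, and status values are assumptions invented for the sketch, not the documented contract — check the Agent Tasks API reference for the real ones.

```python
# A sketch of the Agent Tasks create-then-poll lifecycle. Endpoint path,
# field names, and status strings below are assumptions, not the real API.
TASKS_ENDPOINT = "https://api.example.com/v1/agent-tasks"  # hypothetical

def build_create_task_request(instruction):
    """Body for creating a task; the instruction is your first prompt."""
    return {"message": instruction}

def next_action(task):
    """Decide what to do with a polled task (status names assumed)."""
    status = task.get("status")
    if status in ("QUEUED", "RUNNING"):
        return "wait"          # background work still in progress
    if status == "WAITING_FOR_DECISION":
        return "respond"       # approve a tool call or answer a question
    return "review"            # terminal state -- inspect the output

# Example: a task paused on a high-impact action needs a decision
assert next_action({"status": "WAITING_FOR_DECISION"}) == "respond"
assert next_action({"status": "RUNNING"}) == "wait"
```

The point of the shape: because tasks are asynchronous, your client only ever needs the task ID and a small decision function like `next_action` to drive the whole lifecycle from a script or CI/CD job.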

Check out our API reference to try out the new endpoints, or read our guide to Using the Agent Tasks API for more details.

🔍 Root cause analysis gets smarter with enhanced pipeline visibility

Root Cause Analysis (RCA) has received significant improvements to make it more accurate and useful when pipeline failures occur. The enhanced RCA now provides comprehensive visibility across your entire pipeline infrastructure.

Key improvements include:

  • Complete pipeline visibility: RCA now analyzes failures inside iterators and orchestration steps, not just top-level pipeline issues.
  • Enhanced failure detection: Pipeline-level failures such as specific SQL errors are now captured and analyzed.
  • Richer contextual analysis: Warning-level agent task logs are now included to provide the lead-up context needed to understand why failures occurred.
  • Structured, actionable output: Every identified issue now includes a category and a clear indication of whether it's fixable within your pipeline.

For more information about Root Cause Analysis, check out our documentation.

💾 Saved views in Pipeline Run History

Pipeline Run History just got a major productivity boost! You can now save your filter sets and set a default view, eliminating the need to reapply filters every time you visit this page. Simply apply and save a view, and it’s there every time you need it.

💬 We'd love to hear from you!

Let us know how these new features are improving your workflows—we're all ears! Feel free to add any comments or questions below.

Want to get involved? Join the Matillion Community to stay up to date, share feedback, and help shape our product roadmap for future innovations.

May 1, 2026

🚀 Enhanced Iterator Controls and Improved Maia AI Agents Troubleshooting

Tags:
Agents
Maia
Improvements

This week, we're excited to bring you powerful new controls for pipeline execution and significant improvements to Maia's troubleshooting and Git capabilities. These changelog updates are designed to give you more flexibility in managing your data workflows and a smoother development experience overall.

🛑 Stop on condition for iterator components

We're excited to announce the addition of Stop On Condition functionality across our iterator components, giving you greater control over your pipeline execution.

What this means for you:

  • Stop iteration mid-loop based on a configurable condition, across the File, Fixed, Grid, Loop, and Table iterator components.
  • Build flexible conditions using a range of comparators including Less Than, Equal To, Greater Than, Blank, and more.
  • Combine multiple conditions using And/Or logic to handle more complex stopping criteria.
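The And/Or condition logic above can be illustrated with a short sketch. This is not a Matillion API — it is a plain-Python rendering of how a list of comparator conditions combines under And/Or to decide whether iteration stops:

```python
import operator

# Illustration only: how Stop On Condition's And/Or logic combines
# comparator results. Not Matillion code.
def should_stop(value, conditions, combine="And"):
    """conditions: list of (comparator, target) pairs."""
    results = [compare(value, target) for compare, target in conditions]
    return all(results) if combine == "And" else any(results)

# Stop when the iterator value is Greater Than 100 AND not Blank
conds = [(operator.gt, 100), (lambda v, _: v is not None, None)]
assert should_stop(150, conds, combine="And")       # both conditions hold
assert not should_stop(50, conds, combine="And")    # Greater Than fails
```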

If you’re upgrading from Matillion ETL to Maia, read our Iterators upgrade guide for more information about how Stop On Condition works in Maia.

🤖 Maia AI Agents update: Enhanced pipeline troubleshooting and Git workflow

Maia AI Agents continue to evolve with several key improvements that enhance your development experience. The latest updates include smarter pipeline troubleshooting capabilities that can drill into nested pipelines and iterator runs to identify failing steps, page through long step lists, and filter by status.

The Git workflow has been significantly improved thanks to Maia AI Agents’ new ability to directly inspect branch changes before committing, commit selected files individually, and separate push operations from commits. You'll now see dedicated "Git status" and "Git diff" actions, each with their own approval prompts.

💬 We'd love to hear from you!

Let us know how these new features are improving your workflows—we're all ears! Feel free to add any comments or questions below.

Want to get involved?

Join the Matillion Community to stay up to date, share feedback, and help shape our product roadmap for future innovations.

April 10, 2026

🚀 New at Matillion: Public API Endpoints for Shared Pipelines, Terraform for Streaming Pipelines, and Enhanced Maia Variables

Tags:
API
Maia
New features
Streaming

This week, we're excited to introduce powerful new API capabilities for shared pipelines, enhanced variable support for Maia AI agents, and Terraform for Streaming runners and pipelines, making your data workflows more automated and intelligent.

🔌 New Public API endpoints for shared pipelines

We're excited to announce the availability of a new set of public API endpoints, allowing you to manage the full lifecycle of your shared pipelines programmatically.

What you can do:

  • Publish versions: Publish new shared pipeline artifact versions directly from CI/CD pipelines
  • List and filter: List and filter published versions by source project or specific pipeline
  • Retrieve details: Retrieve detailed information about any version, including the latest by default
  • Enable or disable versions: Enable or disable specific versions to support rollback and controlled rollouts
  • Browse pipelines: Browse all shared pipelines in an account, including their latest enabled version and description

See the API documentation and how-to guides for further details.
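To give a feel for driving the lifecycle operations above from a script: the sketch below builds a filtered list-versions URL and an enable/disable body. The base path, query parameter names, and field names are assumptions for illustration — the real contract is in the API documentation.

```python
from urllib.parse import urlencode

# Hypothetical base path and parameter names -- the real ones are defined
# in the shared pipelines API documentation.
BASE = "https://api.example.com/v1/shared-pipelines"

def list_versions_url(pipeline_id, source_project=None):
    """URL to list published versions, optionally filtered by source project."""
    params = {"pipelineId": pipeline_id}
    if source_project:
        params["sourceProject"] = source_project
    return f"{BASE}/versions?{urlencode(params)}"

def set_version_enabled_body(enabled):
    """Body to enable/disable a version for rollback or controlled rollout."""
    return {"enabled": enabled}

url = list_versions_url("nightly-load", source_project="analytics")
print(url)
```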

🤖 Maia AI agent update: Improved variable support

We're happy to introduce system variable and project variable support, expanding your Maia AI agent's understanding of the variables available in your project.

Why is this helpful?

When building pipelines, knowing which variables are already defined—and what system variables are available at runtime—saves time and prevents duplication. Maia AI agents can now check what exists before creating something new.

What's new:

  • System variables reference: Maia AI agents now know the available system variables (${sysvar.})—environment defaults, pipeline context, component metadata, and more. Ask your Maia AI agent which variables are available for your platform and it'll guide you to the right one.
  • Project variable awareness: Maia AI agents can list your project-level variables, showing their name, type, behavior, and default value—so it can reference existing configuration instead of creating redundant pipeline variables.

How do I use it?

Maia AI agents will automatically use system variables and check your project variables when building pipelines—no special prompting needed. For more control, you can use context files to define standards like which project variables to use, helping Maia AI agents stay consistent across your pipelines.

🏗️ Public Terraform provider now available for Streaming runners and pipelines

Creating multiple Streaming runners and pipelines just got significantly more scalable! The new public Terraform provider eliminates the need to manually create runners and pipelines one at a time through the UI—a process that can be extremely time-consuming when dealing with dozens of databases.

With this Terraform provider, you can now programmatically create, update, and delete Streaming runners and pipelines at scale. This is particularly valuable for organizations managing large numbers of data sources, transforming what was once a lengthy, repetitive manual process into an automated, efficient workflow.

The provider includes comprehensive examples and covers both runner and pipeline configuration. While client-side and local infrastructure requirements may vary, this solution provides a solid foundation that can be customized to meet your specific needs.
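As a rough sketch of what managing these resources as code might look like — the resource types, names, and arguments below are invented for illustration; the provider's own documentation and examples define the real schema:

```hcl
# Hypothetical resource types and arguments -- not the provider's real schema.
resource "matillion_streaming_runner" "orders_runner" {
  name = "orders-runner"
}

resource "matillion_streaming_pipeline" "orders_cdc" {
  name      = "orders-cdc"
  runner_id = matillion_streaming_runner.orders_runner.id
}
```

The payoff of this approach is that dozens of runners and pipelines can be stamped out from a single templated configuration, rather than created one at a time in the UI.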

Check out the Streaming pipelines documentation to get started.

💬 We'd love to hear from you!

Let us know how these new features are improving your workflows—we're all ears! Feel free to add any comments or questions below.

Want to get involved? Join the Matillion Community to stay up to date, share feedback, and help shape our product roadmap for future innovations.

April 8, 2026

Matillion Launches Maia's Migration Agent

Tags:
Maia
New features

Autonomous Migration for Legacy ETL Platforms

New capability converts legacy ETL pipelines from 14 platforms to modern cloud data warehouses — compressing multi-quarter migration programs into weeks.

Denver and Manchester, UK – March 26, 2026 — Matillion today announced the public preview of Migration Agent, a new capability within Maia, its flagship AI Data Automation platform, that autonomously converts legacy ETL pipelines into native, warehouse-optimized pipelines on Snowflake, Databricks, and Amazon Redshift.

Migration Agent addresses one of the most persistent and costly challenges in enterprise data modernization: the inability to move off legacy ETL platforms without committing to expensive, multi-quarter consulting engagements. With Migration Agent, teams can convert pipelines from 14 legacy platforms — including Informatica PowerCenter, Alteryx, IBM DataStage, SSIS, Oracle ODI, SAS Enterprise Guide, and dbt — with no manual rewrite and no GSI dependency.

How it works

Unlike conventional migration tools that replicate legacy logic into a new environment, Migration Agent performs a predictable, structured conversion. Maia parses the original transformation logic, dependency graphs, and pipeline metadata, then reconstructs each pipeline as a native, warehouse-optimized ELT pipeline. Unsupported or ambiguous constructs are flagged explicitly for human review, ensuring transparency and correctness throughout.

Engineers move from being builders, manually recreating pipeline logic step by step, to being managers, inspecting generated pipelines that already preserve the original system's logic and structure.

Read the full news article here.

March 28, 2026

🚀 Shared Pipeline Improvements, Maia Updates, and Enhanced Security Features

Tags:
API
Connectors
Improvements
Maia
New features

This week, we're thrilled to roll out several updates designed to enhance pipeline flexibility, improve collaboration, and make your experience with Maia smoother than ever. Check out the changelog updates for the full details.

🖥️ Shared pipeline usability improvements and text mode for grid variables

We're excited to announce a series of enhancements that will allow shared pipeline creators to provide greater clarity and usability to their shared pipeline consumers! These improvements include:

  • Provide additional context for variables through optional display names
  • Reorder variables based on their priority
  • Mark variables as required or optional
  • Display variable descriptions to the shared pipeline consumer

We've also added text mode for grid variables in our Run Orchestration, Run Transformation, and Run Shared Pipeline components, which will help users quickly set a large number of values.

ℹ️ Improved webhook notification support

Pipeline failure notifications just got a lot more flexible. Alongside email and Slack, you can now route pipeline failure notifications to any webhook endpoint, meaning you can connect your Data Productivity Cloud agent notifications to services like Microsoft Teams, ServiceNow, or any internal tooling that accepts a webhook.

How it works

When setting up a notification, select Webhook as your delivery method. Provide a URL and a name for the webhook, then build your payload using a simple template.

You can customize the payload using any of the available template variables:

  • ${pipelineName} - Pipeline name
  • ${status} - Execution status
  • ${finishedAt} - Completion timestamp
  • ${pipelineExecutionId} - Execution ID
  • ${projectId} - Project ID
  • ${accountId} - Account ID

This update gives teams the flexibility to pass the context they need to downstream systems—not just a generic alert.
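For example, using the template variables listed above, a payload template for a Teams-style webhook might look like the following. The substitution shown is an illustration done with Python's `string.Template` (whose `${name}` syntax happens to match), not Matillion's implementation:

```python
from string import Template

# Illustrative rendering of a webhook payload template. The variable names
# come from the feature description; the substitution mechanics here are
# Python's string.Template, not Matillion's implementation.
payload_template = """{
  "text": "Pipeline ${pipelineName} finished with status ${status} at ${finishedAt}",
  "executionId": "${pipelineExecutionId}"
}"""

rendered = Template(payload_template).substitute(
    pipelineName="daily_orders_load",
    status="FAILED",
    finishedAt="2026-03-28T02:15:00Z",
    pipelineExecutionId="abc-123",
)
print(rendered)
```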

Read the documentation for more details.

🤖 Maia update: documentation search

You can now ask Maia questions about the Data Productivity Cloud, and it will search our official online documentation in real time to provide accurate, up-to-date answers.

What's new:

When you ask "How do I..." or "What is...", Maia now actively browses our latest documentation to find the specific technical details you need.

Why is this helpful?

  • Always up to date: As soon as new documentation is published for a feature, Maia has access to it.
  • Source of truth: By pulling directly from the docs, Maia provides reliable, technical guidance on how different parts of the platform work together.
  • Efficiency: No more switching tabs to search the docs yourself—Maia does the digging and summarizes the answer right in your chat.

How do I use it?

There's nothing to toggle—just ask! Try asking Maia specific technical questions like:

  • "How do I set up a schedule?"
  • "How do I manage secrets?"

Give the new search capabilities a try and let us know what you think!

📄 Maia Update: Custom Connector now supports pagination

We've rolled out pagination support for Maia’s Custom Connectors, making it easier than ever to handle APIs that return data across multiple pages—which is most of them.

This update unlocks several key capabilities:

  • Maia will suggest which pagination method is needed and recommend a configuration
  • The ability to pull large datasets across multiple pages
  • Simpler, more scalable connector builds for complex APIs

🌲 Flattening for unstructured data across Flex and Custom Connectors

Flattening for unstructured data across Flex connectors and custom connectors is now live in the Data Productivity Cloud.

What this means:

  • Users can now flatten nested JSON structures while configuring these connectors
  • Reduces the need for manual post-processing or workarounds
  • Makes Flex and custom connectors far more viable for enterprise-grade API ingestion

🔩 Re-run from pipeline run history

Three new re-run actions are now available to help with running pipelines:

  1. Re-run pipeline (top left button) - re-runs the entire scheduled or API-triggered execution from the beginning.
  2. Play button (per step) - re-runs that individual step only, as a new standalone execution.
  3. Play with arrow (per step) - re-runs from that step onwards, continuing execution from that point in the pipeline as a new standalone execution.

Read the documentation to learn more about pipeline observability features.

🔧 Enhanced project APIs for automated configuration management

You can now manage project variables, environment overrides, and project provisioning directly through the Data Productivity Cloud REST API. This means teams can automate configuration at scale—setting up projects, applying environment-specific settings, and managing credentials without manual UI interaction. Whether you're provisioning new workspaces or keeping environments in sync, the API now gives you full control over your project management workflows.
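As an illustration of what such automation might look like, the sketch below builds a request body for setting a project variable with an environment-specific override. The field names and structure are assumptions invented for the example — consult the API documentation for the real contract.

```python
import json

# Hypothetical field names -- consult the Data Productivity Cloud API
# documentation for the real request shape.
def project_variable_body(name, value, environment=None):
    """JSON body for setting a project variable, optionally as an
    environment-specific override."""
    body = {"name": name, "value": value}
    if environment:
        body["environment"] = environment  # environment-specific setting
    return json.dumps(body)

print(project_variable_body("warehouse", "WH_PROD", environment="production"))
```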

Read the documentation to learn more about these enhanced API capabilities.

🔒 IP allow list now available for Enterprise customers

Enterprise customers can now enhance their security posture by restricting account access to only trusted IP addresses. When enabled, any request from an unauthorized IP address—whether through the UI or API—receives a 403 error, providing robust network-level access control.

Users with the "Manage IP Allow List" permission can configure this feature from Profile & Account → IP Allow List, where they can add individual IPs or CIDR ranges, enable or disable entries individually, and search by IP, range, or description. The system supports both IPv4 and IPv6 addresses and includes built-in safeguards to help prevent admin lockout by auto-detecting and pre-populating the current user's public IP when adding the first address.

This feature is particularly valuable for organizations with strict security requirements, as it applies to both UI and API access across the entire account. Remember to include any IPs used by API clients, automation, or CI/CD tooling before enabling restrictions.
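To make the mixed IPv4/IPv6 and CIDR behavior concrete, here is an illustrative evaluation of an allow list in Python — Matillion's server-side logic is not public, so this only shows the semantics of "individual IPs or CIDR ranges" deciding access:

```python
import ipaddress

# Illustration only: how an allow list of individual IPs and CIDR ranges
# (IPv4 and IPv6) can be evaluated. Not Matillion's implementation.
ALLOW = [
    ipaddress.ip_network(entry)
    for entry in ("203.0.113.7/32", "10.0.0.0/8", "2001:db8::/32")
]

def is_allowed(client_ip):
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOW)

assert is_allowed("10.4.2.1")              # inside the 10.0.0.0/8 range
assert not is_allowed("198.51.100.9")      # would receive a 403
```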

Read the full documentation for setup instructions, supported formats, and troubleshooting guidance.

📊 Richer, role-based sample data in onboarding

When you start a new project with Maia, you'll now get larger, more realistic sample datasets tailored to your job role—with over 1,000 rows of fact data alongside relevant dimension tables. This means you can explore pipeline features with meaningful data that reflects your real-world use case, whether you're in sales, marketing, finance, operations, or any other function.

Previously, onboarding generated just 3–4 small tables with 10–20 rows, taking 2–3 minutes. Now you get a rich dataset in under a minute, getting you to Maia faster and making an impact from the very first interaction.

⌨️ Improved "Add to Canvas" usability

You can now navigate the Add to Canvas modal entirely via keyboard:

  • Search for your component as usual and press Enter to add the top result to the canvas
  • Or search, then navigate the filtered results using the Up/Down arrow keys, select your component, and press Enter to add it

This update keeps your hands on the keys and your focus on the logic, making the process of building pipelines faster and more intuitive.

💬 We'd love to hear from you!

Let us know how these new features are improving your workflows—we're all ears! Feel free to add any comments or questions below.

Want to get involved? Join the Matillion Community to stay up to date, share feedback, and help shape our product roadmap for future innovations.

March 20, 2026

📦 Environment System Variables and File Load Components for Databricks

Tags:
Improvements
New features

This week, we're excited to introduce updates that will streamline your data workflows and expand your pipeline capabilities. We've enhanced system-level variable access and expanded file loading support for Databricks users. For a full list of recent changes, be sure to check our changelog updates.

📊 Environment defaults as system variables

We've added the ability to reference project environment defaults, including role, database, warehouse, and schema, as system-level variables. This eliminates the need to create separate variables when [Environment default] is unavailable, such as in SQL and Python scripts.

📦 File load components now available for Databricks

Five new file load components are now enabled for Databricks, making it easier to bring file-based data straight into your pipelines.

These components allow you to load data from files in your source location directly into Databricks tables, with automatic handling of schema inference and table creation where applicable.

Supported from agent version: 11.154.0.

💬 We'd love to hear from you!

Let us know how these new features are improving your workflows—we're all ears! Feel free to add any comments or questions below.

Want to get involved?

Join the Matillion Community to stay up to date, share feedback, and help shape our product roadmap for future innovations.