🚀 Shared Pipeline Improvements, Maia Updates, and Enhanced Security Features
This week, we're thrilled to roll out several updates designed to enhance pipeline flexibility, improve collaboration, and make your experience with Maia smoother than ever. Check out the changelog updates for the full details.
🖥️ Shared pipeline usability improvements and text mode for grid variables
We're excited to announce a series of enhancements that give shared pipeline creators the tools to offer greater clarity and usability to their shared pipeline consumers! These improvements include:
- Optional display names that provide additional context for variables
- Reordering variables by priority
- Marking variables as required or optional
- Displaying variable descriptions to the shared pipeline consumer
We've also added text mode for grid variables in our Run Orchestration, Run Transformation, and Run Shared Pipeline components, helping users quickly set a large number of values.
ℹ️ Improved webhook notification support
Pipeline failure notifications just got a lot more flexible. Alongside email and Slack, you can now route pipeline failure notifications to any webhook endpoint, meaning you can connect your Data Productivity Cloud agent notifications to services like Microsoft Teams, ServiceNow, or any internal tooling that accepts a webhook.
How it works
When setting up a notification, select Webhook as your delivery method. Provide a URL and a name for the webhook, then build your payload using a simple template.
You can customize the payload using any of the available template variables:
- ${pipelineName} - Pipeline name
- ${status} - Execution status
- ${finishedAt} - Completion timestamp
- ${pipelineExecutionId} - Execution ID
- ${projectId} - Project ID
- ${accountId} - Account ID
This update gives teams the flexibility to pass the context they need to downstream systems—not just a generic alert.
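Conceptually, the template works like standard `${name}` substitution. As a minimal sketch, Python's `string.Template` (which uses the same placeholder syntax) can illustrate how the documented variables get filled in; the JSON shape and the values below are made up for the example, not the product's required payload format:

```python
from string import Template

# Hypothetical payload template using the documented variables;
# the JSON shape here is illustrative, not a required format.
payload_template = Template(
    '{"text": "Pipeline ${pipelineName} finished with status '
    '${status} at ${finishedAt} (execution ${pipelineExecutionId})"}'
)

# Example values a failure notification might carry (made up for this sketch)
payload = payload_template.substitute(
    pipelineName="daily_sales_load",
    status="FAILED",
    finishedAt="2024-05-01T06:00:00Z",
    pipelineExecutionId="abc-123",
)
```

A downstream service such as Microsoft Teams or ServiceNow would then parse whatever payload shape you chose to send.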
Read the documentation for more details.
🤖 Maia update: documentation search
You can now ask Maia questions about the Data Productivity Cloud, and it will search our official online documentation in real time to provide accurate, up-to-date answers.
What's new:
When you ask "How do I..." or "What is...", Maia now actively browses our latest documentation to find the specific technical details you need.
Why is this helpful?
- Always up to date: As soon as new documentation is published for a feature, Maia has access to it.
- Source of truth: By pulling directly from the docs, Maia provides reliable, technical guidance on how different parts of the platform work together.
- Efficiency: No more switching tabs to search the docs yourself—Maia does the digging and summarizes the answer right in your chat.
How do I use it?
There's nothing to toggle—just ask! Try asking Maia specific technical questions like:
- "How do I set up a schedule?"
- "How do I manage secrets?"
Give the new search capabilities a try and let us know what you think!
📄 Maia update: Custom Connector now supports pagination
We've rolled out pagination support for Maia’s Custom Connectors, making it easier than ever to handle APIs that return data across multiple pages—which is most of them.
This update unlocks several key capabilities:
- Maia suggests the pagination method an API needs, along with a recommended configuration
- Large datasets can be pulled across multiple pages
- Connector builds for complex, paginated APIs become simpler and more scalable
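At its core, page-based pagination is a loop that requests successive pages until one comes back empty. A minimal Python sketch, where `get_page` stands in for whatever API call the connector makes (both names are hypothetical):

```python
def fetch_all_pages(get_page, start=1):
    """Collect rows from a page-numbered API until an empty page is returned."""
    rows, page = [], start
    while True:
        batch = get_page(page)  # one API request per page in a real connector
        if not batch:
            break               # an empty page signals the end of the data
        rows.extend(batch)
        page += 1
    return rows

# Fake three-page API for demonstration
pages = {1: ["a", "b"], 2: ["c"]}
all_rows = fetch_all_pages(lambda p: pages.get(p, []))
```

Other schemes (cursor tokens, offset/limit) follow the same shape with a different "next page" signal, which is the configuration Maia now helps you pick.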
🌲 Flattening for unstructured data across Flex and Custom Connectors
Flattening for unstructured data across Flex connectors and custom connectors is now live in the Data Productivity Cloud.
What this means:
- Users can now flatten nested JSON structures while configuring these connectors
- Reduces the need for manual post-processing or workarounds
- Makes Flex and custom connectors far more viable for enterprise-grade API ingestion
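To illustrate what flattening does, here is a small Python sketch that turns nested JSON into flat columns. The dotted-path naming is an assumption for the example, not necessarily the connectors' exact output convention:

```python
def flatten(record, prefix=""):
    """Flatten nested dicts into dotted keys, e.g. {"a": {"b": 1}} -> {"a.b": 1}."""
    flat = {}
    for key, value in record.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))  # recurse into nested objects
        else:
            flat[path] = value                 # leaf value becomes a column
    return flat

row = flatten({"order": {"id": 42, "customer": {"name": "Ada"}}, "total": 9.5})
```

The resulting flat rows load directly into tabular warehouse targets, which is exactly the post-processing step this feature removes.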
🔩 Re-run from pipeline run history
Three new re-run actions are now available from the pipeline run history:
- Re-run pipeline (top left button) - re-runs the entire scheduled or API-triggered execution from the beginning.
- Play button (per step) - re-runs that individual step only, as a new standalone execution.
- Play with arrow (per step) - re-runs from that step onwards, continuing execution from that point in the pipeline as a new standalone execution.
Read the documentation to learn more about pipeline observability features.
🔧 Enhanced project APIs for automated configuration management
You can now manage project variables, environment overrides, and project provisioning directly through the Data Productivity Cloud REST API. This means teams can automate configuration at scale—setting up projects, applying environment-specific settings, and managing credentials without manual UI interaction. Whether you're provisioning new workspaces or keeping environments in sync, the API now gives you full control over your project management workflows.
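As a rough sketch of what scripted configuration management looks like, the snippet below builds an authenticated request to a variables endpoint. The URL path, payload fields, and auth scheme are hypothetical placeholders; consult the API documentation for the real contract:

```python
import json
import urllib.request

def build_set_variable_request(base_url, project_id, name, value, token):
    # Hypothetical endpoint and payload shape -- consult the official
    # REST API reference for the actual paths and fields.
    url = f"{base_url}/v1/projects/{project_id}/variables"
    body = json.dumps({"name": name, "value": value}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_set_variable_request(
    "https://api.example.com", "proj-123", "warehouse", "PROD_WH", "TOKEN"
)
# urllib.request.urlopen(req) would send it; omitted in this sketch.
```

Wrapping calls like this in a loop over projects is how "keeping environments in sync" becomes a script instead of a sequence of UI clicks.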
Read the documentation to learn more about these enhanced API capabilities.
🔒 IP allow list now available for Enterprise customers
Enterprise customers can now enhance their security posture by restricting account access to only trusted IP addresses. When enabled, any request from an unauthorized IP address—whether through the UI or API—receives a 403 error, providing robust network-level access control.
Users with the "Manage IP Allow List" permission can configure this feature from Profile & Account → IP Allow List, where they can add individual IPs or CIDR ranges, enable or disable entries individually, and search by IP, range, or description. The system supports both IPv4 and IPv6 addresses and includes built-in safeguards to help prevent admin lockout by auto-detecting and pre-populating the current user's public IP when adding the first address.
This feature is particularly valuable for organizations with strict security requirements, as it applies to both UI and API access across the entire account. Remember to include any IPs used by API clients, automation, or CI/CD tooling before enabling restrictions.
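The matching logic itself is standard CIDR containment. A small Python sketch of how an allow-list decision works, using reserved documentation ranges as example entries (the addresses are illustrative only):

```python
import ipaddress

# Example allow list mixing an IPv4 CIDR range and an IPv6 range
# (203.0.113.0/24 and 2001:db8::/32 are reserved documentation ranges)
allow_list = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("2001:db8::/32"),
]

def is_allowed(address: str) -> bool:
    """Return True if the address falls inside any allowed network."""
    ip = ipaddress.ip_address(address)
    return any(ip in network for network in allow_list)
```

A request from an address outside every range is what would receive the 403 described above, which is why CI/CD runner and API client IPs must be added before enabling restrictions.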
Read the full documentation for setup instructions, supported formats, and troubleshooting guidance.
📊 Richer, role-based sample data in onboarding
When you start a new project with Maia, you'll now get larger, more realistic sample datasets tailored to your job role—with over 1,000 rows of fact data alongside relevant dimension tables. This means you can explore pipeline features with meaningful data that reflects your real-world use case, whether you're in sales, marketing, finance, operations, or any other function.
Previously, onboarding generated just 3–4 small tables with 10–20 rows, taking 2–3 minutes. Now you get a rich dataset in under a minute, getting you to Maia faster so you can make an impact from the very first interaction.
⌨️ Improved "Add to Canvas" usability
You can now navigate the Add to Canvas modal entirely via keyboard:
- Search for your component as usual and press Enter to add the top result to the canvas
- Or search, then navigate the filtered results using the Up/Down arrow keys, select your component, and press Enter to add it
This update keeps your hands on the keys and your focus on the logic, making the process of building pipelines faster and more intuitive.
💬 We'd love to hear from you!
Let us know how these new features are improving your workflows—we're all ears! Feel free to add any comments or questions below.
Want to get involved? Join the Matillion Community to stay up to date, share feedback, and help shape our product roadmap for future innovations.
