The following Lakeflow Spark Declarative Pipelines features, improvements, and bug fixes were released in 2026.
Note
Because Lakeflow Spark Declarative Pipelines channel releases follow a rolling upgrade process, channel upgrades are deployed to different regions at different times. Your release, including Databricks Runtime versions, might not be updated until a week or more after the initial release date. To find the Databricks Runtime version for a pipeline, see Runtime information.
February 2026
These features and improvements to Lakeflow Spark Declarative Pipelines were released between January 14, 2026 and February 25, 2026.
Databricks Runtime versions used by this release
Channel:
- CURRENT (default): Databricks Runtime 16.4
- PREVIEW: Databricks Runtime 17.3
New features and improvements
- Pipelines now support type widening for Delta tables, allowing column data types to be safely broadened (for example, `INT` to `LONG`, `FLOAT` to `DOUBLE`) without requiring a full pipeline reset. This enables schema evolution workflows that previously required manual intervention.
- You can now use SCD Type 1 materialization with `AUTO CDC`, providing a simpler CDC pattern that upserts the latest value without maintaining full change history. This reduces storage overhead for use cases that don't require full history. See the sketch after this list.
- Pipelines now reuse existing clusters when retrying failed updates, reducing retry latency and lowering compute costs by eliminating redundant cluster startup time.
- Predictive optimization status is now displayed correctly on materialized views and streaming tables that have been refreshed within the last month.
- Pipelines now validate multiple flows together, catching configuration conflicts and dependency issues across flows during the dry-run phase before execution begins.
- Alterable metadata is now preserved during ingestion pipeline updates, enabling full support for `ALTER` commands on ingestion streaming tables.
- Python errors in pipelines now carry SQL state codes, improving error diagnostics and enabling better programmatic error handling in downstream tools.
- Pipelines now support ARM instances for classic compute.
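The two CDC-related items above can be combined in one short Python pipeline. The following is a minimal sketch, assuming a hypothetical `raw.sales.orders_cdc` change feed with `order_id`, `amount`, and `op_ts` columns; it uses the `dlt.apply_changes` API (the Python counterpart of `AUTO CDC`) together with the `delta.enableTypeWidening` table property, so details may differ in your runtime:

```python
import dlt

# Hypothetical CDC feed with columns: order_id, amount, op_ts.
# `spark` is provided implicitly by the pipeline runtime.
@dlt.view
def orders_cdc_feed():
    return spark.readStream.table("raw.sales.orders_cdc")

# Target streaming table. delta.enableTypeWidening lets a column such as
# amount later widen from INT to LONG without a full pipeline reset.
dlt.create_streaming_table(
    name="orders_latest",
    table_properties={"delta.enableTypeWidening": "true"},
)

# SCD Type 1: upsert the latest value per key; no change history is kept.
dlt.apply_changes(
    target="orders_latest",
    source="orders_cdc_feed",
    keys=["order_id"],
    sequence_by="op_ts",
    stored_as_scd_type=1,
)
```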
Bug fixes
- Identity column values in append-only streaming tables are now correctly generated on the first update run.
January 2026
These features and improvements to Lakeflow Spark Declarative Pipelines were released between November 14, 2025 and January 13, 2026.
Databricks Runtime versions used by this release
Channel:
- CURRENT (default): Databricks Runtime 16.4
- PREVIEW: Databricks Runtime 17.3
New features and improvements
- You can now store and manage data quality expectations directly in Unity Catalog tables, centralizing data quality rules with your data governance framework. This enables version-controlled, auditable quality rules that can be shared across multiple pipelines. See the first sketch after this list.
- Continuous pipelines running longer than 7 days now restart gracefully with minimal downtime and an explicit update cause (`INFRASTRUCTURE_MAINTENANCE`), instead of restarting abruptly when the underlying compute needs to be refreshed.
- Pipelines now support queued execution mode, where multiple update requests are automatically queued and executed sequentially instead of failing with conflicts. This simplifies operations for pipelines with frequent update triggers and eliminates the need for manual retry coordination.
- You can now materialize multiple SCD Type 2 views from a single change data source, improving efficiency when creating multiple historical views of the same data. This eliminates the need to reprocess source data for each SCD Type 2 output. See the second sketch after this list.
- Pipeline schedules and configuration can now be stored and read from Unity Catalog table properties, enabling centralized settings management through data governance. This allows you to manage pipeline behavior alongside your data definitions.
- `MANAGE` permissions are now automatically propagated to materialized views and streaming tables in Unity Catalog, simplifying permission management for pipeline outputs. This ensures consistent access control without manual permission grants.
- SCD Type 2 operations now automatically coalesce duplicate records with the same natural key, ensuring data consistency and preventing duplicate historical records in your slowly changing dimension tables.
- Pipelines now have an option to automatically drop inactive tables that are no longer part of the pipeline definition. This helps maintain clean data warehouses and reduces storage costs from obsolete tables. See Use Unity Catalog with pipelines.
- Pipeline definition, patch operations, and run-as identity changes are now included in the audit log, providing comprehensive tracking of configuration changes for compliance and security monitoring. See Pipeline event log.
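For the Unity Catalog expectations item, the exact built-in syntax isn't shown here, so the sketch below illustrates the general pattern instead: rules are loaded from a hypothetical `main.governance.dq_rules` table (columns `name`, `constraint`, `tag`, all assumed) and applied with the existing `dlt.expect_all_or_drop` API. The managed feature may expose this differently:

```python
import dlt

def load_rules(tag):
    # Hypothetical centralized rules table; each row holds a rule name and a
    # SQL boolean constraint, e.g. ("valid_amount", "amount >= 0").
    df = spark.read.table("main.governance.dq_rules").where(f"tag = '{tag}'")
    return {row["name"]: row["constraint"] for row in df.collect()}

@dlt.table
@dlt.expect_all_or_drop(load_rules("orders"))  # drop rows violating any rule
def orders_clean():
    return spark.readStream.table("raw.sales.orders")
```

Because the rules live in a governed table rather than in pipeline source code, multiple pipelines can share one audited rule set.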
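For the multi-view SCD Type 2 item, here is a minimal sketch, assuming a hypothetical `raw.crm.customers_cdc` feed: the change source is defined once as a view and fanned out into two Type 2 targets, one tracking history on all columns and one only on `address`, so the feed is not reprocessed per output.

```python
import dlt

# Hypothetical change feed with columns: customer_id, address, ..., update_ts.
@dlt.view
def customers_cdc():
    return spark.readStream.table("raw.crm.customers_cdc")

for target, tracked in [
    ("customers_history_full", None),            # track history on all columns
    ("customers_history_address", ["address"]),  # track history on address only
]:
    dlt.create_streaming_table(name=target)
    dlt.apply_changes(
        target=target,
        source="customers_cdc",
        keys=["customer_id"],
        sequence_by="update_ts",
        stored_as_scd_type=2,
        track_history_column_list=tracked,
    )
```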
Bug fixes
No significant bug fixes were included in this release period. All changes were new features and improvements.