Releases · FederatedAI/FATE-Flow
Release v2.2.0
Release v2.1.0
Major Features and Improvements
- Improved the display of output data.
- Enhanced the PyPI package: configuration files are now located in the user's home directory, and relative paths for uploading data are resolved against the user's home directory.
- Supported running FATE algorithms in Spark on YARN client mode.
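A minimal PySpark sketch of what YARN client mode means in practice (not taken from FATE Flow; the application name and resource settings are illustrative assumptions):

```python
# Illustrative only: a Spark session configured for YARN client mode,
# the deployment mode this release supports for running FATE algorithms.
# The app name and resource sizes are placeholder assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("fate-algorithm-example")            # hypothetical name
    .master("yarn")                               # submit to a YARN cluster
    .config("spark.submit.deployMode", "client")  # driver runs on the submitting host
    .config("spark.executor.instances", "2")
    .config("spark.executor.memory", "2g")
    .getOrCreate()
)

# In client mode the driver stays local, so logs and results are visible
# directly on the machine that launched the task.
spark.stop()
```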
Bug Fixes
- Fixed an issue where failed tasks could not be retried.
- Fixed an issue where jobs could not run when the requested task cores exceeded the system's total cores.
Release v2.0.0
Major Features and Improvements
- Adapted to new scalable and standardized federated DSL IR
- Built an interconnected scheduling layer framework, supporting the BFIA protocol
- Optimized process scheduling, with scheduling separated and customizable, and added priority scheduling
- Optimized algorithm component scheduling, supporting container-level algorithm loading and enhancing support for cross-platform heterogeneous scenarios
- Optimized multi-version algorithm component registration, supporting registration of components by mode
- Federated DSL IR extension enhancement: supports multi-party asymmetric scheduling
- Optimized client authentication logic, supporting permission management for multiple clients
- Optimized RESTful interface, making parameter fields and types, return fields, and status codes clearer (an illustrative request sketch follows this list)
- Added OFX (Open Flow Exchange) module: encapsulated scheduling client to allow cross-platform scheduling
- Supported the new communication engine OSX, while remaining compatible with all engines from FATE Flow 1.x
- Decoupled the System Layer and the Algorithm Layer, with system configuration moved from the FATE repository to the Flow repository
- Published FATE Flow package to PyPI and added service-level CLI for service management
- Migrated major functionality from FATE Flow 1.x
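As a rough illustration of the clearer RESTful responses mentioned above, a hedged sketch using `requests`; the endpoint path, parameters, and the code/message/data envelope below are assumptions made for this example, not a documented contract:

```python
# Illustrative sketch of calling a FATE Flow REST endpoint and reading a
# structured response. URL, path, and response fields are assumptions;
# consult the FATE Flow API documentation for the real interface.
import requests

resp = requests.post(
    "http://127.0.0.1:9380/v2/job/query",       # hypothetical host, port, and path
    json={"job_id": "202401010000000000001"},   # hypothetical parameter
    timeout=10,
)
resp.raise_for_status()   # HTTP status codes signal transport-level errors

body = resp.json()
# Assumed envelope: a numeric business code, a human-readable message, and a data payload.
if body.get("code") == 0:
    print(body.get("data"))
else:
    print("request failed:", body.get("message"))
```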
Release v2.0.0-beta
Major Features and Improvements
- Migrated functions: data upload/download, process scheduling, component output data/model/metric management, multi-storage adaptation for models, authentication, authorization, feature anonymization, multi-computing/storage/communication engine adaptation, and system high availability
- Optimized process scheduling, with scheduling separated and customizable, and added priority scheduling (a minimal illustration follows this list)
- Optimized algorithm component scheduling, dividing execution steps into preprocessing, running, and post-processing
- Optimized multi-version algorithm component registration, supporting registration of components by mode
- Optimized client authentication logic, supporting permission management for multiple clients
- Optimized RESTful interface, making parameter fields and types, return fields, and status codes clearer
- Decoupled the system layer from the algorithm layer, with system configuration moved from the FATE repository to the Flow repository
- Published FATE Flow package to PyPI and added service-level CLI for service management
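The priority scheduling item above is, conceptually, a priority queue over pending jobs. A minimal, generic sketch of that idea (not FATE Flow's actual scheduler; names and values are illustrative):

```python
# Generic illustration of priority-based dispatch, not FATE Flow's implementation.
# Jobs with a higher priority value are popped first; ties fall back to submit order.
import heapq
import itertools

_counter = itertools.count()   # tie-breaker preserving submission order
_queue = []                    # min-heap of (-priority, seq, job_id)

def submit(job_id: str, priority: int = 0) -> None:
    heapq.heappush(_queue, (-priority, next(_counter), job_id))

def next_job():
    return heapq.heappop(_queue)[2] if _queue else None

submit("job_low", priority=0)
submit("job_high", priority=10)
print(next_job())   # job_high is dispatched before job_low
```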
Release v1.11.2
Major Features and Improvements
- Support real-time log retrieval and display for FATE-LLM tasks.
- Optimize the logic of the job clean interface.
Bug Fixes
- Fix thread accumulation caused by session cleanup timeouts.
Release v1.11.1
Major Features and Improvements
- Support distributed training on multiple GPUs for FATE-LLM via Eggroll
Bug Fixes
- Fix Hadoop connection failures in some scenarios
- Fix an issue where Spark configuration specified under role did not take effect
Release v1.11.0
Major Features and Improvements
- Add data table preview query interface
Bug Fixes
- Fix performance problems of upload and reader when processing large amounts of data
- Fix a bug where online inference could not be performed after model migration
- Fix a bug where the model could not be saved to the specified database
- Fix a reader data preview display bug
Release v2.0.0-alpha
Feature Highlights
- Adapted to new scalable and standardized federated DSL IR
- Standardized API interface with parameter type checking
- Decoupled Flow from the FATE repository
- Optimized scheduling logic, with a configurable dispatcher decoupled from the initiator
- Support container-level algorithm loading and task scheduling, enhancing support for cross-platform heterogeneous scenarios
- Independent maintenance for system configuration to enhance flexibility and ease of configuration
- Support the new communication engine OSX, while remaining compatible with all engines from Flow 1.x
- Introduce the OFX (Open Flow Exchange) module: encapsulated scheduling client to allow cross-platform scheduling
Release v1.10.1
Major Features and Improvements
- Optimize table info API
Release v1.10.0
Major Features and Improvements
- Add connection test API
- Support configuring the gRPC message size limit
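For context, a generic grpc-python sketch of the underlying channel options behind a message size limit; the actual FATE Flow setting lives in its service configuration, and the address and 100 MB value here are illustrative assumptions:

```python
# Generic grpc-python illustration of raising message size limits on a channel.
# This shows the underlying knobs only; in FATE Flow the limit is set through
# its service configuration rather than by creating channels directly.
import grpc

MAX_MESSAGE_SIZE = 100 * 1024 * 1024  # 100 MB, example value

channel = grpc.insecure_channel(
    "127.0.0.1:9370",  # hypothetical proxy address
    options=[
        ("grpc.max_send_message_length", MAX_MESSAGE_SIZE),
        ("grpc.max_receive_message_length", MAX_MESSAGE_SIZE),
    ],
)
```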
Bug Fixes
- Fix module duplication issue in model