- Project Owner 👉: Shuxia Lin
- Prepared by 🌷 TULIP Lab
Data distillation, an emerging technique within the broader field of data compression and model training, aims to create compact, informative representations of large datasets. This process not only reduces storage and computational demands but also potentially enhances model performance by focusing on the most salient features of the data. The incorporation of causal relationship analysis into data distillation represents an innovative approach, promising to maintain or even improve the predictive power of models trained on distilled datasets by preserving the underlying causal structures of the data.
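As a concrete (and deliberately simplified) illustration of the idea, the sketch below distills a dataset down to one prototype per class and classifies with the nearest prototype. This is a minimal baseline written for this description, not the method used in the project; all function names, the synthetic data, and the class-mean strategy are assumptions chosen for illustration only.

```python
import numpy as np

def distill(X, y):
    """Distill (X, y) to one mean prototype per class.

    A toy stand-in for data distillation: the 'distilled' set is far
    smaller than the original but aims to preserve what a simple model
    needs to predict well.
    """
    classes = np.unique(y)
    X_small = np.stack([X[y == c].mean(axis=0) for c in classes])
    return X_small, classes

def nearest_prototype_predict(X_small, y_small, X):
    """Label each point by its nearest distilled prototype."""
    dists = np.linalg.norm(X[:, None, :] - X_small[None, :, :], axis=2)
    return y_small[dists.argmin(axis=1)]

# Synthetic, well-separated 2-D clusters (illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(5.0, 0.1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

X_small, y_small = distill(X, y)   # 100 training points -> 2 prototypes
pred = nearest_prototype_predict(X_small, y_small, X)
acc = (pred == y).mean()
```

Real distillation methods (e.g. gradient- or distribution-matching approaches) optimise synthetic points rather than averaging, and the causal-analysis angle of this project would further constrain which structure the distilled set must preserve.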
📔 NEXUS-S1
(compatible with SIT723)
The related documents are encrypted; you will receive the password upon acceptance into stage 1️⃣ of the project.
- TO BE ADDED
📔 NEXUS-S2
(compatible with SIT724)
The related documents are encrypted; you will receive the password upon acceptance into stage 2️⃣ of the project.
- TO BE ADDED
📔 NEXUS-S3
(compatible with Honours)
The related documents are encrypted; you will receive the password upon acceptance into stage 3️⃣ of the project.
- TO BE ADDED