My name is Yihong Chen. I research AI knowledge acquisition, specifically how different AI systems can learn to abstract, represent, and use concepts/symbols efficiently.
I am open to collaborations on topics related to embedding learning, link prediction, and language modeling. If you would like to get in touch, you can reach me by emailing yihong-chen AT outlook DOT com, or simply by booking a Zoom meeting with me.
🔥 Mar 2024, Quanta Magazine covers our research on periodic embedding forgetting. Check out the article here.
🔥 Dec 2023, I will present our forgetting paper at NeurIPS 2023. Check out the poster here.
🔥 Sep 2023, our latest work, Improving Language Plasticity via Pretraining with Active Forgetting, was accepted at NeurIPS 2023!
🔥 Sep 2023, I presented our latest work on forgetting at the IST-Unbabel seminar.
🔥 Jul 2023, I presented our latest work on forgetting in language modelling at the ELLIS Unconference 2023. The slides are available here. Feel free to leave comments.
🔥 Jul 2023, discover the power of forgetting in language modelling! Our latest work, Improving Language Plasticity via Pretraining with Active Forgetting, shows how pretraining a language model with active forgetting helps it learn new languages quickly. You'll be amazed by the plasticity that pretraining with forgetting imbues. Check it out :) (A minimal sketch of the idea appears at the end of this list.)
🔥 Nov 2022, our paper, ReFactor GNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective, will appear at NeurIPS 2022! If you're interested in understanding why factorisation-based models (FMs) can be viewed as special GNNs, and in making them usable on new graphs, check it out!
🔥 Jun 2022, if you're looking for a hands-on repo to start experimenting with link prediction, check out our repo ssl-relation-prediction. Simple code, easy to hack. A tiny scoring sketch follows below.
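For the curious, here is a minimal PyTorch sketch of the active-forgetting idea from the NeurIPS 2023 paper above: periodically re-initialise the token-embedding layer during pretraining while the transformer body keeps its weights. The model, batch shapes, and `reset_interval` are illustrative stand-ins I made up for this sketch, not the paper's actual training code.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """A toy language model: embedding table + transformer body + output head."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.body = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.head(self.body(self.embed(tokens)))

def reset_embeddings(model):
    # The "forgetting" event: re-initialise only the embedding table,
    # nudging the body to learn representations that don't depend on
    # any one embedding assignment.
    nn.init.normal_(model.embed.weight, mean=0.0, std=0.02)

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
reset_interval = 1000  # hypothetical value; tuned per setup in practice

for step in range(5000):
    tokens = torch.randint(0, 1000, (8, 32))  # stand-in for real batches
    logits = model(tokens[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if (step + 1) % reset_interval == 0:
        reset_embeddings(model)
```

When the pretrained body is later paired with a fresh embedding table for a new language, it has already practised "relearning" embeddings many times, which is the plasticity the paper measures.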
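And if you want a feel for link prediction before diving into ssl-relation-prediction, below is a self-contained sketch of ComplEx-style triple scoring, a factorisation-based model common in this area. All names, sizes, and the random embeddings are made up for illustration; see the repo for the real implementation.

```python
import torch

# ComplEx scores a triple (s, r, o) as Re(<e_s, w_r, conj(e_o)>),
# with complex-valued entity and relation embeddings stored as
# separate real and imaginary parts.
num_entities, num_relations, rank = 100, 10, 16
ent_re = torch.randn(num_entities, rank)
ent_im = torch.randn(num_entities, rank)
rel_re = torch.randn(num_relations, rank)
rel_im = torch.randn(num_relations, rank)

def score(s, r, o):
    """ComplEx score for index tensors s, r, o of equal shape."""
    sr, si = ent_re[s], ent_im[s]
    rr, ri = rel_re[r], rel_im[r]
    or_, oi = ent_re[o], ent_im[o]
    # Real part of (e_s * w_r) * conj(e_o), summed over the rank dimension.
    return ((sr * rr - si * ri) * or_ + (sr * ri + si * rr) * oi).sum(-1)

# Link prediction for a query (s, r, ?): rank all candidate tail entities.
s, r = torch.tensor([3]), torch.tensor([1])
candidates = torch.arange(num_entities)
scores = score(s.expand(num_entities), r.expand(num_entities), candidates)
print(scores.topk(5).indices)  # top-5 predicted tail entities
```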