Yivan Zhang · 張 一凡 · チョウ イーファン · [ʈʂāŋ īː fǽn]
- Assistant Professor, Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, The University of Tokyo
- Visiting Scientist, Imperfect Information Learning Team, RIKEN Center for Advanced Intelligence Project
[yivan.xyz@gmail.com] [yivan.zhang@k.u-tokyo.ac.jp]
[CV] [Google Scholar] [OpenReview] [DBLP] [GitHub]
I’m an assistant professor at The University of Tokyo, where I work on the theory and application of machine learning. I was previously a postdoctoral researcher at RIKEN Center for Advanced Intelligence Project. I obtained my Ph.D. from The University of Tokyo, where I was fortunate to be advised by Prof. Masashi Sugiyama.
Lately, I’ve been exploring algebra and applied category theory in machine learning. My long-term goal is to understand how the structure of learning systems gives rise to intelligent behavior. I aim to uncover how perception, abstraction, reasoning, and creativity emerge from first principles, and how algebraic tools can help manage their growing complexity. I’m especially interested in how intelligent agents can acquire, represent, and reuse knowledge and skills in structured ways, and how they can behave in alignment with broadly shared human values, even under minimal supervision.
Over the years, I’ve become fascinated by both foundational formalisms—like formal definitions of disentangled representations or reward aggregation in reinforcement learning—and fundamental challenges, such as compositional generalization and limited supervision. I’m drawn to questions that appear across domains in different forms, and I seek unifying abstractions that reveal their shared structure. These are the kinds of problems that resist quick fixes, but reward careful formulation and deep theoretical insight. I believe that we must ask the right questions before looking for the right answers.
That’s why I value abstractions that clarify rather than obscure complexity. For me, category theory offers such a lens: it reveals deep structure, offers diagrammatic reasoning, highlights compositionality, and helps manage complexity by showing how parts relate to wholes. While much of my thinking is rooted in abstract mathematics, I strive to translate abstract theories into practical intuitions that resonate with the broader machine learning community.
Concretely, my research spans the following topics:
- Foundational machine learning
  - algebraic formalisms and techniques
  - explainable decision-making under uncertainty
  - weak supervision
- Representation learning
  - disentanglement
  - symmetry and equivariance
  - abstraction and structured representation
  - neural-symbolic learning and reasoning
- Reinforcement learning
  - non-standard problem formulations
  - compositional and hierarchical learning
  - safety and alignment
  - exploration
I love things that are discovered, not invented.
My research has been generously supported by the RIKEN Center for Advanced Intelligence Project (RIKEN AIP), the Japan Society for the Promotion of Science (JSPS), the Japan Science and Technology Agency (JST), and Microsoft Research Asia (MSRA).
News
- August 2025: Co-organizing an IJCAI’25 tutorial, “AI Meets Algebra: Foundations and Frontiers” in Montréal, Canada
- May 2025: “Recursive Reward Aggregation” accepted to RLC’25 in Edmonton, Canada
Featured publications
Recursive Reward Aggregation
Yuting Tang, Yivan Zhang, Johannes Ackermann, Yu-Jie Zhang, Soichiro Nishimori, Masashi Sugiyama
Reinforcement Learning Conference 2025 (RLC’25)
[arXiv] [OpenReview] [code] [poster]
The discounted sum is only one way to aggregate rewards. In reinforcement learning, the recursive generation and aggregation of rewards, viewed as a recursive coalgebra homomorphism and an algebra homomorphism, naturally gives rise to a generalized Bellman equation. This algebraic perspective allows us to extend many value-based methods to a broad class of recursive reward aggregations.
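To give a flavor of this idea, here is a minimal sketch, not the paper's implementation: value iteration on a toy deterministic MDP where the Bellman backup is parameterized by the aggregation operator. The names `value_iteration`, `backup`, `discounted_sum`, and `running_max` are hypothetical, chosen only for illustration.

```python
import numpy as np

def value_iteration(P, R, backup, iters=100):
    """Value iteration with a pluggable Bellman backup (illustrative sketch).

    P[s, a]      -- successor state of action a in state s (deterministic toy MDP)
    R[s, a]      -- immediate reward
    backup(r, v) -- folds the immediate reward r into the successor value v
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        # Generalized Bellman backup: aggregate reward and successor value,
        # then improve greedily over actions.
        Q = np.array([[backup(R[s, a], V[P[s, a]])
                       for a in range(n_actions)]
                      for s in range(n_states)])
        V = Q.max(axis=1)
    return V

# Two aggregations, same algorithm:
discounted_sum = lambda r, v: r + 0.9 * v  # classic Bellman backup
running_max    = lambda r, v: max(r, v)    # maximum reward along a trajectory

# Toy 2-state, 2-action MDP
P = np.array([[0, 1], [1, 0]])             # deterministic transitions
R = np.array([[0.0, 1.0], [2.0, 0.0]])     # immediate rewards
print(value_iteration(P, R, discounted_sum))
print(value_iteration(P, R, running_max))
```

Swapping the backup function changes which statistic of the reward sequence the value function computes, while the rest of the algorithm is unchanged; this is the kind of reuse the algebraic formulation makes precise.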
Enriching Disentanglement: From Logical Definitions to Quantitative Metrics
Yivan Zhang, Masashi Sugiyama
Neural Information Processing Systems 2024 (NeurIPS’24)
Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML) at International Conference on Machine Learning 2023 (ICML’23)
[arXiv] [OpenReview] [NeurIPS’24] [TAG-ML’23] [poster]
A Category-theoretical Meta-analysis of Definitions of Disentanglement
Yivan Zhang, Masashi Sugiyama
International Conference on Machine Learning 2023 (ICML’23)
International Workshop on Symbolic-Neural Learning 2023 (SNL’23)
[arXiv] [OpenReview] [ICML’23] [slides] [poster]
Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization
Yivan Zhang, Gang Niu, Masashi Sugiyama
International Conference on Machine Learning 2021 (ICML’21)
[arXiv] [ICML’21] [code] [slides] [poster]
Learning from Aggregate Observations
Yivan Zhang, Nontawat Charoenphakdee, Zhenguo Wu, Masashi Sugiyama
Neural Information Processing Systems 2020 (NeurIPS’20)
[arXiv] [NeurIPS’20] [code]