I am currently a lead researcher at Huawei Noah's Ark Lab (AI Lab). Before that, I obtained my Ph.D. degree in Computer Science and Engineering from The Chinese University of Hong Kong in 2016, supervised by Prof. Michael R. Lyu, and received my B.Eng. degree from Beijing University of Posts and Telecommunications. My recent research focuses on building and applying practical machine learning algorithms (especially ranking, NLP, and multimodal learning) for industrial-scale recommender systems, with the goal of better discovering users' interests and serving their needs. Our team has launched many self-designed ML algorithms in Huawei products such as News Feeds, Microvideo Stream, Music App, App Store, and PPS Ads.
I am always looking for students and interns interested in recommender systems, LLMs, or multimodal AI. Please feel free to reach out!
My current research focuses mainly on recommender systems and pretrained multimodal models for understanding and generation. I have published 70+ papers at top conferences such as NeurIPS, SIGIR, KDD, WWW, ACL, CVPR, and MM.
Please check out the full list of publications, or browse my recent work organized by research topic below:
EAGER-LLM: Enhancing Large Language Models as Recommenders through Exogenous Behavior-Semantic Integration, Minjie Hong, Yan Xia, Zehan Wang, Jieming Zhu, Ye Wang, Sihang Cai, Xiaoda Yang, Quanyu Dai, Zhenhua Dong, Zhimeng Zhang, Zhou Zhao. In WWW 2025.
ROMA: Recommendation-Oriented Language Model Adaptation Using Multi-Modal Multi-Domain Item Sequences, Xingyu Lu, Jinpeng Wang, Jieming Zhu#, Zhicheng Zhang, Deqing Zou, Hai-Tao Zheng#, Shu-Tao Xia, Rui Zhang. In KDD 2025.
UniEmbedding: Learning Universal Multi-Modal Multi-Domain Item Embeddings via User-View Contrastive Learning, Boqi Dai, Zhaocheng Du, Jieming Zhu#, Jintao Xu, Deqing Zou, Quanyu Dai, Zhenhua Dong, Hai-Tao Zheng#, Rui Zhang. In CIKM 2024.
EASE: Learning Lightweight Semantic Feature Adapters from Large Language Models for CTR Prediction, Zexuan Qiu*, Jieming Zhu*, Yankai Chen, Guohao Cai, Weiwen Liu, Zhenhua Dong, Irwin King. In CIKM 2024.
EAGER: Two-Stream Generative Recommender with Behavior-Semantic Collaboration, Ye Wang, Jiahao Xun, Minjie Hong, Jieming Zhu#, Tao Jin, Wang Lin, Haoyuan Li, Linjun Li, Yan Xia, Zhou Zhao#, Zhenhua Dong. In KDD 2024.
Multimodal Pretraining, Adaptation, and Generation for Recommendation: A Survey, Qijiong Liu*, Jieming Zhu*#, Yanting Yang, Quanyu Dai, Zhaocheng Du, Xiao-Ming Wu, Zhou Zhao, Rui Zhang, Zhenhua Dong. In KDD 2024.
FINAL: Factorized Interaction Layer for CTR Prediction, Jieming Zhu, Qinglin Jia, Guohao Cai, Quanyu Dai, Jingjie Li, Zhenhua Dong, Ruiming Tang, Rui Zhang. In SIGIR 2023.
Beyond Two-Tower Matching: Learning Sparse Retrievable Cross-Interactions for Recommendation, Liangcai Su, Fan Yan, Jieming Zhu#, Xi Xiao, Haoyi Duan, Zhou Zhao, Zhenhua Dong, Ruiming Tang. In SIGIR 2023.
FinalMLP: An Enhanced Two-Stream MLP Model for CTR Prediction, Kelong Mao*, Jieming Zhu*, Liangcai Su, Guohao Cai, Yuru Li, Zhenhua Dong. In AAAI 2023.
BARS: Towards Open Benchmarking for Recommender Systems, Jieming Zhu, Quanyu Dai, Liangcai Su, Rong Ma, Jinyang Liu, Guohao Cai, Xi Xiao, Rui Zhang. In SIGIR 2022.
A Survey of Personalized Large Language Models: Progress and Future Directions, Jiahong Liu, Zexuan Qiu, Zhongyang Li, Quanyu Dai, Jieming Zhu, Minda Hu, Menglin Yang, Irwin King. In arXiv 2025.
MemSim: A Bayesian Simulator for Evaluating Memory of LLM-based Personal Assistants, Zeyu Zhang, Quanyu Dai, Luyu Chen, Zeren Jiang, Rui Li, Jieming Zhu, Xu Chen, Yi Xie, Zhenhua Dong, Ji-Rong Wen. In arXiv 2024.
A Survey on the Memory Mechanism of Large Language Model based Agents, Zeyu Zhang, Xiaohe Bo, Chen Ma, Rui Li, Xu Chen, Quanyu Dai, Jieming Zhu, Zhenhua Dong, Ji-Rong Wen. In arXiv 2024.
PMG: Personalized Multimodal Generation with Large Language Models, Xiaoteng Shen, Rui Zhang, Xiaoyan Zhao, Jieming Zhu, Xi Xiao. In WWW 2024.
MIRA: Empowering One-Touch AI Services on Smartphones with MLLM-based Instruction Recommendation, Zhipeng Bian, Jieming Zhu#, Xuyang Xie, Quanyu Dai, Zhou Zhao, Zhenhua Dong. In ACL 2025.
CART: A Generative Cross-Modal Retrieval Framework With Coarse-To-Fine Semantic Modeling, Minghui Fang, Shengpeng Ji, Jialong Zuo, Hai Huang, Yan Xia, Jieming Zhu#, Xize Cheng, Xiaoda Yang, Wenrui Liu, Gang Wang, Zhenhua Dong, Zhou Zhao#. In ACL Findings 2025.
Enhancing Multimodal Unified Representations for Cross Modal Generalization, Hai Huang, Yan Xia, Shengpeng Ji, Shulei Wang, Hanting Wang, Minghui Fang, Jieming Zhu, Zhenhua Dong, Sashuai Zhou, Zhou Zhao. In ACL 2025.
Towards Transformer-Based Aligned Generation with Self-Coherence Guidance, Shulei Wang, Wang Lin, Hai Huang, Hanting Wang, Sihang Cai, WenKang Han, Tao Jin, Jingyuan Chen, Jiacheng Sun, Jieming Zhu, Zhou Zhao. In CVPR 2025.
EvdCLIP: Improving Vision-Language Retrieval with Entity Visual Descriptions from Large Language Models, GuangHao Meng, Sunan He, Jinpeng Wang, Tao Dai, Letian Zhang, Jieming Zhu, Qing Li, Gang Wang, Rui Zhang, Yong Jiang. In AAAI 2025.
MART: Learning Hierarchical Music Audio Representations with Part-Whole Transformer, Dong Yao*, Jieming Zhu*, Jiahao Xun, Shengyu Zhang, Zhou Zhao, Liqun Deng, Wenqiao Zhang, Zhenhua Dong, Xin Jiang. In WWW 2024.
Achieving Cross Modal Generalization with Multimodal Unified Representation, Yan Xia, Hai Huang, Jieming Zhu, Zhou Zhao. In NeurIPS 2023.
Cross-modal Prompts: Adapting Large Pre-trained Models for Audio-Visual Downstream Tasks, Haoyi Duan, Yan Xia, Mingze Zhou, Li Tang, Jieming Zhu, Zhou Zhao. In NeurIPS 2023.
DisCover: Disentangled Music Representation Learning for Cover Song Identification, Jiahao Xun, Shengyu Zhang, Yanting Yang, Jieming Zhu, Liqun Deng, Zhou Zhao, Zhenhua Dong, Ruiqi Li, Lichao Zhang, Fei Wu. In SIGIR 2023.
Counterfactual Contrastive Learning for Weakly-Supervised Vision-Language Grounding, Zhu Zhang, Zhou Zhao, Zhijie Lin, Jieming Zhu, Xiuqiang He. In NeurIPS 2020.