DENNY ZHOU
Founder and lead of the Reasoning Team at Google DeepMind. Working toward artificial general intelligence (AGI) by building large language models (LLMs) that reason, aiming for perfect generalization: chain-of-thought prompting, self-consistency, least-to-most prompting (problem decomposition), zero-shot prompting, compositional generalization, and LLM theory. Google Research Tech Impact Award, 2022. WSDM Test of Time Award, 2022. Distinguished keynotes at the KDD 2023 LLM Day and the grand opening of Yale's Institute for Foundations of Data Science. General Chair of the 1st Conference on Language Modeling (COLM), 2024.
PUBLICATIONS

Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen. Large Language Models as Optimizers. arXiv:2309.03409 [cs.LG], 2023.
Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, Denny Zhou. Large Language Models as Tool Makers. arXiv:2305.17126 [cs.LG], 2023.
Xinyun Chen, Maxwell Lin, Nathanael Schärli, Denny Zhou. Teaching Large Language Models to Self-Debug. arXiv:2304.05128 [cs.CL], 2023.
Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, Tengyu Ma. Larger language models do in-context learning differently. arXiv:2303.03846 [cs.CL], 2023.
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, Adam Roberts. The Flan Collection: Designing Data and Methods for Effective Instruction Tuning. International Conference on Machine Learning (ICML), 2023.
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, and Denny Zhou. Large Language Models Can Be Easily Distracted by Irrelevant Context. International Conference on Machine Learning (ICML), 2023.
Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. What learning algorithm is in-context learning? Investigations with linear models. International Conference on Learning Representations (ICLR), 2023.
Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Siamak Shakeri, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Denny Zhou, Neil Houlsby, Donald Metzler. UL2: Unifying Language Learning Paradigms. International Conference on Learning Representations (ICLR), 2023.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei. Scaling Instruction-Finetuned Language Models. arXiv:2210.11416 [cs.LG], 2022.
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason Wei. Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them. arXiv:2210.09261 [cs.CL], 2022.
Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q. Tran, David R. So, Siamak Shakeri, Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, Denny Zhou, Donald Metzler, Slav Petrov, Neil Houlsby, Quoc V. Le, Mostafa Dehghani. Transcending scaling laws with 0.1% extra compute. arXiv:2210.11399 [cs.CL], 2022.
F. Shi, M. Suzgun, M. Freitag, X. Wang, S. Srivats, S. Vosoughi, H. W. Chung, Y. Tay, S. Ruder, D. Zhou, D. Das, and J. Wei. Language models are multilingual chain-of-thought reasoners. International Conference on Learning Representations (ICLR), 2023.
A. Drozdov, N. Schärli, E. Akyürek, N. Scales, X. Song, X. Chen, O. Bousquet and D. Zhou. Compositional Semantic Parsing with Large Language Models. International Conference on Learning Representations (ICLR), 2023.
J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, Ed H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean, and W. Fedus. Emergent Abilities of Large Language Models. Transactions on Machine Learning Research (TMLR), 2022.
D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, O. Bousquet, Q. Le, and E. Chi. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. International Conference on Learning Representations (ICLR), 2023.
A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, ... and N. Fiedel. PaLM: Scaling language modeling with pathways. arXiv:2204.02311 [cs.CL], 2022.
X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery and D. Zhou. Self-Consistency Improves Chain of Thought Reasoning in Language Models. International Conference on Learning Representations (ICLR), 2023.
J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain of Thought Prompting Elicits Reasoning in Large Language Models. arXiv:2201.11903 [cs.CL], 2022.
Z. Yuan, Y. Wu, Z. Qiu, X. Du, L. Zhang, D. Zhou, and T. Yang. Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance. International Conference on Machine Learning (ICML), 2022.
H. Ren, H. Dai, B. Dai, X. Chen, D. Zhou, J. Leskovec and D. Schuurmans. SMORE: Knowledge Graph Completion and Multi-hop Reasoning in Massive Knowledge Graphs. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD), 2022.
L. Hou, R. Pang, T. Zhou, Y. Wu, X. Song, X. Song, and D. Zhou. Token Dropping for Efficient BERT Pretraining. Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
W. Chen, W. Huang, X. Du, X. Song, Z. Wang, and D. Zhou. Auto-scaling Vision Transformers without Training. International Conference on Learning Representations (ICLR), 2022.
Y. Li, A. Yu, T. Meng, B. Caine, J. Ngiam, D. Peng, J. Shen, B. Wu, Y. Lu, D. Zhou, Q. Le, A. Yuille, and M. Tan. DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection. Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
X. Song, A. Salcianu, Y. Song, D. Dopson and D. Zhou. Fast WordPiece Tokenization. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
X. Chen, P. Maniatis, R. Singh, C. Sutton, H. Dai, M. Lin, and D. Zhou. SpreadsheetCoder: Formula Prediction from Semi-structured Context. International Conference on Machine Learning (ICML), 2021.
H. Ren, H. Dai, B. Dai, X. Chen, M. Yasunaga, H. Sun, D. Schuurmans, J. Leskovec, and D. Zhou. LEGO: Latent Execution-Guided Reasoning for Multi-Hop Question Answering on Knowledge Graphs. International Conference on Machine Learning (ICML), 2021.
X. Liu, M. Ye, D. Zhou, and Q. Liu. Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision. The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI), 2021.
X. Chen, C. Liang, A. Yu, D. Song and D. Zhou. Compositional Generalization via Neural-Symbolic Stack Machines. Advances in Neural Information Processing Systems (NeurIPS), 2020.
N. B. Shah and D. Zhou. Approval Voting and Incentives in Crowdsourcing. ACM Transactions on Economics and Computation, Vol. 8, No. 3, Article 13, June 2020.
D. Zhou, M. Ye, C. Chen, T. Meng, M. Tan, X. Song, Q. Le, Q. Liu and D. Schuurmans. Go Wide, Then Narrow: Efficient Training of Deep Thin Networks. Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
M. Ye, C. Gong, L. Nie, D. Zhou, A. Klivans and Q. Liu. Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection. Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
Y. Xue, D. Zhou, N. Du, A. Dai, Z. Xu, K. Zhang and C. Cui. Deep State-Space Generative Model for Correlated Time-to-Event Predictions. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2020.
Z. Sun, H. Yu, X. Song, R. Liu, Y. Yang, D. Zhou. MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices. Annual Conference of the Association for Computational Linguistics (ACL), 2020. (code)
X. Chen, C. Liang, A. Yu, D. Zhou, D. Song, and Q. Le. Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension. International Conference on Learning Representations (ICLR), 2020.
Z. Tang, Y. Feng, L. Li, D. Zhou, and Q. Liu. Doubly robust bias reduction in infinite horizon off-policy estimation. International Conference on Learning Representations (ICLR), 2020.
A. Mousavi, L. Li, Q. Liu, and D. Zhou. Black-box off-policy estimation for infinite-horizon reinforcement learning. International Conference on Learning Representations (ICLR), 2020.
H. Dong, J. Mao, T. Lin, C. Wang, L. Li, and D. Zhou. Neural Logic Machines. International Conference on Learning Representations (ICLR), 2019. (code)
Q. Liu, L. Li, Z. Tang, and D. Zhou. Breaking the curse of horizon: Infinite-horizon off-policy estimation. Advances in Neural Information Processing Systems (NIPS) 31, 2018.
P.-S. Huang, C. Wang, D. Zhou and L. Deng. Towards Neural Phrase-based Machine Translation. International Conference on Learning Representations (ICLR), 2018. (code)
H. Liu, Y. Feng, Y. Mao, D. Zhou, J. Peng and Q. Liu. Action-dependent Control Variates for Policy Optimization via Stein Identity. International Conference on Learning Representations (ICLR), 2018.
P. Zhang, Q. Liu, D. Zhou, T. Xu and X. He. On the Discrimination-Generalization Tradeoff in GANs. International Conference on Learning Representations (ICLR), 2018.
C. Wang, Y. Wang, P.-S. Huang, A. Mohamed, D. Zhou and L. Deng. Sequence Modeling via Segmentations. Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.
L. Li, Y. Lu and D. Zhou. Provably Optimal Algorithms for Generalized Linear Contextual Bandits. Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.
S. Du, J. Chen, L. Li, L. Xiao and D. Zhou. Stochastic Variance Reduction Methods for Policy Evaluation. Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.
E. Parisotto, A. Mohamed, R. Singh, L. Li, D. Zhou and P. Kohli. Neuro-symbolic program synthesis. International Conference on Learning Representations (ICLR), 2017.
N. Shah and D. Zhou. Double or Nothing: Multiplicative Incentive Mechanisms for Crowdsourcing. Journal of Machine Learning Research, 17(165):1-52, 2016.
N. Shah and D. Zhou. No Oops, You Won't Do It Again: Mechanisms for Self-correction in Crowdsourcing. Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.
C. Gao, Y. Lu and D. Zhou. Exact Exponent in Optimal Rates for Crowdsourcing. Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.
Y. Zhang, X. Chen, D. Zhou and M. I. Jordan. Spectral Methods Meet EM: A Provably Optimal Algorithm for Crowdsourcing. Journal of Machine Learning Research, 17(102):1-44, 2016.
N. Shah and D. Zhou. Double or Nothing: Multiplicative Incentive Mechanisms for Crowdsourcing. Advances in Neural Information Processing Systems (NIPS) 28, 2015.
D. Zhou, Q. Liu, J. C. Platt, C. Meek and N. B. Shah. Regularized Minimax Conditional Entropy for Crowdsourcing. Technical Report arXiv:1503.07240 [cs.LG], 2015.
N. B. Shah, D. Zhou and Y. Peres. Approval Voting and Incentives in Crowdsourcing. Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015. (long version)
N. B. Shah and D. Zhou. On the Impossibility of Convex Inference in Human Computation. Proceedings of the 29th AAAI Conference on Artificial Intelligence, 2015.
X. Chen, Q. Lin, and D. Zhou. Statistical Decision Making for Optimal Budget Allocation in Crowd Labeling. Journal of Machine Learning Research, 16(Jan):1-46, 2015.
Y. Zhang, X. Chen, D. Zhou and M. I. Jordan. Spectral Methods Meet EM: A Provably Optimal Algorithm for Crowdsourcing. Advances in Neural Information Processing Systems (NIPS) 27, 2014.
D. Zhou, Q. Liu, J. C. Platt, and C. Meek. Aggregating Ordinal Labels from Crowds by Minimax Conditional Entropy. Proceedings of the 31st International Conference on Machine Learning (ICML), 2014. [slides] [data] [code]
C. Gao and D. Zhou. Minimax Optimal Convergence Rates for Estimating Ground Truth from Crowdsourced Labels. Technical Report arXiv:1310.5764, October 2013.
H. Li, B. Yu, and D. Zhou. Error rate analysis of labeling by crowdsourcing. ICML'13 Workshop: Machine Learning Meets Crowdsourcing, 2013.
X. Chen, Q. Lin, and D. Zhou. Optimistic Knowledge Gradient Policy for Optimal Budget Allocation in Crowdsourcing. Proceedings of the 30th International Conference on Machine Learning (ICML), 2013. (appendix)
D. Zhou, J. C. Platt, S. Basu, and Y. Mao. Learning from the Wisdom of Crowds by Minimax Entropy. Advances in Neural Information Processing Systems (NIPS) 25, 2204-2212, 2012. [slides] [data] [code]
Y. Song, D. Zhou, and L.-W. He. Query Suggestion by Constructing Term-Transition Graphs. Proceedings of the ACM 5th Conference on Web Search and Data Mining (WSDM), 353-362, 2012.
D. Zhou, L. Xiao and M. Wu. Hierarchical Classification via Orthogonal Transfer. Proceedings of the 28th International Conference on Machine Learning (ICML), 801-808, 2011. (long version)
Y. Song, D. Zhou, and L.-W. He. Post-Ranking Query Suggestion by Diversifying Search Results. Proceedings of the 34th ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 815-824, 2011.
H. Ma, D. Zhou, C. Liu, M. Lyu, and I. King. Recommender Systems with Social Regularization. Proceedings of the ACM 4th Conference on Web Search and Data Mining (WSDM), 287-296, 2011.
D. Zhou, L. Xiao and M. Wu. Hierarchical Classification via Orthogonal Transfer. NIPS Workshop on Optimization for Machine Learning, 2010. (long version)
Y. Chi, X. Song, D. Zhou, K. Hino, and B. Tseng. On Evolutionary Spectral Clustering. ACM Transactions on Knowledge Discovery from Data (ACM TKDD), Volume 3, Issue 4, Article 17, 2009.
D. Shen, Y. Li, X. Li, and D. Zhou. Product Query Classification. Proceedings of the ACM 18th Conference on Information and Knowledge Management (CIKM), 741-750, 2009.
Q. Mei, D. Zhou, and K. Church. Query Suggestion Using Hitting Time. Proceedings of the ACM 17th Conference on Information and Knowledge Management (CIKM), 469-478, 2008.
D. Zhou and C. Burges. High-Order Regularization on Graphs. International Workshop on Mining and Learning with Graphs, Helsinki, Finland, 2008.
Y. Chi, X. Song, D. Zhou, K. Hino, and B. Tseng. Evolutionary Spectral Clustering by Incorporating Temporal Smoothness. Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD), 153-162, 2007.
D. Zhou and C. Burges. Spectral Clustering and Transductive Learning with Multiple Views. Proceedings of the 24th International Conference on Machine Learning (ICML), 1159-1166, 2007.
D. Zhou, C. Burges and T. Tao. Transductive Link Spam Detection. Proceedings of the 3rd International Workshop on Adversarial Information Retrieval on the Web, 21-28, 2007.
D. Zhou, J. Huang and B. Schölkopf. Learning with Hypergraphs: Clustering, Classification, and Embedding. Advances in Neural Information Processing Systems (NIPS) 19, 1601-1608. (Eds.) B. Schölkopf, J. C. Platt and T. Hofmann, MIT Press, Cambridge, MA, 2007.
G. Camps-Valls, T. V. Bandos and D. Zhou. Semi-Supervised Graph-Based Hyperspectral Image Classification. IEEE Transactions on Geoscience and Remote Sensing, 45(10), 3044-3054, 2007.
D. Zhou and B. Schölkopf. Discrete Regularization. Book chapter in Semi-Supervised Learning, 221-232. (Eds.) O. Chapelle, B. Schölkopf and A. Zien, MIT Press, Cambridge, MA, 2006.
J. Huang, T. Zhu, R. Greiner, D. Zhou and D. Schuurmans. Information Marginalization on Subgraphs. Proceedings of the 10th European Conference on Principles and Practice of Knowledge Discovery in Databases, 199-210, Springer, New York, NY, USA, 2006.
T. V. Bandos, D. Zhou and G. Camps-Valls. Semi-Supervised Hyperspectral Image Classification with Graphs. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 3883-3886, 2006.
D. Zhou, J. Huang and B. Schölkopf. Learning from Labeled and Unlabeled Data on a Directed Graph. Proceedings of the 22nd International Conference on Machine Learning (ICML), 1041-1048. (Eds.) L. De Raedt and S. Wrobel, ACM Press, 2005.
D. Zhou and B. Schölkopf. Regularization on Discrete Spaces. Pattern Recognition, Proceedings of the 27th DAGM Symposium, 361-368, Springer, Berlin, Germany, 2005.
D. Zhou, B. Schölkopf and T. Hofmann. Semi-Supervised Learning on Directed Graphs. Advances in Neural Information Processing Systems (NIPS) 17, 1633-1640. (Eds.) L. K. Saul, Y. Weiss and L. Bottou, MIT Press, Cambridge, MA, 2005.
J. Weston, C. S. Leslie, E. Ie, D. Zhou, A. Elisseeff and W. S. Noble. Semi-Supervised Protein Classification Using Cluster Kernels. Bioinformatics, 21(15), 3241-3247, 2005.
D. Zhou, J. Huang and B. Schölkopf. Beyond Pairwise Classification and Clustering Using Hypergraphs. Max Planck Institute Technical Report 143, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2005.
D. Zhou, O. Bousquet, T. N. Lal, J. Weston and B. Schölkopf. Learning with Local and Global Consistency. Advances in Neural Information Processing Systems (NIPS) 16, 321-328. (Eds.) S. Thrun, L. Saul and B. Schölkopf, MIT Press, Cambridge, MA, 2004.
D. Zhou, J. Weston, A. Gretton, O. Bousquet and B. Schölkopf. Ranking on Data Manifolds. Advances in Neural Information Processing Systems (NIPS) 16, 169-176. (Eds.) S. Thrun, L. Saul and B. Schölkopf, MIT Press, Cambridge, MA, 2004.
J. Weston, A. Elisseeff, D. Zhou, C. Leslie and W. S. Noble. Protein Ranking: From Local to Global Structure in the Protein Similarity Network. Proceedings of the National Academy of Sciences (PNAS), 101(17), 6559-6563, 2004.
J. Weston, C. Leslie, D. Zhou, A. Elisseeff and W. S. Noble. Semi-Supervised Protein Classification Using Cluster Kernels. Advances in Neural Information Processing Systems (NIPS) 16, 595-602. (Eds.) S. Thrun, L. Saul and B. Schölkopf, MIT Press, Cambridge, MA, 2004.
D. Zhou and B. Schölkopf. A Regularization Framework for Learning from Graph Data. ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields, 2004.
D. Zhou and B. Schölkopf. Learning from Labeled and Unlabeled Data Using Random Walks. Pattern Recognition, Proceedings of the 26th DAGM Symposium, 237-244. Volume 3175 of Lecture Notes in Computer Science, Springer, Berlin, Germany, 2004.
K. Yu, V. Tresp and D. Zhou. Semi-Supervised Induction. Max Planck Institute Technical Report 141, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2004.
INVITED TALKS

Teach Language Models to Reason. Yale University, keynote at the FDS Grand Opening, September 22, 2023.
Teach Language Models to Reason. Harvard University, ML Foundations Seminar, September 15, 2023.
Teach Language Models to Reason. National Taiwan University, August 17, 2023.
Teach Language Models to Reason. Distinguished keynote at the KDD 2023 Large Language Model Day, August 8, 2023.
Teach Language Models to Reason. ACL 2023 Natural Language Reasoning Workshop, July 13, 2023.
Teach Language Models to Reason. CVPR 2023 Tutorial on Prompting in Vision, June 19, 2023.
Teach Language Models to Reason. Institute for Foundations of Machine Learning (IFML) Workshop, April 20, 2023.
Teach Language Models to Reason. Stanford University, NLP Seminar, March 2, 2023.
Teach Language Models to Reason. University of California, Santa Barbara, January 16, 2023.
Teach Language Models to Reason. University of Washington, Special ML Seminar, October 27, 2022.
Teach Language Models to Reason. University of California, Santa Cruz, June 1, 2022.
Incentive Mechanisms for Crowdsourcing Data Labeling. Distinguished Lecture Series at Jump Trading LLC, Chicago, December 14, 2016.
Incentives in Human Computation. Computer Science Department, Yale University. Hosted by Prof. Dan Spielman. May 7, 2015.
Incentives in Human Computation. Microsoft TechFest, March 26, 2015.
Double or Nothing: Unique Mechanism to Incentivize MTurkers to Skip. Computer Science and Engineering Department, University of Washington. Hosted by Prof. Anna Karlin. March 6, 2015. (slides)
Double or Nothing: Unique Mechanism to Incentivize MTurkers to Skip. Facebook, March 6, 2015.
Double or Nothing: Unique Mechanism to Incentivize MTurkers to Skip. Microsoft Bing, November 5, 2014.
Algorithmic Crowdsourcing. NIPS Workshop on Crowdsourcing: Theory, Algorithms and Applications, December 9, 2013. (slides)
Learning from the Wisdom of Crowds by Minimax Entropy. Amazon, July 25, 2013.
Learning from the Wisdom of Crowds by Minimax Entropy. UC Berkeley, Neyman Seminar, March 15, 2013. Hosted by Prof. Bin Yu and Aditya Guntuboyina. (slides)
Learning from the Wisdom of Crowds by Minimax Entropy. Facebook, March 14, 2013.
Learning from the Wisdom of Crowds by Minimax Entropy. Joint UW-Microsoft Research Machine Learning Workshop, October 26, 2012.
PROFESSIONAL ACTIVITIES

Area Chair of the International Conference on Learning Representations (ICLR), 2022
Area Chair of Neural Information Processing Systems (NeurIPS), 2021
Area Chair of the International Conference on Machine Learning (ICML), 2021
Area Chair of the International Conference on Learning Representations (ICLR), 2021
Area Chair of Neural Information Processing Systems (NeurIPS), 2020
Area Chair of Neural Information Processing Systems (NeurIPS), 2019
Area Chair of Neural Information Processing Systems (NeurIPS), 2018
Area Chair of Neural Information Processing Systems (NeurIPS), 2017
Area Chair of Neural Information Processing Systems (NeurIPS), 2016
Area Chair of Neural Information Processing Systems (NeurIPS), 2015
Area Chair of Neural Information Processing Systems (NeurIPS), 2010
Co-chair of the NIPS'14 workshop Crowdsourcing and Machine Learning, Montreal, Quebec, Canada, 2014
Co-chair of the ICML'14 workshop Crowdsourcing and Human Computing, Beijing, China, 2014
Co-chair of the NIPS'13 workshop Crowdsourcing: Theory, Algorithms and Applications, Lake Tahoe, Nevada, United States, 2013
Co-chair of the ICML'13 workshop Machine Learning Meets Crowdsourcing, Atlanta, United States, 2013
HOBBIES

I like reading, writing, painting, skiing, swimming, hiking, travelling, and jogging.
Last updated 3/29/2023