At the invitation of the host school, Dr. Hong Yang, a postdoctoral researcher at the University of Technology Sydney, will give an online academic lecture. All faculty and students are welcome to attend.
Time: 13:00, Wednesday, October 21, 2020
Venue: 工B308
Speaker bio: Dr. Hong Yang received a master's degree from the Chinese Academy of Sciences in 2007 and a PhD from the University of Technology Sydney, and is currently a postdoctoral researcher at the University of Technology Sydney. Dr. Yang has extensive industry experience, having worked for nine years at the world-renowned company MathWorks (the developer of MATLAB), most recently as a senior engineer. Dr. Yang's main research interests include graph data analysis, graph embedding, graph neural networks, and image processing. Over the past five years, Dr. Yang has published sixteen papers in leading international journals (e.g., Pattern Recognition, TSMC) and international conferences (e.g., IJCAI, ICDM, CIKM), and has also served as a reviewer for several international conferences, including KDD, ICLR, and NeurIPS.
Abstract:
Network and graph data are widely used to describe a large body of real-world applications, ranging from social networks, biological graphs, and citation networks to transaction data. To extract knowledge from network data, various machine learning models have been proposed, such as graph embedding models and graph neural networks. From the perspective of graph embedding, attributed network embedding represents a new branch of models that aims to learn from attributed networks, where both the attributes of nodes and the links between nodes are observable. Existing attributed network embedding models enable joint representation learning of node links and attributes. These models, however, are all designed in continuous Euclidean spaces, which often introduces data redundancy and thus imposes heavy storage and computation costs on large-scale networks. On the other hand, from the perspective of graph neural networks (GNNs), existing GNNs built on attributed networks rely on manually designed architectures, which often require heavy manual work and rich domain knowledge.
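To make the setting concrete, the short sketch below is a minimal illustration (not one of the models discussed in the talk) of jointly using node links and attributes: node attributes are propagated over a symmetrically normalized adjacency matrix, and the result is factorized into continuous Euclidean node embeddings. The function name attributed_embedding and the toy graph are assumptions made for this example; only NumPy is required.

# A minimal sketch of "joint representation learning of node links and
# attributes": propagate attributes over the normalized adjacency matrix,
# then take a low-rank factorization as continuous node embeddings.
import numpy as np

def attributed_embedding(adj, attrs, dim=2):
    """adj: (n, n) symmetric adjacency matrix; attrs: (n, f) node attributes."""
    # Symmetrically normalize the adjacency matrix with self-loops (GCN-style).
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Mix structure and attributes: each node aggregates its neighbors' features.
    mixed = a_norm @ attrs
    # Low-rank factorization (SVD) yields continuous Euclidean embeddings.
    u, s, _ = np.linalg.svd(mixed, full_matrices=False)
    return u[:, :dim] * s[:dim]

# Toy attributed network: a 4-node path graph with 3-dimensional attributes.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
attrs = np.random.rand(4, 3)
print(attributed_embedding(adj, attrs, dim=2))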
This talk will present a new problem of discrete and automated representation learning for network data. From the perspective of network embedding, a new class of attributed network embedding models that learn discrete node representations is proposed to reduce data redundancy and thereby lower storage and computation costs on attributed network data. From the perspective of graph neural networks, a graph neural architecture search algorithm (GraphNAS) is proposed that uses reinforcement learning to automatically design the best graph neural architecture. Theoretical and empirical studies on real-world attributed network datasets show that the proposed discrete embedding models outperform state-of-the-art attributed network embedding methods. Moreover, GraphNAS can design novel network architectures that rival the best human-invented architectures in terms of validation set accuracy.
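As a rough sketch of the two directions above (again an illustration, not the talk's actual algorithms): the first part below turns continuous embeddings into compact discrete codes by simple sign thresholding, and the second samples candidate GNN architecture descriptions from a small categorical search space of the kind a reinforcement-learning controller such as GraphNAS explores. The reward loop and controller updates are omitted, and the option names in the search space are assumptions.

import random
import numpy as np

# --- Part 1: discrete node representations ----------------------------------
def binarize(embeddings):
    """Map continuous embeddings to {-1, +1} codes (one small integer per dim)."""
    return np.where(embeddings >= 0, 1, -1).astype(np.int8)

continuous = np.random.randn(1000, 128)   # float64 embeddings
discrete = binarize(continuous)           # int8 codes: 8x smaller in this sketch
print(continuous.nbytes, "bytes ->", discrete.nbytes, "bytes")

# --- Part 2: sampling from a GNN architecture search space -------------------
SEARCH_SPACE = {
    "aggregator": ["mean", "max", "sum"],
    "activation": ["relu", "tanh", "elu"],
    "attention_heads": [1, 2, 4, 8],
    "hidden_dim": [16, 32, 64, 128],
}

def sample_architecture():
    """Draw one candidate GNN layer description from the search space."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

# A controller would train each sampled architecture, observe its validation
# accuracy as a reward, and update its sampling policy; here we only sample.
for _ in range(3):
    print(sample_architecture())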