ICLR'23 Submissions Are In, and Graph Neural Networks Are Still Hot (With a Curated List of 42 Good Papers)


The MLNLP community is a well-known machine learning and natural language processing community, with an audience of NLP graduate students, university faculty, and industry researchers in China and abroad.
Its vision is to promote communication and progress among academia, industry, and enthusiasts in natural language processing and machine learning, especially for newcomers to the field.

Reposted from | 图神经网络与推荐系统

Author | 北冥有鱼

Here is a collection of graph neural network papers submitted to ICLR 2023, saved to read through at leisure; summaries of some of these papers will be added later.
  1. Graph Attention Retrospective Kimon Fountoulakis (Waterloo)

  2. Limitless Stability for Graph Convolutional Networks

  3. The Graph Learning Attention Mechanism: Learnable Sparsification Without Heuristics

  4. Network Controllability Perspectives on Graph Representation

  5. Graph Contrastive Learning Under Heterophily: Utilizing Graph Filters to Generate Graph Views

  6. Spectral Augmentation for Self-Supervised Learning on Graphs

  7. Simple and Deep Graph Attention Networks

  8. Agent-based Graph Neural Networks Karolis Martinkus (ETH), Pál András Papp (ETH), Benedikt Schesch (ETH), Roger Wattenhofer (ETH)

  9. A Class-Aware Representation Refinement Framework for Graph Classification Jiaxing Xu, Jinjie Ni, Sophi Shilpa Gururajapathy & Yiping Ke (NTU)

  10. ReD-GCN: Revisit the Depth of Graph Convolutional Network

  11. Revisiting Graph Adversarial Attack and Defense From a Data Distribution Perspective

  12. Simple Spectral Graph Convolution from an Optimization Perspective

  13. GraphEditor: An Efficient Graph Representation Learning and Unlearning Approach Weilin Cong, Mehrdad Mahdavi (PSU)

  14. Specformer: Spectral Graph Neural Networks Meet Transformers

  15. DiGress: Discrete Denoising diffusion for graph generation Clement Vignac, Igor Krawczuk, Antoine Siraudin, Bohan Wang, Volkan Cevher, Pascal Frossard (EPFL)

  16. ASGNN: Graph Neural Networks with Adaptive Structure

  17. DeepGRAND: Deep Graph Neural Diffusion

  18. Empowering Graph Representation Learning with Test-Time Graph Transformation

  19. The Impact of Neighborhood Distribution in Graph Convolutional Networks

  20. NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs

  21. Wide Graph Neural Network

  22. How Powerful is Implicit Denoising in Graph Neural Networks Songtao Liu (PSU), Rex Ying (Yale), Hanze Dong (HKUST), Lu Lin (PSU), Jinghui Chen (PSU), Dinghao Wu (PSU)

  23. Learnable Graph Convolutional Attention Networks

  24. Revisiting Robustness in Graph Machine Learning

  25. Graph Neural Bandits Parnian Kassraie (ETH), Andreas Krause (ETH), Ilija Bogunovic (UCL)

  26. Learning Graph Neural Network Topologies

  27. Affinity-Aware Graph Networks Ameya Velingker (Google Research), Ali Kemal Sinop (Google Research), Ira Ktena (DeepMind), Petar Velickovic (DeepMind), Sreenivas Gollapudi (Google Research)

  28. Diffusing Graph Attention

  29. Relational Curriculum Learning for Graph Neural Networks

  30. Stable, Efficient, and Flexible Monotone Operator Implicit Graph Neural Networks

  31. Distributional Signals for Node Classification in Graph Neural Networks

  32. Rewiring with Positional Encodings for GNNs Rickard Bruel-Gabrielsson (MIT), Mikhail Yurochkin (MIT-IBM), Justin Solomon (MIT)

  33. Learning MLPs on Graphs: A Unified View of Effectiveness, Robustness, and Efficiency

  34. Fair Graph Message Passing with Transparency Zhimeng Jiang (TAMU), Xiaotian Han (TAMU), Chao Fan (TAMU), Zirui Liu (Rice), Na Zou (TAMU), Ali Mostafavi (TAMU), Xia Hu (Rice)

  35. Sign and Basis Invariant Networks for Spectral Graph Representation Learning Derek Lim (MIT), Joshua Robinson (MIT), Lingxiao Zhao (CMU), Tess Smidt (MIT), Suvrit Sra (MIT), Haggai Maron (NVIDIA Research), Stefanie Jegelka (MIT)

  36. Graph Neural Networks Are More Powerful Than We Think Charilaos I. Kanatsoulis, Alejandro Ribeiro (UPenn)

  37. Robust Graph Representation Learning via Predictive Coding

  38. Universal Graph Neural Networks without Message Passing

  39. Fair Attribute Completion on Graph with Missing Attributes

  40. Asynchronous Message Passing: A New Framework for Learning in Graphs Lukas Faber, Roger Wattenhofer (ETH)

  41. Graph Neural Networks as Gradient Flows: understanding graph convolutions via energy Francesco Di Giovanni, James Rowbottom, Benjamin P. Chamberlain, Thomas Markovich, Michael M. Bronstein (Twitter)

  42. Rethinking the Expressive Power of GNNs via Graph Biconnectivity


Invitation to Technical Exchange Groups

Scan the QR code to add the assistant on WeChat.

Please include a note in the format: Name - School/Company - Research Direction
(e.g., Xiao Zhang - HIT - Dialogue Systems)
to apply to join the technical exchange groups on natural language processing, PyTorch, and more.

About Us

The MLNLP community is a grassroots academic community jointly founded by machine learning and natural language processing scholars in China and abroad. It has since grown into a well-known machine learning and natural language processing community, aiming to promote progress among academia, industry, and enthusiasts in machine learning and natural language processing.
The community offers an open platform for exchange on further study, employment, and research for practitioners in the field. Everyone is welcome to follow and join us.
