All Stories

  1. Breaking State-of-the-Art Poisoning Defenses to Federated Learning: An Optimization-Based Attack Framework
  2. Leveraging Local Structure for Improving Model Explanations: An Information Propagation Approach
  3. Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function
  4. Defense Against a Privacy Attack to Protect Training Data Information for Neural Networks
  5. On Detecting Growing-Up Behaviors of Malicious Accounts in Privacy-Centric Mobile Social Networks
  6. A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
  7. Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting
  8. Unveiling Fake Accounts at the Time of Registration: An Unsupervised Approach
  9. Privacy-Preserving Representation Learning on Graphs
  10. Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
  11. Backdoor Attacks to Graph Neural Networks
  12. Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
  13. Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing
  14. Attacking Graph-based Classification via Manipulating the Graph Structure