-
Sparse MoE with Random Routing as the New Dropout: Training Bigger and Self-Scalable Models
Tianlong Chen, Zhenyu Zhang, Ajay Jaiswal, Shiwei Liu, Zhangyang Wang
International Conference on Learning Representations (ICLR), 2023
We propose a plug-and-play sparse MoE training framework with random routing that enables scaling transformers to better accuracy at full capacity without collapse.
[Paper]
[Code]
-
Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang
International Conference on Learning Representations (ICLR), 2023
We assemble SMC-Bench, a benchmark of diverse and challenging tasks on which state-of-the-art sparse neural networks collapse, calling for more rigorous evaluation of sparsity methods.
[Paper]
[Code]
-
Attend Who is Weak: Pruning-assisted Medical Image Localization under Sophisticated and Implicit Imbalances
Ajay Jaiswal, Tianlong Chen, Justin F Rousseau, Yifan Peng, Ying Ding, Zhangyang Wang
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022
We propose using pruning to adaptively identify hard-to-learn (HTL) training samples, and improve pathology localization by explicitly attending to them.
[Paper]
[Code]
-
Old can be Gold: Better Gradient Flow can make Vanilla-GCNs Great Again
Ajay Jaiswal*, Peihao Wang*, Tianlong Chen, Justin F. Rousseau, Ying Ding, Atlas Wang
Advances in Neural Information Processing Systems (NeurIPS), 2022
We derive a topology-aware isometric initialization and a Dirichlet-energy-guided architectural rewiring technique that boost vanilla GCNs to be competitive with the state of the art.
[Paper]
[Code]
-
RoS-KD: A Robust Stochastic Knowledge Distillation Approach for Noisy Medical Imaging
Ajay Jaiswal, Kumar Ashutosh, Justin F Rousseau, Yifan Peng, Zhangyang Wang, Ying Ding
IEEE International Conference on Data Mining (ICDM), 2022
Our proposed framework RoS-KD learns a smooth, well-informed, and robust student manifold by distilling knowledge from multiple teachers trained on noisy medical imaging datasets.
[Paper]
[Code]
-
Single Frame Atmospheric Turbulence Mitigation: A Benchmark Study and A New Physics-Inspired Transformer Model
Zhiyuan Mao*, Ajay Jaiswal*, Atlas Wang, Stanley Chan
European Conference on Computer Vision (ECCV), 2022
We collect and present two new real-world turbulence datasets along with a physics-inspired transformer model for imaging through atmospheric turbulence.
[Paper]
[Code]
-
[Spotlight] Training Your Sparse Neural Network Better with Any Mask
Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, Atlas Wang
International Conference on Machine Learning (ICML), 2022
We present a sparse subnetwork training toolkit to improve the training of subnetworks identified by any pruning method (SNIP, GRaSP, LTH, SynFlow, Random).
[Paper]
[Code]
-
Supervised Contrastive Learning for Cardiopulmonary Disease Classification and Localization in Chest X-rays using Patient Metadata
Ajay Jaiswal, Tianhao Li, Cyprian Zander, Yan Han, Justin Rousseau, Yifan Peng, and Ying Ding
IEEE International Conference on Data Mining (ICDM), 2021
We present a novel augmentation method based on patient metadata and extend a self-supervised contrastive learning framework for cardiopulmonary disease classification.
[Paper]
[Code]
-
RadBERT-CL: Factually-Aware Contrastive Learning For Radiology Report Classification
Ajay Jaiswal, Liyan Tang, Meheli Ghosh, Justin Rousseau, Yifan Peng, and Ying Ding
Machine Learning for Health (ML4H), 2021
We present a contrastive learning pre-training framework for Bio-BERT on radiology reports to capture critical factual information.
[Paper]
[Code]
Born in a small town (Belthara) in eastern UP, India, I got my hands on a computer for the first time during my undergraduate freshman year.
My journey in computer science began when I was sixteen years old with the fantastic book "Let Us C" by Yashavant Kanetkar. Throughout my career, I have
always tried to make the best use of the resources and smart people around me to learn and grow. I discovered my research potential while working with Prof. Animesh
Mukherjee at IIT Kharagpur, and later spent some wonderful time at Samsung Research after graduating. Currently, with the grace of GOD, I am very privileged
to work with, and be supervised by, some extremely brilliant minds in VITA
and AI Health @ UT Austin.
Somewhere, something incredible is waiting to be known.