MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models
Lite Vision Transformer with Enhanced Self-Attention
PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning