Paper Reading 56
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images
- ReConPatch: Contrastive Patch Representation Learning for Industrial Anomaly Detection
- RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection
- SimpleNet: A Simple Network for Image Anomaly Detection and Localization
- PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization
- Learning Foreground-Background Segmentation from Improved Layered GANs
- Hyperbolic Contrastive Learning for Visual Representations beyond Objects
- Enhancing Your Trained DETRs with Box Refinement
- Enhanced Training of Query-Based Object Detection via Selective Query Recollection
- Dense Distinct Query for End-to-End Object Detection
- Efficient Decoder-free Object Detection with Transformers
- FP-DETR: Detection Transformer Advanced by Fully Pre-training
- Fast Convergence of DETR with Spatially Modulated Co-Attention
- Accelerating DETR Convergence via Semantic-Aligned Matching
- DETReg: Unsupervised Pretraining With Region Priors for Object Detection
- UP-DETR: Unsupervised Pre-Training for Object Detection With Transformers
- Siamese DETR
- FS-DETR: Few-Shot DEtection TRansformer with Prompting and without Re-Training
- Lite DETR: An Interleaved Multi-Scale Encoder for Efficient DETR
- Anchor DETR: Query Design for Transformer-Based Object Detection
- Efficient DETR: Improving End-to-End Object Detector with Dense Prior
- DETR Does Not Need Multi-Scale or Locality Design
- Cascade-DETR: Delving into High-Quality Universal Object Detection
- Decoupled DETR: Spatially Disentangling Localization and Classification for Improved End-to-End Object Detection
- Sparse DETR: Efficient End-to-End Object Detection with Learnable Sparsity
- Spatial Self-Distillation for Object Detection with Inaccurate Bounding Boxes
- Rethinking Transformer-based Set Prediction for Object Detection
- Sparse R-CNN: End-to-End Object Detection with Learnable Proposals
- DETRs with Collaborative Hybrid Assignments Training
- LLMs Meet VLMs: Boost Open Vocabulary Object Detection with Fine-grained Descriptors
- Transferring Labels to Solve Annotation Mismatches Across Object Detection Datasets
- ProMix: Combating Label Noise via Maximizing Clean Sample Utility
- Effective Data Augmentation With Diffusion Models
- Multi-modal Queried Object Detection in the Wild
- TR-DETR: Task-Reciprocal Transformer for Joint Moment Retrieval and Highlight Detection
- Active Learning for Single-Stage Object Detection in UAV Images
- Foreground-Background Separation through Concept Distillation from Generative Image Foundation Models
- MS-DETR: Efficient DETR Training with Mixed Supervision
- Supervision Interpolation via LossMix: Generalizing Mixup for Object Detection and Beyond
- Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning
- DAC-DETR: Divide the Attention Layers and Conquer
- Data Augmentation for Object Detection via Controllable Diffusion Models
- SimPLR: A Simple and Plain Transformer for Object Detection and Segmentation
- BoxeR: Box-Attention for 2D and 3D Transformers
- Gen2Det: Generate to Detect
- Cal-DETR: Calibrated Detection Transformer
- Rank-DETR for High Quality Object Detection
- Align-DETR: Improving DETR with Simple IoU-aware BCE loss
- Less is More: Focus Attention for Efficient DETR
- AlignDet: Aligning Pre-training and Fine-tuning in Object Detection
- StageInteractor: Query-based Object Detector with Cross-stage Interaction
- Towards Efficient Use of Multi-Scale Features in Transformer-Based Object Detectors
- Towards Data-Efficient Detection Transformers
- DESTR: Object Detection with Split Transformer
- DETRDistill: A Universal Knowledge Distillation Framework for DETR-families
- AdaMixer: A Fast-Converging Query-Based Object Detector