Paper: Fixing Imbalanced Attention to Mitigate In-Context Hallucination of Large Vision-Language Model
The goal of this project is to mitigate the in-context hallucination of LLaVA by tweaking its attention mask during token generation. The codebase is built on PAI (ECCV 2024).
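The sketch below illustrates the general idea only, not the repository's actual implementation: during decoding, a positive bias is added to the pre-softmax attention scores at image-token key positions so that visual tokens are not underweighted relative to text. The function name, the `image_token_range` argument, and the `bias` value are illustrative assumptions.

```python
import torch

def boost_image_attention(attn_scores: torch.Tensor,
                          image_token_range: tuple,
                          bias: float = 1.0) -> torch.Tensor:
    """Illustrative sketch: rebalance attention toward image tokens.

    attn_scores: (batch, heads, query_len, key_len) pre-softmax attention scores.
    image_token_range: (start, end) indices of the image tokens in the key sequence.
    bias: positive additive bias applied to image-token positions before softmax.
    """
    start, end = image_token_range
    scores = attn_scores.clone()
    # Upweight the columns corresponding to image tokens; softmax afterwards
    # renormalizes each query row, so text-token attention shrinks accordingly.
    scores[..., start:end] += bias
    return scores
```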
conda env create -f modPAI/environment.yml
conda activate modpai
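A quick sanity check after activating the environment (assuming it ships PyTorch, as LLaVA and PAI require; adjust if your setup differs):

```python
# Verify that PyTorch imported from the new environment and that a GPU is visible.
import torch
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```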