Despite the recent success of deep neural networks, it remains challenging to effectively model the long-tail class distribution in visual recognition tasks. To address this problem, we first investigate the performance bottleneck of the two-stage learning framework via an ablative study. Motivated by our discovery, we propose a unified distribution alignment strategy for long-tail visual recognition. Specifically, we develop an adaptive calibration function that enables us to adjust the classification scores for each data point. We then introduce a generalized re-weighting method in the two-stage learning to balance the class prior, which provides a flexible and unified solution to diverse scenarios in visual recognition tasks. We validate our method by extensive experiments on four tasks: image classification, semantic segmentation, object detection, and instance segmentation. Our approach achieves state-of-the-art results across all four recognition tasks with a simple and unified framework.
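The two ingredients named above, calibrating classification scores and re-weighting by the class prior, can be illustrated with a minimal sketch. This is not the paper's actual formulation; the function names, the log-prior shift, and the `alpha`/`gamma` parameters are illustrative assumptions about how such prior-based adjustments are commonly written:

```python
import numpy as np

def calibrated_scores(logits, class_priors, alpha=1.0):
    """Hypothetical score calibration: shift each class's logit by a
    scaled log-prior, so predictions for rare classes are boosted
    relative to head classes at inference time."""
    return logits - alpha * np.log(class_priors)

def generalized_class_weights(class_counts, gamma=1.0):
    """Hypothetical generalized re-weighting: weight each class
    inversely to its empirical frequency raised to gamma, then
    normalize so the weights average to 1 (gamma=0 recovers the
    unweighted baseline)."""
    freqs = class_counts / class_counts.sum()
    weights = freqs ** (-gamma)
    return weights / weights.mean()
```

A loss built with such weights would penalize errors on tail classes more heavily, while the score shift compensates for the skewed prior at test time.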