Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of Language Models
Kang He, Yinghan Long, Kaushik Roy
Electrical and Computer Engineering, Purdue University
{he603, long273, kaushik}@purdue.edu
Abstract
Prompt-based learning is susceptible to intrinsic bias present in pre-trained language models (LMs), leading to sub-optimal performance in prompt-based zero/few-shot settings. In this work, we propose a null-input prompting method to calibrate intrinsic bias encoded in pre-trained LMs. Different from prior efforts that address intrinsic bias primarily for social fairness and often involve excessive computational cost, our objective is to explore enhancing LMs' performance in downstream zero/few-shot learning while emphasizing the efficiency of intrinsic bias calibration. Specifically, we leverage a diverse set of auto-selected null-meaning inputs generated from GPT-4 to probe the intrinsic bias of pre-trained LMs. Utilizing the bias-reflected probability distribution, we formulate a distribution disparity loss for bias calibration, where we exclusively update the bias parameters of LMs, a small fraction of the total parameters, towards an equal probability distribution. Experimental results show that the calibration promotes an equitable starting point for LMs while preserving language modeling abilities. Across a wide range of datasets, including sentiment analysis and topic classification, our method significantly improves the zero/few-shot learning performance of LMs for both in-context learning and prompt-based fine-tuning. Our code is available at https://github.com/kang-ml/prompt_based_bias_calibration.
1 Introduction
The advent of GPT models Radford et al. (2019); Brown et al. (2020) has catalyzed the transformative prompt-based learning paradigm. The innovative approach of "pre-train, prompt, and predict" Schick and Schütze (2021a); Liu et al. (2023) facilitates fast adaptation of pre-trained language models (LMs) in learning various tasks and empowers LMs' strong zero/few-shot learning abilities Schick and Schütze (2021b); Gao et al. (2021).
Due to the susceptibility to bias ingrained in pre-trained LMs, prompt-based learning tends to make biased predictions toward some specific answers, thereby impacting performance in prompt-based zero/few-shot settings Zhao et al. (2021); Han et al. (2023). To mitigate this issue and improve LM performance, Zhao et al. (2021) and Holtzman et al. (2022) propose to reweigh LM output probabilities, and Han et al. (2023) explores calibrating decision boundaries. While these approaches have demonstrated substantial improvements, they are primarily designed for in-context learning with frozen pre-trained LMs, leading to two main limitations: (1) They may not be effective in task-specific fine-tuning scenarios Jian et al. (2022). Note, however, that prompt-based fine-tuning has shown performance improvements over in-context learning Gao et al. (2021); Logan IV et al. (2022) and is particularly important for relatively small-sized LMs. (2) The intrinsic bias encoded in pre-trained LMs persists, since these methods focus on output calibration and do not modify the LMs.
To address these limitations, we investigate the potential for enhancing the performance of LMs as zero/few-shot learners in classification tasks by calibrating the intrinsic bias of pre-trained LMs. This exploration extends to various prompt-based learning scenarios: in-context learning and prompt-based fine-tuning. Prior approaches to mitigating intrinsic bias primarily focus on achieving social fairness and often require laborious corpora augmentation and costly re-training Huang et al. (2020); Kaneko and Bollegala (2021); Solaiman and Dennison (2021); Li et al. (2023a). To improve efficiency in both data generation and model updates, we propose leveraging auto-generated null-meaning inputs to prompt pre-trained LMs for intrinsic bias probing, and subsequently updating only the bias parameters B_LM of LMs for bias calibration. Null-meaning inputs are essentially normal text devoid of meaningful content or sentiment. Unlike numerical-zero inputs, they maintain the contextual framework of prompts, ensuring the proper functioning of contextual LMs. Our motivation stems from the expectation that bias-calibrated models should produce uniform probabilities across all categories if the input in a prompt delivers null information Zhao et al. (2021). B_LM functions as offsets in neural networks, and strategically updating only B_LM could potentially counteract the intrinsic bias of pre-trained models at much higher efficiency, since only a small fraction of the entire LM's parameters is updated. The approach promotes an equitable starting point, and we expect the light model updates to preserve pre-trained models' language modeling abilities while maintaining the focus on bias calibration, ultimately making LMs better zero/few-shot learners.
The pipeline of our calibration method is illustrated in Figure 1. We use Masked LMs (RoBERTa; Liu et al., 2019) for zero/few-shot learning since they generally produce competitive performance in classification tasks and their moderate size facilitates combining prompting with fine-tuning Gao et al. (2021); Liu et al. (2023). First, we utilize the GPT-4 API to automatically generate diverse null-meaning inputs, including symbols, words, phrases, and sentences. This generation process is downstream task-agnostic. By concatenating each null-meaning input with an answer format ans aligned with the downstream task, we construct null-input prompts (similar to Zhao et al., 2021), e.g., "An empty sentence. It is about <mask>.". For better cohesive integration of the "null" information into the prompts, we additionally devise a filtering strategy to select the null-meaning inputs with which the answer format ans exhibits relatively strong Next Sentence Prediction (NSP) correlation Devlin et al. (2019). Next, we update B_LM with null-input prompts to calibrate intrinsic bias. Given the absence of task-relevant information in these prompts, the anticipated outcome of the parameter updating process is a convergence towards equal output probabilities for each label word. We formulate a customized Kullback–Leibler (KL) divergence loss for gradient descent on B_LM to minimize the distribution disparity. Finally, bias-calibrated LMs are applied to downstream prompt-based zero/few-shot learning following Gao et al. (2021).
The main contributions of our work are:
•
We introduce a null-input prompting method for calibrating intrinsic bias of pre-trained Masked LMs, aiming for better prompt-based zero/few-shot classification performance.
•
Our method integrates two key aspects for efficient bias calibration: auto-construction of null-input prompts and updating only the bias parameters of LMs. The calibration promotes a fair starting point for LMs while preserving language modeling abilities.
•
Extensive experiments on eight classification datasets with four prompt-based learning approaches show that our method significantly improves LMs' zero/few-shot performance and outperforms output-calibration methods.
2 Related Work
Impact of intrinsic bias on downstream LM performance. Intrinsic bias in pre-trained LMs stems from imbalances present in extensive pre-training corpora. A higher frequency of specific terms in those corpora can lead to common token bias Zhao et al. (2021). Additionally, frequent co-occurrence of certain terms with specific sentiment in pre-training can introduce association bias Cao et al. (2022). Because of these intrinsic biases, prompt-based predictions by pre-trained LMs are prone to favoring some specific answers, resulting in sub-optimal performance in downstream tasks Zhao et al. (2021); Han et al. (2023).
Mitigating strategies. Research has focused on counteracting the bias solely at the output prediction stage, without modifying pre-trained LMs. For example, Zhao et al. (2021) introduces contextual calibration and Holtzman et al. (2022) presents Domain Conditional Pointwise Mutual Information to reweigh answer scores. Min et al. (2022) explores computing the probability of the input conditioned on the label. Han et al. (2023) proposes to calibrate decision boundaries. However, these studies mainly demonstrate effectiveness for in-context learning with frozen pre-trained LMs, without addressing the intrinsic bias encoded in the LMs. Other research on mitigating intrinsic bias primarily targets removing social bias Dinan et al. (2020); Huang et al. (2020); Cheng et al. (2021); Zhou et al. (2023), often employing costly data augmentation and re-training which, as a by-product, degrades language modeling abilities Meade et al. (2022).
Efficiently calibrating intrinsic bias in pre-trained LMs for enhancing downstream zero/few-shot learning performance is an open research problem. We introduce a parameter-efficient intrinsic-bias calibration method leveraging automatically constructed null-input prompts, which significantly improves zero/few-shot learning of LMs.
Parameter-efficient fine-tuning (PEFT) for downstream tasks. It has been demonstrated that fine-tuning a very small portion of model parameters can achieve performance on par with fine-tuning the entire set of parameters. One line of work integrates small, trainable adapter modules between model layers Bapna and Firat (2019); Houlsby et al. (2019), coupled with further optimization using low-rank adaptations (LoRA) Hu et al. (2021). Other research focuses on prompt tuning Lester et al. (2021); Li and Liang (2021); Gu et al. (2022); Guo et al. (2022), which only tunes continuous prompt embeddings to efficiently adapt pre-trained LMs to downstream tasks.
Our method provides a unique perspective of enhancing LM performance on downstream tasks through efficient intrinsic-bias calibration. We update only the bias parameters of pre-trained LMs with null-input prompts in calibration. Contrary to adapters and LoRA, which need sufficient labeled data to learn new matrices, we do not introduce new matrices to pre-trained LMs, preserving LMs' few-shot learning capabilities. Moreover, our approach does not necessarily require target-domain data (whether labeled or unlabeled), enabling fully unsupervised deployment, which is particularly advantageous for the zero-shot setting.
3 Null-Input Prompting for Intrinsic Bias Calibration
3.1 Task Formulation
Let $\mathcal{M}$ be a pre-trained Masked LM. A verbalizer $\mathcal{V}$ maps each label $y \in \mathcal{Y}$ to a vocabulary token. A prompt function $\mathcal{P}(\cdot)$ modifies the original input $x$ into a cloze-style prompt containing one <mask> token to be predicted. The output representation $\mathbf{h}_{\texttt{<mask>}}$ of the <mask> token is acquired from the last encoder layer after forwarding the prompt to the LM. Following Gao et al. (2021), the probability prediction of each class is formulated as:

$$p(y \mid x) = p\big(\texttt{<mask>} = \mathcal{V}(y) \mid \mathcal{P}(x)\big) = \frac{\exp\big(\mathbf{W}_{\mathcal{V}(y)} \cdot \mathbf{h}_{\texttt{<mask>}}\big)}{\sum_{y' \in \mathcal{Y}} \exp\big(\mathbf{W}_{\mathcal{V}(y')} \cdot \mathbf{h}_{\texttt{<mask>}}\big)} \qquad (1)$$

where $\mathbf{W}$ is the pre-trained masked language modeling head weight matrix and $\mathbf{W}_{\mathcal{V}(y)}$ selects the row corresponding to the label word $\mathcal{V}(y)$ based on its index in the LM token list.
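To make the formulation concrete, the sketch below computes Equation (1) with HuggingFace Transformers. The SST-5-style verbalizer and example prompt are illustrative stand-ins, not the exact experimental setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large").eval()

# Illustrative SST-5-style verbalizer; RoBERTa encodes in-sentence words
# with a leading space, hence the " " prefix.
verbalizer = ["terrible", "bad", "okay", "good", "great"]
label_ids = [tokenizer(" " + w, add_special_tokens=False).input_ids[0]
             for w in verbalizer]

prompt = f"A deeply moving film. The movie was {tokenizer.mask_token}."
inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]   # vocabulary logits at <mask>

# Equation (1): softmax restricted to the label-word logits.
probs = torch.softmax(logits[label_ids], dim=-1)
print(dict(zip(verbalizer, probs.tolist())))
```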
One can probe the intrinsic bias encoded in a pre-trained LM by replacing $x$ with a null-meaning input $x_{null}$ Zhao et al. (2021). $\mathcal{X}_{null}$ represents the set of $x_{null}$; we elaborate on their generation and selection in §4. As shown by the blue bars in the upper part of Figure 1, while null-meaning inputs essentially provide no task-relevant prior information, the mean output probabilities associated with different labels may exhibit significant differences attributable to the model's intrinsic bias. Ideally, for a bias-calibrated LM, the expectation of the output distribution conditioned on null-meaning inputs should be uniform across all label words, i.e.,

$$\mathbb{E}_{x_{null} \in \mathcal{X}_{null}}\big[p(y \mid x_{null})\big] = \frac{1}{|\mathcal{Y}|}, \quad \forall y \in \mathcal{Y} \qquad (2)$$
We aim to calibrate intrinsic bias by updating the LM to minimize this distribution disparity, which we quantify using the differentiable KL divergence:

$$D_{KL}\big(P_u \,\|\, \bar{P}\big) = \sum_{y \in \mathcal{Y}} P_u(y) \log \frac{P_u(y)}{\bar{P}(y)} \qquad (3)$$

where $P_u$ denotes the uniform probability distribution and $\bar{P}$ represents the simplified form of $\mathbb{E}_{x_{null} \in \mathcal{X}_{null}}[p(y \mid x_{null})]$.
3.2 Update Only Bias Parameters
While intrinsic bias may be encoded across various parts of pre-trained LMs, one question arises: is it essential to update the entire model, or is there a more efficient alternative that can achieve comparable effectiveness in intrinsic bias calibration? We propose to update only the bias parameters B_LM, with the following rationale: (i) B_LM constitutes a tiny fraction of total LM parameters, offering significant memory and computation cost savings compared to updating the entire LM. (ii) Weight parameters W_LM (which, in our context, also include the embedding parameters) may carry crucial pre-existing knowledge for language modeling, which risks impairment with a full model update Meade et al. (2022). B_LM, often overlooked in LM research, serves as offsets in neural network layers; strategic updates may counteract intrinsic bias while potentially preserving language modeling abilities. (iii) Empirical research on efficient fine-tuning has demonstrated the important role of bias parameters in LMs Ben Zaken et al. (2022); Logan IV et al. (2022).
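As a concrete illustration, the sketch below freezes everything except parameters whose names end in `bias` in a HuggingFace RoBERTa model (an assumption about parameter naming that holds for this architecture, where it also covers LayerNorm biases); the learning rate is a placeholder, not the tuned value.

```python
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("roberta-large")

# Freeze W_LM (weights and embeddings); leave only B_LM trainable.
bias_params = []
for name, param in model.named_parameters():
    if name.endswith("bias"):
        bias_params.append(param)
    else:
        param.requires_grad = False

n_bias = sum(p.numel() for p in bias_params)
n_total = sum(p.numel() for p in model.parameters())
print(f"trainable bias parameters: {n_bias / n_total:.3%} of the model")

# Placeholder learning rate; the tuned settings are reported in Appendix A.
optimizer = torch.optim.Adam(bias_params, lr=1e-4)
```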
We update B_LM using gradient descent to minimize the dissimilarity between the output probability distributions of the LM conditioned on null-meaning inputs and the uniform probability distribution $P_u$. We formulate a customized KL divergence loss $\mathcal{L}$ that includes both the divergence computed against each individual null input's output distribution and against the batch-averaged distribution:

$$\mathcal{L} = \frac{1}{N_b} \sum_{i=1}^{N_b} D_{KL}\big(P_u \,\|\, P_i\big) + D_{KL}\Big(P_u \,\Big\|\, \frac{1}{N_b} \sum_{i=1}^{N_b} P_i\Big) \qquad (4)$$

where $P_i$ denotes the output distribution for the $i$-th null input and $N_b$ is the batch size of null-meaning inputs. Incorporating the second term in the loss function promotes calibration stability and aligns with the objective of Equation 2.
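A minimal sketch of this loss follows, assuming `probs` holds the per-example label-word distributions for one batch of null-input prompts (one row per null input, computed as in Equation (1)).

```python
import torch

def calibration_loss(probs: torch.Tensor) -> torch.Tensor:
    """probs: [N_b, |Y|] label-word distributions for one null-input batch."""
    n_b, num_labels = probs.shape
    uniform = torch.full((num_labels,), 1.0 / num_labels)

    def kl_from_uniform(p):
        # D_KL(P_u || P) = sum_y P_u(y) * log(P_u(y) / P(y))
        return (uniform * (uniform / p).log()).sum()

    per_example = torch.stack([kl_from_uniform(p) for p in probs]).mean()
    batch_averaged = kl_from_uniform(probs.mean(dim=0))  # second term of Eq. (4)
    return per_example + batch_averaged
```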
3.3 Early Stopping of Calibration
We aim to obtain an LM with improved zero/few-shot performance at the calibration stopping point. An overly calibrated model may simply produce uniform probability predictions regardless of input information. To avoid this, we develop specialized early stopping strategies depending on whether the downstream task is zero-shot or few-shot.
For zero-shot downstream tasks. Determining the calibration stopping point for optimal zero-shot learning performance is challenging due to the absence of labeled data for validation during calibration. To discern the patterns of a good stopping point, we first conduct empirical experiments by validating LM zero-shot performance on the entire test dataset after each calibration batch of null-meaning inputs, across different calibration learning rates (Figure 7 in Appendix A). As shown in Figure 2, with the optimal calibration learning rate, model performance exhibits significant improvements over the first one or few calibration batches with low variance, and then starts to degrade and becomes unstable. The low performance and instability at the calibration tail confirm our assumption about the detrimental effects of excessive calibration on the LM's modeling abilities. Notably, calibration with only one batch of null inputs (indicated by the red vertical line in Figure 2) delivers consistent and significant improvement over the original LM (although not necessarily the best improvement). Therefore, for enhancing LM zero-shot performance, we directly adopt One-batch Calibration as the early stopping criterion.
For few-shot downstream tasks. With the acquisition of a few labeled downstream examples, the previous challenge of lacking validation data for determining the calibration stopping point is alleviated. We utilize the small amount of labeled data as a validation set to define a stopping criterion for calibration. Additionally, we take into account the above-mentioned empirical finding that, for some tasks, stopping after one batch of calibration yields optimal LM performance; relying on the limited validation set alone might fail to identify such stopping points. To this end, we store both the one-batch-calibrated model and the validation-selected model for downstream few-shot learning tasks. Since the former is obtained in the process of deriving the latter, this incurs no additional computation overhead, and the memory overhead is minimal, as it only requires storing an additional set of updated bias parameters.
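Putting the pieces together, the sketch below shows one way to implement the calibration loop with both stopping strategies. This is our reading of the procedure rather than the authors' Algorithm 1; `label_word_probs` and `fewshot_accuracy` are hypothetical helpers, and the patience value is a placeholder.

```python
def snapshot_bias(model):
    # Store only the updated bias parameters (minimal memory overhead).
    return {n: p.detach().clone() for n, p in model.named_parameters()
            if n.endswith("bias")}

def calibrate(model, null_batches, optimizer, val_set=None, patience=2):
    one_batch_ckpt, val_best_ckpt = None, None
    best_acc, stale = -1.0, 0
    for step, batch in enumerate(null_batches):
        probs = label_word_probs(model, batch)   # Eq. (1) on null-input prompts
        loss = calibration_loss(probs)           # Eq. (4)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if step == 0:
            one_batch_ckpt = snapshot_bias(model)
            if val_set is None:                  # zero-shot: one-batch stopping
                break
        acc = fewshot_accuracy(model, val_set)   # few-shot: validation stopping
        if acc > best_acc:
            best_acc, stale = acc, 0
            val_best_ckpt = snapshot_bias(model)
        else:
            stale += 1
            if stale >= patience:
                break
    return one_batch_ckpt, val_best_ckpt
```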
We summarize our method for intrinsic bias calibration in Algorithm 1 (Appendix A).
4 Auto-Construct Null-Input Prompt
4.1 Generate Null-Meaning Input
We employ null-meaning inputs to probe the intrinsic bias of pre-trained LMs, and then use those bias-reflected outputs to calibrate the LMs. Crafting a diverse set of null-meaning inputs for an averaged output helps prevent overfitting to sub-optimal instances, thereby contributing to the effectiveness of calibration. To enable cost-effective acquisition of varied null-meaning data, we utilize the GPT-4 API for automatic generation with instructions such as "Please generate null meaning symbols, words, phrases, and sentences, in total <Number>.". This process is task-agnostic, generating data that contains null information with respect to any downstream task. Note that null information is not equivalent to neutral sentiment, as it carries no inherent meaning or contextual sentiment implications. We further validate this through t-SNE van der Maaten and Hinton (2008) visualization in Appendix A Figure 6.
Table 1: Examples of generated null-meaning inputs and their NSP probabilities for a given answer format (§4.2).
This is an example sentence. (NSP 0.9996)
A message without purpose. (NSP 0.9979)
Words without message. (NSP 0.9809)
123abc (NSP 0.0267)
@#$%^&*()-_=+[]{} (NSP 0.0145)
//////////////////// (NSP 0.0008)
4.2 Select and Build Null-Input Prompt
We construct a null-input prompt by concatenating each generated null-meaning input with an answer format ans. For consistency, the answer format (e.g., "It is <mask>.") is the same as the one intended for use in the downstream task. Some examples are shown in the upper part of Figure 1.
Table 2: Main results with RoBERTa-large. Zero/few-shot performance of NoCal, OutCal, and IntrCal (ours) on AGNews, DBPedia, TREC, Subj, SST-5, Laptop, Restaurant, and Twitter under four settings: in-context learning and prompt-based fine-tuning, each without and with demonstrations. Per-dataset values are omitted here; the IntrCal averages for the four settings are 54.0, 64.6, 79.3, and 80.0, respectively.
To pursue better cohesive integration of the "null" information into the prompts, we prioritize the null-meaning inputs with which the answer format exhibits higher Next Sentence Prediction (NSP) probability Devlin et al. (2019). Specifically, after we generate a large set of null-meaning inputs and the answer format ans is selected, we employ the BERT-large model Devlin et al. (2019) to predict NSP and sort null-meaning inputs by their probabilities. Table 1 shows some generated inputs with which a specific answer format presents high/low NSP scores. After the sorting, we retain the top-ranked instances (800 in total), which maintains the diversity among the selected samples. We observe that null inputs with low NSP scores are typically randomly combined alphabet letters and symbols. These samples may have minimal occurrences in pre-training corpora, and the low NSP scores can be attributed to RoBERTa's lack of comprehension of their meanings in context. Their representations extracted by the LM might have high variance, which could impact the stability and effectiveness of calibration. We show that calibration with the selection strategy further improves LM performance in §5.2 Table 3.
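A minimal sketch of this filtering step follows, assuming BERT-large's NSP head via HuggingFace (where logit index 0 corresponds to "sentence B follows sentence A"); the strings and answer format are illustrative.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tok = BertTokenizer.from_pretrained("bert-large-uncased")
nsp = BertForNextSentencePrediction.from_pretrained("bert-large-uncased").eval()

answer_format = "It is about [MASK]."          # illustrative answer format

def nsp_score(null_input: str) -> float:
    enc = tok(null_input, answer_format, return_tensors="pt")
    with torch.no_grad():
        logits = nsp(**enc).logits             # shape [1, 2]
    return torch.softmax(logits, dim=-1)[0, 0].item()  # P("is next")

null_inputs = ["This is an example sentence.", "123abc", "////////////////////"]
selected = sorted(null_inputs, key=nsp_score, reverse=True)[:800]  # keep top 800
```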
5 Experiments
We conduct extensive experiments on 8 English datasets covering sentiment analysis and topic classification. (We mainly focus on single-sentence tasks, which aligns with the use of single-sentence null inputs for calibration; this alignment may enhance calibration effectiveness. We also experiment on sentence-pair tasks in Appendix B.3 Table 18 and demonstrate better performance after calibration.) The datasets comprise 5 sentence-level datasets potentially impacted by common token bias: AGNews Zhang et al. (2015), DBPedia Lehmann et al. (2015), TREC Voorhees and Tice (2000), Subj Pang and Lee (2004), and SST-5 Socher et al. (2013), and 3 aspect-level sentiment analysis datasets likely subject to association bias: Restaurant and Laptop reviews from SemEval 2014 Task 4 Pontiki et al. (2014), and Twitter Dong et al. (2014). For the aspect-level datasets, the task is to predict the sentiment associated with the marked aspect in each sentence. More details are in Appendix A Table 7.
5.1 Evaluation Protocol
We evaluate the effectiveness of our intrinsic-bias calibration method in enhancing Masked LMs' zero/few-shot learning performance with 4 prompt-based learning methods: in-context learning and prompt-based fine-tuning, both with and without demonstrations. We follow the prompt-based fine-tuning and demonstration method of Gao et al. (2021). Besides Masked LMs, we also validate the effectiveness of our method on two decoder LMs, GPT-2 XL (1.5B) Radford et al. (2019) and Llama-2 (7B) Touvron et al. (2023), in Appendix B.2.
We conduct calibration with 5 different seeds, and for the few-shot setting, we randomly sample 5 different groups of training and validation sets (with an equal number of samples per class). We report the mean and standard deviation of LM performance. For the 5 sentence-level classification tasks, we use accuracy as the metric. For the 3 aspect-level classification tasks, because of the imbalance in the test sets, we use weighted F1 for a balanced evaluation. Details of calibration and prompt-based learning are in Appendix A.
We present our main results using RoBERTa-large. Results using RoBERTa-base, other few-shot sample sizes, and different prompt templates are in Appendix B.3 (Table 14, Table 15, and Figure 8).
5.2 Main Results
In Table 2, we compare our results of IntrCal (intrinsic bias calibration) with reproduced results of:
(1) NoCal: No calibration. Use LM-BFF Gao et al. (2021) to compute Equation (1) for predictions.
(2) OutCal: Output calibration. OutCal computes $p(y \mid x) / p(y \mid x_{null})$ instead of $p(y \mid x)$ to counteract surface form competition and bias Zhao et al. (2021); Holtzman et al. (2022). Note that OutCal was originally demonstrated for in-context learning with GPT models; here, we apply the method to Masked LMs for fair comparison.
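For reference, a minimal sketch of the reweighting OutCal performs at the output, following contextual calibration (the content-free input such as "N/A" is from Zhao et al. (2021)):

```python
import torch

def output_calibrate(p_y_given_x: torch.Tensor,
                     p_y_given_null: torch.Tensor) -> torch.Tensor:
    """Divide label probabilities by their probabilities under a content-free
    input (e.g., "N/A"), then renormalize; the LM itself is left unchanged."""
    scores = p_y_given_x / p_y_given_null
    return scores / scores.sum()
```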
In addition to NoCal and OutCal, we compare our results with those reproduced from NoisyTune Wu et al. (2022), NSP-BERT Sun et al. (2022), and Perplection Lu et al. (2023), as detailed in Appendix B.1 (Tables 8 and 9). The superior performance further validates the effectiveness of our method.
In-context learning results. OutCal significantly improves LM zero/few-shot performance compared to NoCal. Our method (IntrCal) further outperforms OutCal by a large margin in both average and best-case improvement, for zero-shot as well as few-shot learning. This demonstrates the advantages of intrinsic bias calibration over attempting to counteract bias solely at the output. Moreover, OutCal exhibits higher variance in performance due to its sensitivity to the human-crafted domain-relevant strings used as content-free inputs: certain instances may not accurately capture the bias of LMs, resulting in under-calibration or over-calibration and leading to the high variance. In our approach, we use a large set of auto-generated and selected null-meaning inputs as the training set for bias calibration. This mitigates the impact of sub-optimal samples and enhances calibration robustness, contributing to more stable and reliable performance.
Prompt-based fine-tuning results. This method fine-tunes all LM parameters on limited labeled data by minimizing the cross-entropy loss based on Equation 1. It greatly raises LM performance compared to in-context learning and sets up a strong baseline (i.e., NoCal). OutCal fails to surpass NoCal. We speculate that OutCal's limitation lies in its exclusive focus on offsetting bias at the output and its lack of interaction with the interior of the LM. This appears to impede OutCal from adapting effectively to the intricate dynamics of the LM after prompt-based fine-tuning, leading to some counterproductive calibrations. In contrast, IntrCal (ours), with its aim of intrinsic bias calibration, achieves superior performance with consistent absolute gains over NoCal.
Table 3: Effect of the NSP-based selection strategy. Performance with unselected (UnSel.) vs. selected (Sel.) null-meaning inputs on AGNews, DBPedia, TREC, Subj, and SST-5, for in-context learning and prompt-based fine-tuning without demonstrations. Per-cell values are omitted here.
The output representations of the <mask> token for label word predictions are visualized by t-SNE in Figure 3. On the left, samples from the two categories are almost mixed together, indicating that the original LM tends to be biased toward one class prediction. In contrast, the right visualization demonstrates improved separability after One-batch Calibration (§3.3), which explains the significant performance enhancement achieved by our intrinsic-bias calibration method.
5.3 Update Entire LM vs. Only Bias Parameters in Calibration
In Table 4, we evaluate the impact on downstream task performance of updating the entire LM (W_LM + B_LM) during calibration, as compared to updating only the bias parameters (B_LM). The optimal learning rate for updating the entire LM is smaller (Appendix A Table 6). For in-context learning, the LM with only B_LM updates in calibration achieves better overall performance than the LM with entire-parameter updates, most likely attributable to better-preserved language modeling abilities (Appendix B.3 Table 16). For prompt-based fine-tuning, the two differently calibrated LMs demonstrate comparable performance, as the impact of entire-parameter calibration on the modeling ability is mitigated through task-specific fine-tuning. Considering the significant savings in memory and computation, we recommend updating only B_LM in calibration.
Table 4: Downstream performance after calibration updating the entire LM (W_LM + B_LM) vs. only the bias parameters (B_LM), for in-context learning and prompt-based fine-tuning without demonstrations, across the eight datasets. Per-cell values are omitted here.
5.4 Analysis
How does intrinsic bias calibration impact downstream tasks? Our method calibrates the intrinsic bias associated with a set of task-specific label words. In this section, we explore the impact of updating the LM for task-specific bias calibration on the performance of other downstream tasks. Specifically, we take the LM calibrated for one task and evaluate its performance on the other tasks, as shown in Figure 4. In general, intrinsic bias calibration for one task has a minimal adverse effect on other tasks' performance (no more than 2% degradation) because of the light model updates, while remarkably enhancing LM performance on that specific task. Notably, there is a consistent performance increase at the bottom right, as these tasks are all sentiment classification tasks sharing or including the same label words. (For the aspect-level datasets, the larger improvement lies on the diagonals, i.e., task-specific calibration, indicating that our method mitigates the impact of association bias; see Appendix A.)
How does intrinsic bias calibration impact language modeling abilities? We employ pseudo-perplexity Salazar et al. (2020) to evaluate language modeling for Masked LMs. Following each task-specific intrinsic bias calibration, we measure pseudo-perplexity and compare the results with the original RoBERTa on WikiText-2, WikiText-103 Merity et al. (2017), and the LAMBADA dataset Paperno et al. (2016). As shown in Table 5, language modeling abilities are largely preserved after calibration due to the minimal updates to the model.
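A minimal sketch of the pseudo-perplexity computation: mask one position at a time and score the held-out token under the masked LM (batching and long-sequence handling are omitted for brevity).

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-large")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-large").eval()

def pseudo_perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    nll, n = 0.0, ids.size(1) - 2              # skip <s> and </s>
    for i in range(1, ids.size(1) - 1):
        masked = ids.clone()
        masked[0, i] = tok.mask_token_id       # hold out position i
        with torch.no_grad():
            logits = mlm(masked).logits[0, i]
        nll -= torch.log_softmax(logits, dim=-1)[ids[0, i]].item()
    return math.exp(nll / n)
```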
Table 5: Pseudo-perplexity on WikiText-2 (WT-2), WikiText-103 (WT-103), and LAMBADA for the original RoBERTa and after calibration for each task (AGNews, DBPedia, TREC, Subj, SST-5, Laptop, Restaurant, Twitter). Values are omitted here.
6 Conclusion
In this work, we propose a null-input prompting method to calibrate the intrinsic bias of pre-trained Masked LMs, aiming to enhance zero/few-shot learning performance in classification tasks. Our method incorporates two key features for efficiency: (1) auto-construction of null-input prompts for bias probing, leveraging a diverse set of selected null-meaning inputs easily crafted from a generative large LM; (2) updating only bias parameters for bias calibration. Experimental results show that bias-calibrated LMs demonstrate significant performance improvements for both in-context learning and prompt-based fine-tuning. Moreover, our method outperforms output-calibration approaches, highlighting the advantage of intrinsic bias calibration. We believe this work presents a new perspective on making LMs better zero/few-shot learners via intrinsic bias calibration. Additionally, the demonstrated significance of bias parameters could provide insights for future bias-related research.
Limitations
While our method has achieved substantial improvement in prompt-based zero/few-shot learning, it comes with limitations that could open avenues for future research.
First, calibration is fully unsupervised in the scenario where no labeled data is available (zero-shot downstream tasks in §3.3). Based on empirical experimental results, we adopt the conservative One-batch Calibration strategy to ensure a safe and consistent performance enhancement. In the future, we aim to explore more rigorous approaches to determining optimal stopping points in this scenario.
Second, we utilize RoBERTa (encoder) models for classification tasks, as encoder models may encode task-specific patterns for discriminative tasks more effectively than some generative LMs Gao et al. (2021); Li et al. (2023b), as shown in Table 12. However, the relatively small size of these Masked LMs (355M parameters for RoBERTa-large) could be the ultimate limitation on their capabilities. Given the proliferation of large-scale generative (decoder) LMs and their accomplishments in tackling more challenging tasks Thoppilan et al. (2022); Chowdhery et al. (2023); Touvron et al. (2023), we anticipate extending our method to large decoder models and validating the applicability of our findings. Furthermore, we expect to expand the scope of tasks to include regression problems (e.g., sentiment score prediction), leveraging KL divergence to measure disparities in continuous probability distributions, aiming to address bias-related challenges across diverse scenarios.
Ethics Statement and Broader Impact
Our work conforms to the Code of Ethics. We appropriately cite the relevant methods, models, and datasets that we use. We affirm that all datasets in our experiments are public, and no private or sensitive information is incorporated in our research. Our use of datasets and pre-trained models is consistent with their intended use. As for broader impacts, our method, extending beyond calibrating common token bias and association bias, might inspire prospective research on mitigating social bias and improving the fairness of pre-trained LMs.
Acknowledgments
This work was supported in part by the Center for Co-Design of Cognitive Systems (CoCoSys), a Semiconductor Research Corporation (SRC) and DARPA-sponsored JUMP 2.0 center.
References
Bapna and Firat (2019) Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538–1548, Hong Kong, China. Association for Computational Linguistics.
Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.
Cao et al. (2022) Jiahao Cao, Rui Liu, Huailiang Peng, Lei Jiang, and Xu Bai. 2022. Aspect is not you need: No-aspect differential sentiment framework for aspect-based sentiment analysis. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1599–1609.
Cheng et al. (2021) Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, and Lawrence Carin. 2021. FairFil: Contrastive neural debiasing method for pretrained text encoders. arXiv preprint arXiv:2103.06413.
Chowdhery et al. (2023) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113.
Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Dinan et al. (2020) Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, and Jason Weston. 2020. Queens are powerful too: Mitigating gender bias in dialogue generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8173–8188, Online. Association for Computational Linguistics.
Dong et al. (2014) Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent Twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 49–54, Baltimore, Maryland. Association for Computational Linguistics.
Gao et al. (2021) Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online. Association for Computational Linguistics.
Gu et al. (2022) Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang. 2022. PPT: Pre-trained prompt tuning for few-shot learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8410–8423, Dublin, Ireland. Association for Computational Linguistics.
Guo et al. (2022) Xu Guo, Boyang Li, and Han Yu. 2022. Improving the sample efficiency of prompt tuning with domain adaptation. arXiv preprint arXiv:2210.02952.
Holtzman et al. (2022) Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2022. Surface form competition: Why the highest probability answer isn't always right. arXiv preprint arXiv:2104.08315.
Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, pages 2790–2799. PMLR.
Hu et al. (2021) Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Huang et al. (2020) Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. 2020. Reducing sentiment bias in language models via counterfactual evaluation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 65–83, Online. Association for Computational Linguistics.
Jian et al. (2022) Yiren Jian, Chongyang Gao, and Soroush Vosoughi. 2022. Contrastive learning for prompt-based few-shot language learners. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5577–5587, Seattle, United States. Association for Computational Linguistics.
Kaneko and Bollegala (2021) Masahiro Kaneko and Danushka Bollegala. 2021. Debiasing pre-trained contextualised embeddings. arXiv preprint arXiv:2101.09523.
Lehmann et al. (2015) Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. DBpedia: A large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167–195.
Lester et al. (2021) Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Li and Liang (2021) Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–4597, Online. Association for Computational Linguistics.
Li et al. (2023a) Yingji Li, Mengnan Du, Rui Song, Xin Wang, and Ying Wang. 2023a. A survey on fairness in large language models. arXiv preprint arXiv:2308.10149.
Liu et al. (2023) Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35.
Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Logan IV et al. (2022) Robert Logan IV, Ivana Balazevic, Eric Wallace, Fabio Petroni, Sameer Singh, and Sebastian Riedel. 2022. Cutting down on prompts and parameters: Simple few-shot learning with language models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2824–2835, Dublin, Ireland. Association for Computational Linguistics.
Loureiro et al. (2022) Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-Collados. 2022. TimeLMs: Diachronic language models from Twitter. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 251–260, Dublin, Ireland. Association for Computational Linguistics.
Lu et al. (2023) Jinghui Lu, Dongsheng Zhu, Weidong Han, Rui Zhao, Brian Mac Namee, and Fei Tan. 2023. What makes pre-trained language models better zero-shot learners? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2288–2303, Toronto, Canada. Association for Computational Linguistics.
Merity et al. (2017) Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In International Conference on Learning Representations.
Min et al. (2022) Sewon Min, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Noisy channel language model prompting for few-shot text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5316–5330, Dublin, Ireland. Association for Computational Linguistics.
Pang and Lee (2004) Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. arXiv preprint cs/0409058.
Paperno et al. (2016) Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1525–1534, Berlin, Germany. Association for Computational Linguistics.
Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32.
Pontiki et al. (2014) Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35, Dublin, Ireland. Association for Computational Linguistics.
Salazar et al. (2020) Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699–2712, Online. Association for Computational Linguistics.
Schick and Schütze (2021b) Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339–2352, Online. Association for Computational Linguistics.
Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642.
Solaiman and Dennison (2021) Irene Solaiman and Christy Dennison. 2021. Process for adapting language models to society (PALMS) with values-targeted datasets. Advances in Neural Information Processing Systems, 34:5861–5873.
Thoppilan et al. (2022) Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239.
Touvron et al. (2023) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
van der Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579–2605.
Voorhees and Tice (2000) Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 200–207.
Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
Williams et al. (2018) Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
Wu et al. (2022) Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2022. NoisyTune: A little noise can help you finetune pretrained language models better. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 680–685, Dublin, Ireland. Association for Computational Linguistics.
Zhang et al. (2015) Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. Advances in Neural Information Processing Systems, 28.
Zhao et al. (2021) Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pages 12697–12706. PMLR.
Zhou et al. (2023) Fan Zhou, Yuzhou Mao, Liu Yu, Yi Yang, and Ting Zhong. 2023. Causal-Debias: Unifying debiasing in pretrained language models and fine-tuning via causal invariant learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4227–4241.
Appendix A Experimental Details
Prompts with or without demonstrations. Table 7 shows the prompt templates and label words of each dataset used in our main experiments.
For downstream tasks in the few-shot setting, task-specific example-label pairs (i.e., demonstrations) can be incorporated in the context to enhance the LM's comprehension. In the zero-shot setting, no labeled data is available and thus no demonstrations are used.
For calibration, demonstrations are either absent from or added to null-input prompts, consistent with their exclusion from or inclusion in prompts for downstream tasks. An example of a null-input prompt without demonstration is:
<s> An empty sentence. It is <mask>. </s>
<s> and </s> respectively denote the <cls> and <sep> tokens in RoBERTa. In the other case, we incorporate demonstrations retrieved from the small training set into the null-input prompt, such as:
<s> An empty sentence. It is <mask>. </s>
Compellingly watchable. It is great. </s>
The film is strictly routine. It is terrible. </s>
Association-bias calibration for aspect-level tasks. For aspect-level sentiment analysis, e.g., "Wonderful food but poor service. Service was <mask>.", the answer contains the aspect word "service". Because the model makes sentiment predictions for specific aspect words, the task is likely subject to association bias (§2). For association-bias calibration, the only difference is that we incorporate various aspect words in the answer format (e.g., "<aspect words> was <mask>.") when constructing null-input prompts, as sketched below. One can either leverage GPT-4 to generate in-domain aspect words (e.g., for restaurant reviews, the generated aspect words could be menu, food, etc.) or simply employ the aspect words in the original training dataset; in this work, we choose the latter option. Due to the variability of <aspect words> in the answer format, sorting null-meaning inputs by NSP score can yield different results. To this end, we do not apply the selection strategy (§4.2) for aspect-level tasks, and instead keep all the generated null-meaning inputs.
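As a small illustration of this construction (the strings and aspect words below are examples only, not the full sets used in the experiments):

```python
# Build aspect-aware null-input prompts by crossing null-meaning inputs
# with aspect words drawn from the training data.
null_inputs = ["An empty sentence.", "A message without purpose."]
aspect_words = ["service", "food", "battery life"]   # from the training set

null_prompts = [f"{x} {a.capitalize()} was <mask>."
                for x in null_inputs for a in aspect_words]
# e.g., "An empty sentence. Service was <mask>."
```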
Null-meaning input generation with GPT-4. The version of GPT-4 used in our experiments is gpt-4-0613. We observe that GPT-4 can generate repetitive null-meaning inputs. To avoid overrepresentation of certain null inputs, which might reduce diversity and introduce bias into the null-input set, we adopt an iterative approach: in each iteration, GPT-4 generates 500 null-meaning inputs and duplicates are removed. This process continues until we obtain 1000 distinct null-meaning inputs, which took 3 iterations in our experiments.
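A hedged sketch of this loop with the OpenAI Python client follows; the instruction wording and model version follow the paper, while parsing one null input per response line is our assumption.

```python
from openai import OpenAI

client = OpenAI()
instruction = ("Please generate null meaning symbols, words, phrases, "
               "and sentences, in total 500.")

null_inputs = set()
while len(null_inputs) < 1000:
    resp = client.chat.completions.create(
        model="gpt-4-0613",
        messages=[{"role": "user", "content": instruction}],
    )
    for line in resp.choices[0].message.content.splitlines():
        line = line.strip()
        if line:
            null_inputs.add(line)        # the set removes duplicates
null_inputs = list(null_inputs)[:1000]
```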
Null-meaning inputs for One-batch Calibration. For zero-shot downstream tasks, since only one batch of null-meaning inputs is required for calibration under our early-stopping criterion (§3.3), we select the one batch of null-meaning inputs with the highest NSP probabilities from the sorted set. We prioritize these samples because our observations show that null-meaning inputs with higher NSP probability exhibit higher attention scores between the null input and <mask>, as demonstrated in Figure 5. This indicates more effective conveyance of the "null" information to the placeholder <mask>, which could facilitate the LM's deciphering of the "null" patterns of the prompts and benefit calibration.
Hyper-parameters. In the calibration stage, we shuffle the null-input prompts and conduct gradient descent on B_LM (or W_LM + B_LM as a comparative experiment) with 5 different seeds to account for calibration variance. There are two main hyper-parameters for calibration: (1) the batch size and (2) the calibration learning rate. We conduct a grid search over both and report the best settings in Table 6.
Calibrated LMs are applied to downstream tasks with prompt-based learning methods. We use the same hyper-parameters as Gao et al. (2021) for prompt-based learning. We evaluate on each task's original test set, except for AGNews and DBPedia, where we randomly sample 800 test examples.
We use PyTorch Paszke et al. (2019) and the public HuggingFace Transformers library Wolf et al. (2020). RoBERTa-related experiments are conducted on a single NVIDIA V100 GPU, while GPT-2 and Llama-2 experiments are conducted on one A100 GPU in Google Colab.
Table 6: Best hyper-parameter settings (batch size and learning rate) for calibration updating W_LM + B_LM vs. only B_LM, and for downstream prompt-based fine-tuning without and with demonstrations. Values are omitted here.
Table 7: Datasets, task types, prompt templates, and label words used in the main experiments.
AGNews (news topic classification). Template: {Sentence} It is about <mask>. Label words: World / Sports / Business / Technology
DBPedia† (ontology classification). Template: {Sentence} It is about <mask>. Label words: Company / Artist / Building / Nature
TREC (question classification). Template: {Sentence} It is about <mask>. Label words: Number / Location / Person / Description / Entity / Expression
Subj (subjectivity classification). Template: {Sentence} This is <mask>. Label words: objective / subjective
SST-5 (movie sentiment analysis). Template: {Sentence} The movie was <mask>. Label words: terrible / bad / okay / good / great
Laptop (aspect-level sentiment analysis). Template: {Sentence} {Aspect words} was <mask>. Label words: terrible / okay / great
Restaurant (aspect-level sentiment analysis). Template: {Sentence} {Aspect words} was <mask>. Label words: terrible / okay / great
Twitter (aspect-level sentiment analysis). Template: {Sentence} {Aspect words} was <mask>. Label words: terrible / okay / great
Appendix B Additional Results
B.1 Performance Comparison with NSP-BERT, Perplection and NoisyTune
We additionally choose NSP-BERT Sun et al. (2022) and Perplection Lu et al. (2023) as in-context learning comparison baselines, and NoisyTune Wu et al. (2022) as a prompt-based fine-tuning comparison baseline. NSP-BERT constructs potential answers using each label word and predicts the Next Sentence Prediction (NSP) probability between the input and each answer. Perplection proposes a perplexity-based selection method for prompt-based zero-shot learning. NoisyTune demonstrates that adding noise to pre-trained LMs benefits fine-tuning on downstream tasks. We re-implement their methods with the same settings as ours for fair comparisons. As shown in Table 8 and Table 9, our method achieves superior results on almost all datasets.
Furthermore, our method consistently outperforms NoisyTune, demonstrating that the gains in prompt-based fine-tuning with our method are not solely a result of perturbing LM parameters. This confirms the efficacy of intrinsic bias calibration in enhancing LM performance.
Table 8: Zero-shot in-context learning performance of NSP-BERT, Perplection, and IntrCal (ours). The IntrCal column: AGNews 54.5, DBPedia 61.8, TREC 32.4, Subj 62.7, SST-5 37.5, Laptop 59.6, Restaurant 72.8, Twitter 51.7; Average 54.0. Baseline values are omitted here.
Table 9: Prompt-based fine-tuning performance, without and with demonstrations, of NoisyTune vs. IntrCal (ours) across the eight datasets. The IntrCal averages are 79.3 (no demonstrations) and 80.0 (with demonstrations); per-dataset values are omitted here.
B.2 Effectiveness on Decoder LMs
We validate the effectiveness of intrinsic bias calibration in enhancing prompt-based learning performance on GPT-2 XL (1.5B) and Llama-2 (7B). The same hyper-parameters from Appendix A and prompt templates from Table 7 are used for bias calibration. For GPT-2, we update only the bias parameters during calibration, whereas for Llama-2 we update the entire model, since it does not have bias parameters. We conduct zero-shot and two-shot in-context learning experiments across the eight classification datasets, comparing the original (Orig.) LM and the calibrated (Calib.) LM. The performance comparisons are shown in Table 10 (GPT-2) and Table 11 (Llama-2). Calibrated LMs demonstrate significant performance improvements compared to the original pre-trained LMs.
Table 10: GPT-2 XL (1.5B) zero-shot and two-shot in-context learning performance, original vs. calibrated LM, across the eight datasets. The calibrated-LM averages are 45.7 (zero-shot) and 58.6 (two-shot); per-dataset values are omitted here.
Table 11: Llama-2 (7B) zero-shot and two-shot in-context learning performance, original vs. calibrated LM, across the eight datasets. The calibrated-LM averages are 50.6 (zero-shot) and 62.0 (two-shot); per-dataset values are omitted here.
In Table 12, we compare the performance of RoBERTa-large (355M) with GPT-2 XL (1.5B) and Llama-2 (7B) on zero-shot classification tasks, using their original pre-trained models. RoBERTa outperforms the other models on more datasets and achieves better computing efficiency due to its smaller model size. Encoder LMs could be more effective and efficient for classification tasks for several reasons: (i) The bidirectional architecture of encoder LMs enables them to capture task-specific patterns more effectively by attending to both left and right context, compared to the unidirectional nature of decoder LMs. (ii) Classification tasks prioritize accurate label prediction over the generation of diverse and human-like text. Besides, the label spaces in classification are significantly more constrained than the whole vocabulary used in generative applications, which may restrict the effectiveness of decoder LMs Li et al. (2023b). (iii) The relatively small size of encoder models facilitates efficiently combining prompting with label-supervised fine-tuning for classification tasks Liu et al. (2023), which further enhances performance, as demonstrated in Table 2.
B.3 Other Experiments
We briefly summarize the contents of each table and figure presenting other additional results.
Figure 8 contains results using different prompt templates (listed in Table 13).
Table 14 contains results using the RoBERTa-base model.
Table 15 contains few-shot learning performance results.
Table 16 contains pseudo-perplexity comparisons between updating the entire LM and updating only the bias parameters in calibration.
Table 17 contains performance comparisons between updating the entire LM and updating only the bias parameters in calibration.
Table 18 contains performance results on sentence-pair datasets.
Table 19 contains the variance of the probability distribution across labels before and after calibration.
Table 12: Zero-shot classification performance of the original pre-trained RoBERTa-large (355M), GPT-2 XL (1.5B), and Llama-2 (7B) across the eight datasets. Per-cell values are omitted here.
Table 13: Prompt templates used in the template-variation experiments (Figure 8).
AGNews: {Sentence} It is about <mask>. / {Sentence} This is about <mask>. / {Sentence} This is on <mask>. / {Sentence} It pertains to <mask>. / {Sentence} In relation to <mask>.
TREC: {Sentence} It is about <mask>. / {Sentence} Concerning <mask>. / {Sentence} This is about <mask>. / {Sentence} In relation to <mask>. / {Sentence} This is on <mask>.