Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration
We benchmark LLM abstention with calibration-, training-, prompting-, and consistency-based approaches. Informed by their weaknesses, we propose collaboration-based approaches, where multiple LLMs cooperate or compete to identify knowledge gaps in each other and produce abstain decisions.
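As a toy illustration of the consistency-based baseline this paper builds on (not the paper's actual method), one can abstain whenever multiple models, or multiple samples from one model, disagree too much on the same question; the answers below are stubbed strings and the 0.75 threshold is an arbitrary illustrative choice:

```python
from collections import Counter

def abstain_or_answer(answers, min_agreement=0.75):
    """Return the majority answer if enough responders agree,
    otherwise abstain. `answers` holds outputs from several LLMs
    (or repeated samples) on the same question."""
    top, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= min_agreement:
        return top
    return "ABSTAIN"

# Four stubbed model "votes" on the same factual question
print(abstain_or_answer(["Paris", "Paris", "Paris", "Paris"]))  # Paris
print(abstain_or_answer(["Paris", "Lyon", "Paris", "Nice"]))    # ABSTAIN
```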
What Does the Bot Say? Opportunities and Risks of Large Language Models in Social Media Bot Detection
arXiv 2024 · paper
We explore the opportunities and risks of LLMs in social media bot detection. We find that instruction-tuned LLMs could become state-of-the-art bot detectors with as few as 1,000 labeled examples, while LLM-designed bots could significantly harm the performance and calibration of existing bot detectors.
DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection
arXiv 2024 · paper
We propose DELL to integrate LLMs as part of the pipeline in graph-based misinformation detection through 1) generating diverse news comments, 2) generating explanations for proxy tasks, and 3) merging specialized experts and predictions.
KGQUIZ: Evaluating the Generalization of Encoded Knowledge in Large Language Models
We propose KGQuiz, a knowledge-intensive benchmark to evaluate the generalizability of LLM knowledge abilities across knowledge domains and progressively complex task formats.
Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models
We propose Knowledge Card, a community-driven initiative to empower black-box LLMs with modular and collaborative knowledge. By incorporating the outputs of independently trained, small, and specialized LMs, we make LLMs better knowledge models by empowering them with temporal knowledge update, multi-domain knowledge synthesis, and continued improvement through collective efforts.
What Constitutes a Faithful Summary? Preserving Author Perspectives in News Summarization
We make the case for preserving author perspectives in news summarization: while existing approaches alter the political stances of news articles, our proposed P3Sum preserves author stances by employing diffusion models and controllable text generation.
Resolving Knowledge Conflicts in Large Language Models
We propose a protocol for resolving knowledge conflicts in LLMs: rather than solely relying on either parametric or non-parametric knowledge, LLMs should identify conflict existence, localize conflicting information segments, and provide both-sided answers.
Knowledge Crosswords: Geometric Reasoning over Structured Knowledge with Large Language Models
We propose Knowledge Crosswords, a benchmark focusing on evaluating LLMs' abilities for geometric knowledge reasoning.
FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge
We propose a simple, easy-to-use, shenanigan-free summarization factuality evaluation model by augmenting language models with factual knowledge from knowledge bases.
BotPercent: Estimating Bot Populations in Twitter Communities
We make the case for community-level bot detection, proposing the system BotPercent to estimate bot populations from groups to crowds. Armed with BotPercent, we investigate the overall bot percentage among active users, bot presence in the Trump reinstatement vote, and more, yielding numerous interesting findings with implications for social media moderation.
Detecting Spoilers in Movie Reviews with External Movie Knowledge and User Networks
We curate a new spoiler detection dataset LCS and a movie knowledge graph UKM. We then propose to leverage external knowledge and user social networks for spoiler detection.
Can Language Models Solve Graph Problems in Natural Language?
Are language models graph reasoners? We propose the NLGraph benchmark, a test bed for graph-based reasoning designed for language models in natural language. We find that LLMs are preliminary graph thinkers, while the most advanced graph reasoning tasks remain open research questions.
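To give a flavor of benchmarks in this style, a minimal sketch (a hypothetical question format, not NLGraph's actual templates) phrases a graph connectivity problem in natural language and computes the ground-truth answer with BFS:

```python
from collections import deque

def graph_to_prompt(num_nodes, edges, src, dst):
    """Phrase a connectivity question about an undirected graph
    in natural language (illustrative format only)."""
    edge_text = ", ".join(f"node {u} is connected to node {v}" for u, v in edges)
    return (
        f"In an undirected graph with {num_nodes} nodes, {edge_text}. "
        f"Is there a path between node {src} and node {dst}?"
    )

def gold_answer(num_nodes, edges, src, dst):
    """Compute the ground-truth yes/no answer via breadth-first search."""
    adj = {i: [] for i in range(num_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return "yes"
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return "no"

edges = [(0, 1), (1, 2), (3, 4)]
prompt = graph_to_prompt(5, edges, 0, 2)
print(prompt)
print(gold_answer(5, edges, 0, 2))  # yes: 0-1-2 is a path
print(gold_answer(5, edges, 0, 3))  # no: node 3 is in another component
```

The LLM's free-text answer to `prompt` would then be compared against `gold_answer` to score graph-reasoning accuracy.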
From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
We propose to study the political bias propagation pipeline from pretraining data to language models to downstream tasks. We find that language models do have political biases, such biases are in part picked up from pretraining corpora, and they could result in fairness issues in LM-based solutions to downstream tasks.
KALM: Knowledge-Aware Integration of Local, Document, and Global Contexts for Long Document Understanding
We propose KALM, a Knowledge-Aware Language Model that jointly incorporates external knowledge at three levels of document context: local, document-level, and global.
BIC: Twitter Bot Detection with Text-Graph Interaction and Semantic Consistency
We propose to leverage text-graph interaction for Twitter bot detection while modeling semantic consistency in user timelines.
BotMoE: Twitter Bot Detection with Community-Aware Mixtures of Modal-Specific Experts
We propose community-aware mixture-of-experts to address two challenges in detecting advanced Twitter bots: manipulated features and diverse communities.
KRACL: Contrastive Learning with Graph Context Modeling for Sparse Knowledge Graph Completion
We propose to leverage contrastive learning in knowledge graph completion to alleviate the sparsity challenge in existing KGs.
PAR: Political Actor Representation Learning with Social Context and Expert Knowledge
We propose to learn representations of political actors with social context and expert knowledge, while applying the learned representations to tasks in computational political science.
TwiBot-22: Towards Graph-Based Twitter Bot Detection
Shangbin Feng=, Zhaoxuan Tan=, Herun Wan=, Ningnan Wang=, Zilong Chen=, Binchi Zhang=, Qinghua Zheng, Wenqian Zhang, Zhenyu Lei, Shujie Yang, Xinshun Feng, Qingyue Zhang, Hongrui Wang, Yuhan Liu, Yuyang Bai, Heng Wang, Zijian Cai, Yanbo Wang, Lijing Zheng, Zihan Ma, Jundong Li, Minnan Luo
We make the case for graph-based Twitter bot detection and propose a graph-based benchmark TwiBot-22, which addresses the issues of limited dataset scale, incomplete graph structure, and low annotation quality in previous datasets.
GraTo: Graph Neural Network Framework Tackling Over-smoothing with Neural Architecture Search
We present GraTo, an NAS-GNN framework to tackle over-smoothing in graph mining.
KCD: Knowledge Walks and Textual Cues Enhanced Political Perspective Detection in News Media
We introduce the mechanism of knowledge walks to enable multi-hop reasoning on knowledge graphs and leverage textual cues in graphs for political perspective detection.
Heterogeneity-aware Twitter Bot Detection with Relational Graph Transformers
We introduce relational graph transformers to model relation and influence heterogeneities on Twitter for heterogeneity-aware Twitter bot detection.
KGAP: Knowledge Graph Augmented Political Perspective Detection in News Media
We construct a political knowledge graph and propose a graph-based approach for knowledge-aware political perspective detection.
BotRGCN: Twitter Bot Detection with Relational Graph Convolutional Networks
We propose a graph-based approach for Twitter bot detection with relational graph convolutional networks and four aspects of user information.
TwiBot-20: A Comprehensive Twitter Bot Detection Benchmark
We propose the first comprehensive Twitter bot detection benchmark that covers diversified users and supports graph-based approaches.
SATAR: A Self-supervised Approach to Twitter Account Representation Learning and its Application in Bot Detection
We propose to pre-train Twitter user representations with follower count and fine-tune on Twitter bot detection.