WINTER SEMINAR

DATE | SUBJECT | PRESENTER | MATERIALS
12.26 | EMR Auto-Generation Project | Jung, Jimin & Kim, MyoungJin | link
Towards Measuring the Representation of Subjective Global Opinions in Language Models | Zi, Hayoon | link
Are Large Language Models Consistent over Value-laden Questions?
01.02 | Improving Adaptive-RAG and the Query Complexity Classifier: Effective Information Retrieval and Improved Efficiency | Jung, Jimin | link
Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models | Lee, Jungseob | link
Understanding and Mitigating Language Confusion in LLMs
AgentBench: Evaluating LLMs as Agents | Moon, Hyeonseok | link
MIND2WEB: Towards a Generalist Agent for the Web
01.09 | Interpreting and Improving Large Language Models in Arithmetic Calculation | Kim, Dongjun | link
Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis
The Super Weight in Large Language Models | Kim, Minhyuk | link
Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis
Learning to Edit: Aligning LLMs with Knowledge Editing | Seo, Jaehyung | link
Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning
01.16 | Improving Factuality and Reasoning in Language Models through Multiagent Debate | Eo, Sugyeong | link
Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the Key?
ExpeL: LLM Agents Are Experiential Learners | Yoon, JeongHo | link
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
A Bounding Box is Worth One Token: Interleaving Layout and Text in a Large Language Model for Document Understanding | Gyuho John Andrew Shim | link
DocLLM: A Layout-Aware Generative Language Model for Multimodal Document Understanding
01.23 | EFUF: Efficient Fine-Grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models | Jung, Dahyun | link
TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
RAFT: Adapting Language Model to Domain Specific RAG | Park, Chanhee | link
RAG-Studio: Towards In-Domain Adaptation of Retrieval Augmented Generation Through Self-Alignment
Matryoshka Representation Learning | Son, Junyoung | link
Contextual Document Embeddings
02.06 | Analysis of Multi-Source Language Training in Cross-Lingual Transfer | Lee, Seungyoon | link
OFA: A Framework of Initializing Unseen Subword Embeddings for Efficient Large-scale Multilingual Continued Pretraining
Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation | Koo, Seonmin
Strong hallucinations from negation and how to fix them
SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking | Jang, Youngjoon
Enhancing Lexicon-Based Text Embeddings with Large Language Models

SUMMER SEMINAR

DATE | SUBJECT | PRESENTER | MATERIALS
07.04 | Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation | Jung, Dahyun | link
Machine Unlearning of Pre-trained Large Language Models
Fine-Tuning Language Models for Factuality | Kang, Myunghoon | link
Assessing Factual Reliability of Large Language Model Knowledge
Language models can explain neurons in language models | Chun, Yong Chan | link
Sparse autoencoders find highly interpretable features in language models
07.11 | QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models | Lim, Jungwoo | link
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models
INSIDE: LLMs’ Internal States Retain the Power of Hallucination Detection | Seo, Jaehyung | link
On Large Language Models’ Hallucination with Regard to Known Facts
ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems | Park, Chanhee | link
LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise Comparisons using Large Language Models
07.18 | Can Large Language Models be Good Emotional Supporter? Mitigating Preference Bias on Emotional Support Conversation | Son, Suhyune | link
FEEL: A Framework for Evaluating Emotional Support Capability with Large Language Models
LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models | Kim, Minhyuk | link
Divergent Token Metrics: Measuring degradation to prune away LLM components – and optimize quantization
Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity | Jang, Youngjoon | link
ARAGOG: Advanced RAG Output Grading
07.25 | Longformer: The Long-Document Transformer | Kim, Jeongwook | link
Generating Long Sequences with Sparse Transformers
When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards | Eo, Sugyeong | link
RouteLLM: Learning to Route LLMs with Preference Data
Toward Informal Language Processing: Knowledge of Slang in Large Language Models | Shim, Gyuho | link
Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study
08.01 | Knowledge Graph Enhanced Large Language Model Editing | Lee, Jaewook | link
MEMoE: Enhancing Model Editing with Mixture of Experts Adaptors
Neuron-Level Knowledge Attribution in Large Language Models | Kim, Dongjun | link
Towards Uncovering How Large Language Model Works: An Explainability Perspective
Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning | Moon, Hyeonseok | link
QuRating: Selecting High-Quality Data for Training Language Models
08.08 | RARR: Researching and Revising What Language Models Say, Using Language Models | Kim, Jinsung | link
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
Instruction Pre-Training: Language Models are Supervised Multitask Learners | Lee, Seungyoon | link
FUN with Fisher: Improving Generalization of Adapter-Based Cross-lingual Transfer with Scheduled Unfreezing
Retrieval meets Long Context Large Language Models | Son, Junyoung | link
Understanding Finetuning for Factual Knowledge Extraction
08.22 | RAFT: Adapting Language Model to Domain Specific RAG | Jang, Yoonna | link
Injecting New Knowledge Into Large Language Models via Supervised Fine-tuning
Deceptive Semantic Shortcuts on Reasoning Chains: How Far Can Models Go without Hallucination? | Koo, Seonmin | link
DFA-RAG: Conversational Semantic Router for Large Language Model with Definite Finite Automaton
Unveiling Linguistic Regions in Large Language Models | Kim, Dongjun | link
Anthropocentric Bias and the Possibility of Artificial Cognition
08.29 | Not all Layers of LLMs are Necessary during Inference | Hong, Seongtae | link
Tokenization Falling Short: The Curse of Tokenization
Challenging the Validity of Personality Tests for Large Language Models | Moon, Hyeonseok | link
Who is ChatGPT? Benchmarking LLMs’ Psychological Portrayal Using PsychoBench
Self-Alignment with Instruction Backtranslation | Lee, Jungseob | link
Self-Rewarding Language Models

WINTER SEMINAR

DATE | SUBJECT | PRESENTER | MATERIALS
01.04 | ALCUNA: Large Language Models Meet New Knowledge | Lee, Jungseob | link
Large Language Models Can Self-Improve
Evaluating Large Language Models At Evaluating Instruction Following | Moon, Hyeonseok | link
Human Feedback is not Gold Standard
Language Representation Projection: Can We Transfer Factual Knowledge across Languages in Multilingual Language Models? | Hong, Seongtae | link
SoulChat: Improving LLMs’ Empathy, Listening, and Comfort Abilities through Fine-tuning with Multi-turn Empathy Conversations
01.11 | Inference-Time Intervention: Eliciting Truthful Answers from a Language Model | Jung, Dahyun | link
Critic-Driven Decoding for Mitigating Hallucinations in Data-to-text Generation
Hallucination Mitigation in Natural Language Generation from Large-Scale Open-Domain Knowledge Graphs | Seo, Jaehyung | link
The Troubling Emergence of Hallucination in Large Language Models – An Extensive Definition, Quantification, and Prescriptive Remediations
Unveiling the Pitfalls of Knowledge Editing for Large Language Models | Son, Junyoung | link
RA-DIT: Retrieval-Augmented Dual Instruction Tuning
01.19 | Emergent and Predictable Memorization in Large Language Models | Lim, Jungwoo | link
ProPILE: Probing Privacy Leakage in Large Language Models
CESAR: Automatic Induction of Compositional Instructions for Multi-turn Dialogs | Koo, Seonmin | link
SELF-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations
02.01 | The case for 4-bit precision: k-bit Inference Scaling Laws | Lee, Jaewook | link
LLM-FP4: 4-Bit Floating-Point Quantized Transformers
Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators | Kang, Myunghoon | link
FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation
Direct Preference Optimization: Your Language Model is Secretly a Reward Model | Kim, Jeongwook | link
Mixtral of Experts
02.22 | Prompting is not a substitute for probability measurements in large language models | Kim, Jinsung | link
Evaluating Large Language Models on Controlled Generation Tasks
Knowledge-enhanced mixed-initiative dialogue system for emotional support conversations | Son, Suhyune | link
Enhancing Empathetic and Emotion Support Dialogue Generation with Prophetic Commonsense Inference
02.29 | Bridging the Digital Divide: Performance Variation across Socio-Economic Factors in Vision-Language Models | Lee, Seungyoon | link
Merging Generated and Retrieved Knowledge for Open-Domain QA
MoLE: Mixture of LoRA Experts | Eo, Sugyeong | link
Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models
DYNOSAUR: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation | Jang, Yoonna | link
Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration

SUMMER SEMINAR

DATE | SUBJECT | PRESENTER | MATERIALS
08.03 | Think-on-Graph: Deep and Responsible Reasoning of Large Language Model with Knowledge Graph | Son, Suhyune | link
Chain of Knowledge: A Framework for Grounding Large Language Models with Structured Knowledge Bases
Rethinking with Retrieval: Faithful Large Language Model Inference
How Language Model Hallucinations Can Snowball | Eo, Sugyeong | link
From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts
Generate rather than Retrieve: Large Language Models are Strong Context Generators | Lee, Seungyoon | link
Guess The Instruction! Flipped Learning Makes Language Models Strong Zero-Shot Learners
Leveraging Large Language Models For Multiple Choice Question Answering
08.10 | SELF-INSTRUCT: Aligning Language Models with Self-Generated Instructions | Lee, Jeongwoo | link
WizardLM: Empowering Large Language Models to Follow Complex Instructions
Large Language Models Can Self-Improve
ZeRO: Memory Optimizations Toward Training Trillion Parameter Models | Kim, Jeongwook | link
ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning | Moon, Hyeonseok | link
Parameter-Efficient Fine-Tuning Design Spaces
Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models
08.18 | Linearly Mapping from Image to Text Space | Lee, Jungseob | link
MAGMA – Multimodal Augmentation of Generative Models through Adapter-based Finetuning
MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting
Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models
Visual Instruction Tuning
LLaMA2: Open and Efficient Foundation Language Models | Lee, Seungjun | link
FLAN
G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment
Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering | Lee, Jaewook | link
Enhanced Story Comprehension for Large Language Models through Dynamic Document-Based Knowledge Graphs
ChatDB: Augmenting LLMs With Databases as Their Symbolic Memory
08.24 | LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention | Hong, Seongtae | link
LLaMA-Adapter V2
LIMA: Less Is More for Alignment
Plug-and-Play Knowledge Injection for Pre-trained Language Models | Jung, Dahyun | link
Towards Continual Knowledge Learning of Language Models
Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback
HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models | Lim, Jungwoo | link
Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment
PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions
08.31 | FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured Game State Information | Kim, Jinsung | link
Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models
What, When, and How to Ground: Designing User Persona-Aware Conversational Agents for Engaging Dialogue
Automatic Chain of Thought Prompting in Large Language Models | Son, Junyoung | link
Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework
Zero-shot Faithful Factual Error Correction | Kang, Myunghoon | link
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Language Models (Mostly) Know What They Know
09.07 | HellaSwag: Can a Machine Really Finish Your Sentence? | Seo, Jaehyung | link
Measuring Massive Multitask Language Understanding
Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge
TruthfulQA: Measuring How Models Mimic Human Falsehoods
Clues Before Answers: Generation-Enhanced Multiple-Choice QA | Koo, Seonmin | link
Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors
Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge
LoRA: Low-Rank Adaptation of Large Language Models | Jang, Yoonna | link
Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition

WINTER SEMINAR

DATE | SUBJECT | PRESENTER | MATERIALS
01.26 | RankGen: Improving Text Generation with Large Ranking Models | Lim, Jungwoo | link
Z-LaVI: Zero-Shot Language Solver Fueled by Visual Imagination
Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
Generative Language Models for Paragraph-Level Question Generation | Kang, Myunghoon | link
Varifocal Question Generation for Fact-checking
Generating Literal and Implied Subquestions to Fact-check Complex Claims
02.02 | Detecting Label Errors by Using Pre-trained Language Models | Lee, Seungjun | link
Style Transfer as Data Augmentation: A Case Study on Named Entity Recognition
Break it Down into BTS: Basic, Tiniest Subword Units for Korean
SALTED: A Framework for SAlient Long-tail Translation Error Detection | Eo, Sugyeong | link
CTRLsum: Towards Generic Controllable Text Summarization
SentBS: Sentence-level Beam Search for Controllable Summarization
02.09 | AMAL: Meta Knowledge-Driven Few-Shot Adapter Learning | Kim, Jinsung | link
Dictionary-Assisted Supervised Contrastive Learning
Fast Vocabulary Transfer for Language Model Compression
Revisiting Parameter-Efficient Tuning: Are We Really There Yet? | Moon, Hyeonseok | link
Evaluating Parameter Efficient Learning for Generation
An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning
02.16 | Entity-centered Cross-document Relation Extraction | Son, Junyoung | link
DocInfer: Document-level Natural Language Inference using Optimal Evidence Selection
Entity Extraction in Low Resource Domains with Selective Pre-training of Large Language Models

NLP&AI Lab

Room 311, Aegineung Hall (애기능생활관), Korea University, 145 Anam-ro, Seongbuk-gu, Seoul

145, Anam-ro, Seongbuk-gu, Seoul, Republic of Korea

[Postal] 02841 [Tel] +82-3290-2684 [E-mail (Professor)] limhseok@korea.ac.kr

 

Copyright © 2024 NLP&AI Lab. All rights reserved.