Telecommunications Science ›› 2024, Vol. 40 ›› Issue (6): 173-194.doi: 10.11959/j.issn.1000-0801.2024151


Survey on large language models alignment research

Kunlin LIU, Xinji QU, Fang TAN, Honghui KANG, Shaowei ZHAO, Rong SHI   

  1. ZTE Corporation, Shenzhen 518057, China
  • Received:2024-03-28 Revised:2024-05-18 Online:2024-06-20 Published:2024-07-11

Abstract:

With the rapid development of artificial intelligence technology, large language models have been widely applied in numerous fields. However, their potential to generate inaccurate, misleading, or even harmful content has raised concerns about their reliability. Adopting alignment techniques to ensure that the behavior of large language models is consistent with human values has therefore become an urgent issue. Recent research progress on alignment techniques for large language models was surveyed: common methods for collecting instruction data and human preference datasets were introduced, research on supervised fine-tuning and alignment adjustment was summarized, commonly used datasets and methods for model evaluation were discussed, and future research directions were outlined.
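The alignment adjustments mentioned above typically build on human preference data: a reward model is scored on pairs of preferred and rejected responses, commonly with a pairwise Bradley-Terry loss, and the language model is then optimized against that reward signal. A minimal sketch of the pairwise loss follows; the function names and scalar reward inputs are illustrative simplifications, not notation from the survey itself:

```python
import math

def pairwise_preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style loss for a reward model on one preference pair.

    The loss is -log(sigmoid(r_chosen - r_rejected)): it shrinks as the
    reward model scores the human-preferred response above the rejected one.
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the model cannot distinguish the pair (margin 0), the loss is ln 2;
# a larger margin in favor of the chosen response lowers the loss.
```

In practice the rewards are produced by a learned scoring network and the loss is averaged over a batch of preference pairs, but the scalar form above captures the objective.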

Key words: large language model, alignment technique, fine-tuning, reinforcement learning

