AI Research Wiki

Tag: direct-preference-optimization

1 item with this tag.

  • Apr 11, 2026

    Direct Preference Optimization: Your Language Model is Secretly a Reward Model

    • direct-preference-optimization
    • alignment
    • rlhf
    • preference-learning
