Reinforcement Learning from Human Feedback
From ACT Wiki
imported>Doug Williamson (Create page - sources - Wikipedia - https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback#:~:text=In%20machine%20learning%2C%20reinforcement%20learning,learning%20(RL)%20through%20an%20optimization - ACT - https://www.treasurers.org/hub)
Revision as of 21:20, 8 April 2023
Information technology - software - natural language processing - artificial intelligence - chatbots - training.
(RLHF).
Reinforcement Learning from Human Feedback is a training process for machine learning.
It uses human feedback, in the form of human preferences, to rank or score instances of the behaviour or output of the system being trained, for example ChatGPT.
This human-supervised RLHF stage supplements an initial period of unsupervised training known as generative pre-training.
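The ranking step described above is commonly turned into a training signal by fitting a reward model to pairwise human preferences (a Bradley-Terry style loss). A minimal Python sketch, with illustrative function names and toy reward scores that are assumptions for the example, not part of any particular system:

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise (Bradley-Terry) loss for a reward model.

    The loss is small when the output humans preferred ('chosen')
    receives a higher scalar reward than the output they rejected,
    and large when the ordering is wrong.
    """
    margin = reward_chosen - reward_rejected
    # -log(sigmoid(margin)): probability the model agrees with the human ranking
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy scores for two candidate outputs ranked by a human annotator
loss_correct = preference_loss(2.0, 0.5)   # preferred output scored higher
loss_wrong = preference_loss(0.5, 2.0)     # preferred output scored lower
```

Here `loss_correct` is smaller than `loss_wrong`, so gradient descent on this loss pushes the reward model to agree with the human ranking; the learned reward then guides a reinforcement-learning update of the language model itself.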
See also
- Artificial intelligence (AI)
- Bot
- Chatbot
- ChatGPT
- Enterprise-wide resource planning system
- Generative pre-trained transformer (GPT)
- Information technology
- Machine learning
- Natural language
- Natural language processing
- Robotics
- Software
- Software robot