Generative pre-trained transformer

From ACT Wiki









Revision as of 21:13, 8 April 2023

Information technology - software - natural language processing - artificial intelligence - chatbots.

(GPT).

Generative pre-trained transformers are language models that have been pre-trained on large datasets of unlabelled natural language text.

They can generate new, human-like text that in some cases may be difficult to distinguish from human-written text.
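The idea of learning next-word statistics from unlabelled text and then generating new text can be illustrated with a toy sketch. This is not a GPT: real GPTs are large transformer neural networks, whereas the bigram model below (corpus, function names and all) is a hypothetical miniature chosen only to show the unsupervised "train on raw text, then generate" pattern.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model" trained on unlabelled
# text, then used to generate new text. Real GPTs use transformer networks
# at vastly larger scale; this shows only the underlying idea of learning
# next-word statistics from raw text without any labels.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Unsupervised "pre-training": record which words follow which.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate up to `length` words by repeatedly sampling a follower."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        followers = model.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the", 6))
```

Every word the sketch emits follows its predecessor somewhere in the training text, which is why the output reads as (locally) plausible English, the same property that, at scale, makes GPT output human-like.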


A GPT's unsupervised pre-training is often supplemented by additional human-supervised fine-tuning, known as Reinforcement Learning from Human Feedback (RLHF).


See also


Other resource