Generative pre-trained transformer
From ACT Wiki
Revision as of 19:29, 19 April 2023
Information technology - software - natural language processing - artificial intelligence - chatbots.
(GPT).

Generative pre-trained transformers (GPTs) are language models that have been pre-trained on large datasets of unlabelled natural language text.
They can generate new, human-like text that in some cases may be difficult to distinguish from text written by a person.
A GPT's unsupervised pre-training is often supplemented by additional supervised fine-tuning based on human feedback, known as Reinforcement Learning from Human Feedback (RLHF).
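The "generative" behaviour described above is autoregressive: the model repeatedly predicts a probability distribution over the next token, samples from it, and feeds the result back in as context. The sketch below illustrates that loop with a hypothetical toy bigram table standing in for the model; a real GPT would instead compute these next-token probabilities with transformer attention layers over the whole context.

```python
import random

# Hypothetical toy "model": a bigram table mapping the last token to a
# probability distribution over the next token. This is a stand-in for
# illustration only; a real GPT computes these distributions with a
# trained transformer network.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}


def generate(prompt, max_tokens=10, seed=0):
    """Sample one token at a time, appending each choice to the context."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = BIGRAM_PROBS.get(tokens[-1])
        if dist is None:  # unknown context: stop generating
            break
        choices, weights = zip(*dist.items())
        next_token = rng.choices(choices, weights=weights)[0]
        if next_token == "<end>":  # end-of-sequence token
            break
        tokens.append(next_token)
    return " ".join(tokens)


print(generate("the"))
```

The sampling step is where a GPT's output becomes non-deterministic: the same prompt can yield "the cat sat" or "the dog ran" depending on the random draw, just as a real GPT can produce different completions for the same input.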
See also
- Artificial intelligence (AI)
- Bot
- Chatbot
- ChatGPT
- Enterprise-wide resource planning system
- GPT-4
- Information technology
- Large language model (LLM)
- Natural language
- Natural language processing
- Reinforcement Learning from Human Feedback (RLHF)
- Robotics
- Software
- Software robot