Generative pre-trained transformer
Information technology - software - natural language processing - artificial intelligence - chatbots.
(GPT).
Generative pre-trained transformers are language models that have been pre-trained on large datasets of unlabelled natural language text.
They can generate new, human-like text that in some cases may be difficult to distinguish from text written by a human.
A GPT's unsupervised pre-training is then often supplemented by additional fine-tuning based on human feedback, known as Reinforcement Learning from Human Feedback (RLHF).
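As a minimal sketch of the first stage only, the snippet below loads a small, publicly available pre-trained GPT-style model and generates a continuation of a prompt. It assumes the Hugging Face transformers library and the public gpt2 checkpoint, which are not mentioned in this entry; production systems use far larger models and further RLHF fine-tuning.

```python
# Minimal sketch: generate text with a pre-trained GPT-style model.
# Assumes the Hugging Face "transformers" library and the public "gpt2" checkpoint.
from transformers import pipeline

# Load a small generative pre-trained transformer for text generation.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by predicting likely next tokens,
# based on patterns learned from its unlabelled pre-training corpus.
result = generator(
    "A treasurer's main responsibilities include",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```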
See also
- Artificial intelligence (AI)
- Bot
- Chatbot
- ChatGPT
- Deep learning
- Enterprise-wide resource planning system
- Generative AI (GenAI)
- Google Gemini
- GPT-4
- Information technology
- Large language model (LLM)
- Machine learning
- Natural language
- Natural language processing
- Operational risk
- Reinforcement Learning from Human Feedback (RLHF)
- Robotics
- Software
- Software robot
- Stakeholder
- Supervised learning
- Unsupervised learning