Generative pre-trained transformer
Revision as of 21:00, 8 April 2023
Information technology - software - natural language processing - artificial intelligence - chatbots.
(GPT).
Generative pre-trained transformers are language models that have been pre-trained on large datasets of unlabelled natural language text.
They can generate human-like text that, in some cases, may be difficult to distinguish from text written by a person.
This pre-training may then be supplemented by additional fine-tuning under human supervision, known as Reinforcement Learning from Human Feedback (RLHF).
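The generation described above is autoregressive: each new token is predicted from the tokens that came before it. The sketch below illustrates the idea with a tiny hand-written bigram table in place of a real trained transformer; the table, the function names and the vocabulary are assumptions made purely for illustration, not part of any actual GPT implementation.

```python
import random

# Toy "language model": a bigram table mapping each token to its
# possible successors. A real GPT learns such conditional
# probabilities from large corpora of unlabelled text; this
# hand-written table is purely illustrative.
BIGRAMS = {
    "<start>": ["the"],
    "the": ["treasurer", "model"],
    "treasurer": ["reviewed"],
    "model": ["generated"],
    "reviewed": ["the"],
    "generated": ["text"],
    "text": ["<end>"],
}

def generate(max_tokens=10, seed=0):
    """Autoregressive generation: each token is sampled conditioned
    on the previous token, a vastly simplified stand-in for a GPT's
    next-token prediction."""
    rng = random.Random(seed)
    token = "<start>"
    output = []
    for _ in range(max_tokens):
        token = rng.choice(BIGRAMS[token])
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate())
```

Real models condition on the whole preceding context (not just one token) and sample from learned probability distributions, but the loop structure, one token at a time, each choice feeding the next, is the same.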
See also
- Artificial intelligence (AI)
- Bot
- Chatbot
- ChatGPT
- Enterprise-wide resource planning system
- Information technology
- Natural language
- Natural language processing
- Reinforcement Learning from Human Feedback (RLHF)
- Robotics
- Software
- Software robot