Reference rate and Generative pre-trained transformer: Difference between pages

From ACT Wiki
'''Reference rate'''

A reference rate is a widely recognised and quoted interest rate - such as the Fed funds rate, the prime rate, or LIBOR - by reference to which a rate of interest is calculated.

For example, in the rate ‘LIBOR plus 50 basis points’, LIBOR is the reference rate.
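
The arithmetic is simply the reference rate plus the quoted margin. A minimal sketch in Python, using assumed figures rather than live market rates:

<syntaxhighlight lang="python">
# Illustrative only: the 5.00% fixing and 50 basis point margin are assumed figures.
reference_rate = 0.0500   # assumed LIBOR fixing of 5.00%
spread_bp = 50            # quoted margin of 50 basis points

# One basis point is one hundredth of a percentage point (0.0001).
all_in_rate = reference_rate + spread_bp / 10_000

print(f"All-in rate: {all_in_rate:.2%}")   # prints: All-in rate: 5.50%
</syntaxhighlight>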


==See also==
*[[Adjustable-rate mortgage]]
*[[ARRC]]
*[[Base rate]]
*[[ESTER]]
*[[Fallback]]
*[[LIBOR]]
*[[Loan agreement]]
*[[OBFR]]
*[[Official Bank Rate]]
*[[Zero rate provision]]


[[Category:Accounting,_tax_and_regulation]]
[[Category:Financial_products_and_markets]]


'''Generative pre-trained transformer'''

''Information technology - software - natural language processing - artificial intelligence - chatbots.''

(GPT).


Generative pre-trained transformers are language models that have been pre-trained on large datasets of unlabelled natural language text.


They can generate new text that is human-like, and in some cases may be difficult to distinguish from human-written text.


GPT's unsupervised pre-training may then be supplemented by further human-supervised fine-tuning, known as Reinforcement Learning from Human Feedback (RLHF).
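
As a rough sketch of text generation with a pre-trained transformer - assuming the open-source Hugging Face transformers library and its small GPT-2 checkpoint as stand-ins, rather than any particular commercial GPT product:

<syntaxhighlight lang="python">
# Illustrative sketch: GPT-2 via the Hugging Face transformers library
# stands in for GPT-style models generally.
from transformers import pipeline

# Load a small generative pre-trained transformer for text generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "A reference rate is"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The model continues the prompt with new, human-like text.
print(result[0]["generated_text"])
</syntaxhighlight>

Note that this shows only the pre-trained model; the RLHF fine-tuning described above is a separate, later stage of training.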
 
 
==See also==
*[[Artificial intelligence]] (AI)
*[[Bot]]
*[[Chatbot]]
*[[ChatGPT]]
*[[Enterprise-wide resource planning system]]
*[[Generative AI]] (GenAI)
*[[GPT-4]]
*[[Information technology]]
*[[Large language model]] (LLM)
*[[Natural language]]
*[[Natural language processing]]
*[[Reinforcement Learning from Human Feedback]] (RLHF)
*[[Robotics]]
*[[Software]]
*[[Software robot]]
 
 
==Other resource==
*[https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf Improving Language Understanding by Generative Pre-Training, Radford, Narasimhan, Salimans & Sutskever, 2018]
 
[[Category:Identify_and_assess_risks]]
[[Category:Manage_risks]]
[[Category:Risk_reporting]]
[[Category:Risk_frameworks]]
[[Category:The_business_context]]
