Future value and Generative pre-trained transformer: Difference between pages

From ACT Wiki
'''Generative pre-trained transformer'''
(GPT).

''Information technology - software - natural language processing - artificial intelligence - chatbots.''

Generative pre-trained transformers are language models that have been pre-trained on large datasets of unlabelled natural language text.

They can generate new text that is human-like and, in some cases, may be difficult to distinguish from human-written text.

A GPT's unsupervised pre-training may be supplemented by additional fine-tuning using human-supervised training, known as Reinforcement Learning from Human Feedback (RLHF).

== See also ==
* [[Artificial intelligence]]  (AI)
* [[Bot]]
* [[Chatbot]]
* [[ChatGPT]]
* [[Enterprise-wide resource planning system]]
* [[Generative AI]]  (GenAI)
* [[GPT-4]]
* [[Information technology]]
* [[Large language model]]  (LLM)
* [[Natural language]]
* [[Natural language processing]]
* [[Reinforcement Learning from Human Feedback]]  (RLHF)
* [[Robotics]]
* [[Software]]
* [[Software robot]]

== Other resource ==
* [https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf Improving Language Understanding by Generative Pre-Training, Radford, Narasimhan, Salimans & Sutskever, 2018]

[[Category:Identify_and_assess_risks]]
[[Category:Manage_risks]]
[[Category:Risk_reporting]]
[[Category:Risk_frameworks]]
[[Category:The_business_context]]


'''Future value'''
(FV).

If we invest money today (and roll up all the expected income), the future value receivable is the expected total value of our investment at its maturity.

If we ''borrow'' money today (and roll up all the interest payable), the future value payable is the total principal and interest repayable to the lender at the final maturity of the borrowing.

<span style="color:#4B0082">'''Example'''</span>

$100m is held today.

The rate of return on capital (r) is 10% per year.

The future value after one year is:

= $100m x 1.1<sup>1</sup>

= $110m

<span style="color:#4B0082">'''More generally'''</span>

FV = Present value x Compounding factor (CF)

Where:

CF = (1 + r)<sup>n</sup>

r = return on capital or cost of capital per period

n = number of periods

== See also ==
* [[Compounding factor]]
* [[Present value]]
* [[Terminal value]]
* [[Time value of money]]

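The future value relationship (FV = Present value x (1 + r)<sup>n</sup>) can be sketched as a short Python function. This is an illustrative sketch only; the function name is not part of the wiki's notation.

```python
def future_value(pv, r, n):
    """Future value of an amount pv invested for n periods,
    at a rate of return r per period: FV = pv * (1 + r) ** n."""
    return pv * (1 + r) ** n

# The wiki's example: $100m held today at 10% per year, for one year.
print(f"${future_value(100, 0.10, 1):.0f}m")  # prints $110m
```

The same compounding factor extends to longer horizons: two years gives $100m x 1.1<sup>2</sup> = $121m.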
Latest revision as of 21:26, 4 October 2023