ChatGPT, the viral chatbot that generates conversational responses to written inputs from users, has made artificial intelligence (AI) the latest buzzword in tech.
AI took center stage at Google’s annual developers conference on May 10, where the company announced that its search engine would use AI to synthesize search results for users. The company also plans to integrate AI into Gmail to help users write emails.
Following its $13 billion investment in OpenAI, the creator of ChatGPT, Microsoft announced its Bing search engine would use AI to “deliver better search, more complete answers, a new chat experience.” The company has also infused its popular Microsoft 365 apps, including Word and Excel, with a new set of AI features dubbed “Copilot.”
And many companies are already integrating AI with their own products. In fact, 94% of business leaders agree that AI will be critical to the success of their companies over the next five years, according to Deloitte’s latest “State of AI in the Enterprise” survey.
On the investment side, Goldman Sachs is optimistic about the future of AI and believes the technology could fuel a rise in productivity and drive up S&P 500 profits by 30% or more in the next decade, Goldman’s senior strategist Ben Snider told CNBC in May.
But despite the hype, if you’re interested in investing in AI, or anything AI-adjacent, it’s important to understand what you’re putting your money into before parting with any cash. Here are four AI terms to know.
1. Machine learning
Although machine learning may sound new, the term was actually coined by AI pioneer Arthur Samuel in 1959. Samuel defined it as a computer’s ability to learn without being explicitly programmed.
To do that, mathematical models, or algorithms, are fed large data sets and trained to identify patterns within each set. In theory, the algorithms are then able to apply the same pattern recognition process to a new data set.
For example, Spotify uses machine learning to analyze the music you listen to and recommend similar artists or generate playlists.
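The pattern-recognition idea behind a recommendation system like Spotify’s can be sketched in a few lines. The artist names and listening profiles below are invented for illustration, and real recommenders are far more sophisticated, but the core step is the same: compare a new data point against patterns learned from existing data.

```python
import math

# Hypothetical listening profiles: how strongly each artist leans toward
# three genres (pop, rock, jazz). All names and numbers are made up.
artist_profiles = {
    "Artist A": (0.9, 0.1, 0.0),
    "Artist B": (0.8, 0.2, 0.0),
    "Artist C": (0.0, 0.1, 0.9),
}

def similarity(a, b):
    """Cosine similarity: closer to 1 means the two profiles are more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def recommend(user_profile, profiles):
    """Rank artists by how closely they match the user's listening pattern."""
    return sorted(profiles,
                  key=lambda name: similarity(user_profile, profiles[name]),
                  reverse=True)

# A listener who only plays pop gets the pop-leaning artists first.
print(recommend((1.0, 0.0, 0.0), artist_profiles))
# → ['Artist A', 'Artist B', 'Artist C']
```

The “learning” here is trivial, but it shows the shape of the task: the program is never told which artist to suggest; the ranking falls out of the data.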
2. Large language model
A large language model (LLM) is an algorithm that learns how to recognize, summarize and generate text and other types of content after processing huge sets of data, according to Nvidia.
These models are trained using unsupervised learning, which means the algorithm is given a data set but isn’t given explicit instructions about what to do with it. Through this process, an LLM learns how to determine the relationship between words and the concepts behind them.
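A toy version of that unsupervised idea: count which words appear together in raw, unlabeled text. Real LLMs use neural networks trained on vastly larger data sets, and the corpus below is invented, but the principle carries over: nobody tells the program which words are related; it infers that from co-occurrence in the data alone.

```python
from collections import Counter
from itertools import combinations

# A tiny, made-up corpus. Note there are no labels -- just raw text.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count how often each pair of words appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for pair in combinations(sorted(words), 2):
        pair_counts[pair] += 1

# Frequently co-occurring pairs hint at relationships learned from the data.
print(pair_counts.most_common(3))
```

Pairs like (“cat”, “the”) surface simply because they keep showing up together; an LLM’s internal representations are learned from raw text in an analogously label-free way, just at an enormously larger scale.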
3. Generative AI
Large language models are a type of generative AI. As its name implies, generative AI refers to artificial intelligence that is capable of generating content such as text, video or audio, according to Google’s AI blog.
In order to accomplish this, generative AI models use machine learning to process massive data sets and respond to a user’s input with new content, according to Nvidia.
ChatGPT is another example of a generative AI tool. The “GPT” stands for generative pre-trained transformer. GPT is OpenAI’s large language model and is what powers the chatbot, helping it to produce human-like responses.
However, OpenAI says that ChatGPT sometimes may write “plausible-sounding but incorrect or nonsensical answers,” according to its website.
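The generate-the-next-word idea behind a model like GPT can be sketched with a bigram model, which predicts each word from only the one before it. This is a deliberately crude stand-in: GPT uses a neural network trained on vast amounts of text, and the corpus here is invented. But it shows why such systems produce fluent text, and also why that text can be plausible-sounding yet wrong, since the model samples whatever statistically tends to come next.

```python
import random
from collections import defaultdict

# "Training": record which word follows each word in a made-up corpus.
corpus = "the cat sat on the mat the cat ran to the door".split()
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:
            break  # no known continuation for this word
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

Every word the sketch emits is one the model has seen follow the previous word, so the output is locally fluent; nothing checks whether the whole sentence is true, which mirrors the caveat above.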
People have been using ChatGPT for a variety of tasks, including writing emails and planning vacations. The popular chatbot amassed 100 million monthly active users just two months into its launch, making it the fastest-growing consumer application in history, according to a UBS note published in January.