Glossary
This page provides a glossary of terms used in the generative AI lessons, both as a quick reference and to help ensure consistent usage.
F
- Fine-tuning ▫️ The process of further training a pre-trained model on a task- or domain-specific dataset so that it adapts to that task or domain.
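  As an illustrative sketch only (the service, dataset name, and base model are assumptions, not part of the lessons), starting a fine-tuning job with a hosted API could look like this, using the OpenAI Python SDK and a prepared `train.jsonl` file:

  ```python
  # Illustrative fine-tuning sketch using the OpenAI Python SDK.
  # Assumes OPENAI_API_KEY is set, "train.jsonl" holds chat-formatted examples,
  # and the chosen base model supports fine-tuning (all assumptions).
  from openai import OpenAI

  client = OpenAI()

  # Upload the task- or domain-specific dataset the pre-trained model will adapt to.
  training_file = client.files.create(
      file=open("train.jsonl", "rb"),
      purpose="fine-tune",
  )

  # Start a fine-tuning job on top of the pre-trained base model.
  job = client.fine_tuning.jobs.create(
      training_file=training_file.id,
      model="gpt-4o-mini-2024-07-18",  # assumed fine-tunable base model
  )

  print(job.id, job.status)  # poll the job until it completes
  ```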
G
- Generative AI ▫️ A class of AI applications that create original content (text, images, audio, code) in response to an input request.
I
- Inference ▫️ The process of using a trained model to generate responses to user prompts. This typically involves a deployed endpoint (API).
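  As a concrete sketch (the SDK and model name are assumptions; any hosted or self-managed endpoint works the same way), calling a deployed endpoint for inference can look like this:

  ```python
  # Inference sketch: send a prompt to a deployed endpoint and read the response.
  # Assumes OPENAI_API_KEY is set; the model name is illustrative.
  from openai import OpenAI

  client = OpenAI()

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # assumed deployed model
      messages=[{"role": "user", "content": "Explain inference in one sentence."}],
  )

  print(response.choices[0].message.content)
  ```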
L
- LLM ▫️ Large Language Model: a model that is pre-trained on very large amounts of text data and is often fine-tuned for specific tasks before being used for inference.
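  For a hands-on feel, the sketch below runs a small, freely available pre-trained language model locally with the Hugging Face `transformers` library; `gpt2` is used only because it is small, not because the lessons rely on it:

  ```python
  # Sketch: generate text locally with a small pre-trained language model.
  # Requires the "transformers" package and a backend such as PyTorch;
  # gpt2 is an illustrative choice.
  from transformers import pipeline

  generator = pipeline("text-generation", model="gpt2")
  result = generator("Generative AI is", max_new_tokens=20)
  print(result[0]["generated_text"])
  ```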
P
- Prompt ▫️ A user request (using natural language) that is fed to a generative AI model to get a relevant response.
- Prompt Engineering ▫️ The process of designing and refining prompts to get relevant responses from a generative AI model.
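  The sketch below illustrates the refinement loop behind prompt engineering by comparing a vague prompt with a more constrained one against the same model; the `ask` helper and the model name are illustrative assumptions:

  ```python
  # Prompt-engineering sketch: compare a vague prompt with a refined one.
  # Assumes OPENAI_API_KEY is set; the model name is illustrative.
  from openai import OpenAI

  client = OpenAI()

  def ask(prompt: str) -> str:
      """Send a single natural-language prompt and return the model's reply."""
      response = client.chat.completions.create(
          model="gpt-4o-mini",  # assumed deployed model
          messages=[{"role": "user", "content": prompt}],
      )
      return response.choices[0].message.content

  # A vague prompt often yields a generic answer.
  print(ask("Tell me about Python."))

  # A refined prompt constrains audience, format, and length.
  print(ask(
      "Explain the Python programming language to a complete beginner "
      "in exactly three bullet points, each under 15 words."
  ))
  ```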
T
- Task ▫️ The specific job that the model is expected to perform in response to a prompt. Examples: Text Summarization, Translation, Code Generation, etc.
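  To illustrate, the sketch below reuses one model for two different tasks simply by changing how the prompt frames the job; the model name is an assumption:

  ```python
  # Task sketch: the same endpoint performs different tasks depending on the prompt.
  # Assumes OPENAI_API_KEY is set; the model name is illustrative.
  from openai import OpenAI

  client = OpenAI()

  text = "Generative AI models create new content such as text, images and code."

  task_prompts = {
      "Text Summarization": f"Summarize in five words: {text}",
      "Translation": f"Translate to French: {text}",
  }

  for task, prompt in task_prompts.items():
      response = client.chat.completions.create(
          model="gpt-4o-mini",  # assumed deployed model
          messages=[{"role": "user", "content": prompt}],
      )
      print(f"{task}: {response.choices[0].message.content}")
  ```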