Encyclopedia


Artificial Intelligence

Artificial Intelligence refers to computer systems designed to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, and decision-making. AI encompasses a broad range of technologies that can analyze data, recognize patterns, make predictions, and adapt their behavior based on experience, fundamentally changing how we interact with technology across industries.

Bayesian networks

Bayesian Networks refers to probabilistic graphical models that represent relationships between variables using directed graphs, where nodes represent variables and edges show probabilistic dependencies. These networks use Bayes’ theorem to calculate conditional probabilities, enabling systems to reason under uncertainty and make predictions based on incomplete information, making them valuable for diagnostic systems, risk assessment, and decision support.
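As a concrete illustration, the sketch below applies Bayes’ theorem in the smallest possible network, a single Disease → Symptom edge; all probabilities are made-up placeholder values, not real data.

```python
# Smallest possible Bayesian network: Disease -> Symptom.
# All probabilities are illustrative placeholders, not real clinical figures.
p_disease = 0.01                  # prior P(Disease)
p_symptom_given_disease = 0.90    # P(Symptom | Disease)
p_symptom_given_healthy = 0.05    # P(Symptom | not Disease)

# Law of total probability: marginal P(Symptom).
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Bayes' theorem: posterior P(Disease | Symptom).
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(f"P(Disease | Symptom) = {p_disease_given_symptom:.3f}")  # about 0.154
```

Even with a highly reliable symptom, the low prior keeps the posterior modest, which is exactly the kind of reasoning under uncertainty these networks automate at larger scale.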

Computer Vision

Computer Vision refers to artificial intelligence technology that enables computers to interpret, analyze, and understand visual information from digital images, videos, and real-world environments. This field combines machine learning, image processing, and pattern recognition to give machines the ability to “see” and make decisions based on visual data, powering applications from facial recognition and autonomous vehicles to medical image analysis and industrial inspection.
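As a minimal illustration of the low-level end of this pipeline, the sketch below assumes the opencv-python package and a placeholder image file named photo.jpg: it converts the image to grayscale and runs Canny edge detection, a classic preprocessing step before recognition or detection models.

```python
import cv2  # requires the opencv-python package

# "photo.jpg" is a placeholder path, not a file referenced by this entry.
image = cv2.imread("photo.jpg")
if image is None:
    raise FileNotFoundError("photo.jpg could not be read")

# Grayscale conversion followed by Canny edge detection turns raw pixels
# into structure that higher-level models can work with.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)   # lower/upper hysteresis thresholds

cv2.imwrite("edges.jpg", edges)     # save the edge map next to the input
```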

Data privacy

Data Privacy refers to the protection and proper handling of personal information, encompassing individuals’ rights to control how their data is collected, stored, used, and shared by organizations. This concept includes both technical safeguards and legal frameworks that govern the processing of sensitive information, ensuring that personal data remains secure and is used only for the purposes for which it was collected.
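One common technical safeguard is pseudonymization. The sketch below, using only invented field names and a demonstration salt, replaces direct identifiers with salted hashes; salted hashing alone does not amount to full anonymization and is shown only to illustrate the idea.

```python
import hashlib

# A hypothetical customer record; field names are invented for illustration.
record = {"name": "Jane Doe", "email": "jane@example.com", "age": 34}

def pseudonymize(value: str, salt: str = "demo-salt") -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

safe_record = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "age": record["age"],   # non-identifying attribute left untouched
}
print(safe_record)
```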

European Union Artificial Intelligence (EU AI) Act

The European Union Artificial Intelligence Act (EU AI Act) refers to the world’s first comprehensive legal framework regulating artificial intelligence systems, establishing risk-based rules for AI development, deployment, and use across all EU member states. This landmark legislation categorizes AI applications by risk level and imposes strict requirements on high-risk systems, including transparency obligations, human oversight mandates, and risk-management requirements.

Federated Learning

Federated Learning refers to a machine learning approach that trains artificial intelligence models across multiple decentralized devices or institutions without centralizing data, enabling collaborative AI development while preserving data privacy and security. This technique allows smartphones, hospitals, financial institutions, and other organizations to contribute to model training while keeping sensitive information on local devices, fundamentally changing how privacy-sensitive AI systems are built: only model updates, never the raw data, leave each participant.
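The sketch below illustrates the idea with a FedAvg-style loop in Python/NumPy: each simulated client runs a few gradient-descent steps on its own synthetic data, and the server only averages the resulting weights. The data, model (plain linear regression), and hyperparameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])   # ground truth used only to fabricate data

# Two simulated clients; each dataset stays "on device" and is never pooled.
clients = []
for n_samples in (50, 80):
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n_samples)
    clients.append((X, y))

def local_update(weights, X, y, lr=0.05, epochs=5):
    """A client's private training: a few gradient steps on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

global_w = np.zeros(3)
for _ in range(10):                     # communication rounds
    # Clients train locally; only the updated weights travel to the server.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    # The server aggregates with a data-size-weighted average (FedAvg-style).
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("aggregated weights:", np.round(global_w, 2))  # approaches true_w
```

Only the weight vectors cross the network; the per-client arrays X and y never do, which is the privacy-preserving property the paragraph above describes.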

General Data Protection Regulation (GDPR)

GDPR refers to the General Data Protection Regulation, a comprehensive European Union data privacy law, applicable since May 2018, that fundamentally transformed how personal data is collected, processed, and protected globally. This regulation grants individuals extensive rights over their personal information while imposing strict obligations on organizations that handle EU residents’ data, establishing a benchmark that many later data-protection laws have followed.

Hallucination

AI Hallucination refers to a phenomenon in artificial intelligence systems where models generate false, misleading, or entirely fabricated information that appears plausible but has no basis in their training data or reality. This occurs when AI systems, particularly large language models and generative AI, produce confident-sounding responses that contain factual errors, invented citations, non-existent events, or other fabricated details presented as fact.

Interpolation

Interpolation refers to a mathematical and computational technique that estimates unknown values between known data points by constructing functions or models that pass through or near the existing data. In artificial intelligence and machine learning, interpolation describes how models make predictions within the range of their training data, representing a fundamental concept for understanding how models generalize inside that range, in contrast to extrapolation beyond it.
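For example, the short NumPy sketch below linearly interpolates between a handful of made-up known points to estimate values that fall inside their range.

```python
import numpy as np

# Known data points (made-up values for illustration).
x_known = np.array([0.0, 1.0, 2.0, 3.0])
y_known = np.array([0.0, 2.0, 3.0, 5.0])

# Estimate y at query points that lie between the known x values.
x_query = np.array([0.5, 1.5, 2.5])
print(np.interp(x_query, x_known, y_known))  # [1.  2.5 4. ]
```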

Large Language Model (LLM)

Large Language Model (LLM) refers to artificial intelligence systems trained on vast amounts of text data to understand, generate, and manipulate human language at scale, typically containing billions or trillions of parameters that enable sophisticated natural language processing capabilities. These models, including systems like GPT-4, Claude, and Gemini, represent a breakthrough in AI’s ability to produce coherent, context-aware text across a wide range of tasks.
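As a hands-on illustration (not tied to any of the specific models named above), the sketch below uses the Hugging Face transformers library with the small, openly available gpt2 model to generate a continuation of a prompt; it assumes transformers and a backend such as PyTorch are installed.

```python
# Requires the transformers library and a backend such as PyTorch.
from transformers import pipeline

# "gpt2" is a small, openly available model used purely for illustration;
# the production systems named above are orders of magnitude larger.
generator = pipeline("text-generation", model="gpt2")

result = generator("Large language models are", max_new_tokens=20)
print(result[0]["generated_text"])
```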