
Artificial Intelligence

Artificial intelligence (AI) is the simulation of human intelligence processes by computer systems, encompassing learning, reasoning, problem-solving, perception, and language understanding.

5 min read · Last updated May 2026 · Foundations

Artificial intelligence (AI) refers to the capability of computational systems to perform tasks that typically require human cognitive functions — such as learning from experience, recognising patterns, making decisions, and generating natural language. The field spans a broad range of techniques and approaches, from rule-based expert systems developed in the 1980s to modern deep learning architectures trained on massive datasets.

The term was formally coined at the 1956 Dartmouth Conference, where John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester proposed the study of "every aspect of learning or any other feature of intelligence that can in principle be so precisely described that a machine can be made to simulate it."[^1]

Types of Artificial Intelligence

AI systems are commonly classified along two axes: their scope of application and the mechanisms underlying their behaviour.

Narrow AI (ANI)

Also called weak AI, narrow AI is designed and trained for a specific task. Modern AI systems — including large language models (LLMs), image classifiers, recommendation engines, and speech recognisers — are all examples of narrow AI. Despite the term "narrow," these systems can dramatically exceed human performance within their domain.

General AI (AGI)

Artificial general intelligence refers to a hypothetical system capable of performing any intellectual task that a human can. No AGI system currently exists. Researchers debate whether AGI is achievable and, if so, on what timeline: estimates range from decades to centuries, while some sceptics argue it may never be reached.

Superintelligence (ASI)

Artificial superintelligence is a theoretical system that surpasses the collective intellectual capacity of humanity across every domain. This is primarily a topic in AI safety and philosophy, with practical implications explored through alignment research.

Key Subfields

AI encompasses numerous subfields, often developed semi-independently:

  • Machine Learning (ML) — systems that learn from data without being explicitly programmed for every scenario
  • Deep Learning — ML using multi-layered neural networks inspired by biological neurons
  • Natural Language Processing (NLP) — enabling machines to understand and generate human language
  • Computer Vision — enabling machines to interpret and act upon visual information
  • Robotics — physical AI systems that interact with the real world
  • Expert Systems — rule-based systems encoding domain expertise
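The machine learning entry above can be made concrete with a minimal sketch: the program below is never told the rule mapping inputs to outputs; it fits a line y = w·x + b by gradient descent on mean squared error and recovers the rule from data alone. The data, learning rate, and step count are illustrative choices, not drawn from any particular system.

```python
# Minimal illustration of "learning from data without being explicitly
# programmed": fit y = w*x + b by gradient descent on mean squared error.

def fit_line(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2) w.r.t. w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated by the hidden rule y = 2x + 1; the program recovers it.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

Deep learning applies the same idea, but with millions of parameters arranged in layered networks instead of two.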

Historical Development

| Era | Milestone |
|-----|-----------|
| 1950 | Alan Turing proposes the "Imitation Game" (Turing Test) |
| 1956 | Dartmouth Conference — AI founded as an academic discipline |
| 1957–1974 | First AI summer: early optimism, checkers-playing programs, ELIZA chatbot |
| 1974–1980 | First AI winter: funding cuts, unmet expectations |
| 1980s | Expert systems boom; second AI winter follows |
| 1997 | Deep Blue defeats Kasparov at chess |
| 2012 | AlexNet: deep learning revolutionises image recognition |
| 2017 | Transformer architecture introduced ("Attention Is All You Need") |
| 2022–present | Generative AI: GPT-4, Claude, Gemini, DALL-E, Stable Diffusion |

Economic Impact

The global AI market was valued at approximately USD 638 billion in 2024, with projections exceeding USD 3.6 trillion by 2034 (CAGR ~18.8%). AI is being adopted across healthcare (diagnostics, drug discovery), financial services (fraud detection, algorithmic trading), manufacturing (predictive maintenance, quality control), and logistics (route optimisation, warehouse automation).[^2]
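The projection above can be sanity-checked directly from the compound annual growth rate: growing USD 638 billion at ~18.8% per year for the ten years from 2024 to 2034 lands near the quoted figure.

```python
# Check: does USD 638B at ~18.8% CAGR over 10 years reach ~USD 3.6T?
start = 638e9   # USD, 2024 market size
cagr = 0.188    # compound annual growth rate
years = 10      # 2024 -> 2034

projected = start * (1 + cagr) ** years
print(f"{projected / 1e12:.2f} trillion USD")  # roughly 3.6 trillion
```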

AI Ethics and Governance

Responsible deployment of AI requires addressing several cross-cutting concerns:

  • Bias and fairness — AI systems can inherit or amplify biases present in training data
  • Transparency and explainability — Decisions made by AI, especially in high-stakes contexts, should be interpretable
  • Privacy — Data collection for AI training raises significant privacy concerns
  • Safety — AI systems must behave reliably in edge cases and adversarial conditions
  • Accountability — Clear responsibility frameworks are needed for AI-caused harm
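The bias-and-fairness point above is often operationalised as a statistical check. One common (though not the only) criterion is demographic parity: comparing the rate of positive outcomes a model produces across groups. The sketch below uses hypothetical data and group labels purely for illustration; a large gap between groups is a signal to investigate, not proof of unfairness on its own.

```python
# Hypothetical sketch of one bias check (demographic parity):
# compare the model's positive-outcome rate across groups.
from collections import defaultdict

def positive_rate_by_group(groups, predictions):
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += p
    return {g: positives[g] / totals[g] for g in totals}

groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
preds  = [1, 1, 1, 0, 1, 0, 0, 0]   # 1 = positive outcome, e.g. loan approved
rates = positive_rate_by_group(groups, preds)
print(rates)  # {'A': 0.75, 'B': 0.25}; a large gap flags potential bias
```

Other fairness definitions (equalised odds, calibration) measure different things and can conflict, which is why the choice of metric is itself an ethics decision.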

International bodies — including the OECD, UNESCO, and the G7 — have published AI ethics principles. The EU's Artificial Intelligence Act (2024) is the most comprehensive binding regulation to date.

References

  1. McCarthy, J. et al. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Stanford University.
  2. Grand View Research (2024). Artificial Intelligence Market Size, Share & Trends Analysis Report. GVR-1-68038-251-9.
  3. MOSTI (2021). Malaysia National AI Roadmap 2021–2025. Ministry of Science, Technology and Innovation.