Over-hyped AI and Under-hyped AI

AI has always been hyped, and GenAI currently sits at the peak of Gartner's Hype Cycle for 2023. But different AI topics receive different levels of hype. So what is the most over-hyped AI topic, and what is the most under-hyped one?
Over-hyped AI
Artificial General Intelligence (AGI) is the most over-hyped topic. In 2022, Blake Lemoine claimed Google's LaMDA chatbot was sentient and conscious. Lemoine was rightly and widely criticized on social media for his outlandish assertion, and Google later fired him. With the introduction of ChatGPT and many other Large Language Models (LLMs), we've heard more claims that we've achieved AGI. While LLMs' abilities are truly amazing, they still hallucinate, and many can't reliably perform even simple addition. How can that be AGI?
Many technical advances were crucial to the development of LLMs: neural networks, gradient-descent training, statistical language models, GPUs and massive computational power, the attention mechanism, self-supervised learning, and many others. I once heard someone say, "it will take 5 more Einsteins before we can build AGI." We can debate the number 5, but that sounds about right to me. We are definitely moving forward, but we still have a long way to go. AI researchers in the 1960s thought they could achieve AGI within a decade. That was six decades ago. No one really knows, but I believe time-to-AGI is measured in decades. Maybe 1 or 2. Maybe 10. Maybe more. Ever since the early days of Artificial Intelligence, we have continually underestimated the enormity of the AGI mountain we need to climb.
We HAVE made tremendous progress. But we need to stop over-hyping AGI and its “existential risks”.
Under-hyped AI
However, I do believe we need to get serious about the risks posed by over-reliance on AI technologies we don't fully understand. Our economic and social worlds would collapse without electrical power, global communications, or global supply chains, and AI is now part of all of them. Our dependence on AI is growing quickly, often in unseen areas, from supply-chain management to keystroke prediction in smartphones. We will reach a point where we can no longer simply turn AI off. I don't propose we eliminate our dependence on AI. Quite the contrary: AI produces tremendous value, and we will make our lives better with even more of it. But we must be "eyes wide open" about it.
In early 2023, over 1,300 scientists called for a six-month moratorium on development of LLMs. I never thought that was practical or realistic, but I agree with its thrust. Instead of a single, maniacal focus on capabilities and accuracy, we must also develop technologies to ensure we never lose control of our AIs. These are technologies that make AI Trustworthy. There are many sub-topics under Trustworthy AI, including reliability, explainability, fairness and bias, privacy, ethics, and governance and accountability. I will cover them in a future post.
Trustworthy AI is the most under-hyped AI topic. We (and that includes me) need to spend a lot more time developing Trustworthy AI technologies in parallel with LLMs.
To stay informed about Trustworthy AI, subscribe.
Comment (1)
Dan Anderson
Scott, I look forward to your continued discussion on Trustworthy AI. Just as determining the "trustworthiness" of two people standing side-by-side is a mostly subjective process, comparing an AI system to those two people, or to another AI system, is no less subjective, across the short list of sub-topics as well as others! Can it be trusted to produce the most accurate diagnoses most of the time? How about: can it be trusted to produce nefarious results in accord with my nefarious objectives?