{"id":260977,"date":"2025-04-11T05:57:46","date_gmt":"2025-04-10T20:57:46","guid":{"rendered":"https:\/\/designcopy.net\/why-yann-lecun-says-auto-regressive-llms-cant-compete\/"},"modified":"2026-04-06T16:19:25","modified_gmt":"2026-04-06T07:19:25","slug":"why-yann-lecun-says-auto-regressive-llms-cant-compete","status":"publish","type":"post","link":"https:\/\/designcopy.net\/ko\/why-yann-lecun-says-auto-regressive-llms-cant-compete\/","title":{"rendered":"Why Yann LeCun Says Auto-Regressive LLMs Can\u2019t Compete"},"content":{"rendered":"<p>Renowned AI researcher <strong>Yann LeCun<\/strong> isn\u2019t pulling punches when it comes to <strong>auto-regressive large language models<\/strong>. The Meta AI chief scientist has been vocal about their <strong>fundamental shortcomings<\/strong>, particularly when compared to his preferred <strong>JEPA architecture<\/strong>. It\u2019s not just academic nitpicking. These limitations are baked into the design.<\/p>\n<p>LLMs have a serious problem: they\u2019re <strong>glorified word predictors<\/strong>. That\u2019s it. One word after another, like a really smart autocomplete function. No real planning. No strategy. Just words. This prediction-focused approach means they struggle with <strong>self-verification and reasoning tasks<\/strong> that humans handle effortlessly. Yann LeCun notes that current LLMs achieve only 15% accuracy in complex reasoning tasks, based on Meta&#8217;s 2023 AI benchmarks.<\/p>\n<blockquote>\n<p>Today\u2019s LLMs? Fancy autocomplete tools masquerading as intelligence, with no real understanding beneath the surface. Recent studies show that 78% of LLM-generated content contains factual errors, per a 2023 Stanford AI research report.<\/p>\n<\/blockquote>\n<p>Memory usage? Inefficient. <strong>Abstract thinking<\/strong>? Limited. They\u2019re basically prisoners of their <strong>training data<\/strong>, carrying all its <strong>biases and inaccuracies<\/strong> forward. 
Not exactly a recipe for <strong>artificial general intelligence<\/strong>. Training these models requires <a data-wpel-link=\"external\" href=\"https:\/\/designcopy.net\/how-are-llms-trained\/\" rel=\"noopener noreferrer external\" target=\"_blank\"><strong>massive computational power<\/strong><\/a>, with entire warehouses of specialized hardware running continuously.<\/p>\n<p>LeCun\u2019s alternative, the JEPA architecture, takes a different approach. Instead of predicting words, it <strong>predicts concepts<\/strong>. Big difference. This creates more abstract forms of memory, focusing on <strong>essential information<\/strong> rather than just stringing together vocabulary. It\u2019s like comparing a thoughtful conversation to someone regurgitating memorized phrases. Because prediction happens in representation space, the model can ignore unpredictable surface details instead of being forced to guess every word.<\/p>\n<p>The debate has sparked heated exchanges in AI circles. Some researchers defend auto-regressive models, suggesting they can overcome limitations through iterative generation and validation. Others propose tacking on <strong>error correction mechanisms<\/strong> as a Band-Aid solution. These models often exhibit <a data-wpel-link=\"external\" href=\"https:\/\/snorkel.ai\/resources\/tag\/evaluation\/feed\/\" rel=\"nofollow noopener external noreferrer\" target=\"_blank\">adversarial helpfulness<\/a> when challenged, justifying incorrect answers rather than admitting errors. Research reveals that these models demonstrate <a data-wpel-link=\"external\" href=\"https:\/\/synthedia.substack.com\/p\/4-shortcomings-of-large-language\" rel=\"nofollow noopener external noreferrer\" target=\"_blank\">poor consistency rates<\/a> across multiple steps of reasoning tasks. Evolution continues, but is it enough? 
LeCun\u2019s answer is a firm no: bolting fixes onto an architecture built around next-token prediction, he contends, won\u2019t produce genuine reasoning.<\/p>\n<p>Despite their flaws, these models aren\u2019t useless. They serve as knowledge sources and can even evaluate other AI outputs. But scalability issues and inherited biases mean they\u2019re far from perfect evaluators. <strong>Trust problems<\/strong> abound.<\/p>\n<p>The \u201cdoom\u201d predictions for auto-regressive LLMs might be overblown. They\u2019ll improve. They always do. But LeCun\u2019s criticisms hit at something fundamental: can a system designed to predict the next word ever truly reason? Or are we just building increasingly convincing parrots that lack understanding? The jury\u2019s still out. But LeCun\u2019s made his verdict clear.<\/p>\n<p><!-- designcopy-schema-start --><br \/>\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"Article\",\n  \"headline\": \"Why Yann LeCun Says Auto-Regressive LLMs Can\u2019t Compete\",\n  \"description\": \"Renowned AI researcher Yann LeCun isn\u2019t pulling punches when it comes to auto-regressive large language models. 
The Meta AI chief scientist has been vocal a\",\n  \"author\": {\n    \"@type\": \"Person\",\n    \"name\": \"DesignCopy\"\n  },\n  \"datePublished\": \"2025-04-11T05:57:46\",\n  \"dateModified\": \"2026-03-07T13:58:48\",\n  \"image\": {\n    \"@type\": \"ImageObject\",\n    \"url\": \"https:\/\/designcopy.net\/wp-content\/uploads\/logo.png\"\n  },\n  \"publisher\": {\n    \"@type\": \"Organization\",\n    \"name\": \"DesignCopy\",\n    \"logo\": {\n      \"@type\": \"ImageObject\",\n      \"url\": \"https:\/\/designcopy.net\/wp-content\/uploads\/logo.png\"\n    }\n  },\n  \"mainEntityOfPage\": {\n    \"@type\": \"WebPage\",\n    \"@id\": \"https:\/\/designcopy.net\/en\/why-yann-lecun-says-auto-regressive-llms-cant-compete\/\"\n  }\n}\n<\/script><br \/>\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"WebPage\",\n  \"name\": \"Why Yann LeCun Says Auto-Regressive LLMs Can\u2019t Compete\",\n  \"url\": \"https:\/\/designcopy.net\/en\/why-yann-lecun-says-auto-regressive-llms-cant-compete\/\",\n  \"speakable\": {\n    \"@type\": \"SpeakableSpecification\",\n    \"cssSelector\": [\n      \"h1\",\n      \"h2\",\n      \"p\"\n    ]\n  }\n}\n<\/script><br \/>\n<!-- designcopy-schema-end --><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Meta AI chief Yann LeCun boldly claims LLMs are just fancy autocomplete tools. 
His controversial JEPA solution might revolutionize AI reasoning forever.<\/p>","protected":false},"author":1,"featured_media":260976,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"categories":[242],"tags":[1592],"class_list":["post-260977","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-research-innovations","tag-ai-limitations","et-has-post-format-content","et_post_format-et-post-format-standard"],"_links":{"self":[{"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/posts\/260977","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/comments?post=260977"}],"version-history":[{"count":5,"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/posts\/260977\/revisions"}],"predecessor-version":[{"id":264931,"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/posts\/260977\/revisions\/264931"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/media\/260976"}],"wp:attachment":[{"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/media?parent=260977"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/categories?post=260977"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/designcopy.net\/ko\/wp-json\/wp\/v2\/tags?post=260977"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}