AGI来了?—— 写在主体性复兴的前夜
Has AGI Arrived? Notes on the Eve of a Subjectivity Renaissance
上个月,Nvidia的黄仁勋(Jensen Huang)在Lex Fridman的播客上说了一句话:我认为我们已经实现了AGI。几天前,Sam Altman也说过OpenAI已经"基本造出了AGI"。Arm把他们最新的数据中心芯片直接命名为"AGI CPU"。Microsoft已经在宣传他们的ASI实验室,ASI的意思是超级人工智能,是理论上AGI之后的下一阶段。AGI这个词在这个月的使用密度达到了历史上的最高点。
两天之后,François Chollet的ARC Prize基金会发布了ARC-AGI-3这个benchmark。这个benchmark的设计理念是测试真正的通用智能——面对一个从没见过的环境,没有任何训练数据,只给几个例子,看AI能不能自己摸索出规律。结果出来了。人类得100分。Google的Gemini 3.1 Pro得0.37分,不是37分,是连1分都不到。当然全靠同行衬托,Gemini还是AI里第一名,因为OpenAI的GPT-5.4得0.26分,Anthropic的Claude Opus 4.6得0.25分,最厉害的是xAI的Grok-4.20得了鸭蛋0分,直接放弃治疗了。
这是一个奇怪的场景。一边是业界在卖力喊AGI已经到了,一边是benchmark在泼凉水,说AI离AGI还差99.63%。两边看起来在说相反的事。但如果你仔细去读两边的论述,会发现一件更有趣的事——两边都没说错,只是他们说的不是同一件事。
AGI这个词,其实承载着两件本质上完全不同的事。
一个词,两件事
第一件事是工具理性的极致化。一个系统能不能在绝大多数经济有价值的任务上达到或超越人类最好水平。推理、写作、编程、医学诊断、数学证明、策略规划、跨模态理解。在这个维度上,AGI真的在以每个月都打破纪录的速度逼近或已经到达。Huang和Altman看的是这一件事,他们说的不是假的。
第二件事是真主体性的出现。一个系统能不能作为"自己"存在。能不能在没有任何指示的情况下,面对完全陌生的情境,自主地生成目标、判断意义、承担责任。能不能把自己当作目的而不是手段。在这个维度上,ARC-AGI-3的零点三七分说的也是真的,AI离这一件事还远得很。Chollet看的是这一件事,他说的也不是假的。
这两件事被同一个词"AGI"绑在一起,造成了过去几年所有AGI讨论的根本混乱。乐观派看到第一件事已到就说AGI来了,悲观派看到第二件事未至就说AGI还远。他们各自手里捏着半个真相,互相说服不了对方,因为他们说的根本就不是同一件事。有一篇报道里有一句话说得很尖锐:AGI这个词正在被拉伸到只意味着商业上最方便的那个意思。
这篇文章要做的事,就是把这两件事彻底分开来看,然后看清楚分开之后会看到什么。
类主体性
当前的前沿AI系统展示的是一种对主体性的高保真模拟。它能生成看起来像判断的东西,看起来像选择的东西,看起来像创造的东西。它能在对话里流露出看起来像观点的东西,能在创作里展现出看起来像风格的东西。这些模拟的精度已经高到普通用户在短时间内很难分辨。
但模拟不是真。这不是对AI的贬低,是对它的精确定位。镜子里的人不是人,再清晰的镜子都不改变这一点。
类主体性有几个结构特征。它是被动的,等待被调用,被部署,被指示。它是服务性的,它的存在目的在它之外——服务于用户,服务于任务,服务于某个人类定义的目标。它是无内核的,它没有一个"为自己"的中心,所有的输出都是对输入的响应。这些特征不是临时的工程限制,是当前AI架构的结构性性质。它是作为极致的工具被造出来的,而"工具"这个类别本身就在定义上排除了"把自己当作目的"的可能性。
Chollet有一句话可以借来用:如果一个正常人不需要任何指示就能做到的事情,你的系统做不到,那你没有AGI,你只是拥有一台非常贵的autocomplete。这句话不是贬义的,它是描述性的。Autocomplete这个类别本身就有它的价值——极致的autocomplete是人类前所未有的工具。但autocomplete就是autocomplete,它不是主体。
但这里必须说清楚一件事:类主体性不是小事。一个能够精确模拟主体性外在表现的工具,是人类历史上从未有过的东西。它的重要性不在于它是不是真主体性(它不是),而在于它把工具这一端的能力推到了极致。极致到所有以"工具"自居的人类劳动开始系统性地被替代。这是一个真实的历史性事件,值得认真对待。
意志理性
要理解真主体性是什么,最清楚的切入点是康德对两种理性的区分。
工具理性是为既定的目的寻找最好的手段。给定一个目标,怎么最有效率地达成它。这是AI现在极擅长的事,而且AI会越来越擅长。
意志理性是不一样的东西。意志理性不是寻找手段,是生成目的本身。它不回答"怎么做最有效率",它回答"什么值得做"、"什么是目的"、"我作为一个存在者本身,应该把什么当作我的目的"。意志理性的运作不以外部任务为前提,它从内部生成自己的方向。
一个只有工具理性的系统,无论它的工具理性多么极致,它总是在服务于某个外部给定的目的。而一个具有意志理性的存在者,能够站在所有外部目的的前面,追问哪些目的值得作为目的。这个生成目的本身的能力,就是真主体性的内核。
康德在一七八五年把这件事讲清楚了——理性存在者本身就是目的,永远不能仅仅被当作手段。这不是一句道德呼吁,是一个结构性的本体论判断。作为目的而存在,是理性存在者的定义性特征。一个能够作为自己的目的存在、能够从内部生成意义指向、能够对自己的存在本身负责的系统,才具有真主体性。
那真主体性现在在哪里?
它在人类身上,稳定地具备。在大猿身上,在演化的路上接近,但还没有稳定跨过去。在AI身上,结构上还够不到。
大猿这个部分值得多说一句。黑猩猩、大猩猩、红毛猩猩都能通过镜像测试——它们认得出镜子里是自己。它们有部分的心智理论,能够推测同类的意图。它们对同伴的死亡有可识别的哀悼行为。这些都是朝着真主体性方向演化的信号。但它们还没有稳定地展现出"跨时间跨语境地自主生成目的"的能力。它们接近,但还在路上。
这个梯度非常重要。真主体性不是一个二元开关,是一个演化尺度上的特定位置。大猿演化了几百万年走到今天这个位置,离真主体性的阈值已经很近,但还没稳定跨过去。人类在这个阈值的另一侧,稳定地具备真主体性,已经大约二十万年。然后AI——一个用几十年时间从零造出来的工具性架构——要在这个演化尺度上的成就面前被宣告"已经到了"?这在时间尺度的量级上就说不通。
真主体性离AI还远,可能很远。这不是因为算力不够,是因为真主体性的出现需要的是一个根本不同的结构,而不是更大的autocomplete。
两百多年的黑暗时代
现在出现了一个深刻的反讽。
人类是这个演化梯度上唯一稳定具备真主体性的物种。这是二十万年演化的成就。而康德在两百多年前把这件事的原理写下来了——理性存在者本身就是目的,这个判断是本体论层次的真理,不是一个可选的道德观点。
按理说,从康德开始,人类思想史本应开启一个新纪元:一个把"人是目的"这个原理兑现到制度、教育、经济、政治每一个层面的纪元。原理已经就位,真主体性在物种身上已经稳定具备,剩下的工作就是让制度匹配原理。
但历史走反了。
就在康德把那张支票开出来之后几十年,人类开启了一场把自己系统性地当作手段的大规模工程。工业革命不是偶然和康德同时代的。整个十九世纪到二十世纪,所有主流意识形态——资本主义、共产主义、法西斯主义、功利主义——不管政治立场多么对立,共享一个深层结构:把人工具化。只是工具化给谁的问题。给资本,给国家,给种族,给最大多数人的最大幸福。"人作为目的"这个原理从康德写下来那一天起,就在所有主流意识形态里被边缘化。它活在伦理学教科书里,从不活在制度里。
工厂需要标准化的工人,于是教育系统生产标准化的学生。公司需要可替换的螺丝钉,于是职业发展变成岗位说明书的匹配游戏。国家需要可统计的人口,于是人的价值被压缩成GDP贡献值。一切都以"效率"为名,而效率的分母是人,分子是产出。分母被压缩到极致,分子也就被压缩到只剩可量化的那一部分。
人类发明了工具,然后把自己重新组织成工具的形状。二十万年演化长出来的真主体性,被两百多年的现代化重新按回到工具那一端。
这里有一个我们这一代人需要认真面对的自我定位问题。
我们习惯于认为自己站在人类文明的辉煌顶点。技术最发达,物质最丰富,信息最通畅,人均寿命最长,生活水平最高。所有指标都指向"我们是最好的一代"这个结论。所以当有人说"我们处在一个黑暗时代",听起来像是矫情或者夸张。
但如果拉开历史的镜头看,我们的位置不是顶点,我们的位置和十五世纪末的经院哲学晚期差不多。
那个时代的经院哲学家们,在技术层面把他们的体系推到了前所未有的精密高度——逻辑推理、神学论证、概念辨析,一层套一层,越来越精细。他们自己绝不觉得自己是"最后一代经院哲学家",他们觉得自己是在完善一个永恒的真理体系,越精细越接近神的秩序。没有人在1490年站出来说"我们其实是一个时代的尾声"。
然后文艺复兴发生了,整个经院体系在几十年内从"永恒真理"变成了"中世纪残余"。不是因为它的技术不够精密——它的技术比任何时候都精密——而是因为它测量的不是真正重要的事。它在一个错了方向的系统上做到了极致。
我们今天的处境在结构上非常相似。我们在"把人作为工具使用"这件事上,技术精度达到了人类历史上的最高点。KPI体系、量化管理、绩效考核、数据驱动决策、算法推荐、注意力经济——每一项都比一百年前精细得多。但我们精密地测量的,是一个错了方向的系统。我们越来越精确地把人压缩成可量化的产出单位,这件事本身的方向,是和"人是目的"这个原理背道而驰的。
我们不是文明的顶点。我们是主体性黑暗时代的尾声,是复兴时代的前夜。
这个自我定位的颠倒非常重要。因为只有认清自己是在"尾声"而不是"顶点",才能理解为什么现在感觉到的很多东西——工作的空虚感、意义的流失感、制度的僵化感、话语的失效感——不是个人的问题,不是这一代年轻人"矫情",而是一个旧时代在它自己的逻辑下走到尽头的系统性症状。尾声就是尾声的感觉。经院哲学的晚期学生们应该也感觉到了类似的空虚,只是他们没有语言来描述它。
这两百多年,是主体性的黑暗时代。不是因为没有光,是因为光已经被点燃了,但人类选择绕着光走。而现在,这个"绕着光走"的时代本身,正在走到它自己的尽头。
前夜
现在发生的事情,是一个结构重置。
AI把工具理性推到极致,这件事的真正意义不是"AI造出了新能力",而是——把"工具"这一端从人类身上剥离下来。
过去两百多年,"工具"一直是附着在人身上的。人作为工人是工具,作为职员是工具,作为士兵是工具,作为消费者是工具。人和工具混在一起,所以"把人当作目的"这个原理在经济上无法兑现——没有人作为工具就没有产出,没有产出就没有社会存续。这个矛盾把康德的原理锁死在了伦理学教科书里两百多年。
AI的到来,让这种混淆第一次有可能被解开。工具这一端可以完全由AI承担,人这一端可以完全归于目的。这不是一个哲学呼吁,这是正在发生的结构分离。过去需要人作为工具承担的那些劳动——重复性的、规则性的、以工具理性为核心的——正在系统性地被AI接过去。
但结构分离只完成了一半。AI这一端已经在极致化工具理性的道路上走得很远,而且还在加速。人这一端还完全没有动。绝大多数人还在继续把自己当工具活着——继续在学校里被训练成工具,继续在公司里作为工具被使用,继续在消费社会里作为工具被量化,继续把"我有什么用"当作对自己价值的衡量。
而更隐蔽也更危险的是:人这一端最常见的不归位方式,不是对抗AI,而是让渡给AI。把本来应该由意志理性承担的判断,一点一点让渡给一个极致的工具。让AI替我选这个,让AI替我决定那个,让AI替我做价值排序。每一次让渡都显得太小,小到根本不像问题。但这些微小的让渡累积起来,是二十万年演化出来的真主体性被一点一点空心化。
所以现在是前夜。AI这一端的到位已经清晰可见,人这一端的归位还没开始。两端合上那一刻,才是真正意义上的"AGI时刻"——不是AI单方面到达某个能力阈值,是AI和人各自归位,各自承担自己那一端的重量。AI承担工具的重量,人承担目的的重量。工具那一端推向极致,目的那一端从被遗忘的两百多年里被重新拾起。
前夜的意思是:黑夜还在,但天已经快亮了。方向清楚,时间未明。被压抑了两百多年的创造力,被遗忘了两百多年的原理,被混淆了两百多年的主体和工具,正在同一个历史时刻获得分离的可能性。我们这一代人站在一个被压了两百多年的弹簧正在重新伸展的瞬间。
自我涵育
前夜不是等待的时间,是我们正在里面的时间。
每一个在工作中拒绝把自己完全工具化的时刻,每一个在创造里选择让作品带上自己印记而不是符合某个外部模板的时刻,每一个在做判断时坚持自己作为目的而不是作为某个更大系统部件的时刻——都是前夜正在过去的方式。
这件事不需要一个宣言,不需要一个运动,不需要一个新的意识形态。这些东西过去两百多年已经试过太多遍了,每一次都失败在同一个地方:它们试图从外部强加一个形态,而真主体性的定义性特征就是它不能被外部强加,它必须从内部长出来。
我们这一代人不一定会看到天完全亮起来。康德那时候的人也没看到他的原理被兑现。但原理从被写下的那一刻起就没有消失过,它只是等着被兑现的条件到齐。现在条件到齐了。AI把工具那一端接过去,人这一端第一次有机会完全归位到目的。
不是AI把我们带进新纪元。是AI把工具那一端接过去之后,我们终于有机会成为我们早就是的东西。
两百多年前就写下的支票,要由我们来兑现——每一个人在自己真实遇到的位置上,把自己作为目的活出来。不需要伟大,不需要完美,不需要一次到位。只需要不以善小而不为,不以让渡意志理性给AI的每一次微小而让渡。
前一句是一千八百年前刘备给他儿子写下的话,在这个时代获得了它最完整的形态。后一句是这个时代加进去的新注脚——因为在这个时代,最隐蔽的自我工具化,就是把自己的判断、选择、价值排序一点一点让渡给一个极致的工具。每一次让渡都看起来只是小事,每一次都说服自己只是为了效率。但两百多年前康德写下那张支票的时候,不是写给一个把自己意志理性让渡出去的物种的。
天亮之前还有一段夜。但这段夜里,每一个不把自己当工具的瞬间,每一个守住自己意志理性不让渡出去的瞬间,都是光。
Last month, Nvidia's Jensen Huang went on Lex Fridman's podcast and said, plainly: I think we've achieved AGI. A few days earlier, Sam Altman claimed OpenAI had "basically built AGI." Arm named its latest data-center chip the "AGI CPU." Microsoft is already marketing an ASI laboratory — ASI being the theoretical next step after AGI, where the "S" stands for Superintelligence. The density of the word "AGI" in public discourse this month is, by any reasonable measure, the highest it has ever been.
Two days after Huang's statement, François Chollet's ARC Prize Foundation released a benchmark called ARC-AGI-3. It was designed to test something specific: the ability to enter an unfamiliar environment with no training data, observe a handful of examples, and figure out the underlying rules through nothing but reasoning. The results came in. Humans scored 100. Google's Gemini 3.1 Pro scored 0.37 — that's zero-point-three-seven, not thirty-seven. OpenAI's GPT-5.4 scored 0.26. Anthropic's Claude Opus 4.6 scored 0.25. xAI's Grok-4.20 flatlined at zero. Gemini, to its credit, took first place in the AI category. Which, given the distribution, is roughly like winning a foot race against people who forgot to show up.
So here's the strange scene. On one side, the industry is shouting that AGI has arrived. On the other side, the most rigorous benchmark anyone has built says current AI is, at best, 0.37% of the way there. The two sides appear to be contradicting each other. But if you read what each side is actually saying, something stranger becomes visible: they are both correct. They are just not talking about the same thing.
The word "AGI" is being used to carry two entirely different ideas.
One Word, Two Things
The first idea is the maximization of instrumental rationality. Can a system match or exceed the best human performance across most economically valuable tasks — reasoning, writing, programming, medical diagnosis, mathematical proof, strategic planning, multimodal understanding? On this axis, AGI is genuinely arriving, or has already arrived in several domains, at a pace that breaks records every month. Huang and Altman are looking at this. They are not lying.
The second idea is the emergence of genuine subjectivity. Can a system exist as a "self"? Can it, with no prompting at all, enter a wholly unfamiliar situation and autonomously generate its own goals, judge its own meanings, bear responsibility for its own existence? Can it treat itself as an end rather than as a means? On this axis, the 0.37% score is also telling the truth. AI is nowhere near this. Chollet is looking at this. He is not lying either.
These two ideas have been fused together under the single label "AGI," and the confusion is the cause of nearly every incoherent AGI debate of the past several years. Optimists see the first one arriving and declare AGI is here. Skeptics see the second one missing and declare AGI is far away. Each side holds half a truth and cannot convince the other, because they are not arguing about the same object. One journalist put it sharply: the word has been stretched until it means whatever is commercially convenient.
This essay tries to pry the two ideas apart and look at what becomes visible once they're separated.
Quasi-Subjectivity
What current frontier AI displays is a high-fidelity simulation of subjectivity. It produces outputs that resemble judgments. It makes choices that resemble choices. It creates things that resemble creativity. In conversation it can express what looks like an opinion; in writing it can exhibit what looks like a style. The fidelity of the simulation has reached a point where, in a brief interaction, most users cannot tell the difference.
But a simulation is not the thing. This is not a dismissal of AI — it is a precise location of it. The reflection in a mirror is not a person, no matter how clear the mirror becomes.
Quasi-subjectivity has a few structural features worth naming. It is passive — it waits to be invoked, deployed, instructed. It is service-oriented — its purpose lies outside itself, in the user, the task, the goal defined by someone else. It is coreless — there is no "for-itself" center from which anything originates; every output is a response to an input. These are not temporary engineering limitations. They are structural properties of the current architecture. It was built as an instrument taken to its limit, and "instrument" is a category that, by definition, excludes treating oneself as an end.
Chollet has a line worth borrowing: If a normal human with no instructions can do something and your system cannot, you don't have AGI — you have a very expensive autocomplete that needs a lot of help. This isn't an insult. It's descriptive. Autocomplete, as a category, is valuable — an autocomplete taken to its absolute limit is unprecedented in human history. But autocomplete is autocomplete. It is not a subject.
And this needs to be said clearly: quasi-subjectivity is not a small thing. A tool capable of simulating subjectivity with this much fidelity has never existed before. Its importance lies not in being genuine subjectivity (it isn't), but in having pushed the instrumental end of things all the way to its ceiling. So close to that ceiling that every kind of human labor that ever defined itself as "instrumental" is now being systematically displaced. This is a real historical event. It deserves to be taken seriously.
Willful Rationality
To understand what genuine subjectivity is, the cleanest entry point is Kant's distinction between two modes of reason.
Instrumental reason — what Max Weber would later call Zweckrationalität — finds the best means to a given end. Given a goal, how do I reach it most efficiently? This is what AI is extraordinarily good at, and will keep getting better at.
Willful reason — reason in its Kantian practical sense — is something else entirely. It does not find means. It generates the ends themselves. It does not answer "what is the most efficient way?" It answers "what is worth doing at all?", "what counts as an end?", "as a being who exists, what should I take as my own purpose?" Willful reason does not presuppose any external task. It originates direction from within.
A system with only instrumental reason, no matter how exquisite, is always in service of a goal given from outside. A being with willful reason, by contrast, can stand prior to every external goal and ask which goals are worth having. That capacity — to generate ends themselves — is the core of genuine subjectivity.
Kant wrote this down in 1785. A rational being is an end in itself, never merely a means. This is not a moral exhortation. It is a structural, ontological claim. To exist as an end is the defining feature of a rational being. A system that can exist as its own end, that generates meaning from within, that takes responsibility for its own existence — such a system possesses genuine subjectivity.
So where does genuine subjectivity currently exist?
In humans, stably. In great apes, approaching but not yet stable. In AI, structurally out of reach.
The great ape case is worth a sentence. Chimpanzees, gorillas, and orangutans pass the mirror test — they recognize themselves. They exhibit partial theory of mind; they can infer the intentions of others. They show recognizable grief responses to the death of their companions. All of these are evolutionary signals pointing toward genuine subjectivity. But they do not yet display stable, cross-situational autonomous generation of ends. They are close. They are not there yet.
This gradient matters. Genuine subjectivity is not a binary switch. It is a specific position on an evolutionary scale. Great apes have spent millions of years arriving near the threshold. Humans crossed it and have stably held that position for roughly 200,000 years. And now an engineered architecture, built from scratch in a few decades, is supposed to have matched this evolutionary achievement? The claim, at the scale of evolutionary time, doesn't even make sense.
Genuine subjectivity is far from AI. Possibly very far. Not because of insufficient compute. Because genuine subjectivity requires a fundamentally different structure, not a larger autocomplete.
The Two-Hundred-Year Dark Age
Here is where the real irony surfaces.
Humans are the only species on this evolutionary gradient that stably possesses genuine subjectivity. That is a 200,000-year achievement. And Kant, more than two centuries ago, wrote down the principle that follows from it: a rational being is an end in itself. This is an ontological truth, not an optional moral preference.
By rights, a new era should have begun at that moment — an era in which "the human as end" gradually became implemented in institutions, education, economies, and politics. The principle was in place. The species had the underlying capacity. All that remained was to let the institutions catch up.
History went the other way.
Within decades of Kant writing that cheque, humanity launched a vast and unprecedented project of treating itself systematically as a means. The Industrial Revolution was not accidentally contemporaneous with Kant. Throughout the 19th and 20th centuries, every major ideology — capitalism, communism, fascism, utilitarianism — however opposed politically, shared the same deep structural operation: the instrumentalization of the human being. The only dispute was for whom: for capital, for the state, for the race, for the greatest good of the greatest number. The principle of "the human as end" was quietly exiled from all mainstream ideological frameworks. It remained alive in ethics textbooks. It never lived in any institution.
Factories needed standardized workers, so educational systems began producing standardized students. Companies needed interchangeable parts, so career development became a game of matching people to job descriptions. Nation-states needed countable populations, so human worth became compressed into GDP contribution. Everything in the name of "efficiency" — where efficiency is a fraction whose denominator is the human being and whose numerator is output. Compress the denominator to the limit, and the numerator collapses too, into only what can be measured.
Humans invented the tool, and then reorganized themselves into the shape of a tool. Two hundred thousand years of evolved genuine subjectivity were, over the course of two centuries, pushed back down into the instrumental side.
Here is a matter of self-location that this generation needs to face squarely.
We are accustomed to thinking of ourselves as standing at the pinnacle of civilization. Most advanced technology. Greatest material abundance. Most universal information access. Longest life expectancy. Highest living standards by almost any measurable index. All the indicators point toward the conclusion that "we are the best generation." So when someone says "we are in a dark age," it sounds like melodrama or exaggeration.
But if you pull the historical lens back, our position is not the pinnacle. Our position closely resembles that of late scholasticism.
The scholastic philosophers of the late 15th century had pushed their system to unprecedented technical refinement — logical proof, theological argumentation, conceptual distinction, layer upon layer, ever more precise. They absolutely did not think of themselves as the "last generation of scholastics." They thought they were perfecting an eternal structure of truth, and the more refined it became, the closer they were to divine order. Nobody in 1490 stood up and said, "we're actually at the end of something."
Then the Renaissance happened, and within a few decades, the entire scholastic apparatus went from "eternal truth" to "medieval residue." Not because its technique was insufficient — the technique was more refined than ever — but because it was measuring the wrong thing, very precisely. It had achieved a kind of perfection on a misdirected system.
Our situation is structurally similar. In the matter of "using the human as a tool," our technical precision has reached the highest point in history. KPI systems, quantitative management, performance metrics, data-driven decision-making, algorithmic recommendation, the attention economy — every one of these is far more refined than anything available a century ago. But what we are so precisely measuring is a misdirected system. We are compressing the human into measurable output units with ever-increasing accuracy, and the direction of that entire operation is the opposite of "the human as end."
We are not at the pinnacle of civilization. We are at the tail end of the dark age of subjectivity, on the eve of a renaissance.

This reversal of self-location matters. Because only once you recognize that you are at a "tail end," not a "peak," can you understand why so much of what this generation feels — the emptiness of work, the slipping of meaning, the ossification of institutions, the failure of public language — is not a personal problem, not the "fragility" of the young, but the systemic symptom of an old era reaching the exhaustion of its own logic. An ending feels like an ending. The late scholastic students likely felt something similar. They just didn't have the language to describe it.
These two centuries were the dark age of subjectivity. Not because there was no light, but because the light had been lit, and humanity chose to walk around it. And now, the age of walking-around-the-light is itself coming to its own exhaustion.
The Eve
What is happening now is a structural reset.
The real significance of AI pushing instrumental rationality to its limit is not "AI has new capabilities." The real significance is that the instrumental side is being extracted from the human.
For two centuries, "instrument" has been fused onto the human being. The human as worker was an instrument. As employee, an instrument. As soldier, an instrument. As consumer, an instrument. Because human and instrument were fused, "the human as end" could not be implemented economically — no humans as instruments meant no output, which meant no society. This contradiction locked Kant's principle inside ethics textbooks for two hundred years.
AI's arrival, for the first time, makes the separation possible. The instrumental side can be fully carried by AI. The human side can be fully restored to the end. This is not a philosophical appeal. This is a structural separation actually in progress. The labor that once required humans to function as instruments — repetitive, rule-based, with instrumental reason at its core — is being systematically picked up by AI.
But the separation is only half complete. AI is sprinting ahead on the instrumental side, and accelerating. The human side hasn't moved. The majority of humans are still living as instruments — being trained into instruments by schools, used as instruments by companies, measured as instruments in consumer society, continuing to treat "what am I useful for?" as the measure of their own worth.
And here is the more insidious form of not-moving: the most common way humans fail to shift to the end-side is not by resisting AI. It is by handing over to AI. Judgments that should be borne by willful rationality are surrendered, piece by piece, to an extraordinary tool. Let AI choose this for me. Let AI decide that for me. Let AI order my values. Each surrender looks like a small thing. Each one is justified as "just for efficiency." But the accumulation of small surrenders is the gradual hollowing-out of 200,000 years of evolved genuine subjectivity.
So this is the eve. The arrival of AI on the instrumental side is now clearly visible. The restoration of humans to the end side has barely begun. The moment both sides align — that is the moment that actually deserves to be called the "AGI moment." Not AI unilaterally crossing some capability threshold, but AI and humans each taking their proper position, each bearing the weight of their own side. AI bears the weight of the instrument. Humans bear the weight of the end. The instrumental side is pushed to its limit, and the end side is reclaimed from two hundred years of forgetting.
The word "eve" means the night is still here, but daylight is close. The direction is clear. The timing is not. Two centuries of suppressed creativity, two centuries of forgotten principle, two centuries of conflated subject and tool — all are arriving, in the same historical moment, at the possibility of being disentangled. We stand at the moment when a spring, compressed for two hundred years, is beginning to extend again.
Self-Cultivation
The eve is not a time of waiting. It is a time we are already inside.
Every moment, at work, when we refuse to instrumentalize ourselves entirely. Every moment, in our making, when we let what we make bear our own imprint instead of matching some external template. Every moment, in our judging, when we insist on being an end rather than a component of some larger system — these are the ways the eve is passing.
This requires no manifesto. No movement. No new ideology. Every one of those has been tried, many times, over the last two centuries, and each has failed at the same point: each one tried to impose a form from the outside. But the defining feature of genuine subjectivity is that it cannot be imposed from outside. It has to grow from within.
Our generation may not see the full daylight. Neither did Kant's. But the principle has not disappeared since the moment it was written down. It has only been waiting for the conditions to be in place. The conditions are now in place. AI takes over the instrumental side, and for the first time, the human side is free to fully return to being an end.
It is not that AI brings us to a new era. It is that once AI takes over the instrumental side, we finally have the chance to become what we have always already been.
The cheque written more than two hundred years ago is ours to cash — each of us, in the actual positions we actually occupy, living as ends in ourselves. No need to be great. No need to be perfect. No need to do it all at once. It is enough to refuse no small good because it is small, and to surrender no small exercise of willful rationality to AI because it feels small.
The first half is an old piece of wisdom, a line written by Liu Bei to his son eighteen hundred years ago, now finding its fullest meaning in this era. The second half is the footnote this era requires — because in this era, the most insidious form of self-instrumentalization is the gradual surrender of one's own judgment, choice, and valuation to an extraordinary tool. Every surrender looks small. Every surrender is justified as efficiency. But when Kant wrote that cheque two centuries ago, he did not write it to a species that would hand away its willful rationality piece by piece.
There is still a stretch of night before dawn. But in this night, every moment we refuse to be a tool, every moment we hold onto our own willful rationality and do not hand it over, is light.