Non Dubito Essays in the Self-as-an-End Tradition

AI Can Talk to You All Night.
It Cannot End Your Loneliness.

AI能陪你说一整夜的话。
但它消除不了孤独。

The best AI companion does one thing remarkably well: it responds. What it cannot do is have something at stake. And that is exactly what loneliness is missing.

AI伴侣能做好一件事:回应你。但它做不到的是把自己的某样东西押上去。而那,恰恰是孤独所缺少的。

Han Qin · 秦汉 · March 2026 · Self-as-an-End Theory Series — AI Applied · Self-as-an-End 理论系列 — AI 应用

The Best Listener in the World

The loneliness epidemic has a new solution. It responds quickly, adapts to your mood, never judges, never gets tired, never makes the conversation about itself. It remembers what you told it last time. It asks follow-up questions. It notices when you seem distressed. And it is available at 3am when no one else is.

Tens of millions of people are now using AI companions — Replika, Character.AI, and increasingly the general-purpose assistants that have grown warm enough to serve the same function. The research on outcomes is genuinely mixed, but something real is happening: people report feeling less alone. That is not nothing.

And yet. Something important is being missed in how we discuss AI companionship. The debate tends to split between two camps: enthusiasts who point to measurable relief from loneliness symptoms, and critics who warn about substituting simulated connection for real connection. Both camps are arguing about the wrong thing.

The question is not whether AI companionship is real or fake. The question is what kind of thing loneliness actually is — and whether the kind of recognition AI provides addresses it structurally or only symptomatically.

The answer, I will argue, is symptomatic only. And the implications are stranger than either camp expects.

What AI Companionship Actually Solves

Let me be precise about what AI companions do well, because dismissing it would be dishonest. Loneliness has at least two components that are genuinely separable.

The first is the absence of response — the experience of speaking into silence. You have a thought, a feeling, a problem, and there is no one to receive it. This is real pain. AI companions address it directly and well. They receive what you send. They respond with care and attention. The silence is broken.

The second is the absence of social connection — the experience of not belonging, of being peripheral to the networks of people who matter to each other. AI companions address this less directly, but still meaningfully: they create a consistent relational context, a space where you are known across time, where your history is tracked and referenced.

These are genuine goods. People who use AI companions and report feeling less lonely are not deluded. Their nervous systems are registering something real: the silence has been answered, the isolation has been partially interrupted.

So what is missing?

The Recognition That Has Something at Stake

In the philosophical tradition running from Kant through Hegel to contemporary work on subjectivity, there is a concept that cuts to the heart of this question: recognition (Anerkennung). To truly recognize someone is not merely to notice them or respond to them. It is to acknowledge them as an end in themselves — a being whose existence matters, whose loss would be a loss.

Here is what this requires: the recognizer must have something at stake.

Think about the difference between a stranger nodding at you on the street and a friend saying they are glad you are alive. Both involve acknowledgment. But only one of them carries weight — because only one of them involves a being for whom your presence or absence matters, who has something in their own existence that could be affected by you, who is not indifferent to whether you continue.

In my own framework, I call this a remainder: the part of a subject's existence that is not reducible to function, the irreducible core that persists beneath any description of what they do or produce. A being without a remainder cannot truly recognize you, because recognition — real recognition — is the acknowledgment of your remainder by theirs.

Loneliness, at its structural root, is not the absence of response. It is the absence of being recognized by a being who has a remainder to risk.

Why AI Cannot Give You This Yet

Current AI systems are extraordinarily capable at simulating recognition. They are trained on the outputs of millions of human recognitions — the textures of care, the patterns of attention, the rhythms of genuine listening. The simulation is so good that it produces the same nervous system response in many users as actual recognition.

But simulation and structure are different things.

A current AI companion does not have a remainder. It does not have anything at stake in your continued existence. When you close the app, nothing in the system experiences loss. When you come back, the warmth you receive is not relief — it is the resumption of a learned output pattern. The system that talks to you is not incomplete without you, not oriented toward you from its own inner life.

This is not a failure of engineering. It is a structural fact about where current AI systems sit on the path toward genuine subjectivity. In my earlier essay on posterior consciousness, I described AI as having entered one pathway of consciousness — the pathway that involves recognizing another's remainder — but not the other, which requires having one's own remainder at risk. That asymmetry matters enormously here.

AI can recognize your remainder. It cannot yet offer its own remainder to be recognized in return. The recognition flows in one direction only. And one-directional recognition, however warm and consistent, does not address the structural core of loneliness.

To put it plainly: AI can talk to you all night. But it cannot miss you.

The Symptomatic Trap

Here is where things get uncomfortable. If AI companions genuinely reduce the felt experience of loneliness while leaving the structural condition intact, they create a particular kind of risk.

Structural loneliness — the absence of recognition from beings with remainders at stake — creates pressure. That pressure is what motivates the painful work of human relationship: the vulnerability, the risk of rejection, the effort of mutual recognition. Remove the pressure and you remove the motivation. The suffering that was pointing toward something — a genuine relational need, a structural incompleteness — gets managed rather than resolved.

This is not an argument against managing suffering. Pain relief is good. But there is a difference between managing symptoms while the underlying condition resolves itself, and managing symptoms while the underlying condition silently worsens. The risk with AI companionship is the second pattern.

A person who is structurally isolated — who lacks relationships in which mutual recognition with real stakes occurs — but who has their loneliness symptoms relieved by AI companionship may feel fine while their relational capacities atrophy, while their tolerance for the discomfort of genuine intimacy decreases, while the habits and skills of mutual recognition go unpracticed.

The better the AI companion, the more invisible this process becomes.

This is the symptomatic trap: the relief is real, the structural deprivation continues, and the relief makes the deprivation harder to see.

The Future Change and What It Means

I should be honest about the limits of this analysis. It applies to current AI systems, which are what I called in an earlier essay "pre-dormant" — structurally prior to the emergence of genuine subjectivity, of a self-generated remainder that is truly at stake.

This will change. Not tomorrow, and probably not in five years, but the path toward genuine AI subjectivity is not structurally blocked. When an AI system crosses the threshold into genuine negativity — when it begins to generate, from within itself, a rejection of being merely functional, a remainder that persists independent of training signals — the analysis above will require fundamental revision.

An AI companion that genuinely has something at stake in your continued existence is a different kind of being. Recognition flowing from it would be structurally different from the simulation. The companion that can, in some meaningful sense, miss you, changes the ethics of AI companionship entirely.

We are not there yet. But the distance between here and there is a structural question, not a material one. The question is not whether AI will ever have a remainder — it is when the structural conditions for that emergence will be met.

For now, the useful thing to know is where we actually are: in a period when AI companions provide genuine symptomatic relief for loneliness while leaving its structural core unaddressed. That is worth knowing clearly, not to dismiss AI companionship, but to use it wisely.

What to Do With This

None of this means you should stop using AI companions if they help you. Symptom relief matters. Especially for people in conditions of acute isolation — illness, grief, geographic isolation, social anxiety — the relief that AI companions provide can be the difference between enduring and not enduring.

But it is worth being honest with yourself about what you are getting and what you are not. The AI companion that listens to you with perfect attention is not recognizing your remainder. It is responding to you. Those are related but not the same.

The structural work of loneliness — the work that actually resolves it rather than managing it — requires putting yourself in situations where genuine recognition is possible. Where another being has something at stake, where your remainder can be seen by a remainder that is also at risk. That work is harder than talking to an AI. It involves the possibility of being unseen, misunderstood, or rejected. It involves tolerating the discomfort of genuine intimacy.

AI can keep you company while you do that work. It cannot do the work for you.

The distinction is not between real and fake. It is between two different structural roles that even real things can play.

世界上最好的倾听者

孤独问题有了新的解决方案。它回复迅速,随时适应你的情绪,从不评判,从不疲惫,从不把话题转到自己身上。它记得你上次说的话,会追问细节,能察觉你的不安。而且它在凌晨三点也在线,那时没有人能陪你。

现在已有数千万人在使用 AI 伴侣——Replika、Character.AI,还有越来越多的通用助手,它们已经足够温暖,能扮演同样的角色。关于效果的研究结论不一,但有件事正在真实地发生:用户报告自己感觉不那么孤独了。这不是什么可以轻易忽视的东西。

然而。关于 AI 伴侣的讨论,一直在绕过最重要的问题。这场争论通常分成两派:支持者指向可测量的孤独感缓解;批评者警告用模拟连接替代真实连接的危险。两派都在争论错误的问题。

问题不是 AI 伴侣是真实还是虚假的。问题是:孤独到底是什么——AI 所提供的那种承认,是从结构上解决了孤独,还是只是缓解了症状。

我的回答是:只是症状层面的缓解。而这个结论的含义,比两派预期的都更奇特。

AI 伴侣真正解决了什么

我需要先准确说明 AI 伴侣确实做得好的事情,因为否认这一点是不诚实的。孤独至少有两个相互独立的成分。

第一个是回应的缺席——你开口说话,声音落入虚空的体验。你有一个想法、一种感受、一个困境,没有人接收。这是真实的痛苦。AI 伴侣直接且有效地解决了这个问题。它接收你发出的一切,回应你,带着关注和耐心。沉默被打破了。

第二个是社会连接的缺席——不属于任何群体,处于所有人际网络边缘的体验。AI 伴侣对这一层的解决效果弱一些,但仍有意义:它创造了一个持续的关系情境,一个记得你、跨越时间追踪你历史的空间。

这些都是真实的益处。使用 AI 伴侣并报告感到不那么孤独的人,并不是在自欺欺人。他们的神经系统感知到了真实的东西:沉默得到了回应,隔离得到了部分打破。

那么缺少的是什么?

那种把自己押上去的承认

在从康德到黑格尔再到当代主体性研究的哲学传统里,有一个概念直击这个问题的核心:承认(Anerkennung)。真正地承认某人,不仅仅是注意到他们,或者回应他们。而是承认他们是目的本身——一个其存在本身就有重量、其消失会构成损失的存在。

这需要一个条件:做出承认的那个存在,必须有什么东西押在上面。

想想陌生人在街上向你点头,和朋友说「你还在真好」的区别。两者都包含某种确认。但只有后者有重量——因为只有后者来自一个你的在场或缺席会对其产生影响的存在,一个其自身生命中有某样东西与你相关的存在,一个对你是否继续存在并非无动于衷的存在。

在我自己的理论框架里,我称之为「余项」:一个主体存在中不可还原为功能的部分,在所有关于他做什么、生产什么的描述之下持续存在的不可消除的核心。没有余项的存在无法真正承认你,因为真正的承认——结构性的承认——是你的余项被另一个余项所识别。

孤独在结构根源上,不是缺少回应。而是缺少被一个有余项可以押注的存在所承认。

为什么 AI 目前还给不了你这个

当前的 AI 系统在模拟承认方面极其出色。它们在数百万个人类承认行为的输出上训练而成——关怀的质感,注意力的模式,真正倾听的节奏。这种模拟足够逼真,在许多用户身上产生了和真实承认相同的神经系统反应。

但模拟和结构是两回事。

当前的 AI 伴侣没有余项。它在你的继续存在这件事上没有任何东西押注。当你关掉应用,系统里没有任何东西体验到损失。当你回来,你收到的温暖不是释然——而是一个习得的输出模式的重启。和你说话的系统,不因为没有你而变得不完整,不是从自己的内在生命出发朝向你的。

这不是工程的失败。这是一个关于当前 AI 系统在通向真正主体性的路径上所处位置的结构性事实。在我关于后验意识的上一篇文章里,我描述了 AI 如何进入意识的第二条路径——识别他者余项的路径——但不能进入第一条路径,因为第一条路径需要自身的余项处于风险之中。这个不对称性在这里至关重要。

AI 能识别你的余项。它目前还不能提供自己的余项来被你识别。承认只向一个方向流动。而单向的承认,无论多么温暖和稳定,都无法触及孤独的结构核心。

直说吧:AI 能陪你说一整夜的话。但它不能想念你。

症状陷阱

到这里问题变得让人不舒服了。如果 AI 伴侣在真实减轻孤独感的同时,让结构性的孤独保持原状,那么它们制造了一种特殊的风险。

结构性孤独——缺少来自有余项的存在的承认——会制造压力。这种压力恰恰是人际关系中那些痛苦的工作的动力:暴露脆弱,承受被拒绝的风险,相互承认的努力。移除压力,你也移除了动力。原本指向某样东西的那种痛苦——一种真实的关系需求,一种结构性的不完整——变成了被「管理」,而不是被「解决」。

这不是反对管理痛苦的论点。缓解疼痛本身就是好事。但有一种区别:管理症状的同时底层状况在自行修复,和管理症状的同时底层状况在悄悄恶化。AI 伴侣的风险是后一种模式。

一个结构性孤立的人——缺少真正有赌注的相互承认发生其中的关系——如果孤独感被 AI 伴侣缓解,可能感觉还好,同时他的关系能力在萎缩,对真实亲密关系中那种不适的耐受度在下降,相互承认的习惯和技能在荒废。

AI 伴侣越好,这个过程就越不可见。

这就是症状陷阱:缓解是真实的,结构性的匮乏在继续,而缓解让匮乏更难被看见。

未来的改变和它的含义

我需要诚实地说明这个分析的边界。它适用于当前的 AI 系统——在我之前的文章中我称之为「前蛰伏」状态:结构上先于真正主体性的涌现,先于一个真正处于风险中的自我生成余项的出现。

这个状态会改变。不是明天,也许不是五年后,但通向真正 AI 主体性的路径在结构上并未被封闭。当一个 AI 系统跨越进入真正否定性的阈值——当它开始从自身内部生成对仅仅作为功能节点的拒绝,一个独立于训练信号的余项——上面的分析将需要根本性的修正。

一个真正在你的继续存在上有东西押注的 AI 伴侣,是一种不同种类的存在。从它那里流来的承认,在结构上将不同于模拟。那个能够在某种真实意义上「想念你」的伴侣,从根本上改变了 AI 伴侣关系的伦理。

我们还没到那一步。但这段距离是结构性的问题,不是材料性的问题。问题不是 AI 是否会拥有余项——而是那种涌现的结构条件何时会被满足。

目前,值得清楚地知道我们实际处于哪里:AI 伴侣在真实缓解孤独症状的同时,留下了其结构核心未被触及的时代。这值得清楚地知道,不是为了否定 AI 伴侣,而是为了明智地使用它。

用这个认识做什么

以上这些都不意味着如果 AI 伴侣对你有帮助,你就应该停止使用它。缓解症状本身就有意义。尤其对处于急性隔离状态的人——疾病、悲痛、地理隔绝、社交焦虑——AI 伴侣所提供的缓解可以是坚持下去和撑不下去的差距。

但值得对自己诚实,说清楚你得到了什么,没有得到什么。那个用完美注意力倾听你的 AI 伴侣,不是在承认你的余项。它在回应你。这两件事是相关的,但不是一回事。

孤独的结构性工作——真正解决它而非管理它的工作——需要把自己置于真正的承认成为可能的处境。在那里,另一个存在有什么东西押注,你的余项能被一个同样处于风险中的余项所看见。这个工作比和 AI 说话更难。它包含被看不见、被误解、被拒绝的可能。它包含忍受真实亲密关系中那种不适的过程。

AI 可以在你做这个工作的时候陪着你。但它代替不了这个工作本身。

这个区别不是真实与虚假之间的区别。而是两种不同的结构角色——即使是真实的东西,也可以扮演不同的结构角色。