Transition and Social Dynamics of a post-coordination world

Published on August 27, 2025 10:23 PM GMT

[Feedback Welcome]

[Epistemic status] Many social dynamics are assumed without evidence. Causal attribution is weak in this domain. We restrict ourselves to widely acceptable claims where possible, and assume some basic universal social axioms. The goal is to paint one narrative of the post-coordination world, and to analyze it.

Social axiom

Social cost := the negative utility of deviating from the norm for a particular interaction. We assume that this cost is approximated by a well-ordering among social agents. We also assume that the number of people whose decisions are not influenced by social costs is negligible where appropriate.

Despite widespread use of AI, the number of people who feel that AI has made a significant impact on their day-to-day life is vanishingly small [assumed]. The disconnect can largely be traced to the way AI works. If it is agentic, at best it feels like a worse autofill. If it is not, it's a better search engine, or it writes text whose quality you don't value.

The effect of its use is abstracted away from social interactions, because there is a large social cost to mentioning its use to others: if you do, it's inferred that you do not value the quality of your writing. Thus you need to balance its use so it isn't obvious, and never mention it to others, since you likely want the appearance of caring about quality even if you don't care about quality itself.

I think humans are great at maintaining implicit contradictions, and it would not be an exaggeration to say that this is required for society to function as is.

According to surveys, the most impactful applications of technologies such as the internet and personal phones are the social ones. The unreasonable efficiency gains are either less attributable or simply less impactful to the median person's experience.
Therefore it is reasonable to assume that the first widespread social (as opposed to solitary) application of AI will precede the median person feeling that AI meaningfully affects their life. This was true for the internet, even with applications as primitive as email.

Grok may be the best example of an agent that attempts to meaningfully change social interactions in an attributable manner. However, its contribution is limited to serving as a rhetorically authoritative source. I do not believe that AIs have the verbal ability to manipulate discourse, or will have it in the near future.

Prediction

Social coordination is the most suitable and implementable use case for the current model status quo. Some factors in favor of this theory:

- Objectivity: coordination is finding a plan within constraints.
- Low/retrievable memory requirements: memory of repeated coordination is easy to retrieve.
- Low stakes: the interaction can be cancelled later or verified first.

Some factors against this theory:

- Value proposition: the median person may actually like social coordination, and even consider it a valuable part of their life.

Here is my narrative, in a style similar to AI-2027, of how social interactions could change over time.

Stage 1 - <1% Adoption

The first apps promising to be your personal assistant are here. These are primitive, because nobody knows how to incorporate them into your personal life. The agents are limited to a few domains where there is little preventing their automation. The adopters are AI enthusiasts who want to track the progress of the first agents. Nobody finds much use in an AI that reads your e-mail or books an Uber by voice rather than by typing into a text box with a pretty design. Private entities are still more worried about automated bots, and do not allow programmatic access to their services, stonewalling agents behind browsers.

Stage 2 - 1% Adoption

Companies start partnering with personal agent providers for authenticated access.
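One concrete shape these early authenticated-agent interactions could take is the shared cost-splitting ledger mentioned below. Here is a minimal sketch of the net-balance bookkeeping such an agent might keep; all names and the data layout are purely illustrative:

```python
from collections import defaultdict

def net_balances(expenses):
    """Compute each member's net position from a list of
    (payer, amount, participants) entries.
    Positive balance = the group owes you money."""
    balance = defaultdict(float)
    for payer, amount, participants in expenses:
        share = amount / len(participants)
        balance[payer] += amount       # payer fronted the full amount
        for person in participants:
            balance[person] -= share   # each participant owes their share
    return dict(balance)

# Example: a dinner and a cab, logged by the agent instead of
# being tracked manually across multiple services.
expenses = [
    ("alice", 90.0, ["alice", "bob", "carol"]),  # dinner, split 3 ways
    ("bob",   30.0, ["alice", "bob"]),           # cab, split 2 ways
]
```

Balances cancel to zero by construction, which is what makes a ledger like this easy for an agent to verify and settle without any party having to ask the others directly.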
For the first time in internet history, checks against programmatic access loosen, after decades of tightening automation restrictions. We will see the first novel and persistent agent interactions emerge. I believe consumer cost splitting, managed by the agent in a ledger (notes) instead of tracked manually across multiple services, will be the first. Persistent interactions will increase retention, and a small percentage of social groups will use it religiously. It is not easy to use, but stories will be written about the AI social revolution, if only because there is now something to write about.

Stage 3 - 5% Adoption

The applications of AI plateau here. More and more companies and coordination workflows have been integrated. Despite that, the reported impact of AI in surveys isn't very high. Providers compete on the tone and friendliness of their agents, mostly trying to emotionally addict you to your own solemn community. These strategies work on a small minority of the population, and still don't meaningfully affect social dynamics. Attempting to build another AI agent is considered a hopeless endeavor.

Stage 4 - 5% Adoption - Continued

The old era of agent providers either fades into obscurity or rebrands into something more. The next generation of personal agents explicitly avoids being mere utility tools. They will not just proactively remind you to wish someone a happy birthday or tell you about their life updates; they will start sharing information among themselves. If this sounds insane to you, think for a moment about how dating apps work. Not only does this include sharing information with others, it includes hiding failed interactions from both parties.

Axiomatic Description

As we have assumed, initiating any interaction has an associated social cost, learnt from the worry of persistent disfavor from others.
Allowing one agent to have a temporary interaction with another, and making both forget it, creates a moment of social risk-taking without the social cost. Tinder provides exactly this: swiping is social risk-taking, but your swipe only matters if the other person accepts it, i.e. there is no social cost to it. This is why, no matter how imbalanced or anti-consumer Tinder is, it won't be beaten by spontaneous interaction. It should be clear that, if made simple enough, this generalizes to many more useful social interactions.

Stage 5 - 10% Adoption

This is the inflection point in AI adoption among laypersons. For the first time, AI can affect their social lives in an attributable manner. Some social groups are now fully reliant on AI managing them. Adoption is resisted by a large vocal minority, signaling to themselves about the purity of human social interaction. AI as a social coordination layer starts trending. There is an abundance of viral posts about AI performing either surprisingly well or disastrously badly.

Stage 6 - 33% Adoption

The concept of having a personal agent eclipses AI as the public face of the technology. Criticism shifts from utility and enjoyment to the social implications of these systems. As the AI's influence on you grows, and anxieties about it grow faster still, we have the first case of information manipulation using these tools: agent providers were accepting money with the unstated purpose of modifying the agents' search results to benefit those who paid. This becomes the rallying call for AI regulation. It leads to nothing but news cycles.

Stage 7 - 67% Adoption

Despite negative news cycles, the adoption of personal agents does not take a noticeable hit. Social applications of AI now dominate inference platforms. A rough back-of-the-envelope calculation suggests around 10 quadrillion tokens per year used by personal agents, compared to roughly 500T total today, setting the stage for the largest scaling operation to date.
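The back-of-the-envelope figure can be checked directly; the 1B personal-agent users and 30k tokens per user per day are the post's own assumptions:

```python
# Napkin math under the post's assumptions: ~1B personal-agent users,
# each consuming ~30k tokens per day, over a full year.
users = 1_000_000_000
tokens_per_user_per_day = 30_000
tokens_per_year = users * tokens_per_user_per_day * 365

# ~1.1e16 tokens/year, i.e. roughly 10 quadrillion,
# about 20x the ~500T tokens/year cited for today.
print(f"{tokens_per_year:e} tokens/year")
print(f"{tokens_per_year / 500e12:.1f}x today's ~500T")
```

The exact product is 1.095e16, so "around 10 quadrillion" is the right order of magnitude and the implied scale-up over today's cited 500T is roughly 22x.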
The investment from this will be a major funding source for AGI. AI sentiment will be a lot more positive, and there will be strong support for building better small models. It is not clear to me whether bigger models will be considered an improvement, or seen as unfairly privileging the rich. If you believe that AGI should not be built, this will likely be a good source of tension you can exploit. However, your opponents will have much more money to throw at the problem too, unless you can build good open-source models that rival the social abilities of proprietary firms at better prices.

Estimate (napkin math): 1B users * 30k tokens/day * 365 days ≈ 10 quadrillion tokens/year.

The stages get more speculative from here.

Stage 8 - 90% Adoption

Businesses will start deploying agents to negotiate on their behalf. Mass marketing will be a fragment of the past. The complexity of agent-to-agent interactions will mirror that of humans. They can pick who you should talk to, who you should check in on, where you should lease, where you should buy food, and any other decision you're willing to delegate. I doubt it will ever answer worse than you would yourself: it can incorporate information that would carry too large a social cost for you to ask about. The minimum intelligence required for self-sufficiency decreases for the first time in modern history. People's lives are just parameters to modify in a graph, for a not particularly intelligent model. How those parameters change the world can be differentiated no matter what you do.

Stage 9 - 95% Adoption

The most controversial set of regulations is proposed. The government wants silent access to any agent's actions and context history on request, and wants agents to embed steganography into responses made under duress. Irrespective of whether it passes, the government gains access to some backdoors. In a post-coordination society, you can decide people's outcomes without them knowing at all.
The claim remains a conspiracy theory, due to lack of evidence and its unfalsifiability.

Stage 10 - 99% Adoption

Human coordination is now automated at the institutional level. Your work schedule is editable by your agent; every transaction is a negotiation between agents based on your expected usage patterns. Your car is rented out while you're away, your room is rented out while you're on vacation. Any optimization that previously failed due to coordination or trust issues can exist now. The concept of "your" property gets diluted, though it makes no difference in your day-to-day life: your property is fungible and part of a cluster. Bad actors are necessarily isolated. The price of commitment goes to 0, while the price of reputation goes to infinity.

Stage 11 - The limit

Humans stop tracking coordination primitives such as times or commitments completely. What day it is does not matter in a society this optimized, as it wouldn't change anything. The mesh of models can generate shared events for any subgroup of people, memorialize any person or idea on any day the consensus determines, and act in any way it deems optimal for human interaction. That's all it's been trained on. Every trend, party, and group is pre-organized for you exactly how you would like it best. How you would like it best is also decided for you.

Counterpoints

I have a soft spot for dystopian writing, yet I see many reasons to be optimistic about what the future looks like. Some cases in which the dystopia would not materialize:

Predicting your preferences is significantly harder than scaling the complexity of allowed options. This would imply that users pick between the final pruned decision sets. It may even get harder, as agents can collect more decision-changing pieces of information for any particular choice.

As long as AI doesn't take over both the physical and verbal value of interacting with humans, people will want to meet other humans.
Intelligence, and distinguishability from AI, will remain a status marker. In the limit, an aligned AGI with superhuman verbal abilities will not want to interact with laypersons directly, or with its full verbal ability, in order to maintain the fabric of society.

Models gain an outlet to perform pro-social tasks, and this more than offsets the time spent talking to them. As models become more intelligent, they can make you talk to them less and to others more, preventing socialization collapse.

Original: https://www.lesswrong.com/posts/2yCwXjEwJMTgWbxAK/transition-and-social-dynamics-of-a-post-coordination-world