Archives AI News

FlightConnections

Article URL: https://www.flightconnections.com/ Comments URL: https://news.ycombinator.com/item?id=45046415 Points: 10 # Comments: 1

A Unified Data Stack for Data Analytics? Definite Says ‘Yes’

Companies that are tired of the hassle of assembling their own data analytics stacks may want to check out Definite, a Delaware-based startup that just raised $10 million to build Read more… The post A Unified Data Stack for Data Analytics? Definite Says ‘Yes’ appeared first on BigDATAwire.

Show HN: Meetup.com and Eventbrite alternative for small groups

Mobile-first open-source RSVP platform. An alternative to Meetup.com / Eventbrite for small companies and groups. If you have a small group and don't want to pay for these services, you can easily self-host this solution. Open to improvements and feedback, of course.

- One-Click Sharing - Each event gets a unique, memorable URL. Share instantly via any platform or messaging app.
- No Hassle, No Sign-Ups - Skip registrations and endless forms. Unlike other event platforms, you create and share instantly - no accounts, no barriers.
- Effortless Simplicity - Designed to be instantly clear and easy. No learning curve - just open, create, and go.

Comments URL: https://news.ycombinator.com/item?id=45045116 Points: 29 # Comments: 7

Transition and Social Dynamics of a post-coordination world

Published on August 27, 2025 10:23 PM GMT

[Feedback Welcome][Epistemic status] Many social dynamics are assumed without evidence. Causal attribution is weak in this domain. We restrict ourselves to widely acceptable claims where possible, and assume some basic universal social axioms. The goal is to paint one narrative of the post-coordination world, and to analyze it.

Social axiom

Social cost := the negative utility of deviating from the norm for a particular interaction. We assume this cost is approximated by a well-ordering among social agents. We also assume that the number of people whose decisions are not influenced by social costs is negligible where appropriate.

Despite widespread use of AI, the number of people who feel that AI has made a significant impact on their day-to-day life is vanishingly small [assumed]. The disconnect can largely be traced to the way AI works. If it is agentic, at best it feels like a worse autofill. If it is not, it's a better search engine, or it writes texts where you don't value quality.

The effect of its use is abstracted away from social interactions, where there is a large social cost to mentioning its use to others. If you do, it's inferred that you do not value the quality of your writing. Thus you need to balance its use so it isn't obvious, and never mention it to others, since you likely want the appearance of caring about quality, even if you don't care about quality itself.

I think humans are great at maintaining implicit contradictions, and it would not be an exaggeration to say that this is required for society to function as is.

According to surveys, the most impactful applications of technologies such as the internet and personal phones are the social applications. The unreasonable efficiency gains are either less attributable or simply less impactful to the median person's experience.
Therefore it is reasonable to assume that the first widespread social (as opposed to solitary) application of AI will precede the median person feeling that AI meaningfully affects their life. This was true for the internet, even with primitive applications like email.

Grok may be the best example of an agent that attempts to meaningfully change social interactions in an attributable manner. However, its contribution is limited to use as a rhetorically authoritative source. I do not believe that AIs have the verbal ability to manipulate discourse, or will have it in the near future.

Prediction

Social coordination is the most suitable and most implementable use case for the current model status quo. Some factors in favor of this theory:

- Objectivity: coordination is finding a plan within constraints.
- Low/retrievable memory requirements: memory of repeated coordination is easy to retrieve.
- Low stakes: the interaction can either be cancelled later or verified first.

Some factors against this theory:

- Value proposition: the median person may actually like social coordination, and even consider it a valuable part of their life.

Here is my narrative, in a similar style to AI-2027, of how social interactions could change over time.

Stage 1 - <1% Adoption

The first apps promising to be your personal assistant are here. They are primitive, because nobody knows how to incorporate them into your personal life. The agents are limited to a few domains where little prevents their automation. The adopters are AI enthusiasts who want to track the progress of the first agents. Nobody finds much use in an AI that reads your e-mail or books an Uber by voice rather than by typing into a text box with a pretty design. Private entities are still more worried about automated bots, and do not allow programmatic access to their services, stonewalling agents behind browsers.

Stage 2 - 1% Adoption

Companies start partnering with personal agent providers for authenticated access.
For the first time in internet history, checks against programmatic access loosen, following decades of tightening automation restrictions. We will see the first novel and persistent agent interactions emerge. I believe consumer cost splitting managed by the agent in a ledger (notes), instead of manual tracking across multiple services, will be the first. The introduction of persistent interactions will increase retention, and a small percentage of social groups will use it religiously. It is not easy to use, but stories will be written about the AI social revolution, only because there is now something to write about.

Stage 3 - 5% Adoption

The applications of AI plateau here. More and more companies and coordination workflows have been integrated everywhere. Despite that, the impact of AI in surveys isn't very high. Providers compete on the tone and friendliness of their agents, mostly trying to emotionally addict you to your own solemn community. These strategies work on a small minority of the population, and still don't meaningfully affect social dynamics. It's considered a hopeless endeavor to attempt to build another AI agent.

Stage 4 - 5% Adoption - Continued

The old era of agent providers either fades into obscurity or rebrands into something more. The next generation of personal agents explicitly avoids positioning themselves as utility tools. They will not just be proactive in reminding you to wish someone a happy birthday or telling you about their life updates; they will start sharing information among themselves. If this sounds insane to you, think for a moment about how dating apps work. Not only does this include sharing information with others, it includes hiding failed interactions from both parties.

Axiomatic Description

As we have assumed, initiating every interaction has an associated social cost, learnt from the worry of persistent disfavor from others.
By allowing an agent to have a temporary interaction with another, and making both forget it, you create a moment of social risk-taking without the social cost. Tinder provides this: swiping is social risk-taking, but your swipe only matters if the other person accepts it, i.e. there is no social cost to it. This is why, no matter how imbalanced or anti-consumer Tinder is, it won't be beaten by spontaneous interactions. It should be clear that, if it is simple enough, this generalizes to many more useful social interactions.

Stage 5 - 10% Adoption

This is the inflection point in AI adoption among laypersons. For the first time, AI can affect their social lives in an attributable manner. Some social groups are now fully reliant on AI managing them. Adoption is resisted by a large vocal minority, signaling to themselves about the purity of human social interactions. AI as a social coordination layer starts trending. There is an abundance of viral posts about AI either performing surprisingly well or failing disastrously.

Stage 6 - 33% Adoption

The concept of having a personal agent eclipses AI as the public face of the technology. Criticism shifts from its utility and enjoyment to the social implications of these systems. As the AI's influence on you grows, and anxieties about it grow faster still, we have the first case of information manipulation using these tools. The agent providers were accepting money for the unstated purpose of modifying the agents' search results to benefit those favored. This is the rallying call for AI regulation. It leads to nothing but news cycles.

Stage 7 - 67% Adoption

Despite negative news cycles, the adoption of personal agents does not take a noticeable hit. Social applications of AI now dominate inference platforms. A rough back-of-the-envelope calculation suggests around 10 quadrillion tokens per year used by personal agents, compared to roughly 500T total today, setting the stage for the largest scaling operation to date.
The investment from this will be a major part of the funding source of AGI. AI sentiment will be a lot more positive, and there will be strong support for building better small models. It is not clear to me whether bigger models will be considered an improvement, or a way of unfairly privileging the rich. If you believe that AGI should not be built, this will likely be a good source of tension you can exploit. However, your opponents will have much more money to throw at this problem too, unless you can make good open-source models that rival the social abilities of the proprietary firms at better prices.

Napkin-math estimate: 1B users * 30k tokens/day * 365 days ~= 10 quadrillion tokens/year.

The stages get more speculative from here.

Stage 8 - 90% Adoption

Businesses will start deploying agents to negotiate on their own behalf. Mass marketing will be a fragment of the past. The complexity of agent-to-agent interactions will mirror that of humans. They can pick who you should talk to, who you should check in on, where you should lease, where you should buy food, and any other decision you're willing to ask them about. I doubt they will ever answer worse than you would yourself: they can incorporate information that carries too large a social cost for you to ask for. The minimum intelligence required for self-sufficiency decreases for the first time in modern history. People's lives are just parameters to modify in a graph for a not particularly intelligent model, and how those parameters change the world can be differentiated no matter what you do.

Stage 9 - 95% Adoption

The most controversial sets of regulations are proposed. The government wants silent access to any agent's actions and context history when asked, and wants agents to add steganography to responses made under duress. Irrespective of whether it passes, the government gains access to some backdoors. In a post-coordination society, you can decide people's outcomes without them knowing at all.
The claim remains a conspiracy theory, due to lack of evidence and it not being a falsifiable claim.

Stage 10 - 99% Adoption

Human coordination is now automated at the institutional level. Your work schedule is editable by your agent; every transaction is a negotiation between agents depending on your expected usage patterns. Your car is rented out while you're not there; your room is rented out while you're on vacation. Any optimization that didn't exist due to coordination or trust issues can exist now. The concept of "your" property gets diluted, though it makes no difference in your day-to-day life. Your property is fungible and part of a cluster. Bad actors are necessarily isolated. The price of commitment goes to 0, while the price of reputation goes to infinity.

Stage 11 - The Limit

Humans stop tracking coordination primitives such as times or commitments completely. What day it is does not matter in a society this optimized, as that wouldn't change anything. The mesh of models can generate shared events for any subgroup of people, memorialize any person or idea on any day the consensus determines, and act in any way it feels optimizes human interaction. That's all it's been trained on. Every single trend, party, group, all pre-organized for you exactly how you would like it best. How you would like it best is also decided for you.

Counterpoints

I have a soft spot for dystopian writing, yet I see many reasons to be optimistic about what the future looks like. Some cases in which the dystopia would not materialize:

Predicting your preferences is significantly harder than scaling the complexity of allowed options. This would imply that users would pick between the final pruned decision sets. This may even get harder as agents collect more decision-changing pieces of information for any particular choice.

As long as AI doesn't take over both the physical and verbal value of interacting with humans, people will want to meet other humans.
Intelligence, and distinguishability from AI, will remain status markers. In the limit, an aligned AGI with superhuman verbal abilities will not want to interact with laypersons directly, or with its full verbal ability, so as to maintain the fabric of society.

Models gain an outlet to perform pro-social tasks, and this more than offsets the time spent talking to them. As the models become more intelligent, they can make you talk to them less and to others more, preventing socialization collapse.
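The Stage 7 back-of-the-envelope token estimate can be sanity-checked in a few lines. The inputs (1 billion users, ~30k agent tokens per user per day, ~500T total tokens today) are the post's own assumptions:

```python
# Sanity-check the napkin math: 1 billion users * 30k tokens/day * 365 days.
USERS = 1_000_000_000
TOKENS_PER_USER_PER_DAY = 30_000
DAYS = 365

yearly_tokens = USERS * TOKENS_PER_USER_PER_DAY * DAYS
print(f"{yearly_tokens:.3e} tokens/year")  # ~1.095e16, i.e. ~10 quadrillion

# Compare against the post's ~500T figure for total inference today.
TODAY_TOTAL = 500e12
print(f"{yearly_tokens / TODAY_TOTAL:.1f}x today's total")  # ~21.9x
```

So the "10 quadrillion" figure checks out as roughly a 20x scale-up over today's stated total.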

How Do You Teach an AI Model to Reason? With Humans

AI models are advancing at a rapid rate and scale. But what might they lack that (most) humans don’t? Common sense: an understanding, developed through real-world experiences, that birds can’t fly backwards, mirrors are reflective and ice melts into water. While such principles seem obvious to humans, they must be taught to AI models tasked… Read Article

The Future of AI Agents

Published on August 27, 2025 9:58 PM GMT

Here is the question that I have been losing sleep over: "What would it take for personal AI or agents to find their consumer moment?" About 95% of public startups are working on agents for B2B SaaS. So far, these agents are semi-reliable and perform best when an outcome is clearly definable and measurable, such as minutes or money saved. When it comes to building agents for consumers, though, we are still very early.

There are a lot of tensions to navigate when building the "perfect consumer agent". Output can quickly become nondeterministic in a human-AI conversation. Consumers also have a high bar for what an AI presence should feel like in their lives. They expect the personality to be realistic and not sycophantic. Context about users, such as their preferred brands, budget, and allergies, should be documented implicitly. Finally, the agent should be able to tell the user why it can or can't do something (e.g., this website looks suspicious to buy from).

Let's do a thought exercise. I'm going to make a series of statements on what I think would need to happen for agents to become ready for mainstream adoption by consumers. Hint: the revolution begins with payments & purchases.

ChatGPT is culturally uncool in my generation (I am 18). The relationship is one-way and reactive.

The future of e-commerce will be driven by agents. It is only a matter of how soon we get there.

While there are hundreds of use cases that agents could potentially assist with, only a few of them are high-leverage. One such use case is paying for what you need, which is currently fragmented. Need groceries? Instacart it. Need to buy protein powder or mascara? Go on Amazon. What if I had one agent to streamline all of these purchases?

To ease agents into people's lifestyles, automating repeat purchases of the essentials is a good start. With health, fitness, & beauty, we rarely deviate from our favorite brands.
If you can keep me stocked proactively, that will earn my goodwill.

Money spent and saved is clear and measurable, providing agents the best conditions to thrive in. So step it up a notch by helping me save. Negotiate on my behalf for bulk orders, and find price drops so that you can get my money back. This was actually the core model of Paribus, founded by Ramp's CEO, and they did this in 2014 with just NLP.

The last but most difficult part is proactive discovery by the agent. Once you know my past purchase history (gleaned from Plaid/email receipts) & taste profile, find things that surprise me and fall within my constraints. The play here is that people end up comparing agents and their most "underground finds" (shareable moments). Who has the coolest agent? If you're not happy with your "niche" agent, keep interacting with it until it gets you[1].

The agent can't just be designed to be useful. It should feel like a helpful person in your life who just "gets" your taste[2]. I say let the agent have a name of your choosing. Also, modality can create separation of roles. E.g., the agent only texts you, but as the final authority, you can text or send a voice message to the agent.

The actual agent is invisible and operates in the background (as a consequence, you don't need to run it 24/7). Users only interact with the outcomes & give the final approval. This is very different from computer-using agents, which actively walk you through each step.

For the user, programming the personal agent should be fun. Again, this is mainly a design question, but I think a solid way to do it is to let humans curate through their words[3].

Designing the agent's flow from suggestion to execution will be interesting. Tap into existing frameworks (Apple Pay) & form factors such as a mobile app from which you can text the agent. In this agent-driven economy, I've seen some suggest that agents & humans will pay via stablecoin from a wallet[4].
I think phone & contactless pay will reign supreme for another decade. Plus, ~90% of online retailers now accept Apple Pay, and cash on hand is not the norm with my generation.

Soon, your agent goes from buying for yourself to buying for others. It gathers your social context by, for instance, integrating with your Google Calendar and seeing whose birthday is coming up.

Ok. What a list. Personally, I am excited for this future, and I'm confident that a version of what I have presented above will happen. Think about it. Your greatest strength and point of pride is knowing what you like and don't like. And if you are extra good, others trust your opinions and finds. So let the agent do the boring stuff (reorders, negotiations, returns, research). Your preferences, like the type of food you eat, the music you listen to, and the style you buy, will remain yours. Only now, these preferences are further amplified by your agent.

Feel free to let me know which parts of the list you agree or disagree with. If you want to help build around this question and push the category forward, please feel free to reach out. I truly believe in using AI to create something novel and delightful for me and my loved ones. I'm on X: @kavyavenkat4. Send hatemail to: kavyav500@gmail.com. Your grievances will be acknowledged.

Two more bonus sections for you all.

Downstream Implications

Your agent becomes a "status symbol". At the end of the day, you still call the shots, so the agent is just a reflection of the values and desires you chose to encode.

Can an AI emulate the taste of a human? This is a cool underlying question. And how much unpredictability (similar to 'temperature') should be programmed into the agent? I personally would love to give my personal agent pocket money ($20-$30) and see what it can surprise me with[5].

Your agent will act like a paywall for your time and resources[4]. Products will have to compete for the attention of your agent(s).
AI SEO and llms.txt files are examples of our digital infrastructure evolving to be agent-friendly.

Agents can trade notes and learn from each other. With every user session, the agents index the Internet in a way that is increasingly relevant to human culture. Imagine a future where someone asks "How many people have actually bought this?", and you have real data coming from other agents to verify the purchases.

For the counterarguments I anticipate, I've put together a brief FAQ:

Q: What about Perplexity Comet?
A: Comet seems awesome, and if you prompt it on a workflow to execute, it will do a decent job. I do think that on a mobile phone, you don't want to see the agent go through every step in front of you (search for e-commerce should happen in the background on its own time). Perplexity's distribution is also weak. It feels like a great power-user tool, but even then, the average user still chooses ChatGPT to compare items and get recommendations.

Q: Couldn't any of the foundational AI companies make their own version of this purchasing agent?
A: Absolutely, and they probably will. These companies don't know how to distribute and design for cultural fit, though[6]. The bigger issue is that no matter what these companies promise you, they will use your data against you (selling to advertisers). I think we are ready for a different model. Once you know the preferences of enough people, you have all the leverage. Intelligence about collective demand will matter more in a future where agents do the research, negotiation, and ordering on your behalf.

Q: What is the monetization model?
A: This is a good question. You can easily set up tiers where each agent has a different set of capabilities. Maybe you have some agents subscribe to each other if you want to buy the same things as your favorite human influencer or curator. Affiliate revenue and commission on transactions are other methods of monetization.
Finally, I think collective intelligence is something future businesses would pay for. Here is a sample insight: "50 people want this product to be in the color blue". The key here is that the demand is extracted directly from what consumers actually want, not manufactured by advertisers trying to drum up interest.

Q: Wouldn't this idea end up as an ad business once it matures?
A: See the second part of question 2.

Q: Why require 'approval' by the user for each purchase?
A: Money is deeply personal and sensitive. Most people want control over where every dollar goes[7]. Some may argue that this kills the "autonomous" nature of the agent. Discovery does happen autonomously. It is the final "yes" (or Apple Pay click) that requires human input, and through this purposeful friction, you build trust anyway. Note that early adopters might be willing to allocate a part of their budget for serendipitous discovery and purchase by the agent[8].

This post is cross-posted from Substack.

[1] Scott Belsky writes about how "personalization effects" are the new network effects. Valuable AI products shouldn't just collect context about you. They should use it to improve every turn of the conversation.

[2] The word 'taste' is thrown around a lot. For this post, I define taste as a higher-dimension algorithm that helps you choose which of the options you like. It doesn't matter if your taste is better or worse than someone else's. The agent is designed to capture YOUR taste.

[3] Humans are natural curators. Just look at how culturally relevant sites like Pinterest or Letterboxd and features like "Saved Reels" are.

[4] Great essay by Daisy Alioto titled "The Future of Media is Bank". She argues that agents will soon decide for us which media we consume. The media is accessed by paying for it with a model that is more flexible and spontaneous than subscriptions.

[5] Let me know if you are interested in the results of the "Surprise Me" experiment.
[6] This might be a hot take, but ChatGPT's Cambrian success is carried by first-mover advantage. Operator exists, but I believe there is an opportunity for this payments-agent idea to be independently developed. OpenAI is stretched thin and heavily invested in the AGI race. Experience and design seem like an afterthought.

[7] This sentence was inspired by my desire to have an AI evaluate my spending decisions. I was going to build and sell it as a personal finance tool before realizing why the idea wouldn't work: (1) Most people don't try to budget or save, let alone pay for a tool like that (see why Mint.com failed). (2) People aren't rational when it comes to money. (3) Revealing statistical insights about your spending habits is not enough. You need to give users a reason to return multiple times a day. Numbers always have to be accompanied by context that is ideally proactive.

[8] This is a reasonable assumption. The analogue is passive investment vehicles (index funds, ETFs, or algo-trading).
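The footnoted definition of taste as a higher-dimension algorithm for choosing among options can be sketched minimally as a scoring function. Everything below (feature names, weights, product names) is a hypothetical illustration, not anything from the post:

```python
# Minimal sketch of "taste" as a linear scoring function over option features.
# Feature names, weights, and products are hypothetical illustrations.

def taste_score(features: dict, taste: dict) -> float:
    """Score an option: dot product of its features with the user's taste weights."""
    return sum(taste.get(name, 0.0) * value for name, value in features.items())

# A hypothetical taste profile: likes minimalist items, dislikes loud colors and high prices.
taste = {"minimalist": 0.9, "colorful": -0.4, "expensive": -0.6}

options = {
    "canvas tote": {"minimalist": 1.0, "colorful": 0.2, "expensive": 0.5},
    "neon backpack": {"minimalist": 0.3, "colorful": 0.9, "expensive": 0.2},
}

# The agent would surface options ranked by taste score.
ranked = sorted(options, key=lambda name: taste_score(options[name], taste), reverse=True)
print(ranked)  # ['canvas tote', 'neon backpack']
```

A real agent would learn the weights from purchase history rather than hard-coding them, and could add sampling noise over the scores to get the 'temperature'-like unpredictability the post mentions.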