The best iPad stylus of 2025: We took notes with the top picks
Check out our picks for the best iPad styluses of 2025, from top-tested and recommended brands like Apple, ESR, Logitech and more.
Google just announced that a typical query to its Gemini app uses about 0.24 watt-hours of electricity. That’s about the same as running a microwave for one second—something that, to me, feels virtually insignificant. I run the microwave for so many more seconds than that on most days. I was excited to see this report…
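The microwave comparison can be checked with back-of-envelope arithmetic. A minimal sketch, assuming a typical microwave power draw of about 900 W (a figure not given in the article):

```python
# Back-of-envelope check of Google's figure: ~0.24 Wh per Gemini app query.
# Assumption (not from the article): a typical microwave draws roughly 900 W.
QUERY_WH = 0.24          # watt-hours per Gemini query (Google's reported figure)
MICROWAVE_W = 900.0      # assumed microwave power draw, in watts

query_joules = QUERY_WH * 3600        # 1 Wh = 3600 J, so 0.24 Wh = 864 J
seconds = query_joules / MICROWAVE_W  # time for the microwave to use the same energy

print(f"{query_joules:.0f} J ≈ {seconds:.2f} s of microwave time")
# prints "864 J ≈ 0.96 s of microwave time"
```

At the assumed wattage, one query's energy is just under one second of microwave use, consistent with the article's claim.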
Article URL: https://status.openrouter.ai Comments URL: https://news.ycombinator.com/item?id=45050428 Points: 7 # Comments: 1

Over the past 20 years building advanced AI systems—from academic labs to enterprise deployments—I’ve witnessed AI’s waves of success rise and fall. My journey began during the “AI Winter,” when billions were invested in expert systems that ultimately underdelivered. Flash forward to today: large language models (LLMs) represent a quantum leap forward, but their prompt-based…
Looking beyond the iPad? We put Samsung's best tablets to the test featuring expandable storage, S Pen compatibility, and Android operating systems.
Article URL: https://old.reddit.com/r/msp/comments/1n1sgxx/windows_11_update_kb5063878_causing_ssd_failures/ Comments URL: https://news.ycombinator.com/item?id=45050192 Points: 8 # Comments: 0
Deploying Gemini on a customer's own equipment will enable new use cases, such as translation services inside a company, Google says.
Article URL: https://sci-hub.se/sci-hub-blocked-india Comments URL: https://news.ycombinator.com/item?id=45048473 Points: 150 # Comments: 61
Published on August 28, 2025 9:29 AM GMT

GPT-5 was a disappointment for many, and at the same time interesting new paradigms may be emerging. Therefore, some say we should get back to the traditional LessWrong AI safety ideas: little compute (a large hardware overhang) and a fast takeoff (foom) resulting in a unipolar, godlike superintelligence. If this is indeed where we end up, everything will depend on alignment.

In such a situation, a pivotal act has traditionally been proposed, envisaged by Yudkowsky as "burn all GPUs" or later "subtly modify all GPUs so no AI can train on them anymore" (quotes approximate). I think this idea is too focused on hardware, and not focused enough on people and power structures.

What stops people, once their GPUs have been burned or modified, from building new ones and ending the universe? Probably, the implicit part of the pivotal act entails not only burning or modifying GPUs, but also suppressing humanity forever to prevent it from building new GPUs. If one did indeed disallow people from making GPUs, without communicating why this is necessary, they would rise up against this unacceptable restraint on their freedom. They would therefore need to be stably suppressed to rule out such an uprising forever. This is an important part of the pivotal act that has not been made explicit, but it is unavoidable if one wants to stably prevent misaligned ASI.

If one is suppressing humanity anyway, as the traditional Yudkowsky-style pivotal act would require, there are myriad ways in which a suppressing ASI can make sure no one builds a competing, misaligned ASI. GPUs are only one of many possibilities. The core is to suppress humanity so that it does not do something, namely build ASI, that it would really like to do. The nucleus of the traditional pivotal act is social, not technical, and it is suppressive.

I would like to propose a different pivotal act, one that is also social, but not oppressive.
Its core is to inform society about AI's existential risks and to generate support for serious AI and hardware regulation, but on a voluntary basis. What an aligned ASI that is about to foom should do:

- Demonstrate its power to humanity. Demonstrate, without harming anyone, that a fooming AI could easily take over power and end humanity should it want to.
- Do not harm any person, and harm as little property as possible.
- Preferably, don't break any laws.
- Demonstrate one or multiple ways to reliably and globally regulate AI to ensure safety, while doing as little harm as possible (remember that the aligned ASI can be used, so there are many options!). Make clear that this is an option, and that there is a realistic path to safety.
- Once you are done and the message is clear, return things to their original state. Restore any property you had to damage.
- Switch yourself off. Hand any power you took back to those who had it before. Make clear to people that they have a choice: they can regulate AI, or they can wait for the next foom, which may not be so benign.

The core of this pivotal act is communication. It does the same thing that my organization, the Existential Risk Observatory, and many others have tried to do: communicate to society that AI can cause doom, while at the same time communicating that there is a viable way to do something about this problem should we choose to.

This pivotal act is superior to the traditionally proposed one, since it does not entail suppressing humanity for eternity. It also doesn't harm any person, and it tries hard not to break any laws or damage any property. After this pivotal act, the choice would be clear: do we want to regulate, or let the next AI kill or suppress everyone forever?

Those working on alignment should not try to implement alignment to a particular set of values (value alignment).
Such an ASI would inevitably end up suppressing at least a significant part of humanity: those who either do not exactly share its values, or who object to the way the AI undemocratically enforces them. Once the ASI is enforcing its values against the will of many, it will be impossible to adjust them, and an everlasting dystopia is likely to ensue. Coherent Extrapolated Volition is a somewhat more thought-through option, but it suffers from essentially the same weaknesses and should therefore also be avoided.

Those working on technical alignment should obviously also resist aligning the ASI with their own will or their own values, thereby suppressing the rest of us. Nor should they commit a traditional pivotal act ("subtly modify all GPUs"), since the silent part of such an act is that humanity would need to be suppressed forever to keep it from building new GPUs. Instead, they should work on a "gentle foom": demonstrate foom capabilities, but harm no one and switch yourself off after the fact. Then let humanity choose the wise path autonomously and voluntarily.