Archives AI News

Tonverk is Elektron’s new polyphonic sample mangler and groovebox


Elektron has built a cult-like following over the years with its unique and, at times, esoteric take on electronic musical instruments. On paper, Tonverk is a seemingly over-powered sampler that continues that tradition. It’s the rare piece of hardware capable of creating multisampled instruments on its own. It turns a single sample track into a […]

FTC orders AI companies to hand over info about chatbots’ impact on kids


The Federal Trade Commission (FTC) is ordering seven AI chatbot companies to provide information about how they assess the effects of their virtual companions on kids and teens. OpenAI, Meta, its subsidiary Instagram, Snap, xAI, Google parent company Alphabet, and the maker of Character.AI all received orders to share information about how their AI companions […]

Gmail is launching a tab for all your Amazon purchases


Gmail is trying to make it easier to track your online orders with a new Purchases tab coming to mobile and the web. When you click on the tab, you’ll only see emails related to your purchases, including order confirmations and shipping estimates.  This new tab builds on Gmail’s existing package-tracking features on mobile, which […]

Prove your expertise with our Professional Security Operations Engineer certification

Security leaders are clear about their priorities: after AI, cloud security is the top training topic for decision-makers. As threats against cloud workloads become more sophisticated, organizations are looking for highly skilled professionals to help defend against these attacks. To help organizations meet their need for experts who can manage a modern security team's advanced tools, Google Cloud's new Professional Security Operations Engineer (PSOE) certification can help train specialists to detect and respond to new and emerging threats.

Unlock your potential as a security operations expert

Earning a Google Cloud certification can be a powerful catalyst for career advancement. Eight in 10 learners report that having a Google Cloud certification contributes to faster career advancement, and 85% say that cloud certifications equip them with the skills to fill in-demand roles, according to an Ipsos study published in 2025 and commissioned by Google Cloud.

Foresite, a leading Google Cloud managed security service provider (MSSP), said that the certification has been instrumental in helping it deliver security excellence to its clients.

"As a leader at Foresite, our commitment is to deliver unparalleled security outcomes for our clients using the power of Google Cloud. The Google Cloud Professional Security Operations Engineer (PSOE) certification is fundamental to that mission. For us, it's the definitive validation that our engineers have mastered the advanced Google Security Operations platform we use to protect our clients' businesses. Having a team of PSOE-certified experts provides our clients with direct assurance of our capabilities and expertise. It solidifies our credibility as a premier Google Cloud MSSP and gives us a decisive edge in the market. Ultimately, it's a benchmark of the excellence we deliver daily," said Brad Thomas, director, Security Engineering.
The PSOE certification can help validate practical skills needed to protect a company's data and infrastructure in real-world scenarios, a key ingredient for professional success. It also can help security operations engineers demonstrate their ability to directly address evolving and daily challenges.

Gain a decisive edge with certified security talent

For organizations, including MSSPs and other Google Cloud partners, this certification is a powerful way to help ensure that your security professionals are qualified to effectively implement, respond to, and remediate security events using Google Cloud's suite of solutions.

Hiring managers are increasingly looking for a specific skill set. The Ipsos study also found that eight in 10 leaders prefer to recruit and hire professionals who hold cloud certifications, seeing them as a strong indicator of expertise.

"We are excited about Google's new Professional Security Operations Engineer certification, which will help Accenture demonstrate our leading expertise in security engineering and operations to clients. This validation is important because it gives our clients confidence in knowing Accenture has certified professionals with structured training as they choose the best service partner for their security transformations. For our teams, this new certification offers a clear path for professional development and career advancement. Google's Professional Security Operations Engineer certification will enable Accenture to support clients better as they successfully adopt and get the most out of the Google Security Operations and Security Command Center platforms," said Rex Thexton, chief technology officer, Accenture Cybersecurity.
Demonstrate comprehensive expertise across Google Cloud Security tools

A Google Cloud-certified PSOE can effectively use Google Cloud security solutions to detect, monitor, investigate, and respond to security threats across an enterprise environment. This role encompasses identity, workloads, services, infrastructure, and more. Plus, PSOEs can perform critical tasks such as writing detection rules, remediating misconfigurations, investigating threats, and developing orchestration workflows.

The PSOE certification validates the candidate's abilities with Google Cloud security tools and services, including:

- Google Security Operations
- Google Threat Intelligence
- Security Command Center

Specifically, the exam assesses ability across six key domains:

- Platform operations (~14%): Enhancing detection and response with the right telemetry sources and tools, and configuring access authorization.
- Data management (~14%): Ingesting logs for security tooling and identifying a baseline of user, asset, and entity context.
- Threat hunting (~19%): Performing threat hunting across environments and using threat intelligence for threat hunting.
- Detection engineering (~22%): Developing and implementing mechanisms to detect risks and identify threats, and using threat intelligence for detection.
- Incident response (~21%): Containing and investigating security incidents; building, implementing, and using response playbooks; and implementing the case management lifecycle.
- Observability (~10%): Building and maintaining dashboards and reports to provide insights, and configuring health monitoring and alerting.

While there are no formal prerequisites to take the exam, we recommend that candidates have:

- At least three years of security industry experience.
- At least one year of hands-on experience using Google Cloud security tooling.

The certification is relevant for experienced professionals, including those in advanced career stages and roles, such as security architects.
Your path to security operations starts here

To prepare for the exam, Google Cloud offers resources that include online training and hands-on labs. The official Professional Security Operations Engineer Exam Guide provides a complete list of topics covered, helping candidates align their skills with the exam content. Candidates can also start preparing through the recommended learning path. You can learn more and register for the Professional Security Operations Engineer certification today.

Building scalable, resilient enterprise networks with Network Connectivity Center


For large enterprises adopting a cloud platform, managing network connectivity across VPCs, on-premises data centers, and other clouds is critical. However, traditional models often lack scalability and increase management overhead. Google Cloud's Network Connectivity Center is a compelling alternative. As a centralized hub-and-spoke service for connecting and managing network resources, Network Connectivity Center offers a scalable and resilient network foundation.

In this post, we explore Network Connectivity Center's architecture, availability model, and design principles, highlighting its value and design considerations for maximizing resilience and minimizing the "blast radius" of issues. Armed with this information, you'll be better able to evaluate how Network Connectivity Center fits within your organization, and to get started.

The challenges of large-scale enterprise networks

Large-scale VPC networks consistently face three core challenges: scalability, complexity, and the need for centralized management. Network Connectivity Center is engineered specifically to address these pain points head-on, thanks to:

- Massively scalable connectivity: Scale far beyond traditional limits and VPC Peering quotas. Network Connectivity Center supports up to 250 VPC spokes per hub and millions of VMs, while enhanced cross-cloud connectivity and upcoming features like firewall insertion will help ensure your network is prepared for future demands.
- Smooth workload mobility and service networking: Easily migrate workloads between VPCs. Network Connectivity Center natively solves transitivity challenges through features like producer VPC spoke integration to support private service access (PSA) and Private Service Connect (PSC) propagation, streamlining service sharing across your organization.
- Reduced operational overhead: Network Connectivity Center offers a single control point for VPC and on-premises connections, automating full-mesh connectivity between spokes to dramatically reduce operational burdens.

Under the hood: Architected for resilience

Let's home in on how Network Connectivity Center stays resilient. A key part of that is its architecture, which is built on three distinct, decoupled planes.

(Figure: a very simplified view of the Network Connectivity Center and Google Cloud networking stack.)

- Management plane: This is your interaction layer — the APIs, gcloud commands, and Google Cloud console actions you use to configure your network. It's where you create hubs, attach spokes, and manage settings.
- Control plane: This is the brains of the operation. It takes your configuration from the management plane and programs the underlying network. It's a distributed, sharded system responsible for the software-defined networking (SDN) that makes everything work.
- Data plane: This is where your actual traffic flows. It's the collection of network hardware and individual hosts that move packets based on the instructions programmed by the control plane.

A core principle that Network Connectivity Center uses across this architecture is fail-static behavior. This means that if a higher-level plane (like the management or control plane) experiences an issue, the planes below it continue to operate based on the last known good configuration, and existing traffic flows are preserved. This helps ensure that, say, a control plane issue doesn't bring down your entire network.

How Network Connectivity Center handles failures

A network's strength is revealed by how it behaves under pressure.
Network Connectivity Center's design is fundamentally geared towards stability, so that potential issues are contained and their impact is minimized. Consider the following Network Connectivity Center design points:

- Contained infrastructure impact: An underlying infrastructure issue such as a regional outage only affects resources within that specific scope. Because Network Connectivity Center hubs are global resources, a single regional failure won't bring down your entire network hub. Connectivity between all other unaffected spokes remains intact.
- Isolated configuration faults: We intentionally limit the "blast radius" of a configuration error with careful fault isolation. A mistake made on one spoke or hub is isolated and will not cascade to cause failures in other parts of your network. This fault isolation is a crucial advantage over intricate VPC peering topologies, where a single routing misconfiguration can have far-reaching consequences.
- Uninterrupted data flows: The fail-static principle ensures that existing data flows are highly insulated from management or control plane disruptions. In the event of a failure, the network continues to forward traffic based on the last successfully programmed state, maintaining stability and continuity for your applications.

Managing the blast radius of configuration changes

Even if an infrastructure outage affects resources in its scope, Network Connectivity Center connectivity in other zones or regions remains functional. Critically, Network Connectivity Center configuration errors are isolated to the specific hub or spoke being changed and don't cascade to unrelated parts of the network — a key advantage over complex VPC peering approaches.

To further enhance stability and operational efficiency, we also streamlined configuration management in Network Connectivity Center. Updates are handled dynamically by the underlying SDN, eliminating the need for traditional maintenance windows for configuration changes.
Changes are applied transparently at the API level and are designed to be backward-compatible, for smooth and non-disruptive network evolution.

Connecting multiple regional hubs

A Network Connectivity Center hub is a global resource. A multi-region resilient design may involve regional deployments with a dedicated hub per region, which requires connectivity across multiple hubs. Though Network Connectivity Center does not offer native hub-to-hub connectivity, alternative methods allow communication across Network Connectivity Center hubs, fulfilling specific controlled-access needs:

- Cloud VPN or Cloud Interconnect: Use dedicated HA VPN tunnels or VLAN attachments to connect Network Connectivity Center hubs.
- Private Service Connect (PSC): Leverage a producer/consumer model with PSC to provide controlled, service-specific access across Network Connectivity Center hubs.
- Multi-NIC VMs: Route traffic between Network Connectivity Center hubs using VMs with network interfaces in spokes of different hubs.
- Full-mesh VPC Peering: For specific use cases like database synchronization, establish peering between spokes of different Network Connectivity Center hubs.

Frequently asked questions

What happens to traffic if the Network Connectivity Center control plane fails? Due to the fail-static design, existing data flows continue to function based on the last known successful configuration. Dynamic routing updates will stop, but existing routes remain active.

Does adding a new VPC spoke impact existing connections? No. When a new spoke is added, the process is dynamic and existing data flows should not be interrupted.

Is there a performance penalty for traffic traversing between VPCs via Network Connectivity Center? No. Traffic between VPCs connected by Network Connectivity Center experiences the same performance as VPC peering.
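The fail-static principle described in the FAQ above can be illustrated with a toy model: the data plane keeps forwarding from the last configuration it was successfully programmed with, even while the control plane is down. This is a minimal sketch of the concept only, not a description of how Google Cloud's SDN is actually implemented; all class and route names are invented for illustration.

```python
# Toy model of fail-static behavior: forwarding survives a control plane
# outage because the data plane consults only locally cached state.

class ControlPlane:
    def __init__(self):
        self.healthy = True
        self._routes = {}

    def program(self, dest: str, next_hop: str) -> dict:
        """Accept a config change and return the full routing table."""
        if not self.healthy:
            raise RuntimeError("control plane unavailable")
        self._routes[dest] = next_hop
        return dict(self._routes)

class DataPlane:
    def __init__(self):
        self._last_good = {}  # last successfully programmed routes

    def apply(self, routes: dict) -> None:
        """Install a routing table pushed down by the control plane."""
        self._last_good = dict(routes)

    def forward(self, dest: str) -> str:
        # Lookups use only the cached table, so existing flows keep
        # working even while the control plane is down.
        return self._last_good[dest]

cp, dp = ControlPlane(), DataPlane()
dp.apply(cp.program("10.0.0.0/8", "spoke-a"))

cp.healthy = False                # simulate a control plane outage
print(dp.forward("10.0.0.0/8"))   # existing route still resolves
```

New configuration changes fail while the control plane is unhealthy, but traffic on already-programmed routes continues, which is the behavior the FAQ describes.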
Best practices for resilience

While Network Connectivity Center is a powerful and resilient platform, designing a network for maximum availability requires careful planning on your part. Consider the following best practices:

- Leverage redundancy: Data plane availability is localized. To survive a localized infrastructure failure, be sure to deploy critical applications across multiple zones and regions.
- Plan your topology carefully: Choosing your hub topology is a critical design decision. A single global hub offers operational simplicity and is the preferred approach for most use cases. Consider multiple regional hubs only if strict regional isolation or minimizing control plane blast radius is a primary requirement, and be aware of the added complexity. Finally, even when deployed one per region, Network Connectivity Center hubs are still global resources — that means in the event of a global outage, management plane operations may be impacted independent of regional availability.
- Choose Network Connectivity Center for transitive connectivity: For large-scale networks that require transitive connectivity for shared services, choosing Network Connectivity Center over traditional VPC peering can simplify operations and let you leverage features like PSC/PSA propagation.
- Embrace infrastructure-as-code: Use tools like Terraform to manage your Network Connectivity Center configuration, which reduces the risk of manual errors and makes your network deployments repeatable and reliable.
- Monitor network health: Regularly use the Google Cloud Service Health dashboard and your Personalized Service Health dashboard to stay informed about the status of Network Connectivity Center and other services.
- Plan for scale: Be aware of Network Connectivity Center's high, but finite, scale limits (e.g., 250 VPC spokes per hub) and plan your network growth accordingly.
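The operational advantage of hub-and-spoke over full-mesh peering mentioned throughout this post comes down to simple combinatorics: a full mesh of n VPCs needs n(n-1)/2 peering connections, while a hub needs only n spoke attachments. A short sketch, counting topology links only (no Google Cloud API involved):

```python
# Compare the number of links to manage in a full-mesh peering topology
# versus a single-hub hub-and-spoke topology.

def full_mesh_links(n_vpcs: int) -> int:
    """Peering connections for every-VPC-to-every-VPC connectivity."""
    return n_vpcs * (n_vpcs - 1) // 2

def hub_spoke_links(n_vpcs: int) -> int:
    """Spoke attachments when all VPCs connect through one hub."""
    return n_vpcs

# 250 is the documented VPC-spokes-per-hub limit cited above.
for n in (10, 50, 250):
    print(f"{n:>3} VPCs: full mesh = {full_mesh_links(n):>5}, "
          f"hub-and-spoke = {hub_spoke_links(n):>3}")
```

At the 250-spoke limit, a full mesh would require 31,125 peerings versus 250 spoke attachments, which is why configuration errors and operational overhead grow so much faster in peering-based designs.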
A simple approach to scalable, resilient networking

Network Connectivity Center removes much of the complexity from enterprise networking, providing a simple, scalable, and resilient foundation for your organization. By understanding its layered architecture, fail-static behavior, and design principles, you can build a network that not only meets your needs today but is ready for the challenges of tomorrow.

To get started, review the design considerations and the official Network Connectivity Center documentation, or contact Google Cloud teams for guidance on complex, multi-hub network designs.

Three-part framework to measure the impact of your AI use case

Generative AI is no longer just an experiment. The real challenge now is quantifying its value. For leaders, the path is clear: make AI projects drive business growth, not just incur costs. Today, we'll share a simple three-part plan to help you measure the effect and see the true worth of your AI initiatives. This methodology connects your technology solution to a concrete business outcome. It creates a logical narrative that justifies investment and measures success.

1. Define what success looks like (the value)

The first step is to define the project's desired outcome by identifying its "value drivers." For any AI initiative, these drivers typically fall into four universal business categories:

- Operational efficiency & cost savings: This involves quantifying improvements to core business processes. Value is measured by reducing manual effort, optimizing resource allocation, lowering error rates in production or operations, or streamlining complex supply chains.
- Revenue & growth acceleration: While many organizations initially focus on efficiency, true market leadership is achieved through growth. This category of value drivers is the critical differentiator, as it focuses on top-line impact. Value can come from accelerating time-to-market for new products, identifying new revenue streams through data analysis, or improving sales effectiveness and customer lifetime value.
- Experience & engagement: This captures the enhancement of human interaction with technology. It applies broadly to improving customer satisfaction (CX), boosting employee productivity and morale with intelligent tools (EX), or creating more seamless partner experiences.
- Strategic advancement & risk mitigation: This covers long-term competitive advantages and downside protection. Value drivers include accelerating R&D cycles, gaining market-differentiating insights from proprietary data, strengthening operational resiliency, or ensuring regulatory compliance and reducing fraud.
2. Specify what it costs to succeed (your investment)

The second part of the framework demands transparency regarding the investment. This requires a complete view of the Total Cost of Ownership (TCO), which extends beyond service fees to include model training, infrastructure, and the operational support needed to maintain the system. For a detailed guide, we encourage a review of our post, How to calculate your AI costs on Google Cloud.

3. State the ROI

This is the synthesis of the first two steps. The ROI calculation makes the business case explicit by stating the time required to pay back the initial investment and the ongoing financial return the project will generate.

The framework in action: An AI chatbot for customer service

Now, let's apply the universal framework to a specific use case. Consider an e-commerce company implementing an AI chatbot. Here, the four general value drivers become tailored to the world of customer service.

Step 1: Define success (the value)

The team uses the customer-service-specific quadrants to build a comprehensive value estimate.

Quadrant 1: Operational efficiency

- Reduced agent handling time: By automating 60% of routine inquiries, the company frees up thousands of agent hours. This enables agents to serve more customers or perhaps provide better quality service to premium customers. Estimated hours saved: ~725 hrs (let's say this equates to $15,660 in value).
- Lower onboarding & training costs: New agents become productive faster as the AI handles the most common questions, reducing the burden of repetitive training. Estimated monthly value: $1,000

Quadrant 2: Revenue growth

- 24/7 sales & support: The chatbot assists customers and captures sales leads around the clock, converting shoppers who would otherwise leave.
Estimated monthly value: $5,000

- Improved customer retention: Faster resolution and a better experience lead to a small, measurable increase in customer loyalty and repeat purchases. Estimated monthly value: $1,000

Quadrant 3: Customer and employee experience

- Enhanced agent experience & retention: Human agents are freed from monotonous tasks to focus on complex, rewarding problems. This improves morale and reduces costly agent turnover. Estimated monthly value: $500

Quadrant 4: Strategic enablement

- Expanding business to more languages: Enabling human agents to provide support in 15+ additional languages, thanks to the translation service built into the system. Estimated revenue increase: $1,750

Total estimated monthly value = $15,660 + $1,000 + $5,000 + $1,000 + $500 + $1,750 = $24,910

Step 2: Define the cost (the investment)

Following a TCO analysis from our earlier blog post, we calculated that the total ongoing monthly cost for the fully managed AI solution on Google Cloud would be approximately $2,700.

Step 3: State the ROI

The final story was simple and powerful. With a monthly value of around $25,000 and a cost of only $2,700, the project generated significant positive cash flow. The initial setup cost was paid back in less than two weeks, securing an instant "yes" from leadership.

Get started

Contact us to consult with an expert.

Related article: How to calculate your AI costs on Google Cloud, a comprehensive approach to managing expenses and maximizing value from your AI investments on Google Cloud.
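The three-step arithmetic in the chatbot example can be sketched in a few lines. The value and cost figures are the post's own estimates; the one-time setup cost is an assumption added here for illustration (the post only says payback took under two weeks and gives no setup figure).

```python
# Step 1 (value), Step 2 (cost), Step 3 (ROI) from the chatbot example.
# All monthly figures come from the worked example above.

monthly_value = {
    "reduced agent handling time": 15_660,
    "lower onboarding & training": 1_000,
    "24/7 sales & support": 5_000,
    "improved customer retention": 1_000,
    "agent experience & retention": 500,
    "additional languages": 1_750,
}
monthly_cost = 2_700  # ongoing TCO estimate from the companion cost post

total_value = sum(monthly_value.values())   # Step 1: $24,910/month
net_monthly = total_value - monthly_cost    # Step 3: net monthly return
roi_pct = 100 * net_monthly / monthly_cost

# Hypothetical one-time setup cost (assumption, not from the post),
# used to show how a payback period would be computed:
setup_cost = 10_000
payback_days = setup_cost / (net_monthly / 30)

print(f"total value ${total_value:,}/mo, net ${net_monthly:,}/mo, "
      f"ROI {roi_pct:.0f}%, payback {payback_days:.1f} days")
```

With these numbers the payback period lands under two weeks, matching the narrative; swapping in your own value drivers, TCO, and setup cost reuses the same three steps.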

Xbox Cloud is getting a much-needed upgrade


Over the past week, I've been using Nvidia's new RTX 5080 GeForce Now tier. Nvidia's cloud gaming service has been the best on the market for years now, and this upgrade makes it even better. I've been playing Cyberpunk 2077, Overwatch 2, and Silksong, and it's genuinely comparable to my own PC. The upgrade is […]

Fortnite will soon let you buy exactly the V-bucks you need


If you want to buy a skin or virtual gear from the Fortnite item shop but don’t have enough V-Bucks, Epic Games is going to add a way to “top up” your V-Bucks balance so that you can buy just the V-Bucks you need to afford your purchase. Epic is calling this feature the “Exact […]

The best earbuds we’ve tested for 2025


It’s hard to buy a bad pair of wireless earbuds these days, and with constant discounts and deals wherever you look, now is as good a time as any to splurge on the pair you’ve been eyeing. The market has come a long way since the early era of true wireless earbuds when we had […]

Roku wants you to see a lot more AI-generated ads


Are you tired of having to watch the same three or four ads over and over again? That could change soon, if Roku has its way. The smart TV and streaming device maker is working on dramatically expanding the number of advertisers vying for your attention, to the point where ads on streaming could soon […]