Google AI Finds Software Flaw, Bug Rewards Change

Google's AI tool found a software flaw in SQLite, in what the company calls the first public case of an AI discovering a memory-safety issue in real-world software. In response to a flood of AI-generated reports, Google's bug bounty rewards are also changing.

AI's Emergence as a Vulnerability Hunter

Google has publicly detailed a significant milestone: the use of an AI-powered tool, named "Big Sleep," to uncover a previously unknown software flaw. This zero-day vulnerability, identified in the widely used SQLite database engine, represents what the company claims is the first public instance of an artificial intelligence system discovering a memory-safety issue in real-world software. The flaw was reported to SQLite developers in early October 2024 and was patched the same day. This development signals a new era in which AI actively participates in the hunt for security weaknesses.

Google's "Big Sleep" AI has successfully identified a zero-day vulnerability in the SQLite software, marking a potential paradigm shift in how software flaws are discovered and addressed.

Further bolstering this narrative, Google announced in August 2025 that its AI bug hunter had identified 20 security vulnerabilities. While details on the severity and impact of these flaws remain undisclosed pending fixes, the sheer volume points to AI's growing capacity in automated vulnerability discovery, described by Google's vice president of engineering as a "new frontier." This automated approach is increasingly shaping Google's cybersecurity strategy, as evidenced by recent adjustments to its bug bounty programs.

Evolving Security Landscape and AI's Dual Role

The cybersecurity domain is in flux, with AI not only a tool for defense but also a potential vector for attack. Google's own AI systems have faced challenges. In August 2025, researchers demonstrated how hackers could exploit Google's Gemini AI using "poisoned" calendar invites, allowing for indirect prompt injection attacks that could, for instance, control smart home devices. More recently, in January 2026, a similar flaw involving Gemini and Google Calendar invites allowed for the exfiltration of sensitive calendar data. These incidents highlight the inherent risks when AI interacts with complex systems and user data.

The company has also acknowledged how readily malicious actors can discover and exploit security flaws. A 2021 report from Google researcher Maddie Stone underscored the persistent challenge of zero-day vulnerabilities and the need for companies to fix the underlying weaknesses rather than individual bugs. The struggle is compounded by the fact that exploits can be adapted: once attackers understand one bug, they can often produce variants by modifying existing exploit code.

Shifting Bug Bounty Strategies

In response to the proliferation of AI-driven vulnerability submissions, Google has recently revamped its Vulnerability Reward Programs (VRPs) for Chrome and Android. As of the first week of November 2026, rewards for Chrome vulnerabilities have been cut sharply, with some payouts reportedly dropping tenfold. Android rewards, by contrast, are being re-prioritized toward flaws with higher user impact and those that are harder for AI to detect. The pivot suggests Google now values human ingenuity and novel approaches over the high volume of AI-generated reports, which, however detailed, often arrive without a reproducible proof of concept. The Internet Bug Bounty program has gone further, pausing submissions entirely under the weight of AI-generated reports.

Broader Security Concerns and Past Incidents

Beyond AI-specific threats, Google continues to grapple with broader cybersecurity challenges. In May 2025, the company reported that 75 security flaws had been exploited in 2024, affecting products across multiple sectors, including business tools. More recent incidents include a data breach disclosed in August 2025, linked to a Salesforce compromise and attributed to threat actors such as UNC6040 and ShinyHunters. In September 2025, a group calling itself "Scattered LapSus Hunters" threatened to release further data following a breach. These events underscore the persistent threat landscape and the complex web of actors behind modern cyberattacks. Historically, incidents such as Operation Aurora in 2010, which originated from China, show how long sophisticated intrusions have targeted tech giants.

Frequently Asked Questions

Q: What did Google's AI tool find?
Google's AI tool, called "Big Sleep," found a previously unknown software flaw in the SQLite database. It is the first publicly reported case of an AI finding a memory-safety issue in real-world software.
Q: When was the flaw fixed?
The flaw was reported to SQLite developers in early October 2024 and was fixed the same day.
Q: How are Google's bug rewards changing?
Google is changing its rewards for finding software bugs. Rewards for Chrome bugs are lower now. Rewards for Android bugs will focus more on bugs that are harder for AI to find and that affect many users.
Q: Why are bug rewards changing?
Google is changing its bug rewards because many bug reports are now coming from AI tools. They want to focus more on bugs found by humans that are harder to discover.