AI-Assisted Supply Chain Activity Targets GitHub Actions Workflows

An analysis of the automated "prt-scan" campaign targeting GitHub's pull_request_target trigger. This review covers the timeline, the methodologies used by the threat actor, and actionable steps organizations can take to harden their CI/CD pipelines against unauthorized access.

Triage Security Media Team

A threat actor recently leveraged AI-assisted automation to initiate hundreds of unauthorized access attempts against open-source software repositories on GitHub.

Cloud security vendor Wiz analyzed more than 450 of these attempts, finding that fewer than 10% resulted in the injected code actually executing. However, the threat actor did manage to introduce unauthorized modifications to at least two npm packages. Charlie Eriksen, a researcher at Aikido Security, first observed the activity on April 2, 2026. A subsequent investigation by Wiz revealed that the campaign actually began three weeks earlier, on March 11, unfolding across six waves and utilizing six different GitHub accounts tied to a single threat actor.

Secondary AI-augmented supply chain campaign

Tracked by Wiz as "prt-scan," this activity represents the second recent instance where a threat actor applied AI-assisted automation to target repositories configured with the pull_request_target workflow trigger on GitHub. It follows a late-February campaign known as "hackerbot-claw," which manipulated the same feature in an attempt to access GitHub tokens, secrets, environment variables, and cloud credentials.

While the hackerbot-claw activity was relatively brief and focused on high-profile repositories, prt-scan operated on a much broader scale. The threat actor opened nearly 500 pull requests across both small and large GitHub projects, though with a lower overall success rate.

Wiz researchers noted in a recently published report that the successful incidents primarily affected small hobbyist projects, typically exposing only ephemeral GitHub credentials tied to the specific workflow. With minor exceptions, the campaign did not yield access to production infrastructure, cloud credentials, or persistent API keys.

The broader takeaway for security teams is the evolving role of AI-augmented automation in software supply chain security. Automation enables lower-sophistication threat actors to initiate large-scale activity across hundreds of targets with significantly less time and effort than previously required.

To understand the mechanism, it helps to look at how continuous integration environments process code. Developers use pull requests to propose project changes so maintainers can review and merge them. In GitHub Actions, the pull_request_target trigger automatically runs workflows in the context of the base repository whenever a pull request is submitted, even if that request originates from an untrusted fork. Because these workflows run with the repository's permissions and can access its secrets, a workflow that executes code from an unauthorized pull request can expose API keys or credentials. Wiz noted that this trigger is a well-documented misconfiguration when applied to untrusted pull requests without additional restrictions.
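Wiz's report does not reproduce the affected workflows, but the misconfiguration it describes typically takes the following shape: a hypothetical workflow that checks out and executes code from the untrusted fork while running in the privileged context of the base repository (repository, token, and script names here are illustrative):

```yaml
# Hypothetical vulnerable workflow (.github/workflows/ci.yml).
# pull_request_target runs in the BASE repository's context, with its
# secrets available, even when the pull request comes from a fork.
name: CI
on: pull_request_target

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the untrusted head of the fork's pull request...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...then executes it while secrets are exposed to the job.
      - name: Install and test
        run: npm install && npm test
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```

In a pattern like this, any install-time script a contributor adds in the fork (for example, a lifecycle script in package.json) runs with access to NPM_TOKEN and the workflow's GITHUB_TOKEN, consistent with the kind of ephemeral-credential exposure Wiz describes.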

Methodologies and execution flaws

In the prt-scan campaign, the threat actor's methodology began by scanning for repositories utilizing the pull_request_target trigger. They then forked those repositories, created a branch, and embedded unauthorized code within a seemingly routine configuration update. The goal was to prompt the project into executing the code automatically, allowing the actor to access sensitive data.
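Defenders can run the actor's first step in reverse. As a quick sketch, a maintainer can check a local clone for workflows that use the trigger (the path is the standard GitHub Actions location):

```shell
# List workflow files in a local checkout that use the risky trigger;
# print a fallback message instead of failing when none are found.
grep -rln 'pull_request_target' .github/workflows/ || echo "no matches"
```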

Wiz’s analysis identified a testing phase beginning March 11, during which the threat actor opened 10 pull requests containing unauthorized code. This initial phase continued through March 16. Following a nearly two-week pause, the actor resumed activity at a significantly higher velocity, indicating the use of AI-enabled automation. Over a 26-hour period starting April 2, the actor opened approximately 475 pull requests containing complex, language-aware execution scripts intended to access credentials.

Despite the ambitious design of these scripts, the actual implementation was flawed and indicated a misunderstanding of GitHub’s permission model. According to Wiz, the threat actor built a multi-phase script but populated it with techniques that contradict established GitHub security boundaries and would rarely function in practice. For instance, attempts to automatically apply labels to bypass workflow gates failed because the actor lacked the necessary write permissions in the target repositories.

Even with this flawed approach, the sheer volume of attempts meant that a success rate below 10% still translated into dozens of exposed workflow environments. To safeguard against similar automated activity, organizations should harden their GitHub environments. Security teams can protect their repositories by requiring approval before workflows run for pull requests from outside collaborators, assigning read-only permissions to the GITHUB_TOKEN by default, and avoiding the pull_request_target trigger for untrusted code submissions.
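The latter two controls can be expressed directly in workflow configuration; the approval requirement is a repository setting (under the Actions settings) rather than a workflow key. A minimal, hypothetical hardened example:

```yaml
# Hypothetical hardened workflow: handle fork submissions with the
# plain pull_request trigger, which runs in the fork's context
# without access to the base repository's secrets.
name: CI
on: pull_request

# Read-only GITHUB_TOKEN unless a job explicitly requests more.
permissions:
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
```

Default read-only token permissions can also be enforced at the repository or organization level, so individual workflows must opt in to write access.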