Tell HN: GitHub might have been leaking your webhook secrets. Check your emails.

43 points | posted 13 days ago
by ssiddharth

Item id: 47767928

14 Comments

esher

13 days ago

Got that too. My first reaction: Go to HN to understand what's going on. Where are the comments?

sph

13 days ago

> webhook secrets for webhooks you are responsible for were inadvertently included in an HTTP header on webhook deliveries

LOL how does this even happen?

freakynit

13 days ago

Same reaction here. I mean, how do you even mess up this way? ... I don't know why, but this is giving me vibe-coded vibes.

The developer might have prompted for "some signature" (they surely didn't use that exact word, or the AI wouldn't have messed up this way) to verify the webhooks as coming from a legitimate source, and the AI probably went ahead and sent the secret key itself :)
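For context, GitHub's documented scheme never sends the secret itself: the sender puts an HMAC-SHA256 of the payload in the `X-Hub-Signature-256` header, and the receiver recomputes it with its own copy of the secret. A minimal sketch of that verification (the payload and secret values here are made up for illustration):

```python
import hashlib
import hmac

def verify_signature(payload: bytes, secret: str, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the payload and compare it to the header."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest does a constant-time comparison to avoid timing attacks
    return hmac.compare_digest(expected, signature_header)

# Hypothetical delivery: only the digest travels in the header, not the secret
payload = b'{"action": "push"}'
secret = "my-webhook-secret"
header = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
print(verify_signature(payload, secret, header))  # True
```

The bug being discussed is exactly the opposite of this design: the raw secret ended up in a header alongside (or instead of) the digest.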

johnlbevan2

13 days ago

Assuming I've understood correctly...

Checking the listed impacted webhooks, I noticed that no repo is listed twice, even though some have multiple hooks defined... That makes me think the report may have a bug.

Also, many of the listed hooks show as `This hook has never been triggered`; if that's correct, they weren't affected, since the compromised payload was never sent to them.

I'll write a PowerShell script (well, AI will write it and then I'll tweak it) to help check which webhooks were actually called in this time window (or at least were most recently triggered after the issue first began), to get a more accurate picture of what's concerning. If it proves useful, I'll share it here.

johnlbevan2

13 days ago

PowerShell script to get all webhooks that have actually run (we can't filter by date, but this may rule out a significant number):

    # Authenticate via `gh auth login -s admin:enterprise` before running this script
    # Save this script as c:\temp\GHWebhookAudit.ps1; then navigate to c:\temp\ and invoke via `.\GHWebhookAudit.ps1 -Orgs @("MyExampleOrg", "MyOtherExampleOrg") -InformationAction Continue`
    [CmdletBinding()]
    Param (
        [Parameter(Mandatory)]
        [string[]]$Orgs
    )

    $results = foreach ($org in $Orgs) {
        Write-Information "Fetching ALL repositories for org [$org]..."
        
        # gh api --paginate fetches every page (the GitHub API caps per_page at 100)
        # and outputs a stream of JSON arrays, one per page
        $reposJson = gh api --paginate "orgs/$org/repos?per_page=100"

        # ConvertFrom-Json parses the arrays from that stream
        $repos = $reposJson | ConvertFrom-Json

        # Pagination can yield one array per page; flatten them via the pipeline
        # (PowerShell unrolls nested arrays automatically, and @() guards the
        # empty-org case)
        $repoList = @($repos | ForEach-Object { $_ })

        Write-Information "Found $($repoList.Count) repositories. Starting audit..." 

        foreach ($repoObj in $repoList) {
            $repoName = $repoObj.name
            $fullRepo = "$org/$repoName"
            Write-Information "Checking: $fullRepo"

            # Get all hooks for the repository
            $hooksJson = gh api "repos/$fullRepo/hooks" 2>$null
            if (-not $hooksJson) { continue }
            
            $hooks = $hooksJson | ConvertFrom-Json

            foreach ($hook in $hooks) {
                $hookId = $hook.id
                $hookUrl = $hook.config.url

                # We could potentially check for deliveries between the impacted dates, but that would need additional filters/complexity; for now keeping it relatively simple
                # correction: the deliveries endpoint only goes back 3 days, so it doesn't work here... https://docs.github.com/en/webhooks/testing-and-troubleshooting-webhooks/viewing-webhook-deliveries
                # # $deliveriesJson = gh api "repos/$fullRepo/hooks/$hookId/deliveries" 2>$null
                # instead, use the existing hook's last_response status just to say whether it has ever run (though that doesn't say when)

                # Extract last response details
                $lastStatus = $hook.last_response.status
                $lastCode   = $hook.last_response.code
                $hasEverRun = ($lastStatus -ne 'unused')  # 'unused' means never triggered; anything else (e.g. 'active' or an error) means at least one delivery

                [PSCustomObject]@{
                    Organization = $org
                    Repository   = $repoName
                    HookID       = $hookId
                    Active       = $hook.active
                    URL          = $hookUrl
                    HasEverRun   = $hasEverRun
                    LastStatus   = $lastStatus
                    LastHTTPCode = $lastCode
                    UpdatedAt    = $hook.updated_at
                }
            }
        }
    }

    # Final Output
    if ($results.Count) {
        $results | Export-Csv -Path "./Full_Webhook_Audit_Report.csv" -NoTypeInformation
        Write-Information "Audit complete! $($results.Count) hooks found. Results saved to Full_Webhook_Audit_Report.csv"
    } else {
        Write-Information "No webhooks found across the organizations."
    }

johnlbevan2

12 days ago

ps. As for what to do with the findings.

Filter for `HasEverRun=true`; if it's not run, the secrets can't have been exposed.

Review the URL; this says who you're calling. Purists would say that if you've called any endpoint there's a risk. However, for most companies I'd say you can trust services provided by the likes of Microsoft (they host GitHub anyway) and Snyk (if you rely on them for security scanning, you should be able to trust them), so if you see webhooks to Azure DevOps (dev.azure.com) or Snyk (api.snyk.io) you can assume anything exposed there isn't a concern. You don't need to worry about proxies on egress either: since this is GitHub Cloud, calls egress from GitHub rather than through your own network. So the only real concerns are in-house or third-party endpoints where you feel those companies (or their employees with access to ingress logs) may be a risk.
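That triage could be scripted against the CSV the audit produces. A rough sketch (the trusted-host list and column names match the script above, but tune them to your own environment; the sample rows are made up):

```python
import csv
import io
from urllib.parse import urlparse

# Hosts you might choose to trust -- an assumption, adjust for your org
TRUSTED_HOSTS = {"dev.azure.com", "api.snyk.io"}

def hooks_of_concern(rows):
    """Keep hooks that have actually fired and point at an untrusted endpoint."""
    concerning = []
    for row in rows:
        if row["HasEverRun"] != "True":
            continue  # never triggered, so the secret was never sent
        host = urlparse(row["URL"]).hostname or ""
        if host not in TRUSTED_HOSTS:
            concerning.append(row)
    return concerning

# Sample data standing in for Full_Webhook_Audit_Report.csv
sample = io.StringIO(
    "Repository,URL,HasEverRun\n"
    "repo-a,https://dev.azure.com/hook,True\n"
    "repo-b,https://internal.example.com/hook,True\n"
    "repo-c,https://internal.example.com/hook,False\n"
)
flagged = hooks_of_concern(csv.DictReader(sample))
print([r["Repository"] for r in flagged])  # ['repo-b']
```

Swap the `io.StringIO` stand-in for `open("Full_Webhook_Audit_Report.csv", newline="")` to run it against the real report.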

user

13 days ago

[deleted]

sathomasga

13 days ago

Closing the barn door well after the horses are long gone.

s1mn

13 days ago

Waiting 3 months to disclose this is suboptimal.

monkaiju

13 days ago

How is this not a bigger deal on HN? I was expecting a lot more conversation.

silvr68

13 days ago

yeah, I was wondering the same... kind of a big deal that they had the issue in the first place, and that they fixed it in January but are only messaging about it now

thinkindie

12 days ago

I don't understand how this story isn't on the front page yet.

suralind

13 days ago

How come it took them so much time to send this notification? I'm so fed up with their bs.