
When Assumed Trust Breaks

Lessons from the Red Hat GitLab Incident
Brian Gallagher
General Manager, Koniag Cyber

On October 2, 2025, Red Hat disclosed unauthorized access to one of its internal GitLab instances used by its consulting organization. While the compromised instance reportedly did not host Red Hat’s core supply chain or sensitive customer data, the breach nonetheless exposed internal consulting artifacts, code snippets, and communications. Red teams, security architects, and development leads should treat this as a wake-up call. The lesson? Even “trusted” internal tooling can become a vector for data or intellectual property exfiltration.

From the vantage of Koniag Cyber, this incident underscores several key risk domains that often get under-prioritized:

  • Third-party or delegated tooling risk
  • Lack of rigorous “Know Your Developer” (KCD) practices
  • Deficient vulnerability assessment discipline

Let’s unpack each of these and discuss actionable guardrails for embedding resilience into a modern software supply chain, guardrails that reduce the likelihood of incidents like this one.

Third-Party Tool Risk Is Real, Even Inside Your Walls

Organizations often treat internal development tooling (e.g., GitLab instances, CI/CD front ends, build servers) as “ours” and thus implicitly trusted. However:

  • These systems are often managed or maintained by third parties (e.g. consulting arms, managed services teams, or outsourced DevOps teams), bringing in external access vectors.
  • Even internal users might be compromised, or their credentials reused elsewhere.
  • Attackers often “pivot sideways” via less-protected internal systems to reach high-value assets.

In Red Hat’s case, the affected GitLab instance was a consulting environment, not the product supply chain. Yet code and internal plans were still exposed, demonstrating that even so-called noncritical systems can yield substantial risk.

For Koniag Cyber and organizations we advise, we see this as a clarion call: No tool in your development ecosystem should be implicitly trusted. Every piece of infrastructure, whether cloud, on-prem, or hosted, should be treated as “untrusted until verified.”

Know Your Developer (KCD) — A Complement to KYC/KYP

We often talk about “Know Your Customer (KYC)” or “Know Your Partner (KYP)” in compliance or third-party risk programs. But in software security, we must understand and implement Know Your Developer (KCD).

KCD means embedding controls and visibility around who is building components in your systems, especially when external or semi-external teams are involved. Some KCD components include:

  • Developer identity proofing and credential hygiene: Mandating organizational identity management (SSO, multi-factor auth, hardware tokens) for all devs, including contractors and consultants.
  • Least privilege access to repos, branches, build systems: Separate access for code vs. operations; scoped permissions per project.
  • Audit logging and tamper evidence: Ensure that code check-ins, merges, build approvals, and artifact promotions are logged with cryptographic integrity checks.
  • Supply chain attestations: Before integrating external modules, require provenance metadata and known origin assertions.
  • Periodic attestation of developer status: Contractors or external developers should be revalidated periodically (background, contractual, and credential checks).
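To make the tamper-evidence point concrete, an audit trail can be hash-chained so that any retroactive edit breaks the chain and is detectable on verification. The sketch below is illustrative only; the entry shape is an assumption, and production systems would lean on signed commits or a transparency log (e.g., Sigstore’s Rekor) rather than a hand-rolled chain:

```python
import hashlib
import json

def append_event(log, event):
    """Append an event to a hash-chained audit log.

    Each entry embeds the SHA-256 hash of the previous entry, so any
    after-the-fact modification of history invalidates every later link.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = {"event": event, "prev": prev_hash}
    entry = dict(payload)
    entry["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log

def verify_chain(log):
    """Recompute every link; return False if any entry was altered."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"event": entry["event"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Run against a log of merges and artifact promotions, a single edited field anywhere in history causes verification to fail.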

By making KCD part of your engineering DNA, you reduce the risk of rogue insiders or stolen credentials initiating destructive changes.

Being Consistent and Continuous with Vulnerability Assessments

No system is perfect. What separates resilient organizations from casualties is finding weaknesses before adversaries do. Here’s how Koniag Cyber recommends structuring vulnerability assessment practices:

  • Static Application Security Testing (SAST). Frequency/trigger: on each merge and in nightly builds. Scope & goal: analyze source code (including third-party libraries) for common code flaws. Key outputs: vulnerability reports; prioritized defect backlog.
  • Dynamic Application Security Testing (DAST). Frequency/trigger: pre-production staging; CI gating. Scope & goal: exercise running application endpoints to find runtime attacks (e.g., injection, auth bypass). Key outputs: vulnerability scan reports; remediation steps.
  • Software Composition Analysis (SCA). Frequency/trigger: on each build, or at least daily. Scope & goal: identify vulnerable OSS dependencies and transitive dependencies. Key outputs: bill of materials (BOM); CVE alerts; upgrade paths.
  • Penetration Testing/Red Teaming. Frequency/trigger: quarterly, or before major releases. Scope & goal: simulate attacker behavior on systems (CI, infrastructure, integration endpoints). Key outputs: pen test report with exploit paths and recommendations.
  • Infrastructure & Host Vulnerability Scans. Frequency/trigger: weekly, or at least monthly. Scope & goal: scan build servers, VMs, containers, and OS images for CVEs. Key outputs: patch reports; configuration drift alerts.
  • Supply Chain Attack Simulations. Frequency/trigger: at least annually. Scope & goal: test the security of upstream dependencies, build pipelines, and artifact signing. Key outputs: attack paths; mitigations; procedural improvements.
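The CI-gating idea in the SAST/SCA rows above boils down to a severity threshold check. A minimal sketch follows; the finding format and waiver mechanism are assumptions for illustration, not any particular scanner’s output:

```python
def gate_build(findings, block_above="high", waivers=frozenset()):
    """Decide whether a build passes a vulnerability gate.

    findings: iterable of dicts like {"id": "CVE-2025-0001", "severity": "critical"}.
    Severities strictly above `block_above` fail the build unless the
    finding ID is in `waivers` (a time-boxed, reviewed exception list).
    Returns (passed, blocking_findings).
    """
    order = ("low", "medium", "high", "critical")
    threshold = order.index(block_above)
    blocking = [
        f for f in findings
        if order.index(f["severity"]) > threshold and f["id"] not in waivers
    ]
    return (not blocking, blocking)
```

Wired into a pipeline, the blocking list becomes the ticket backlog and the waiver set forces every exception through review.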

Crucially, these assessments must feed directly into your engineering backlog with clear ownership, timelines, and metrics. It’s not enough to identify vulnerabilities; they must be triaged, remediated, or mitigated.
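Tracking timelines can start as simply as a severity-based remediation SLA check over open findings. The SLA numbers below are illustrative assumptions, not a standard; tune them to your risk policy:

```python
from datetime import date, timedelta

# Illustrative remediation SLAs per severity, in days (assumed values).
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def overdue_findings(findings, today):
    """Return open findings whose remediation deadline has passed."""
    return [
        f for f in findings
        if not f.get("closed")
        and today > f["opened"] + timedelta(days=SLA_DAYS[f["severity"]])
    ]
```

The overdue list is the metric to report upward: ownership and timelines stop being aspirational once someone is named on every late item.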

Putting It Together: A Secure SDLC Blueprint (with KCD)

Here’s how Koniag Cyber architects a secure software development lifecycle that weaves in KCD and assessment disciplines.

  • Onboarding & Identity
    • All developers (internal or external) must be onboarded via central IAM, with MFA required.
    • Code repositories, build systems, and artifact registries all use unified identity (no standalone accounts).
  • Access Scoping & Branch Controls
    • Use fine-grained access control: restrict branches, limit merge rights, and enforce code review gating.
    • Use vaults or ephemeral credentials for secrets — never embed secrets in code or CI.
  • Early Static and Composition Scans
    • When a dev pushes code, automatically trigger SAST and SCA scans.
    • Enforce thresholds, e.g., no new critical CVEs allowed to enter mainline.
  • Integration & Build Pipeline Hardening
    • The CI/CD pipeline must run in isolated, ephemeral containers; build agents are immutable and torn down after each job.
    • Ensure build isolation, artifact signing, and reproducible builds.
    • Log and monitor all build steps with cryptographic audit trails.
  • Pre-Production Testing (DAST / Pen Test)
    • Deploy candidate builds to staging environments with instrumentation.
    • Run DAST scans and security smoke tests; before promoting to production, run a fresh pen test (or apply red team heuristics).
  • Production Hardening & Monitoring
    • Use runtime protections, anomaly detection, and WAFs as needed.
    • Continuously scan for new vulnerabilities (infra, container images, dependencies).
    • Keep incident response playbooks ready, including for developer tool compromise.
  • Developer Revalidation & Attestation
    • Semiannual (or annual) revalidation of contractors/externals (background checks, contract reaffirmation).
    • Audit logs of developer access, history, and credential changes.
  • Governance & Oversight
    • A security gate committee must review high-risk dependencies, alternate builds, and components from new sources.
    • Risk scorecard per project: developer trust rating, dependency risk, exposure footprint.
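Several of the pipeline-hardening steps above reduce to one rule: an artifact that does not verify is never promoted. The sketch below uses a symmetric MAC purely to illustrate that gate; real pipelines should use asymmetric signing (e.g., Sigstore/cosign or GPG) with keys held outside the build environment:

```python
import hashlib
import hmac

def sign_artifact(artifact: bytes, key: bytes) -> str:
    """Produce a detached MAC over the artifact's SHA-256 digest."""
    digest = hashlib.sha256(artifact).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def promote_if_verified(artifact: bytes, signature: str, key: bytes) -> bool:
    """Promotion gate: only artifacts whose signature verifies may pass."""
    expected = sign_artifact(artifact, key)
    return hmac.compare_digest(expected, signature)
```

The constant-time comparison matters even in a sketch: a naive `==` on MACs invites timing side channels.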

Why This Matters (and Why Many Organizations Miss the Mark)

The “trusted tool” fallacy: Security teams often assume that internal tools are safe. Breaches like Red Hat’s disprove that assumption.

Tool sprawl leads to blind spots: As organizations adopt more tooling (multiple Git servers, CI systems, cloud build agents), each becomes a potential attack surface. Without unified oversight, vulnerabilities can slip through.

Human risk (not just code risk): Rogue insiders or compromised credentials can inflict damage faster than zero-day exploits.

Technical debt in security is invisible until disaster: Without continuous assessments, obsolete libraries, misconfigurations, or weak policies accumulate until an incident forces a reckoning.

Supply chain trust is fragile: If you integrate third-party code or depend on vendor services, you are only as strong as the weakest link. KCD helps you know who you depend on.

Key Takeaways and Core Recommendations

Takeaway #1: Even internal, consulting, or semi-external GitLab systems can be targets and reservoirs of value. The Red Hat incident reminds us that the boundaries of trust are blurred.  

Takeaway #2: Incorporate Know Your Developer (KCD) principles to track, validate, and limit who can touch your code and how.

Takeaway #3: Enforce a multi-tiered vulnerability assessment regime (SAST, DAST, SCA, pen testing, infra scanning) as an integral, not optional, part of your SDLC.

Takeaway #4: Demand that identified vulnerabilities be tracked to closure, with accountability and metrics.

Takeaway #5: Audit, revalidate, and monitor developer access; credentials expire, trust ages.

Takeaway #6: Governance functions should review high-risk decisions (e.g., importing external modules, elevating permissions, or breaking isolation).

At Koniag Cyber, we believe that software product security is not just about catching bugs; it’s about managing trust, exposure, and identity across your entire development ecosystem. The Red Hat incident is yet another reminder: assume compromise, verify continuously, and bake security into every stage of your engineering pipeline.
