AI inventory and governance


AI models and machine learning components have become integral parts of modern software development. Just like traditional dependencies, these AI models can introduce operational and security risks to your organization.

You can use Endor Labs to perform the following tasks to help you gain visibility into these risks and make informed decisions about AI model usage.

  • AI model discovery and evaluation: Search through thousands of open source AI models from Hugging Face. You can also evaluate models across security, activity, popularity, and operational integrity.
  • AI model governance and policy management: Configure finding policies to enforce organizational restrictions on AI model usage and quality standards. You can create custom policies to flag specific AI models or providers. You can also track AI model usage across your development pipeline.
  • AI-powered developer assistance: Use DroidGPT to find relevant open-source components and troubleshoot scanning errors with intelligent recommendations.
  • AI security review: Identify potential security issues in your pull requests and get recommendations to fix them.
  • Real-time code scanning: Use Endor Labs MCP Server to seamlessly integrate Endor Labs into your IDE to scan both human and AI-generated code in real-time, catching vulnerabilities and issues before they reach production.

The following sections provide information on how to discover AI models, evaluate them, and manage them with Endor Labs.

AI access

The following features in the Endor Labs application access Artificial Intelligence (AI) services to enhance security analysis, code insights, and developer assistance. You can check whether AI access is currently enabled or disabled for these features.

  • LLM code processing: Detects AI models from HuggingFace used in Python projects and lists them as dependencies. See View AI model findings.

  • DroidGPT: Retrieves data from third-party AI tools and correlates it with Endor Labs’ proprietary risk data to identify open source software packages. See Use DroidGPT.

  • C/C++ embeddings: Enhances detection capabilities of C and C++ software composition analysis (SCA) by using AI-generated embeddings. See Scan C and C++ projects.

To modify AI feature settings:

  1. Select Settings > AI Access from the left sidebar.
  2. Click Contact us to submit a request to the support team.
  3. The support team will assist you with enabling or disabling AI features based on your organization’s needs.

AI model findings

Endor Labs scans can detect AI models and list them as dependencies. These models are flagged and displayed in the scan results. You can define custom policies to flag the usage of specific AI providers, specific AI models, or models with low quality scores, ensuring the use of secure and reliable AI models in your projects.

See AI models detection for the list of external AI models detected by Endor Labs. Only HuggingFace models are scored, as they are open source and provide extensive public metadata. Models from all other providers are detected but not scored due to limited metadata.
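
As an illustration of the kind of reference such a scan looks for, a Python project typically names a Hugging Face model in a `from_pretrained()` call. The sketch below is not Endor Labs' actual detection logic; it is a minimal regex-based example of how such a reference might be spotted in source text:

```python
import re

# Minimal sketch (not Endor Labs' detection logic): find Hugging Face model
# identifiers passed to transformers-style from_pretrained() calls.
MODEL_REF = re.compile(r"""from_pretrained\(\s*["']([\w.\-]+/[\w.\-]+|[\w.\-]+)["']""")

def find_model_refs(source: str) -> list[str]:
    """Return model identifiers referenced in a Python source string."""
    return MODEL_REF.findall(source)

example = '''
from transformers import AutoModel, AutoTokenizer
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
'''
print(find_model_refs(example))
# ['bert-base-uncased', 'sentence-transformers/all-MiniLM-L6-v2']
```

A real scanner parses the code rather than pattern-matching it, but the output is the same idea: a list of model identifiers that can then be treated as dependencies.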

Configure finding policies and perform an endorctl scan to detect AI models in your repositories and review the findings.

  1. Configure finding policy to detect AI models with low scores and enforce organizational restrictions on specific AI models or model providers.

  2. Download and install Semgrep Community Edition on your machine before you run an AI model scan.

    Although Semgrep supports installation with Brew on macOS, it does not allow installing a specific version. To install Semgrep, you must have a Python environment with pip on your system. We recommend that you install Semgrep version 1.99.0.

    pip install semgrep==1.99.0
    
  3. Perform the endorctl scan using the following command:

    endorctl scan --ai-models --dependencies
    
  1. To view all AI model findings detected in your tenant:

    • Navigate to AI Inventory on the left sidebar to view AI findings.
    • Use the search bar to look for any specific models.
    • Select a model, and click to see its details.
    • You can also navigate to Findings and choose AI Models to view findings.
  2. To view AI model findings associated with a specific project,

    • Navigate to Projects and select a project.
    • Navigate to Inventory and click AI Models under Dependencies to view findings.

By default, AI models are discovered during SCA scans run through GitHub App, Bitbucket App, Azure DevOps App, and GitLab App. You can view the reported AI models under AI Inventory in the left sidebar.

To generate AI model findings:

  1. Configure finding policy to detect AI models with low scores and enforce organizational restrictions on specific AI models or model providers.

  2. Download and install Semgrep Community Edition on your machine before you run an AI model scan.

    Although Semgrep supports installation with Brew on macOS, it does not allow installing a specific version. To install Semgrep, you must have a Python environment with pip on your system. We recommend that you install Semgrep version 1.99.0.

    pip install semgrep==1.99.0
    
  3. View AI Model findings.

  4. To disable AI model discovery, set ENDOR_SCAN_AI_MODELS=false in your scan profile.

The following table lists the AI model providers currently supported by Endor Labs for model detection. For each provider, the table includes the supported programming languages, whether model scoring is available, and a reference link to the provider’s API documentation.

| AI model provider | Supported languages | Endor score | Reference |
|---|---|---|---|
| HuggingFace | Python | Yes | https://huggingface.co/docs |
| OpenAI | Python, JavaScript, Java (beta), Go (beta), C# | No | https://platform.openai.com/docs/libraries |
| Anthropic | Python, TypeScript, JavaScript, Java (alpha), Go (alpha) | No | https://docs.anthropic.com/en/api/client-sdks |
| Google | Python, JavaScript, TypeScript, Go | No | https://ai.google.dev/gemini-api/docs/sdks |
| AWS | Python, JavaScript, Java, Go, C#, PHP, Ruby | No | https://docs.aws.amazon.com/bedrock/latest/APIReference/welcome.html#sdk |
| Perplexity | Python | No | https://docs.perplexity.ai/api-reference/chat-completions-post |
| DeepSeek | Python, JavaScript, Go, PHP, Ruby | No | https://api-docs.deepseek.com/api/deepseek-api |
| Azure OpenAI | C#, Go, Java, Python | No | https://learn.microsoft.com/en-us/azure/ai-foundry/ |

Search for AI Models

An AI model is a computational system designed to simulate human intelligence by performing tasks such as recognizing patterns, making decisions, predicting outcomes, or generating content. Many open source AI models are freely available for use, modification, and distribution. Just like dependencies, these AI models can introduce operational and security risks to the organization that uses them. Gaining visibility into these risks can minimize the vulnerabilities they introduce.

Endor Labs picks the top ten thousand open source AI models available on Hugging Face and assigns Endor scores to them, so that you can make informed decisions before using them in your organization. See AI Model scores for more information.

To look for AI models, navigate to Discover > AI models.

  • Type in the search bar to look for AI Models and click Search AI Models.


  • Select a search result to view more details such as its security, activity, popularity, or quality score. You can also view complete details of an AI model.


  • Click Go to Hugging Face to view the AI model on the Hugging Face website.

AI model policies

Endor Labs provides the following finding policy templates for detecting AI models that have a low Endor score. See Finding Policies for details on how to create policies from policy templates.

| Policy template | Description | Severity |
|---|---|---|
| AI models with low scores | Raise a finding if the repository uses an AI model with an Endor score value that is less than the specified threshold value. | Low |
| Restricted AI models | Raise a finding if the repository uses an AI model that is restricted based on your organizational policy or usage context. | Low |
| Restricted AI model providers | Raise a finding if the repository uses an AI model provider that is restricted based on your organizational policy or usage context. | Low |
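
As a rough illustration of how these three templates behave, the following sketch evaluates a model record against each check. The deny-lists, threshold value, and record format are all hypothetical; this is not Endor Labs' policy engine:

```python
# Hypothetical deny-lists and threshold for illustration only.
RESTRICTED_MODELS = {"gpt2-tiny-unverified"}
RESTRICTED_PROVIDERS = {"UnvettedProvider"}
SCORE_THRESHOLD = 6.0

def evaluate(model: dict) -> list[str]:
    """Return the names of the policy templates that would raise a finding."""
    findings = []
    score = model.get("endor_score")
    if score is not None and score < SCORE_THRESHOLD:
        findings.append("AI models with low scores")
    if model["name"] in RESTRICTED_MODELS:
        findings.append("Restricted AI models")
    if model["provider"] in RESTRICTED_PROVIDERS:
        findings.append("Restricted AI model providers")
    return findings

# A well-scored, unrestricted model raises no findings.
print(evaluate({"name": "bert-base-uncased", "provider": "HuggingFace", "endor_score": 8.2}))
# A low-scored model from a restricted provider trips all three policies.
print(evaluate({"name": "gpt2-tiny-unverified", "provider": "UnvettedProvider", "endor_score": 3.1}))
```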

AI model scores

To evaluate AI models effectively, we use a multifactor scoring system that assesses popularity, activity, operational integrity, and security.

Each model is assigned a composite score based on the following criteria:

The popularity score reflects the model’s adoption and recognition within the AI community. Higher scores indicate greater usage and community engagement.

  • Number of downloads: More downloads indicate widespread adoption.

  • Number of likes: More likes suggest a positive reception from users.

  • Published papers: Models with linked academic papers receive higher credibility.

  • GitHub repository: Models with an associated GitHub repository score higher.

  • Number of spaces using the model: More integrations suggest broader utility.

Models with many downloads, likes, citations, and integrations score higher, while models with fewer engagements score lower.

The activity score measures how actively a model is discussed and maintained.

  • Discussion posts: Active community discussions contribute positively.

  • Pull requests: Indicates ongoing maintenance and improvements.

Models with frequent discussions and active pull requests score higher, while models with limited activity receive lower scores.

The operational score assesses the model’s reliability, transparency, and usability.

  • Reputable provider: Models from well-known sources score higher.

  • Model age: Older, well-maintained models may score higher, but outdated models may receive penalties.

  • Authorization requirements: Restricted-access models score lower for accessibility but may gain points for security.

  • Gated models: If a model requires special access, it may impact usability.

  • License information: Models with clear licensing receive higher scores.

  • License type: Open licenses (permissive, unencumbered) generally score higher than restrictive ones.

The following factors related to the availability of model metadata are also considered.

  • Metric information: Essential for model evaluation.

  • Dataset information: Transparency about training data boosts score.

  • Base model information: Important for derivative works.

  • Training data, fine-tuning, and alignment training information: Increases credibility.

  • Evaluation results: Demonstrates model performance.

Models with comprehensive metadata, reputable providers, and clear licensing score higher.

Models with unclear ownership, restrictive access, or missing details score lower.

The security score evaluates potential risks associated with a model’s implementation and distribution.

  • Use of safe tensors: Secure tensor formats boost safety score.

  • Use of potentially unsafe files: Formats such as pickle, PyTorch, and Python code files pose security risks.

  • Typosquatting risks: Models that could be impersonating popular models receive lower scores.

  • Example code availability: Models that contain example code or code snippets can introduce potential issues and hence receive lower scores.

Models that follow security best practices, such as safe tensors, clear documentation, and vetted repositories, score higher.

Models receive lower scores if they use potentially unsafe formats such as pickle (.pkl) and unverified PyTorch (.pth) or show signs of typosquatting.
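
As an illustration of the file-format criterion, the following sketch classifies model files by extension. The suffix lists are assumptions drawn from the factors above, not Endor Labs' actual scoring rules:

```python
from pathlib import PurePosixPath

# Illustrative suffix lists based on the criteria above: safetensors is the
# safe serialization format; pickle, unverified PyTorch checkpoints, and
# Python code files are treated as potentially unsafe.
SAFE_SUFFIXES = {".safetensors"}
UNSAFE_SUFFIXES = {".pkl", ".pickle", ".pth", ".py"}

def classify(filename: str) -> str:
    """Classify a model file by its extension."""
    suffix = PurePosixPath(filename).suffix
    if suffix in SAFE_SUFFIXES:
        return "safe"
    if suffix in UNSAFE_SUFFIXES:
        return "potentially unsafe"
    return "unknown"

print(classify("model.safetensors"))   # safe
print(classify("weights.pkl"))         # potentially unsafe
```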

Each category contributes to the overall model score. The final score is a weighted sum of these factors, with weights adjusted based on real-world relevance and risk impact.
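
The weighted sum can be sketched as follows; the actual Endor Labs weights are not public, so the weights and the 0-10 scale below are made up for illustration:

```python
# Hypothetical category weights (must sum to 1.0); not the real Endor weights.
WEIGHTS = {"popularity": 0.25, "activity": 0.20, "operational": 0.25, "security": 0.30}

def composite_score(category_scores: dict[str, float]) -> float:
    """Combine per-category scores (0-10) into a weighted overall score."""
    return round(sum(WEIGHTS[c] * category_scores[c] for c in WEIGHTS), 2)

print(composite_score({"popularity": 8.0, "activity": 6.0, "operational": 9.0, "security": 7.0}))
# 7.55
```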

Higher scores indicate well-documented, popular, actively maintained, and secure models, while lower scores highlight potential risks or lack of transparency.

This scoring system enables users to make informed decisions when selecting AI models for their projects.

Endor Labs continuously refines and expands its evaluation criteria; this document represents the current methodology snapshot.

DroidGPT

DroidGPT retrieves data from third-party Artificial Intelligence (AI) tools and correlates it with Endor Labs’ proprietary risk data to help you quickly and easily research open source software packages.

  1. Sign in to the Endor Labs application and click DroidGPT under Discover.

  2. From DroidGPT, choose an Ecosystem.

  3. Type your questions in the search bar and click Ask DroidGPT. Here are a few examples:

    • What are the best logging packages for Java?
    • Which AI packages have the most permissive licenses?
    • Which Go packages have the least known vulnerabilities?
    • What are a few packages similar to log4j?

You’ll receive instant answers. All results include risk scores revealing the quality, popularity, trustworthiness, and security of each package.

Endor Labs integrates with third-party Artificial Intelligence (AI) tools to help you troubleshoot errors while performing software composition analysis, dependency resolution, or generating call graphs during an endorctl scan.

In the event of an error, DroidGPT generates explanations and actionable advice for how to resolve the error on the given host system. These suggestions are displayed as part of the error log messages on the command line and can help you understand why build errors occurred during the scan process and how to resolve them.

Use the ENDOR_SCAN_DROID_GPT environment variable or the --droid-gpt option to enable DroidGPT error logging on your system.

  • Enable error logging while performing a scan.

    endorctl scan --droid-gpt

  • Enable error logging while checking the system specifications required for performing a scan.

    endorctl host-check --droid-gpt

Here is an example of the recommendations generated by DroidGPT while scanning a Ruby repository where the manifest file is not correctly configured.

DroidGPT suggests the following as a possible remediation:
1. The error message indicates that there is a problem parsing the Gemfile, which is preventing the dependency tree from being generated.
2. Specifically, the error message states that there are no gemspecs at the specified location, which is causing Bundler to fail.
3. To fix this issue, you should check that the Gemfile is correctly configured and that all necessary gemspecs are present.
4. Additionally, you may want to try running `bundle install` to ensure that all dependencies are properly installed.
5. Please note that this advice is generated by an AI and there may be additional factors at play that are not captured in the error message. As such, there is no guarantee that these steps will resolve the issue, and you should proceed with caution.

AI security review

Beta

AI security review provides automated code review capabilities using artificial intelligence to identify potential security issues in your codebase.

After you set up AI security review, creating a pull request triggers an Endor Labs scan on the diff. Endor Labs sends the scan data to an AI model to produce a security analysis and generates a report.

You can view the report in the Endor Labs user interface. You can also enable pull request comments to get a comment on your GitHub pull request with the details of the AI security review.

The following sections provide information on how to set up AI security review, customize a scan profile, and view the AI security review results.

Prerequisites for AI security review

Before you set up AI Security Review, ensure that the following prerequisites are in place:

  • An active Endor Labs subscription with the Security Review license bundle.
  • Administrator access to your GitHub organization.
  • Access to configure scan profiles and policies.
  • Code Segment Embeddings and LLM Processing enabled in Data Privacy settings.

To enable Code Segment Embeddings and LLM Processing:
  1. Select Manage > Settings from the left sidebar.

  2. Select SYSTEM SETTINGS > Data Privacy.


  3. Select Code Segment Embeddings and LLM Processing.

  4. Click Save Data Privacy Settings.

To verify your license:

  1. Select Settings > License from the left sidebar.
  2. Verify that Security Review appears under Products and Features.

Set up AI security review

To set up AI security review, you need to complete the following tasks:

Install the GitHub App if you don’t have it already. See GitHub App for more information.

Ensure that you enable the following settings:

  • Pull Request Scans: Allows Endor Labs to scan pull requests. You must enable this setting so that AI security review can run on a pull request.
  • Pull Request Comments: Allows Endor Labs to comment on a pull request in GitHub. This setting is optional; enable it if you want a comment on your GitHub pull request with the details of the AI security review. You must also select Pull Request Comments in your scan profile and set up an action policy.

Create a scan profile for AI security review and configure the following options:

  • Pull Request Scans: Mandatory. This setting allows Endor Labs to scan the pull requests.
  • Pull Request Comments: Optional. This setting allows Endor Labs to comment on a pull request in GitHub.
  • AI Security Review Scans: Mandatory. This setting allows Endor Labs to scan the pull requests for AI security review.
  • Disable Code Summary: Optional. This setting allows you to disable the code summary for the AI security review.
  • Custom Prompt: Optional. You can enter a custom prompt to modify how AI security review detects and categorizes security-related changes.


After you create the scan profile, assign the scan profile to the projects for which you want to set up AI security review.

See Scan Profiles for more information on creating a scan profile.

Ensure that the Security Review policy is enabled under finding policies.

  1. Select Policies & Rules from the left sidebar.
  2. Select Finding Policies.
  3. Search for Security Review and ensure that the policy is enabled.


If you want to get comments on your GitHub pull requests, you need to set up an action policy.

  1. Select Settings from the left sidebar.

  2. Select Action Policies.

  3. Click Create Action Policy.

  4. Select Security Review as the Policy Template.

  5. Choose the severity threshold to trigger the AI security review.

    You can choose from the following severity thresholds:

    • Any
    • Low
    • Medium
    • High
    • Critical
  6. Select Pull Request as the Branch Type.

  7. Choose Enforce Policy as the action, and select Warn or Break the Build depending on your preference.

  8. Configure include and exclude patterns for the policy.

  9. Name the policy and provide a description.

  10. Enter tags if required for the policy.

  11. Click Create Action Policy to save the policy.

See Action Policies for more information on setting up an action policy.


View AI Security Review Results

You can view the AI security review results in the Endor Labs UI. You can also enable pull request comments to get a comment on your GitHub pull request with the details of the AI security review. If you use merge queues, Endor Labs provides security reviews for pull requests until they are added to the merge queue, and performs a final security review on the commit SHA merged to the default branch.

  1. Select Projects from the left sidebar.

  2. Select the project for which you want to view the AI security review results.

  3. Select Security Review.


    You can view the AI security review results for all the pull requests raised in the project. You can also search for a specific pull request and view the results.

    You can filter the results by the type of security issue, the severity of the security issue, the author of the PR, the approvers, and the creation time of the PR. You can select Advanced to enter a search query to filter the results.

    For example, you can filter the results to show only the critical security issues that are part of unmerged pull requests:

    (spec.level in ["SECURITY_REVIEW_LEVEL_CRITICAL"] and spec.repository_pull_request_spec.merged != true)

  4. Click on a pull request to view the detailed report.


    The report appears in the right sidebar. You can view the security analysis of the PR and the list of security risks along with their severities.

    You can click the links next to the security analysis to go directly to the lines of code that have the security risk.

    You can also click the links to view the pull request and the specific commit that introduced the security risk.

  5. Select the arrow next to a security risk to view the details of the security risk.


    You can view the analysis of the security risk, the code snippet associated with the risk, and the details of the pull request.
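
The example search query shown in the filtering step can be mimicked locally as a simple predicate. The field names mirror the query; the review records below are made up:

```python
# Sketch of the filter: critical-severity reviews on pull requests
# that are not yet merged. Records are illustrative, not real API output.
def matches(finding: dict) -> bool:
    spec = finding["spec"]
    return (
        spec["level"] in ["SECURITY_REVIEW_LEVEL_CRITICAL"]
        and spec["repository_pull_request_spec"]["merged"] is not True
    )

reviews = [
    {"spec": {"level": "SECURITY_REVIEW_LEVEL_CRITICAL",
              "repository_pull_request_spec": {"merged": False}}},
    {"spec": {"level": "SECURITY_REVIEW_LEVEL_LOW",
              "repository_pull_request_spec": {"merged": False}}},
]
print([matches(r) for r in reviews])  # [True, False]
```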

If you configure the action policy to get comments on your GitHub pull requests, Endor Labs comments on the pull request with the security analysis.
