Best practices and implementation

Learn how to integrate Endor Labs most effectively into your organization’s workflows.

The resources described here are designed to help you maximize the effectiveness and efficiency of your Endor Labs implementation. Whether you’re just getting started or seeking to optimize your current setup, this guide offers insights, strategies, and recommendations tailored to various use cases. By following these best practices, you’ll achieve seamless integration with Endor Labs and better meet your organization’s goals.

1 - Best Practices: Branches and workflows

Explore how to effectively use Endor Labs to scan different branches within your organization’s software development workflows. Properly managing branches and integrating robust scanning processes is crucial for maintaining code quality, security, and consistency across your development pipeline.

This guide provides actionable insights and strategies for setting up Endor Labs to seamlessly scan and monitor your branches, ensuring that potential issues are detected and addressed early in the development cycle.

A typical Git Flow may include the following types of branches:

  • main
  • develop
  • release
  • feature
  • hotfix

The two primary branches in Git Flow are main and develop. The main branch stores the official release history, while the develop branch often serves as the integration branch for features. The feature, release, and hotfix branches are supporting branches, each with a different intended purpose.

Baseline branch

A baseline branch is any branch that falls into one of the following categories:

  • A branch used to maintain release history or as a single source of truth
  • A branch used for managing releases
  • A branch serving as a source of integration for features and bug fixes

In the Git Flow model, main, release, and develop can serve as the baseline branch.

The main branch is typically the primary branch and is often chosen as the default branch in a Git repository. It serves as the central integration point for all development efforts and usually contains the most stable and up-to-date version of the codebase, reflecting the latest approved changes that are ready for production or further testing. This is why we recommend using main not only as the baseline branch but also as the default branch for repositories. Endor Labs uses metrics from the default branch as the primary context for displaying statistics and metrics on the dashboards.

Why should you scan the baseline branches

Scan the baseline branches to:

  • Establish a security and quality baseline: Scanning the baseline branch helps establish a reference point for the security and quality standards of your code, allowing you to identify any deviations or new vulnerabilities in subsequent branches.

  • Detect inherited issues: By scanning the baseline branch, you can catch existing issues that might be inherited by other branches and understand the current state of your security posture, ensuring that these problems are addressed before they proliferate throughout your development workflow.

  • Ensure consistency across development: Regularly scanning the baseline branch ensures that all branches derived from it start from a consistent and secure foundation, reducing the risk of introducing errors or vulnerabilities to your project.

How to scan the baseline branch

Set up a trigger to initiate a scan whenever changes are merged into the baseline branch, or schedule daily scans to ensure continuous monitoring.

Perform a standard scan with additional configuration to enhance the process. By default, Endor Labs treats the first scanned branch as the default branch. You can override this behavior by using the --as-default-branch argument to designate one of your baseline branches as the default branch in future scans, ensuring the correct context and parameters are applied for displaying statistics on the dashboards.
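
For example, a CI step that scans a baseline branch and pins it as the default branch could look like the following sketch. It only prints the command it would run, so you can verify the invocation before wiring it into a pipeline; the branch name main is an assumption, and endorctl authentication via environment variables is assumed to be configured by your CI system.

```shell
#!/bin/bash
# Sketch of a post-merge baseline scan step. Assumes endorctl is installed
# and authenticated via environment variables in your CI environment.
BASELINE_BRANCH="${BASELINE_BRANCH:-main}"  # adjust to your baseline branch

# Designate this baseline branch as the default branch for dashboard context.
SCAN_CMD=(endorctl scan --as-default-branch)

echo "Scanning baseline branch '$BASELINE_BRANCH' with: ${SCAN_CMD[*]}"
# Uncomment to run the scan for real:
# "${SCAN_CMD[@]}"
```

Building the command into an array keeps the quoting intact if you later add arguments with spaces.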

For more information, see the GitHub Actions templates you can use in your CI pipelines. The repository also includes examples of other CI tools.

Feature or hotfix branch

A feature or hotfix branch is a specialized branch in a version control system used to develop and integrate new features and bug fixes into the existing codebase. Changes are typically introduced into the code through pull requests.

Why should you scan the feature branches through your pull requests

  • Prevent security vulnerabilities: Monitor pull requests to prevent the introduction of new dependencies with known vulnerabilities, helping to maintain a secure codebase.

  • Enforce security policies: You can begin enforcing security policies to safeguard your codebase and ensure compliance with established best practices.

  • Perform incremental scans: Since you have already assessed existing vulnerabilities in your baseline branch, you can perform incremental scans to optimize efficiency on your pull requests. Focus on these incremental scans to identify new vulnerabilities, and skip scanning pull requests if a package and its dependencies remain unchanged.

How to scan the feature branches through your pull requests

Set up PR scans to be triggered on pull requests to the baseline branch and specify the following arguments:

  • --pr (For GitHub Actions, use pr: true)
  • --pr-baseline={baseline_branch} (For GitHub Actions, use pr_baseline: {baseline_branch})
  • --pr-incremental (For GitHub Actions, use additional_args: --pr-incremental)
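
Put together, a PR scan invocation might look like the following sketch. It prints the command rather than running it; the baseline branch name main is an assumption.

```shell
#!/bin/bash
# Sketch of a PR scan against a baseline branch named "main".
PR_BASELINE="main"  # replace with your baseline branch

# Incremental PR scan: report only findings new relative to the baseline.
SCAN_CMD=(endorctl scan --pr --pr-baseline="$PR_BASELINE" --pr-incremental)

echo "PR scan command: ${SCAN_CMD[*]}"
# Uncomment to run the scan for real:
# "${SCAN_CMD[@]}"
```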

For more information, see the templates that you can use in your CI pipelines.

For more details on how to perform endorctl scans and scan parameters, see Scan with Endor Labs and endorctl CLI.

2 - Best Practices: API key management

You can use API keys to engage with Endor Labs services programmatically to enable any automation or integration with other systems in your environment. See Manage API keys for more information on how to create and delete API keys.

Ensure that you rotate API keys regularly to limit the window of opportunity for an API key to be compromised.

You can use the Endor Labs API to programmatically create scripts to manage API keys.

Check for expiring API keys

API key expiry can cause interruptions in your workflows. It is a good practice to check for expiring API keys so that you can rotate them before they expire.

You can use the following script (key-expiry.sh) to check for expiring API keys. By default, the script checks for API keys that expire within the next day in the currently configured namespace. Pass the -d flag with a number to check for API keys that expire in the next n days, and the -n flag followed by a namespace name to check a specific namespace. The script uses jq to parse the JSON response and generate formatted output; if jq is not installed, the script falls back to the standard endorctl output.

#!/bin/bash

# Default values. You can update the values here or pass the values as flags to the script.
DAYS=1
NAMESPACE=""
NAMESPACE_FLAG=""

while getopts "n:d:" opt; do
  case $opt in
    n)
      NAMESPACE=$OPTARG
      NAMESPACE_FLAG="-n $NAMESPACE"
      ;;
    d)
      DAYS=$OPTARG
      ;;
    \?)
      echo "Invalid option: -$OPTARG" >&2
      echo "Usage: $0 [-n namespace] [-d days]" >&2
      exit 1
      ;;
    :)
      echo "Option -$OPTARG requires an argument." >&2
      echo "Usage: $0 [-n namespace] [-d days]" >&2
      exit 1
      ;;
  esac
done

TODAY=$(date +"%Y-%m-%d")

# Detect OS type and use appropriate date command
if [[ "$OSTYPE" == "darwin"* ]]; then
    # macOS
    PLUS_DAYS=$(date -v+${DAYS}d +"%Y-%m-%d")
else
    # Other Unix systems
    PLUS_DAYS=$(date -d "+${DAYS} days" +"%Y-%m-%d")
fi

if [ -z "$NAMESPACE" ]; then
    echo "Searching for API keys expiring between $TODAY and $PLUS_DAYS ($DAYS days)"
else
    echo "Searching for API keys in namespace '$NAMESPACE' expiring between $TODAY and $PLUS_DAYS ($DAYS days)"
fi

# Check if jq is available
if command -v jq &> /dev/null; then
    # jq is available, use it for formatted output
    RESULT=$(endorctl api list $NAMESPACE_FLAG -r APIKey \
      --filter="spec.expiration_time >= date($TODAY) AND spec.expiration_time <= date($PLUS_DAYS)" \
      --field-mask "meta.name,spec.expiration_time,meta.created_by,spec.issuing_user.spec.email" -o json)

    if echo "$RESULT" | jq -e '.list.objects | length > 0' &>/dev/null; then
        echo "$RESULT" | jq '.list.objects[] | {name: .meta.name, expiration: .spec.expiration_time, user: .meta.created_by, email: .spec.issuing_user.spec.email}'
    else
        echo "No API keys found expiring in the specified date range."
    fi
else
    # jq is not available, use the regular output
    echo "Note: Install jq for better formatted output"
    endorctl api list $NAMESPACE_FLAG -r APIKey \
      --filter="spec.expiration_time >= date($TODAY) AND spec.expiration_time <= date($PLUS_DAYS)" \
      --field-mask "meta.name,spec.expiration_time"
fi

The script returns the API keys that expire within the specified number of days. The output contains the key name, expiry date, and information about the user who created the key, so you can ask that user to rotate the key before it expires. See Create API keys for more information on how to create API keys.

Create a cron job to check for expiring API keys

You can also create a cron job to run the script at a regular interval and fetch the details of the expiring API keys.

The following example shows a cron job script, check_key_expiry_cron.sh, that wraps the key-expiry.sh script and sends an email to the specified address if any API keys are expiring. Configure the script with the path to key-expiry.sh, the number of days to check for expiring API keys, the email address to send the report to, and the namespace to check.


#!/bin/bash

# Configuration - Customize these values according to your needs
SCRIPT_PATH="/path/to/key-expiry.sh"
DAYS=1  # Days to check for expiring API keys
EMAIL="your-email@example.com"
NAMESPACE=""  # Namespace to check for expiring API keys

OUTPUT=$($SCRIPT_PATH -d $DAYS $([[ -n $NAMESPACE ]] && echo "-n $NAMESPACE"))

if [ $(echo "$OUTPUT" | wc -l) -gt 1 ]; then
    echo "$OUTPUT" | mail -s "API Keys Expiring in the Next $DAYS Days" $EMAIL
fi

Run the following command to create a cron job that runs the script at 8 AM every day if the script is located in the home directory.

0 8 * * * $HOME/check_key_expiry_cron.sh
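
If you prefer to install the entry non-interactively, one approach (a sketch; crontab behavior varies by platform) is to append it to the current user's crontab. The snippet below only builds and prints the entry; the installation command is left commented out for review.

```shell
#!/bin/bash
# Build the crontab entry for a daily 8 AM run of the expiry check.
CRON_ENTRY='0 8 * * * $HOME/check_key_expiry_cron.sh'
echo "$CRON_ENTRY"

# To install it, review the entry above and then run:
# ( crontab -l 2>/dev/null; echo "$CRON_ENTRY" ) | crontab -
```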

Check for API keys with long expiry

API keys with long expiry can be a security risk. The Endor Labs Create API key endpoint allows you to create API keys with an expiry time of over 365 days. Such long expiry times may be unnecessary and incompatible with your security policies.

You can use the following script (check_long_expiry_keys.sh) to check for API keys with long expiry. By default, the script checks the currently configured namespace for API keys that expire more than 365 days from today. Pass the -d flag with a number to change that threshold, and the -n flag followed by a namespace name to check a specific namespace. The script uses jq to parse the JSON response.


#!/bin/bash

# Default values
DAYS=365
NAMESPACE=""
NAMESPACE_FLAG=""

# Parse command line options
while getopts "n:d:" opt; do
  case $opt in
    n)
      NAMESPACE=$OPTARG
      NAMESPACE_FLAG="-n $NAMESPACE"
      ;;
    d)
      DAYS=$OPTARG
      ;;
    \?)
      echo "Invalid option: -$OPTARG" >&2
      echo "Usage: $0 [-n namespace] [-d days]" >&2
      exit 1
      ;;
    :)
      echo "Option -$OPTARG requires an argument." >&2
      echo "Usage: $0 [-n namespace] [-d days]" >&2
      exit 1
      ;;
  esac
done

# Calculate today's date in YYYY-MM-DD format
TODAY=$(date +"%Y-%m-%d")

# Detect OS type and use appropriate date command for calculating the future date
if [[ "$OSTYPE" == "darwin"* ]]; then
    # macOS
    PLUS_DAYS=$(date -v+${DAYS}d +"%Y-%m-%d")
else
    # Linux
    PLUS_DAYS=$(date -d "+${DAYS} days" +"%Y-%m-%d")
fi

# Print info about the search
if [ -z "$NAMESPACE" ]; then
    echo "Searching for API keys with expiration dates longer than $DAYS days from today ($TODAY to $PLUS_DAYS)"
else
    echo "Searching for API keys in namespace '$NAMESPACE' with expiration dates longer than $DAYS days from today ($TODAY to $PLUS_DAYS)"
fi

# Check if jq is available
if command -v jq &> /dev/null; then
    # jq is available, use it for formatted output
    RESULT=$(endorctl api list $NAMESPACE_FLAG -r APIKey \
      --filter="spec.expiration_time > date($PLUS_DAYS)" \
      --field-mask "meta.name,spec.expiration_time,meta.created_by,spec.issuing_user.spec.email" -o json)

    # Check if list.objects exists and is not empty
    if echo "$RESULT" | jq -e '.list.objects | length > 0' &>/dev/null; then
        echo "$RESULT" | jq '.list.objects[] | {name: .meta.name, expiration: .spec.expiration_time, user: .meta.created_by, email: .spec.issuing_user.spec.email}'
    else
        echo "No API keys found with expiration dates longer than $DAYS days."
    fi
else
    # jq is not available, use the regular output
    echo "Note: Install jq for better formatted output"
    endorctl api list $NAMESPACE_FLAG -r APIKey \
      --filter="spec.expiration_time > date($PLUS_DAYS)" \
      --field-mask "meta.name,spec.expiration_time"
fi

The script returns the API keys whose expiry dates extend beyond the number of days you specified, along with each key's name, expiry date, and information about the user who created it.

Clean up expired API keys

You should regularly check for and delete expired API keys.

Keeping only active and necessary API keys can improve system performance by reducing the volume of data that needs to be processed during authentication checks. Regular cleanup makes it easier to manage and monitor active keys, allowing for better oversight of API access and usage patterns.

You can use the Endor Labs API to check for expired API keys and delete them.

The following script (delete-expired-keys.sh) checks for expired API keys and offers options to delete them. You can pass an Endor Labs namespace with the -n flag to search for expired API keys in a specific namespace; if you do not pass a namespace, the script checks the currently configured namespace. The script requires jq to parse the JSON response.

#!/bin/bash
# Add a namespace to search for expired API keys in a specific namespace
NAMESPACE=""
NAMESPACE_FLAG=""
while getopts "n:" opt; do
  case $opt in
    n)
      NAMESPACE=$OPTARG
      NAMESPACE_FLAG="-n $NAMESPACE"
      ;;
    \?)
      echo "Invalid option: -$OPTARG" >&2
      echo "Usage: $0 [-n namespace]" >&2
      exit 1
      ;;
    :)
      echo "Option -$OPTARG requires an argument." >&2
      echo "Usage: $0 [-n namespace]" >&2
      exit 1
      ;;
  esac
done

TODAY=$(date +"%Y-%m-%d")
if [ -z "$NAMESPACE" ]; then
    echo "Searching for expired API keys (expiration date before $TODAY)"
else
    echo "Searching for expired API keys in namespace '$NAMESPACE' (expiration date before $TODAY)"
fi

check_jq() {
  if ! command -v jq &> /dev/null; then
    echo "Error: This script requires jq to be installed."
    echo "Please install jq and try again."
    exit 1
  fi
}
check_jq

# Get all expired API keys
RESULT=$(endorctl api list $NAMESPACE_FLAG -r APIKey \
  --filter="spec.expiration_time < date($TODAY)" \
  --field-mask "meta.name,spec.expiration_time,uuid" -o json)

# Check if there are any expired keys
if ! echo "$RESULT" | jq -e '.list.objects | length > 0' &>/dev/null; then
  echo "No expired API keys found."
  exit 0
fi

KEY_COUNT=$(echo "$RESULT" | jq '.list.objects | length')
echo "Found $KEY_COUNT expired API key(s)."

echo -e "\nExpired API Keys:"
echo "===================="
echo "$RESULT" | jq -r '.list.objects[] | "ID: \(.uuid)\nName: \(.meta.name)\nExpired: \(.spec.expiration_time)\n"'

echo -e "\nWould you like to delete these expired API keys?"
echo "1) Delete all expired keys"
echo "2) Select keys to delete individually"
echo "3) Exit without deleting"
read -p "Choose an option (1-3): " CHOICE

case $CHOICE in
  1)
    echo -e "\nDeleting all expired API keys..."
    for UUID in $(echo "$RESULT" | jq -r '.list.objects[].uuid'); do
      echo -n "Deleting key with UUID $UUID... "
      if endorctl api delete $NAMESPACE_FLAG -r APIKey --uuid=$UUID &> /dev/null; then
        echo "Success"
      else
        echo "Failed"
      fi
    done
    ;;

  2)
    echo -e "\nSelecting keys to delete individually:"
    for UUID in $(echo "$RESULT" | jq -r '.list.objects[].uuid'); do
      NAME=$(echo "$RESULT" | jq -r ".list.objects[] | select(.uuid == \"$UUID\") | .meta.name")
      EXPIRY=$(echo "$RESULT" | jq -r ".list.objects[] | select(.uuid == \"$UUID\") | .spec.expiration_time")

      echo -e "\nID: $UUID"
      echo "Name: $NAME"
      echo "Expired: $EXPIRY"

      read -p "Delete this key? (y/n): " DELETE
      if [[ $DELETE == "y" || $DELETE == "Y" ]]; then
        echo -n "Deleting... "
        if endorctl api delete $NAMESPACE_FLAG -r APIKey --uuid=$UUID &> /dev/null; then
          echo "Success"
        else
          echo "Failed"
        fi
      else
        echo "Skipped"
      fi
    done
    ;;

  3)
    echo "Exiting without deleting any keys."
    ;;

  *)
    echo "Invalid option. Exiting without deleting any keys."
    ;;
esac

echo -e "\nOperation completed."

Create a cron job to check for expired API keys

You can also create a cron job to run the script at a regular interval.

The following example shows a cron job script, check_expired_keys_cron.sh, that wraps the delete-expired-keys.sh script. Configure it with the operation to perform (report or delete expired API keys), the path to delete-expired-keys.sh, the email address to send the report to, and the namespace to check for expired API keys.

#!/bin/bash

# Configuration - Customize these values according to your needs
SCRIPT_PATH="/path/to/delete-expired-keys.sh"
EMAIL="your-email@example.com"
NAMESPACE=""  # Set the required namespace or leave empty to check API keys in the currently configured namespace
OPERATION="REPORT"  # Set this value to "DELETE" to delete expired API keys

# Create a temporary file for the report
TEMP_REPORT=$(mktemp)

# Function to send email with the report
send_email() {
  local subject="$1"
  cat $TEMP_REPORT | mail -s "$subject" $EMAIL
  echo "Email sent with expired API keys report."
}

if [ "$OPERATION" = "REPORT" ]; then
  if [ -z "$NAMESPACE" ]; then
    echo "3" | $SCRIPT_PATH > $TEMP_REPORT 2>&1
  else
    echo "3" | $SCRIPT_PATH -n $NAMESPACE > $TEMP_REPORT 2>&1
  fi

  if grep -q "Found [1-9][0-9]* expired API key" $TEMP_REPORT; then
    send_email "Expired API Keys Found - Action Required"
  else
    echo "No expired API keys found."
  fi

elif [ "$OPERATION" = "DELETE" ]; then
  if [ -z "$NAMESPACE" ]; then
    echo "1" | $SCRIPT_PATH > $TEMP_REPORT 2>&1
  else
    echo "1" | $SCRIPT_PATH -n $NAMESPACE > $TEMP_REPORT 2>&1
  fi

  if grep -q "Found [1-9][0-9]* expired API key" $TEMP_REPORT; then
    send_email "Expired API Keys Deleted - Action Taken"
  else
    echo "No expired API keys found."
  fi

else
  echo "Invalid OPERATION value: $OPERATION. Must be 'REPORT' or 'DELETE'." > $TEMP_REPORT
  send_email "ERROR: Invalid Expired API Keys Operation"
fi

rm $TEMP_REPORT

You can use the following command to create a cron job that runs the script at 8 AM every day.

0 8 * * * $HOME/check_expired_keys_cron.sh

3 - Best Practices: Scoping scans

Learn how to effectively scope your scans with Endor Labs inclusion and exclusion patterns.

Exclude and include filters help your team to focus their attention on the open source packages that matter most and to improve scan performance. Use inclusion patterns when you have many packages that you want to scan separately and exclusion patterns when you want to filter out packages that are not important to you.

You can include or exclude packages using the following standard patterns:

  1. Include or exclude specific packages.
  2. Include or exclude specific directories.
  3. Include or exclude with glob-style expressions.
  4. Use include and exclude patterns together to exclude specific directories such as a test directory from a scan.
  5. Use multiple include and exclude patterns together to exclude or include specific directories or file paths.

Scoping scans with endorctl

To include or exclude a package based on its file name when you scan with endorctl:

endorctl scan --include-path="path/to/your/manifest/file/package.json"
endorctl scan --exclude-path="path/to/your/manifest/file/package.json"

To include or exclude a package based on its directory:

endorctl scan --include-path="directory/path/**"
endorctl scan --include-path="src/java/**"
endorctl scan --exclude-path="path/to/your/directory/**"
endorctl scan --exclude-path="src/ruby/**"

Examples of scoping scans

The following examples show how you can scope your scans.

Use --exclude-path="src/java/**" to exclude all files under src/java, including all its subdirectories.

endorctl scan --exclude-path="src/java/**"

Use --exclude-path=src/java/* to only exclude the files under src/java, but not its subdirectories.

endorctl scan --exclude-path=src/java/*

Use --include-path and --exclude-path together to exclude specific directories such as test directories.

endorctl scan --include-path="src/java/**" --exclude-path="src/java/test/**"

Use multiple inclusion patterns together.

endorctl scan --quick-scan --include-path="src/java/**" --include-path="src/dotnet/**"

Use multiple exclusion patterns together.

endorctl scan --include-path="src/java/**" --exclude-path="src/java/gradle/**" --exclude-path="src/java/maven/**"

Best practices of scoping scans

Here are a few best practices for scoping scans:

  • Ensure that you enclose your exclude pattern in double quotes to avoid shell expansion issues. For example, do not use --exclude-path=src/test/**, instead, use --exclude-path="src/test/**".
  • Inclusion patterns are not designed for documentation or example directories; you cannot explicitly include the following directories:
    • docs/
    • documentation/
    • groovydoc/
    • javadoc/
    • man/
    • examples/
    • demos/
    • inst/doc/
    • samples/
  • The specified paths must be relative to the root of the scanned directory.
  • If you are using JavaScript workspaces, take special consideration when including and excluding the root package:
    • When using include or exclude patterns, it’s crucial to make sure you never exclude and always include the parent workspace package. Otherwise, all child packages won’t build properly.
    • You can always exclude child packages in the workspace if the root is included.
    • There is only one lock file for the workspace that exists in the workspace root directory. Make sure to include the lock file to perform a successful scan.
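
As an illustration of these workspace rules, consider a hypothetical npm workspace with child packages under packages/. The paths and the legacy-app package name below are illustrative, and the sketch builds and prints the command rather than running it.

```shell
#!/bin/bash
# Keep the workspace root manifest and its single lock file in scope,
# while excluding one hypothetical child package.
SCAN_CMD=(endorctl scan
  --include-path="package.json"       # workspace root package: always include
  --include-path="package-lock.json"  # the single workspace lock file
  --include-path="packages/**"        # child packages
  --exclude-path="packages/legacy-app/**")  # a child package may be excluded

echo "Workspace scan command: ${SCAN_CMD[*]}"
# Uncomment to run the scan for real:
# "${SCAN_CMD[@]}"
```

Quoting each pattern prevents the shell from expanding the globs before endorctl sees them.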

4 - Best Practices: Working with monorepos

Learn strategies to best work with large monorepos.

Large monorepos are a reality for many organizations. Since monorepos can contain anywhere from tens to hundreds of packages, scanning all of them can take a significant amount of time. While the exact time varies with your development team and pipeline, development teams generally need quick test times to stay productive, while security teams need full visibility across the monorepo. These two needs can conflict without performance engineering or an asynchronous scanning strategy. This documentation outlines performance engineering and scanning strategies for large monorepos.

Asynchronous scanning strategies

When scanning a large monorepo, a common approach taken by security teams is to run an asynchronous cron job outside a CI/CD-based environment. This is often the path of least friction, but it has a major limitation: inline blocking of critical issues is generally not possible. We mention it here for completeness, but this approach is NOT recommended beyond an initial step to gain visibility into a large monorepo.

Performance enhancements for inline scanning strategies

The following performance enhancements may be used with Endor Labs to enable the scanning of large monorepos:

Scoping scans based on changed files

For many CI/CD systems, path filters are readily available. For example, with GitHub Actions, the dorny/paths-filter action is a readily accessible way to establish a set of filters by path. This is generally the most effective way to handle monorepo deployments, but it requires the highest investment of human time. That investment is repaid by the time saved from not having to scan everything on each change.

Based on the paths that change, you can scope scans to the files that were actually modified. For example, when only the ui/ directory has changed, you can scan just the packages housed under it by running a scan such as endorctl scan --include-path=ui/.
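
A rough shell sketch of this logic, assuming a Git checkout where origin/main is the comparison baseline (the ui/ directory name is illustrative; the command is printed rather than run):

```shell
#!/bin/bash
# Run a scoped scan only when files under a watched path have changed
# relative to the baseline ref.
BASELINE_REF="origin/main"  # comparison baseline; adjust for your repository
WATCH_PATH="ui/"            # directory owned by this team's pipeline

# List files changed since the baseline, limited to the watched path.
CHANGED=$(git diff --name-only "$BASELINE_REF"...HEAD -- "$WATCH_PATH" 2>/dev/null | head -n 1)

if [ -n "$CHANGED" ]; then
  SCAN_CMD=(endorctl scan --include-path="$WATCH_PATH")
  echo "Changes detected under $WATCH_PATH; would run: ${SCAN_CMD[*]}"
  # "${SCAN_CMD[@]}"
else
  echo "No changes under $WATCH_PATH; skipping the scoped scan."
fi
```

In practice a CI-native path filter (such as dorny/paths-filter on GitHub Actions) replaces the git diff step, but the scoping decision is the same.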

With a path-filtering approach, each team working in a monorepo is responsible for the packages it maintains; generally, each team maps to one or several pre-defined directory paths.

Parallelizing scans for many packages

When scanning a large monorepo, organizations can choose to regularly scan the whole monorepo split by the packages or directories they want to cover. Separate jobs can be created that scan each directory simultaneously.

Parallelizing with scoped scans

Using scoped scans for monorepos with multiple parallel include patterns is a common performance optimization for monorepos.

The following example shows a parallel GitHub Actions workflow that you can use as a reference.

name: Parallel Actions
on:
  push:
    branches: [main]
jobs:
  scan-ui:
    runs-on: ubuntu-latest
    steps:
      - name: UI Endor Labs Scan
        run: endorctl scan --include-path=ui/
  scan-backend:
    runs-on: ubuntu-latest
    steps:
      - name: Backend Endor Labs Scan
        run: endorctl scan --include-path=backend/

In this example, the directories ui/ and backend/ are both scanned simultaneously and the results are aggregated by Endor Labs. This approach can improve the overall scan performance across a monorepo where each directory can be scanned independently.

To include or exclude a package based on its directory:

endorctl scan --include-path="directory/path/"

See scoping scans for more information on approaches to scoping scans.

Parallelizing across languages

For teams that work out of smaller monorepos, it is often most reasonable to parallelize scanning based on the language that is being scanned and performance optimize for individual languages based on need.

Below is an example parallel GitHub Actions workflow that you can use as a reference. In this example, JavaScript and Java are scanned at the same time and the results are aggregated by Endor Labs. This approach can improve the overall scan performance across a monorepo with multiple languages.

name: Parallel Actions
on:
  push:
    branches: [main]
jobs:
  scan-java:
    runs-on: ubuntu-latest
    steps:
      - name: Java Endor Labs Scan
        run: endorctl scan --languages=java
  scan-javascript:
    runs-on: ubuntu-latest
    steps:
      - name: JavaScript Endor Labs Scan
        run: endorctl scan --languages=javascript,typescript

Run the following command to scan a project for only packages written in TypeScript or JavaScript.

endorctl scan --languages=javascript,typescript

Run the following command to scan only the packages written in Java in a project.

endorctl scan --languages=java