Foreword
I participated this time under NUS Greyhats. We did take part in the qualifiers (I believe), but in any case we were invited by the organizers to the finals. The finals this year were in Attack-Defense format.
Setup
For tooling, we used ExploitFarm for automatic exploit execution and flag submission, and Tulip for traffic monitoring. We had to patch two web services and two pwn services, and attack the other teams’ services as well. In preparation for the event, I wrote a simple multithreaded proxy in Golang, but it did not really work on the day of the competition, so we opted not to use it.
For the Tulip setup:
- Every 30 seconds, we generated a TCP dump of the game traffic.
- We then synced the captures to our Tulip server using `rsync` (sketched below).
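Roughly, the loop looked something like the following; the capture interface, directories, and Tulip host are illustrative placeholders rather than our exact setup:

```bash
# On the vulnbox: rotate a fresh pcap every 30 seconds (interface and filter are placeholders)
tcpdump -i eth0 -G 30 -w /tmp/pcaps/dump_%s.pcap 'not port 22' &

# Push finished captures to the directory the Tulip ingestor watches
while true; do
  rsync -av /tmp/pcaps/ tulip@tulip.internal:/traffic/
  sleep 30
done
```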
Attack-Defense Time!
I was in charge of monitoring the two web services, and was not really keeping up with the fixes that had been applied to them. For web, we essentially had to patch:
- An LFI through `include`, combined with arbitrary file upload, which together allowed a reverse shell. We patched this by removing the `include`, so uploaded files could no longer be executed as PHP.
- Default credentials for the database and the JWT cookie secret. Since the source code for all the services was public, the fix was basically to change the default credentials and rotate the JWT secret. We ran into some issues patching the second web service, which is shipped as a `JAR` file; we struggled a lot with this, and eventually found an article from Oracle on how to patch/update a JAR file (see the sketch after this list).
- In the first web service, anyone could apparently access any user profile, and profiles sometimes contained flags. Our defense was to randomize the IDs of newly generated profiles, so that attempts to enumerate new user profiles would fail. We hit a lot of bugs in this fix, and it cost us a bit of SLA.
- Any other attacks I observed in the Tulip logs, such as similar attacks on the `update-profile` functionality, and so on.
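For anyone hitting the same JAR problem: the `jar` tool can update an entry inside an existing archive in place. A minimal sketch, with made-up archive, package, and class names:

```bash
# Recompile the patched class against the classes already in the JAR
javac -cp service.jar -d patched/ src/com/example/ProfileController.java

# Overwrite the old entry inside the JAR (paths under patched/ must mirror the package layout)
jar uf service.jar -C patched/ com/example/ProfileController.class

# Sanity-check that the entry is there
jar tf service.jar | grep ProfileController
```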
For our pwn services, I was not keeping track of what was going on, but essentially we patched the buffer overflows (by somehow making them impossible to trigger) and renamed common Linux executables to unguessable names, for example changing `cat` to `cccccccccccat` (a rough sketch of this is below). We held our defense on the pwn side quite well, with very few flags stolen. However, this also cost us on the attacking side: we had no idea how the payloads were being sent to our server and could not figure out how some of the exploits attacking it worked, which left us with little to study or replay against other teams.
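The rename trick itself is just a matter of moving binaries around on the vulnbox; a minimal sketch (the binary list and new names are illustrative, and anything legitimate on the box that calls these by name has to be updated to match):

```bash
# Break canned payloads like `cat /flag` by moving the binaries they rely on
mv /bin/cat      /bin/cccccccccccat
mv /usr/bin/head /usr/bin/hhhhhhhhhhhead
mv /usr/bin/tail /usr/bin/ttttttttttttail
```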
In the end, we learned that flag hoarding is actually a thing in Attack-Defense CTFs. Apparently, the bigger the score gap between two teams, say team A and team B (with team B’s score below team A’s), the more points team B earns when they pwn team A’s services. Hoarding flags and cashing them in late therefore lets a trailing team earn more points at the end and make a comeback.
We held a top-2 position for a long while and were confident we would keep it. But at the end of the competition, other teams caught up using this trick, along with new pwn payloads against the higher-ranking teams, while our own patches were costing a bit too much SLA. We ended up in the top 5, which is not bad, considering that before the competition we were only aiming not to finish last.
