All that is wrong with the world…

June 23, 2011

OS X – Safe, yet horribly insecure

Filed under: Security, Tech — allthatiswrong @ 2:48 am


I have had this article planned since the end of 2009 and have kept it as a skeleton since then. I wanted to point out the many problems with OS X security and debunk the baseless myth that OS X is somehow more secure. Despite 18 months passing before I managed to finish it, not much seems to have changed. I think I am publishing at an interesting time however, just as malware for OS X is increasing and Apple are starting to put effort into securing OS X with the soon to be released Lion. There is no FUD in this article, just an analysis of the available evidence and some speculation. My motivation to write this article was the hordes of OS X users who are either blind or have been misled by false advertising into believing OS X is somehow immune to malware and attacks.

It is one of the most prevalent myths among the computer-purchasing public, and to a lesser extent among those who work in IT, that Apple computers are far more secure than their Windows and perhaps Linux counterparts. The word myth is precisely accurate, as OS X and other Apple software is among the most vulnerable software on consumer devices today. Apple have an appalling attitude towards security which often leaves their users highly vulnerable, all while hyping their products as secure simply because they are rarely targeted. Before going further it is important to note the difference between a distributed attack and a targeted attack. A distributed attack is not specific to any one machine or network; it will exploit as many machines as it can that are affected by a particular set of vulnerabilities, of which OS X has had many. An example of a distributed attack is a drive-by download, where the target is unknown, but if the target is vulnerable the exploit should work. Distributed attacks are used to infect large numbers of machines easily, which are then generally joined into a botnet to earn cash.

A targeted attack is more specific, where a single machine or network is attacked. A targeted attack is not blind and is tailored to the machine being attacked. Distributed attacks such as drive-by downloads are impersonal by nature because they must compromise thousands of machines, while the motivation behind a targeted attack tends to be more personal, perhaps to steal confidential files or install some sort of backdoor. The argument always seems limited to distributed attacks, which admittedly are nowhere near the problem for OS X that they are for Windows. This is more than likely because Apple has a very low share of the PC market, simply making it less than worthwhile to invest in writing software to attack as many machines as possible when money is the motivation. I go into this in further detail in a later section.

Using a Mac may certainly be a safer choice for a lot of people: despite being vulnerable, they are not targeted. However this is not the same as Macs being secure, something Eric Schmidt erroneously advised recently. I may be able to browse impervious to malware on a Mac at the moment, however I personally would not be comfortable using a platform so easily compromised if someone had the motivation to do so. In this article I address just why OS X is so insecure, covering both the technical shortcomings of OS X and Apple's policies as a company that contribute to the situation.

A trivial approach to security

One of the most annoying claims made by OS X (and Linux) users is that the UNIX heritage results in a far more secure design, making it more immune to malware. Nothing could be further from the truth. The traditional UNIX permission model is significantly less granular than that of Windows, lacking anything comparable to Windows' rich ACLs. The UNIX design came from a time when security was less of an issue and not taken as seriously as it is now, and so it does the job only adequately. Windows NT (and later versions) was actually designed with security in mind, and this shows. Windows did not become such a target for malware because of a poor security design; it became one because the security functionality was never used. When everybody runs as Administrator with no password, the included security features lose almost all meaning. Point for point Windows has a more secure design than OS X, and if used properly the damage from a compromise can be contained far better on a Windows machine than on an OS X machine, UNIX heritage or not.
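To make the granularity gap concrete, here is a minimal sketch using only standard POSIX calls (nothing Apple-specific). Classic mode bits can only express rights for three principals, owner, group and other, so there is no way to grant one extra named user access without inventing a group for the purpose, whereas a Windows DACL can hold an arbitrary list of per-user entries:

```python
import os
import stat
import tempfile

# Create a scratch file standing in for some sensitive document.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o640)  # owner: rw-, group: r--, other: ---
mode = stat.S_IMODE(os.stat(path).st_mode)

# The entire access policy fits in nine bits across three principals.
print(oct(mode))                  # 0o640
print(bool(mode & stat.S_IRGRP))  # True  -- the whole group can read
print(bool(mode & stat.S_IROTH))  # False -- everyone else is locked out

os.unlink(path)
```

There is simply no slot in those nine bits for "also let user carol read this"; that is the kind of per-principal entry an NT-style ACL expresses directly.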

A lot of OS X users seem to have this idea that Apple hired only the best of the best when it came to programmers while Microsoft hired the cheapest and barely adequately skilled, resulting in OS X being a well designed piece of software completely free of vulnerabilities. In reality OS X machines have always been easily exploited and are among the first to be compromised at various security conferences and competitions. The vast majority of exploits that have been publicly demonstrated could have been used to write a successful virus or worm. Given how lax Apple is with security updates and any kind of proactive protection, any prospective attacker would have quite a field day. The only reason this has not happened yet is not because Apple is magically more secure; it's because no one has bothered to take the opportunity. It isn't as if no OS X viruses exist. Even without the poor approach Apple takes to security, there would be no basis for claiming the design of OS X is more secure than that of other platforms.

Apple is generally months behind fixing publicly disclosed vulnerabilities, often only doing so before some conference to avoid media reporting. They often share vulnerabilities in core libraries with other UNIX-like systems, Samba and Java being two examples. They are extremely difficult to deal with when trying to report a vulnerability, seemingly not having qualified people to accept such reports. Even if they do manage to accept a report and acknowledge the importance of an issue, they can take anywhere from months to a year to actually fix it properly.

People always get caught up in the hype surrounding viruses and how OS X is seemingly impervious, while forgetting that viruses are not the only type of threat. Personally, malware is a minor threat to me, with the impact being negligible as long as you follow basic security practices and can recognize when something looks out of place. The idea of someone targeting me specifically on a network, either because it is so vulnerable that it is child's play or because they want something from my machine, is far more worrying. This is significantly harder to protect against on OS X when you can't rely on the manufacturer to issue patches in anything resembling a prompt timeframe, or even to acknowledge that vulnerabilities exist. Given that this is the Apple philosophy, it is hard to pretend to be safe on an Apple machine.

Examples and details

Every major OS except OS X has a full implementation of ASLR, stack canaries, executable space protection, sandboxing and, more recently, mandatory access controls. OS X doesn't even try to implement most of these basic protections, and the ones it does, it does poorly. I don't understand why security folk use OS X at all, given its plethora of problems. Yes, the machines are pretty, yes it is UNIX and yes you are very safe using it in practice, but given that security folks tend to be working on various exploits and research that they would want to keep private, using a platform so vulnerable to targeted attacks would not seem to be the smartest move.

Apple to date do not have a proper DEP or ASLR implementation, two well known technologies that have been implemented in other OSes for the last five years. Apple did not bother to implement DEP properly except for 64-bit binaries, and even then there was no protection on the heap even if it was marked as non-executable. Apple technically implements ASLR, but in a way so limited that they may as well not have bothered. The OS X ASLR implementation is limited to library load locations; the dynamic loader, heap, stack and application binaries are not randomized at all. Without randomizing anything except library load locations, the implementation is useless aside from perhaps preventing some return-to-libc attacks. We can see using the paxtest program from the PaX team (the same team who initiated ASLR protections on PCs) that OS X fails most of these tests (Baccas P, Finisterre K, H. L, Harley D, Porteus G, Hurley C, Long J. 2008). Apple's decision not to randomize the base address of the dynamic linker DYLD is a major failing from a security point of view; Charlie Miller has demonstrated how a ROP payload can be constructed using only parts of the non-randomized DYLD binary. Snow Leopard unfortunately did not improve on things much, except to add DEP protection to the heap, still only for 64-bit applications. This means that most of the applications that ship with OS X (including browser plugins) are far easier to attack than applications on pretty much any other platform.
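You can observe what randomization looks like with a rough probe (an illustrative sketch, not a rigorous test): launch two fresh interpreters and print the address of a newly heap-allocated object in each. On an OS that randomizes the heap, the values normally differ between runs; on Snow-Leopard-era OS X only library load addresses moved, while the heap, stack and binary stayed put from run to run.

```python
import subprocess
import sys

# In CPython, id() of an object is its heap address, so printing it from
# two separate processes samples the heap layout of two fresh runs.
probe = "print(hex(id(object())))"

addrs = [
    subprocess.run([sys.executable, "-c", probe],
                   capture_output=True, text=True).stdout.strip()
    for _ in range(2)
]

print(addrs)
# Whether the two addresses differ tells you whether the allocator's
# base was randomized for those runs.
```

The same idea applied to a function's address or a stack variable is how tools like paxtest check each region (executable, heap, stack, libraries) independently, which is exactly where the OS X implementation falls down.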

The firewall functionality in OS X is capable, but hardly utilized. The underlying technology, ipfw, is powerful and more than able to protect OS X from a wide variety of threats; however, Apple barely uses it. The OS X firewall is disabled by default and is application-based, meaning it is still vulnerable to low-level attacks. Even when the option to block all incoming connections was set it didn't actually do this, still allowing incoming connections for anything running as the root user, with none of those listening services being shown in the user interface.

Apple introduced rudimentary blacklisting of malware in Snow Leopard via XProtect.plist, which works as follows: when files are downloaded via certain applications, those applications set an extended attribute which indirectly triggers scanning of the file. However many applications, such as IM or torrent clients, do not set the extended attribute, thus never triggering the XProtect functionality. A fine example of this is the trojaned copy of iWork which was distributed through torrents and never triggered XProtect. At the moment it can only detect very few malware items, although in response to the MacDefender issue the list is now updated daily. Only hours after Apple's update to deal with MacDefender was released, a new variant that bypasses the protection was discovered, highlighting the shortcomings of the XProtect approach. Since it relies on an extended attribute being set in order to trigger scanning, any malware writer will target avenues of attack where this attribute will not be set, and for drive-by download attacks it is completely useless. Still, it is a good first step towards Apple acknowledging the growing malware problem on OS X and starting to protect their users.
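The gating logic can be sketched in a few lines (a simplified model with invented names, not Apple's implementation): scanning only ever runs for files carrying the quarantine attribute, so any download path that never sets the attribute bypasses the blacklist entirely.

```python
# Known-bad names, standing in for the XProtect blacklist.
BLACKLIST = {"macdefender"}

def download(store, path, quarantine_aware):
    # A quarantine-aware app (e.g. a browser) tags the file; a torrent
    # or IM client that doesn't cooperate leaves it untagged.
    store[path] = {"quarantined": quarantine_aware}

def open_file(store, path):
    # The scanner is only invoked when the quarantine attribute is set.
    if not store[path]["quarantined"]:
        return "not scanned"
    name = path.rsplit("/", 1)[-1].lower()
    return "blocked" if name in BLACKLIST else "clean"

files = {}
download(files, "/Users/a/Downloads/macdefender", quarantine_aware=True)
download(files, "/Users/a/torrents/macdefender", quarantine_aware=False)

print(open_file(files, "/Users/a/Downloads/macdefender"))  # blocked
print(open_file(files, "/Users/a/torrents/macdefender"))   # not scanned
```

The same payload sails through on the second path: the blacklist is only as good as the set of applications that opt in to tagging downloads.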

It has been a shame to see the sandboxing functionality introduced in Leopard not being utilized to anywhere near its full capacity. Apple are in a unique position: by controlling both the hardware and the operating system, they have created a truly homogeneous base environment. It would be very easy to have carefully crafted policies for every application that ships with the base system, severely limiting the damage that could be caused in the event of an attack. They could go even further and import some of the work done by the SEDarwin team, allowing for even greater control over applications. They would not have to present this to the user, and would probably prefer not to, yet doing so would put them far ahead of all the other operating systems in terms of security at this point.

Security-wise, Apple is at the level Microsoft was at in the '90s and early 2000s: continuing to ignore and dismiss the problems without understanding the risks, and not even bothering to implement basic security features in their OS. With an irresponsible number of setuid binaries, unnecessary services listening on the network with no default firewall, useless implementations of DEP and ASLR, and a very poor level of code quality with many programs crashing under a trivial amount of fuzzing, Apple are truly inadequate at implementing security. This still doesn't matter much as far as distributed attacks go, at least not until Apple climbs higher in market share, but I really dislike the idea of someone being able to own my system just because I happened to click on a link. At least with Apple giving regular updates via XProtect and including a malware help page in Snow Leopard we have evidence that they are starting to care.

An appalling record

A great example of Apple's typical approach to security is the Java vulnerability that, despite allowing remote code execution simply by visiting a webpage, Apple left unpatched for more than six months, only releasing a fix when media pressure necessitated that they do so. When OS X was first introduced, the system didn't even implement shadow file functionality, using the same password hashing AT&T used in 1979 and simply relying on a pretty interface to obscure the password. This is indicative of the attitude Apple continues to have to this very day: favoring convenience and aesthetics at the expense of security, and only changing when pressure necessitates it. One of the most telling examples is that Apple regularly releases a ton of patches right before the pwn2own contests where their insecurity is put on display. Not when they are informed of the problem and users are at risk, but when there is a competition that gets media attention and may result in them looking bad.

Being notoriously hard to report vulnerabilities to does not help either. If a company does not want to hear about problems that put their machines, and thus their customers, at risk, it is hard to say that they are taking security seriously. As things stand, if you try to report a vulnerability to Apple it will likely be rejected with a denial; after retrying several times it may be accepted, and a patch may be released any number of weeks or months later. Apple still have a long way to go before demonstrating they are committed to securing OS X rather than maintaining an image that OS X is secure. Having a firewall enabled by default would be a start, something Windows has had since XP. Given the homogeneous nature of OS X this should be very easy to get off the ground, and it may well be the case with Lion.

The misleading commercials are another point against Apple, constantly telling users that OS X is secure and does not get viruses (implying that it cannot) or have any security problems whatsoever. Not only do they exaggerate the problem on Windows machines, they completely ignore the vulnerabilities OS X has. The most recent evidence of Apple's aforementioned attitude can be seen in their initial response to the MacDefender malware. Rather than address the issue and admit that a problem existed, they kept their heads in the sand, even going so far as to instruct employees not to acknowledge the problem. To their credit, Apple did change their approach a few days later, issuing a patch and initiating a regularly updated blacklist of malware. Their blacklist implementation has flaws, but it is a start.

As much as users and fans of Apple may advocate the security of OS X, it is very important to note that OS X has never implemented particularly strong security, has never had security as a priority, and is produced by a company that has demonstrated over and over that security is a pain they would rather ignore, leaving their users at risk rather than acknowledging a problem.

Malware for OS X increasing

Doomsday for OS X has long been predicted, though the predictions have always lacked a precise time reference. An article by Adam O'Donnell used game theory to argue that market share is the main factor determining when malware starts to target a platform: the result of a tradeoff between a lack of protection and a high enough percentage of users to make the investment worthwhile. The article assumed that all PCs were running AV software with an optimistic 80% detection rate, which yields an estimated break-even OS X market share of just under 17%; if PC defenses were stronger, say around 90% accurate, OS X would become an attractive target at a much lower share, around 6%. Just as some countries have reached around that number, we are starting to see an increase in malware for OS X. It may be a coincidence, but I will not be surprised if the trend continues. Given Apple's horrid security practices, it is going to increase quite noticeably unless Apple changes their act. Aside from market share, another important factor is the homogeneity of the platform, making OS X an ideal target once the market share is high enough.
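A back-of-the-envelope version of that tradeoff (my own crude simplification, not O'Donnell's actual game-theoretic model) goes like this: an attacker targeting PCs reaches roughly pc_share * (1 - detection_rate) machines, while one targeting Macs reaches mac_share machines with effectively no AV in the way, so the break-even Mac share m solves m = (1 - d)(1 - m). This reproduces a figure near the article's ~17% at 80% detection, though not its exact ~6% at 90%, since the real model is more detailed.

```python
def break_even_mac_share(detection_rate: float) -> float:
    """Mac market share at which attacking Macs pays as well as PCs,
    under the simplifying assumption mac_share + pc_share = 1."""
    d = detection_rate
    # Solve m = (1 - d) * (1 - m)  ->  m = (1 - d) / (2 - d)
    return (1 - d) / (2 - d)

print(round(break_even_mac_share(0.80) * 100, 1))  # 16.7 -- near the ~17% figure
print(round(break_even_mac_share(0.90) * 100, 1))  # 9.1
```

Whatever the exact constants, the shape of the curve is the point: the better defended the majority platform is, the sooner the minority platform becomes worth attacking.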

A lot of people say they will believe the time for OS X has come when they see an equivalent of a Code Red type of worm, except that this is never going to happen. Worms shifted from being motivated by fame to being motivated by money, with the most recent OS X malware being linked to crime syndicates. With the security protections available in most OSes these days (aside from OS X) being more advanced, it takes more skill to write a virus that infects at the scale of something like Code Red, and the people who have that skill are not motivated to draw attention to themselves. These days malware is purely about money, with botnets going out of their way to hide themselves from users. Botnets on OS X have been spotted since 2009, and OS X is going to be an increasing target for these types of attacks without ever making the headlines as Windows did in the '90s.

Another contributing factor that should not be overlooked is the generally complacent attitude of OS X users towards securing their machines. Never having faced malware as a serious threat, and having been shoveled propaganda convincing them that OS X is secure, most OS X users have no idea how to secure their own machines, with many unable to grasp the concept that they may be a target for attack. The MacDefender issue already showed how easy it is to infect a large number of OS X users. Windows users are at least aware of the risk and know to take their computer in to get fixed or to run an appropriate program, whereas OS X users seem to simply deny the very possibility. As Apple's market share increases, with more and more people buying Apple machines and having no idea how to secure them, or that they even should, there are that many more easy targets. Given the insecurity of OS X and the naivety of its users, I do think it is only a matter of time before OS X malware becomes prevalent, although not necessarily in a way that will make the news. The problem is going to get worse as users keep getting infected without realizing it, all while believing their machines are clean and impervious to risk.

People also have to get over the idea that root access is needed for malware to be effective. Root access is only needed if you want to modify the system in some way so as to avoid detection. Doing so is by no means necessary, however, and a lot of malware is more than happy to operate as a standard user, never once raising an elevation prompt while silently infecting or copying files, sending out data, or doing whatever malicious processing it was written for.
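A benign demonstration of the point: everything below runs as an ordinary user, with no elevation prompt, yet it can read and copy any file that user owns. (A temporary directory stands in for the victim's home folder; no real data is touched.)

```python
import os
import shutil
import tempfile

# Stand-ins for the victim's home directory and the attacker's drop zone.
home = tempfile.mkdtemp()
loot = tempfile.mkdtemp()

with open(os.path.join(home, "passwords.txt"), "w") as f:
    f.write("hunter2")

stolen = []
for name in os.listdir(home):          # no root needed to walk the home dir
    src = os.path.join(home, name)
    if os.path.isfile(src):
        shutil.copy(src, loot)         # "exfiltrate" the user's own file
        stolen.append(name)

print(stolen)                                              # ['passwords.txt']
print(open(os.path.join(loot, "passwords.txt")).read())    # hunter2
```

Everything most users actually care about, documents, browser data, saved credentials, is owned by their own account, so malware confined to that account loses very little.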

Macs do get malware, even if it is a significantly smaller amount than what exists for Windows. Given the emergence of exploit creation kits for OS X, malware is inevitably going to increase for OS X. Even if it never gets as bad as it was for Windows in the '90s, it is important not to underestimate the threat of a targeted attack. Rather than encouraging a false sense of security, Apple should be warning users that malware is a potential risk and teaching them how to look for the signs and deal with it. The malware entry in the Snow Leopard help is a small step in the right direction. There isn't much Apple can do to prevent targeted attacks, except fixing their OS and being proactive about security in the first place.

Much room for improvement

One thing OS X did get right was making it harder for keyloggers to work. As of 10.5 only the root user can intercept keyboard input, so any app making use of EnableSecureEventInput should theoretically be immune to keylogging. This requires the developer to specifically make use of that function, although it is automatic for Cocoa apps using an NSSecureTextField. It does not completely prevent keyloggers from working, as applications not making use of that functionality remain vulnerable, which was the case with Firefox and anything else not using a secure text field. Also, given the propensity of privilege escalation attacks on OS X, it would not be hard to install a keylogger as root, and if remote code execution is possible the protection becomes a very minor concern. Nonetheless this is a great innovation and something that I would like to see implemented in other operating systems.
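The mechanism can be sketched as a toy model (invented class names, not the real Carbon/Cocoa API): while secure event input is enabled, keystrokes are delivered only to the focused field and are withheld from user-level event taps, which is all a non-root keylogger can install.

```python
class KeyboardBus:
    """Toy event dispatcher: one focused field, one user-level tap."""

    def __init__(self):
        self.secure_input = False
        self.field_buffer = []  # what the focused text field receives
        self.tap_log = []       # what a user-level keylogger would see

    def press(self, key):
        self.field_buffer.append(key)
        if not self.secure_input:
            # Taps only see events delivered outside secure input mode.
            self.tap_log.append(key)

bus = KeyboardBus()
for k in "hi":
    bus.press(k)

bus.secure_input = True  # e.g. a secure text field gains focus
for k in "pw":
    bus.press(k)

print("".join(bus.field_buffer))  # hipw
print("".join(bus.tap_log))       # hi -- the password never reaches the tap
```

In the real system, an app that skips the secure-input call is equivalent to `secure_input` staying False for its fields, which is exactly the Firefox gap mentioned above.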

Apple asked security experts to review Lion, which is a good sign, as long as they actually take the advice and implement protections from the ground up. Security is a process which needs to be implemented from the lowest level, not just slapped on as an afterthought as Apple have tended to do in the past. I think the app store in Lion will be interesting. If Apple can manage to control the distribution channels for software, then they will greatly reduce the risk of malware spreading. At the moment most software is not obtained via the app store and I don't ever expect it to be; still, the idea of desktop users being in a walled garden would be one way of addressing the malware problem.

Lion is set to have a full ASLR implementation (finally), including for 32-bit applications and the heap, as well as more extensive use of sandboxing. It looks like Apple is starting to actually lock down their OS, which suggests they understand the threat is growing. It will be interesting to see if Apple follows through on the claims made for Lion, or if they fall short much as happened with Snow Leopard. Personally I think Lion is going to fall short while the malware problem for OS X gets serious, and that it won't be until 10.8 that Apple takes security seriously.

Update 1 – June 28th 2011

Updated minor grammatical mistakes.

It is amazing to see the knee-jerk response to this article, with people insisting that there are no viruses for OS X, which is something I acknowledge above. I guess people don't care if they are vulnerable as long as there are no viruses? Then people attack the claim that OS X has no ACL, a claim I never made. I guess the truth hurts and attacking men made of straw helps to ease the pain.


  1. – A list of OS X vulnerabilities.
  2. – Eric Schmidt on OS X.
  3. – A list of OS X viruses from Sophos.
  4. Baccas P, Finisterre K, H. L, Harley D, Porteus G, Hurley C, Long J, 2008. OS X Exploits and Defense, pp. 269-271.
  5. – Charlie Miller's talk on Snow Leopard security.
  6. – Apple releases an update to deal with MacDefender.
  7. – A variant of MacDefender appeared hours after Apple's update was released.
  8. – Charlie Miller talking about setuid programs in OS X.
  9. – Apple taking 6 months to patch a serious Java vulnerability.
  10. – Apple using password hashing from 1979 in lieu of a shadow file.
  11. – Misleading commercial 1.
  12. – Misleading commercial 2.
  13. – Misleading commercial 3.
  14. – Apple representatives told not to acknowledge or help with OS X malware 1.
  15. – Apple representatives told not to acknowledge or help with OS X malware 2.
  16. Adam O'Donnell's article – When Malware Attacks (Anything but Windows).
  17. – OS X market share by region.
  18. – MacDefender linked to crime syndicates.
  19. – Many users hit by MacDefender.
  20. – The first exploit creation kits for OS X have started appearing.
  21. – First OS X botnet discovered.
  22. – A Firefox bug report about a vulnerability to keylogging.
  23. – Apple letting security researchers review Lion.

Update 2 – August 17 2011

A delayed update, but it is worth pointing out that this article is now largely out of date. Apple has fixed most of the security problems described above with the release of Lion. At least this article is an interesting look back, and shows why Mac users should upgrade to Lion and not trust anything before it. Despite Lion being technically secure, it is interesting to note that Apple's security philosophy is still lackluster. Here is an interesting article on the lessons Apple could learn from Microsoft, and an article showing just how insecure Apple's DHX protocol is, and why the fact that it is deprecated doesn't matter.

February 7, 2011

The Next Hope and DEFCON 18 – Part 2

Filed under: Security, Tech, Travel — allthatiswrong @ 6:25 pm


I was really, really looking forward to DEFCON. This was the hacking conference. The HOPE conference was somewhat known to me, but DEFCON was the main one I had been looking forward to and the whole reason I bothered to go to Las Vegas. I was somewhat disappointed with how HOPE turned out and was expecting a lot more from DEFCON; I had even been told by a few people at HOPE that DEFCON was indeed the more technical and sophisticated conference. While it was certainly an interesting experience, it was still disappointing for many of the same reasons HOPE was.


After 8 hours of travel and getting in at about midnight, I caught a taxi from the airport just to Decatur and Tropicana. A major intersection, and not far away, yet it cost $50. The taxi driver didn't rip me off; taxis simply have premium prices, especially from the airport late at night. I have no idea why they feel entitled to collect a tip on top of that.

I arrived at my host's house and we got acquainted and talked until it was about 2am, so I only got about 4 hours of sleep. Even so I woke up and raced to the Riviera after catching the stupidly priced Deuce bus. I got to the hotel in time but could not find the lobby for the shoot… I finally found it, and managed to catch a ride with a group heading out. These were the first people I'd met from DEFCON, and it was interesting to note the difference between most of the DEFCON people I saw that morning and most of the people I had seen at HOPE. The guys at DEFCON seemed a whole lot more showy: mohawks or at least hair dye, military style fashions. Everything screamed hacker wannabe. I did see one woman with what anyone would interpret to be a Tetris tattoo, but which was apparently just pixels. If you get a tattoo that looks like Tetris but isn't Tetris, you can't really be upset when people think it is Tetris.

After a conversationless ride we arrived at the shoot. After a short safety talk we were free to shoot guns. I'd shot guns the last time I was in Las Vegas, a Desert Eagle and an AK-47, which was fun. This time everybody had brought their own guns, and there was a wide variety to shoot. I think I shot various semi-automatic and automatic shotguns, rifles and pistols. Probably the most interesting was a gun that looked like a sniper rifle and had a scope, but apparently was not one. It was quite fun to take out oranges at 100 feet or so with a single shot and see them explode.

The pricing was interesting… some people were offering their guns completely for free while others were charging various prices. I shot a lot of AC's guns, staying close to him for no particular reason, and it wasn't until the end that I found Joshua, who let me shoot quite a lot of guns for free. Considering how broke I was, that really ensured I had a great time… awesome! Joshua and a few others also provided free drinks for anyone to take, which I thought was amazing. I went back with one of the guys who had been letting me shoot his guns and helped him pack up… I could only afford to give him $20 when it should probably have been $30 or $40 based on his prices, but since I helped him pack up I can't feel too bad.

After helping him pack up I decided to walk from the Riviera to New York New York to catch the bus home. The bus that runs up and down the strip is ridiculously expensive and not often faster than walking. What I had not expected was the tan I received. After being out at Red Rock for 3 hours or so and then walking home, I was firmly bronzed and not in the least bit burned (or if at all, very slightly, with no peeling), which was nice.

I had to borrow money to make sure I had enough for the conference, and after my taxi rides to and from the airports I was quite short. A friend of mine lent me some money and put it in my bank account… it was when I went back on Thursday night to get the badge that I realized my ATM card had expired. A friend drove me to the hotel and was circling around… she would have lent me the money I was short, but by the time she came back around, registration had closed. I rushed home and managed to transfer the money to my card in time to withdraw it the next morning.


On Friday I got to the registration desk early enough, but there was still a line. It progressed quickly given its length, so it wasn't too annoying. What I thought was strange was that they ran out of lanyards and guidebooks, although I was lucky enough to get one of the last electronic badges. I can understand running out of electronic badges, but why would you make fewer lanyards than badges? That just seems like a poor design decision. This badge seemed interesting as well, although it appeared to do less at first glance. It had a small LCD screen that could be interacted with, which was mainly used by people trying to unlock a code to get access to the Ninja Party.

I decided to take my push scooter to the Riviera, partly to see how long it would take and partly because I am always interested in people's reactions to me riding it. The scooter worked OK; it took me about 30 minutes to get to the strip and then 30 minutes to get down it. Not a much greater time than the bus once waiting is factored in, but without the cost and with the benefit of exercise. The only negative was having to lug it around all day, but luckily I was able to leave it with the bellman and pick it up whenever I liked. This was awesome since I was not staying at the hotel, and came in handy when I wanted to use the pool later.

The keynote, which was marked as Top Secret and had a lot of people curious, turned out to be something quite mundane: “Perspectives on Cyber Security and Cyber Warfare”, which was basically just a summary and some speculation. Surely the Powershell or WPA Too talks would have been more fitting for a keynote?

I just kind of wandered around for a bit after this, checking out the pool and various sections like CTF, and trying to socialize with people in the smoking area, where I met a few cool people. The next talk I saw, “Build a Lie Detector/Beat a Lie Detector”, had a strange rap introduction that seemed out of place and was hard to hear the lyrics of. It then became essentially a history of the lie detector and a bit on why the machines are easy to fool. Very, very basic… and quite short, only going for half an hour. Why was a talk like this given at DEFCON 18?

The next talk I saw was “The Law of Laptop Search and Seizure”. This talk seemed to just highlight the fact that you have no say in customs searching through your laptop when entering a country. It is an interesting issue and I would have been interested to hear possible solutions rather than just a summary.

I then saw “Air Traffic Control Insecurity 2.0”, which I had thought would be about exposing problems with the networks the systems are on. For instance, are any connected to the internet? Instead the talk highlighted some security problems, such as a lack of encryption, but argued that since the systems are on a closed network requiring authorization to access, it isn't really a problem. Riiight. The rest of the talk was about the information some planes broadcast and potential ways this could be abused, which was interesting to speculate about but not a risk practically.

At this point I was extremely sweaty and had to go home to change. Coming back on the scooter was a different experience, as it was very hard to go down the strip through the crowds… and I’m pretty sure one person threw ice at me from a balcony. It was much hotter by then as well, making it far less appealing. I underestimated the time it took to go home and come back, and didn’t get back to the conference until about 18:00. This was annoying as I had really wanted to see the Evilgrade and Driversploit talks. Although given that the Evilgrade talk was only 30 minutes long, I think I got most of it from the slides.

It seems that many of the videos of the talks, or at least audio, are available on the DEFCON 18 Archive page, which is cool. Perhaps this is why people don’t care much about talks except for the very few that are significant, as they can always catch up later. I think this is my lesson learned if I attend next year… I will just focus on CTF and various activities.

I went and saw the talk “Getting Root: Remote Viewing, Non-local Consciousness, Big Picture Hacking, and Knowing Who You Are”, although as it turned out to be spiritual crap, I left. I don’t think I even went as far as getting a seat….I just wanted to see a talk and it seemed more interesting than whatever Kaminsky was rambling on about…or did I?

I didn’t see any more talks that day, mainly as the two or three left didn’t seem appealing, although I kind of wish I had seen the Internet Wars panel given that the other panels I saw were all quite entertaining. I thought I would check out the pool… which was basically empty, although it was nice to swim in the Las Vegas summer. Of course my board shorts were falling down, but after trying to find a safety pin, luckily the security desk had a whole tray full of them. Even though the pool closed at 10:30 or 11, I would have thought there would be more people. Especially since there was a pool party, which just meant free DJing. But no… empty apart from some showboating skinny gay dude with an unfairly hot girl. I had a look in one of the chillout rooms but it had the same electronic music and lack of life that was at HOPE, so given my lack of money I decided to go back home and see what the next day might bring.

It was then that I randomly saw a girl from HOPE walking past quickly, following a keg, and she invited me to follow along. Then the leader from AlphaOne labs came along, who really didn’t seem too interested in talking… just shy I guess. We moved around a bit from room to room as they tried to find a place where they could check IDs and be allowed to drink. Finally I gave up as nothing interesting was happening, so I decided to go home. During this time I had been talking to a girl who seemed very into me, and though I felt like I could have pursued her, somehow I was just not interested. The fact that I was not that attracted to her probably had some bearing on that. So ended a Friday night in Las Vegas at a hacker convention, managing to live up to all the nerdy stereotypes long associated with such a thing.


I arrived too late to catch the “Exploiting SCADA” talk as it took just about an hour to get down half the strip on the deuce bus. Who would have thought? I had really wanted to see this as SCADA is a hot topic at the moment and something I don’t know much about. I wasn’t too worried given that there were many SCADA talks at the conference though.

I managed to see the “Jackpotting ATM’s” talk, which was surprising as the line was huge and it looked like there was no way I could make it. I ended up being able to get in at the very end, and even managed to make my way down to the front, which was awesome. I was excited about this talk as I knew it had been given at Blackhat and I had been disappointed with the DEFCON talks so far. ATMs are an interesting subject, not least because they promise untapped cash to those who can hack them. Given many are notoriously insecure, running a version of Windows or OS/2, it would be interesting to know if they are connected to the internet at all, what their private network is like, and just how they are protected. Alas, the talk did not touch on that at all, but rather showed how easy it can be to reprogram an ATM if you can get physical access to the motherboard. It was a very entertaining talk, but honestly it isn’t anything new. It has always been the case that it’s pretty easy to take over a machine if you have physical access. The criminals who managed to reprogram ATMs to spit out $100 notes instead of $20 notes were much more impressive.

I then wandered around a bit as the next talk I wanted to see was not for an hour. There was nothing happening at the pool so I ended up just wandering between the smoking area and display areas. One thing I did notice was that the EFF does not seem to understand the meaning of the word donation. There was a police shooting simulator that looked interesting to try, which you could only do if you made a $20 donation. As far as I understand it, a donation can be any amount and does not require anything in return. What the EFF was doing was charging a $20 fee to play a very basic video game, which is just lame.

I then went and saw “From No Way to 0 Day”, which seemed elementary. I had thought the focus would be on showing how DoS attacks, which are often dismissed, can actually be used to craft a privilege escalation exploit. It didn’t really cover this at all and was far more to do with basic attacks on the Linux kernel.

I managed to catch the “SCADA and ICS for Security Experts: How to avoid Cyberdouchery” talk. SCADA systems are something that have a lot of hype, and as a result a lot of misinformation is being spread around. I don’t know that much about them and was interested to learn more, and this seemed like a talk that would cover a lot of basic stuff, especially since I missed the talk in the morning. Well, no. This talk took about an hour to say, in different ways, that the problems are overhyped and don’t really exist (without bothering to address the arguments claiming the opposite), while avoiding ‘cyberdouchery’ boiled down to not hyping problems or blowing things out of proportion. It’s an interesting opinion, but the talk might have been easier to respect had he explained why his opinion was contrary to that of the majority of other talks being given on SCADA issues.

I regretted not seeing the “WPA Too” talk instead as that talk was actually revealing a new attack technique for the first time. Of course no one was talking about it and I didn’t read the description carefully enough to notice. I think I was partly disillusioned by the wireless security talk at HOPE so I just dismissed it. Damn.

I think I went back to the pool at this time. I wandered around the display areas for a bit more, and in the interests of socializing thought I would go to the pool. Of course no one was really socializing with each other and there was hardly anyone there. I had wanted to see the talk on “The Chinese Cyberarmy – An Archaeological Study from 2001 to 2010”, which was unfortunately and interestingly cancelled. Wayne Huang had a lot of interesting talks and I didn’t get to see any of them…

I ended up puttering around for another hour, then went and saw “DEFCON Security Jam III: Now in 3D?”. This was an awesome talk and easily the most entertaining I had seen so far. It was just a panel of a lot of the regular staff and attendees, each sharing stories of fail, with games and prizes such as bacon beer to be had and specially made waffles being given out to a few lucky people.

After this it was time for the Freakshow party held by IO labs. It had appeared that you needed a pass to get in, but of course it was free for all attendees. Given that there was free beer and I had to show my badge to get in, I think I actually got some value from paying for the badge. It was quite a party, in the penthouse of the Riviera, with gladiator-style games and some dancing and mingling. People were smoking inside… we weren’t meant to but everyone was. Captain Crunch came up to admonish us and the 3 guys I was talking to at the time just dismissed him, not even knowing who he was. After I pointed out who he was they ran off to apologize… which was funny to me. I got a chance to talk to Captain Crunch himself, which was nice, and the flowing beer didn’t hurt.

During this time the ultra secret Ninja Party was still on my mind. Somehow I was confident I would be able to get entry, as it just seemed like a big display. Riding in a limo bus with free drinks on the way was very nice. Once we arrived they did appear to have some bouncers and a list, but the people I made friends with at HOPE got me in without any issues. It seemed like if you had the smallest amount of street smarts you could get in, yet apparently all these “hackers” could only regurgitate textbook knowledge. Not the type of person I thought the word hacker ever applied to. At this point it was again time to head home, walking to New York New York and then catching my bus. I thought about scoping out the various casinos, but without money there didn’t seem much point.


I woke up too late, which wasn’t surprising given the night before. I was kind of annoyed I missed the handcuff talk, but it was to be expected. I had bought a monorail pass off a guy the day before for $1 and didn’t realize it put me so far from the hotel. I had to walk from the Hilton, and by the time I got there people were telling me it was 13:00. I thought I would then go and watch Samy Kamkar’s talk, at which time I realized it was actually 12:30 and I was in some other talk. I rushed out to try and catch the Powershell talk, but at 12:30 the line was already too long. I wanted to make sure I caught Samy’s talk so I gave up.

Samy Kamkar’s talk, “How I Met Your Girlfriend”, was very entertaining, easily the best talk I saw throughout the whole conference. The Security Jam and ATM talks were entertaining but didn’t really have interesting details of a new attack like this talk did. Samy Kamkar is a fantastic public speaker as well as a decent security researcher. Something sorely lacking at these conferences was people coming up with innovative attacks and presenting them decently.

There was nothing else to do until the closing ceremonies, so to the pool I did go. Finally, there were people in the pool, playing games even! A girl I originally met going to the shoot didn’t acknowledge me at all when I tried to say hello, although I managed to get back at a guy from HOPE who had whooped me in the gladiator games, which was fun. It was a pretty fun time just relaxing in the water under the Las Vegas sun.

Finally, it was time to head back for the last talk I would see before the closing ceremonies, Sniper Forensics. Alas, it was cancelled and replaced with a Spot the Fed panel. I have no idea what happened to the Sniper Forensics talk for it to disappear from both HOPE and DEFCON but at least the slides are available. Odd. Spot the Fed was fun with just a lot of joking and guessing, although somehow it wasn’t what I thought it would be. Probably because the feds turned out to be analysts getting paid to go, not FBI agents scoping out threats undercover.

A bit more wandering around and then it was time for the closing ceremonies. It wasn’t too much of a big deal, just everyone being thanked and some prizes being awarded. Free stuff was being given out during the ceremonies, although I didn’t get any. After wandering around a bit once everyone left, I ended up with a random box of stuff… a motherboard, some music CDs and a random cell phone data backup device. I also got an 8” floppy disk, which I managed to swap for a lanyard, in the hope that I could sell my badge on eBay. Something I have yet to do.

Before heading home I had a meeting with representatives from one of the big security companies who were present at DEFCON to talk about my business plan. This was very exciting for me, and they seemed eager to help. I last talked to them in September and have not made any progress, but as I finish writing this article I know that I need to get moving on that.

That was it, DEFCON was now over. While walking home I stopped for a smoke outside a 7-11 and ran into some friends from HOPE who were happy to see me. We shared contact details and agreed to meet up when I was back in NYC. I arrived home where I just chilled out for the next week, still not having my laptop and deciding what to do while looking for accommodation. Then the rest of my adventures in Las Vegas began.


Much like HOPE, I had very high expectations for DEFCON. Perhaps a bit more, as I was not entirely sure I would be able to attend, so when I was able to make it my excitement and expectations increased even more. DEFCON was the much bigger convention than HOPE, the one where many famous attacks and discoveries had been presented and where many fun hijinks had ensued. Alas, DEFCON was nothing like I expected and I found many of the same criticisms I had of HOPE applied to DEFCON.

There was the same lack of knowledge from people attending, with most talks being introductory rather than groundbreaking. I was surprised when I asked to borrow someone’s laptop to check my email, and they thought I was crazy for doing so. How could you take the risk, with all these hackers around? Apparently they had never heard of SSL and certificate authentication. To be fair there was a risk of a keylogger, but I thought it unlikely in this case. There just seems something wrong with people attending a computer security conference fearing magical hackers.

One of the more interesting things I noted about DEFCON was that I did not get checked for my badge. Not once. For a security conference that seems awfully lackluster. I could easily go to any talk I wanted and take full advantage of the conference without having to pay $140. I wonder if it has always been this way or if the goons were unusually slack this year.

The pool parties were quite pathetic, although I was very impressed with the Freakshow party put on by IO labs. The Ninja party was likewise impressive, although neither of these parties can really be attributed to the DEFCON organizers. One thing I did notice at DEFCON was there seemed to be a lot of attractive girls. As sexist as it is, I wonder if they were perhaps just girlfriends of the male attendees. To be fair, I didn’t end up seeing them in any talks or taking part in any of the games. It was a nice distraction from almost everyone getting a Mohawk. So many people with Mohawks… so many wannabe hackers… so sad. To be fair, the EFF was doing a fundraising campaign offering Mohawks for a price, but many of the people already had them before coming to the conference. Aye.

Speaking of which, the EFF “donations” were really disappointing. If you want to raise money by charging fees for products or services, do that. The whole point of a donation is that it is voluntary, not that you pay a fixed price for something.

I do wonder how much DEFCON has changed and if perhaps it is no longer as relevant as it once was. Dallas said during the closing ceremonies that this was the quietest DEFCON he had seen… which perhaps explained why it wasn’t what I had imagined it to be. Perhaps the previous generation have all grown up, replaced by a generation of attendees with no understanding and a desire to be seen a certain way rather than to actually acquire knowledge or skills.

While most of the talks I saw were introductory, there did appear to be quite a few technical talks and I may have just chosen poorly. The Powershell talk, WPA Too and the talk on Farmville all looked like they had innovative research to present and I plan on watching the videos at some point. I was quite disappointed that Wayne Huang had all of his talks pulled, as he looked like he had a lot of interesting stuff to say. Hopefully he will be able to present them next year. If I do go next year, I will make a point to only see key talks and spend more time with CTF or Wargames, although I’m not sure if Wargames are still happening as I don’t recall seeing them.

I do think the crowd at DEFCON is more technical than the crowd at HOPE, although they all tend to stick to socializing or competitions. Trying to discuss things with people outside talks, there was a clear difference between those who were competing and those who were trying to understand everything being said in the introductory talks.

I would like to make a special note of TheCotMan. What a fucking retard. I understand there are some idiots and troublemakers on the forums, but when people ask for help and have done their research you don’t just ban them. He is the type of arrogant idiot who thinks he knows best and has heard it all before, when he has no fucking clue. Although given the average poster on the forum, the type who like to get Mohawks and call themselves hackers without understanding some simple key concepts, he seems to be with his peers.

I honestly don’t know if I will go again this year. I can’t say that it is worth a flight out to Las Vegas again, and given the lack of innovative talks and any kind of decent party, it isn’t really that appealing to me. On the other hand I have good friends in Las Vegas and have my laptop back now, so it could be a fun learning experience. If not, I will definitely go to DEFCON 20, which should be a big celebration. Despite everything I did have a good time and am glad I went.

Comparison between HOPE and DEFCON

It was interesting to compare the differences between The Next HOPE and DEFCON 18. HOPE was far better organized, with people always on staff if you had any questions and with something available 24/7 during the conference. I liked that you could not enter the conference areas unless your badge was displayed. HOPE was much better prepared for crowds, having overflow rooms set up which projected whoever was speaking in the main room. There were more activity villages at HOPE, more talks, and fun things to try like Segways. DEFCON didn’t really have anything like this.

On the other hand DEFCON had a much greater number of people, has the well known competitions such as CTF and Wargames, and tends to attract a greater number of high profile speakers. DEFCON 18 was terribly organized, with it being impossible to get to a room as the only route was a skinny hallway, and there were no overflow rooms or the like. Badges were not checked even once, and there was very little to do outside of talks that didn’t cost money.

One thing I did appreciate is that DEFCON makes videos, audio and slides of the talks available for free, while HOPE charges for this. It isn’t a big issue as most of the talks can be found on YouTube, but it is a nice gesture to allow people to view them for free, and representative of helping people to learn and acquire knowledge, something that is meant to characterize these communities.

I also liked that HOPE allows preordering for a cheaper price, and it is really something that DEFCON should consider. Running out of lanyards but not badges, for example, is just ridiculous. Having a cheaper price for those who are regular attendees and having the option to avoid lines would also be quite nice. Perhaps also consider appointing a non-douche forum moderator.

I do wonder where these conferences are heading, or if they are losing relevance. As the industry pays far more attention to computer security, the field is going to be restricted to a much smaller number of professionals, simply because the need will decrease. What about the non computer security aspects such as phreaking, electronics, lockpicking and such? Well, it seemed to me that these are all supplementary and the main theme is still computer security. It will be interesting to see if there is a DEFCON 30, if hacking will go the way of phreaking, or if the culture will grow and adapt. Given my impression of the community as it stands today, I don’t think that is too likely. Which is sad.

The Next Hope and DEFCON 18 – Part 1

Filed under: Security, Tech, Travel — allthatiswrong @ 6:17 pm

So now, just over six months after these conferences ended, I have managed to write up my thoughts on them. So much procrastination, traveling and other things to write. Still – better late than never. I’ve had a strong interest in computer security for at least the last 10 years and have dreamed of going to these conferences for at least that long. Of course, I could never afford the cost of an expensive overseas flight and accommodation, and when I had been in the states before it was never in summer or on the west coast. This year however things worked out well, being in the right place at the right time.

I had such high expectations for these conferences. Surrounded by some of the most skilled and prominent people in the field, listening and learning from new talks being given, a chance to play some of the games going on and learn or prove myself in the process. There was certainly a lot to look forward to; unfortunately I found both conferences to be a huge disappointment. I found most of the people I interacted with to have a very poor understanding of even basic security concepts which was reflected in the fact that the majority of talks seemed introductory rather than groundbreaking; very few relied on a presumption of basic knowledge – something I thought would be common to the majority of attendees. I was also disappointed in the opinions held by many in response to certain issues such as piracy or the whole Bradley Manning case.

In any case, I have written about my experiences attending these conferences for the first time, which some may find interesting. I will be posting it in two separate parts, with the first being my experience at HOPE, followed by my experience at DEFCON and a comparison between the two conferences.

The Next Hope


I got to the Hotel Pennsylvania pretty much on time and waited in line to get my badge, which really didn’t take long at all. The badge was interesting to observe, although not having much of an understanding of electronics I couldn’t make much of it. Having never been to one of these conferences before, I had expected it to be a lot more active and packed with people than it was, while it was actually quite moderate. I wandered around for a bit as I was hoping things would be a bit more social, but really nothing seemed to be going on.

I then decided to go and attend the first lecture, “Light, Color and Perception”. It was an interesting talk, although some of what was talked about was a bit over my head. Still, there were some interesting demonstrations and I learned a few new things.

After that, I decided to catch the talk on wireless security, “Wireless Security: Killing Livers, Making Enemies”. I had thought it would be these types of talks that were the reason I came to the conference. Unfortunately, this talk was a disappointment. It was incredibly basic and boiled down to rehashing that WEP is bad. I would have hoped that everyone in the conference would have at least known that, even if they didn’t understand the underlying details as to why. The talk mainly consisted of a few stories demonstrating how easy it is to fool people into joining rogue networks and why this was bad. There were no innovative ideas given for solutions and no talk of the more recent attacks. An hour of how you can screw with people who don’t know any better is not what I was expecting. I was pretty dissatisfied with the talk but thought it would be an exception and looked forward to some of the more technical talks that would have a bit more substance to them. I was mistaken.

I then decided to go and see the keynote, which at the least should be interesting. It was by Dan Kaminsky, someone I have never thought too highly of. He has always seemed to me to be a drama queen, being overly cocky without cause and often simply getting things wrong. I also can’t think much of a security expert who uses 5 letter long root passwords and fails to comprehend the threat of dll hijacking.

I felt he lived up to some of my perceptions during his talk, which was basically talking about the problems in languages that allow for bugs. This is a fairly well understood area of ongoing research and certainly did not seem the worthy subject of a Keynote. It was essentially a slideshow summary of the problem and what has been suggested by many people as solutions. Again, no groundbreaking new ideas or revelations, just a basic summary that a lot of people in attendance would/should have been familiar with.

I’m not too sure what I did at this time. I think I wandered around the mezzanine looking for the Segways, which had closed for the day. There was nothing really going on that seemed too interesting, so I may have gone home briefly.

The next talk I saw was “Tor and Internet Censorship”, which was actually interesting. I would have hoped that most people in the audience would be familiar with Tor and the goals of the project, and while the talk was a summary, it also revealed a lot more info on what the guys are trying to accomplish. It was interesting to hear how they deal with countries trying to block the software and the various cat and mouse games they are forced to play. In today’s world the importance of projects such as Tor cannot be overstated, and it was great to see them keeping in touch with the community and getting the message out.

After this I saw the “Easy Hacks on Telephone Entry Systems” talk, which I hoped would be interesting. I have very little knowledge of telephone infrastructure and still have not gotten around to playing around with Asterisk to get a better idea. I had hoped I would be able to pick up some things from the talk even without having the background knowledge. Well, the talk didn’t require any background knowledge. The talk was basically showing that a lot of entry systems still use default passwords and/or have the access control panel protected only by a very flimsy piece of metal. It was interesting to learn those facts, but it really should have been one of the 20 minute lightning talks. How it stretched out to an hour I don’t know.

After this, I went home as I was exhausted. I wasn’t sure why I didn’t stay for the hotel talks talk, as that seemed interesting; probably I had had enough talks for one day, and there was nothing in the mezzanine more appealing than sleep.


The next day I arrived kind of late due to trying to work out why the bank had suspended my access to my funds. I got there in time for most of the “Grand Theft Lazlow” talk. This talk was one of, if not the most disappointing of the talks I saw throughout all of HOPE and DEFCON. The guy was a developer for Rockstar Games and started talking about his views on piracy. Piracy is a complex area (precisely because it is NOT theft), but this guy would not acknowledge that, simply considering it completely wrong and actually pushing for greater restrictions. Many of the things he was saying were just ignorant and it was disheartening to hear the audience cheer.

An interesting moment was when someone got up to ask a question, making the point that $80 or so is too much to spend on a game without knowing the quality, so he will often pirate and then buy the game if he felt it was worth it. Lazlow’s response was not to comment on the legitimacy of doing that, but to accuse the guy of being a liar, to which the crowd cheered. This was meant to be a community of people capable of critical thinking and understanding new ideas, but that crowd was anything but.

Then it was time for the keynote, which was said to be given by Julian Assange. At this time I was still catching up with the whole Wikileaks phenomenon, so I was surprised just how much the feds wanted him and how big a thing it would be if he did show up. Obviously he didn’t end up showing, and a talk was given by Jacob Applebaum from the Tor project covering why Wikileaks is important and what they stand for.
Ironically I have not made up my mind on Wikileaks, because there is so much contradictory information and so many credible claims made against them.

The points given in the talk were interesting, especially the points on privacy. All of the talk of no secrets reminded me of the Asimov story, The Dead Past, in which privacy is eliminated. Obviously the Wikileaks people are not calling for an end to personal privacy, but even so I find it hard to imagine a world in which governments as powerful as the US are completely transparent. The reason being, there is a lot of justification for a government to keep things secret from its population, at least for a while.

The most interesting thing about the keynote was not that Julian Assange did not appear, as that was to be expected. It’s that he didn’t teleconference in, or even prerecord something. This is a hacker conference full of people supposedly ahead of the curve when it comes to many issues, not least of which is technology. It’s hard for me to believe there was no one able to set it up so he could talk, an act which would have sent a message all by itself. Was there really no one capable of setting up a webcast to go through anonymous proxies? We could have even gotten the guys from The Pirate Bay or something to host it; by the time a warrant had been issued the webcast would have been long over. Or maybe he had his own personal issues to deal with. In place of Assange, Jacob Applebaum of the Tor project gave a great presentation and then made an amusingly dramatic exit.

The next talk I saw was “Modern Crimeware” which turned out to be a very basic explanation of how people make money through malware with botnets…not particularly interesting or enlightening. Again, I would expect most people in attendance to already understand this basic stuff. I was hoping for some interesting details on how “cyber-criminals” protect botnets or something….researchers at universities regularly publish far more interesting papers in this vein.

I then saw “Surfs Up” – a well presented and entertaining explanation of CSRF attacks, but still a very basic explanation. I mean, were talks like this presented at the last conference? Why are such introductory talks the norm? I don’t have anything against these speakers, as I think their presentations were fine for what they were; I just don’t understand why they were given at a HOPE conference.

The “Social Engineering” talk was interesting. It was a panel with a lot of famous faces, not least of which were Emmanuel Goldstein, Kevin Mitnick and Captain Crunch. It was interesting just to hear some of the stories these guys were telling and just how well the technique still works to this day. Definitely an entertaining panel and honestly I wished there had been more like it.

The last talk I saw of the day was “Net Wars Over Free Speech, Freedom, and Secrecy or How to Understand the Hacker and Lulz Battle Against the Church of Scientology”. I was hoping this talk would highlight some of the attacks on free speech that have been instigated by Scientology in the name of religion – instead it was mainly a summary of some of the pranks Anonymous have pulled, which anyone who keeps up with this type of news would already have been aware of.

After that I was disappointed to see that the “Hacker Cinema” was not showing something entertaining or relevant, but rather a documentary. IIRC it was “Get Lamp”, a documentary on the history of early text games (corrected thanks to commenter Pan), which just didn’t seem appealing at 11pm on a Saturday night in New York City.

I was hanging around with some of the other people I had met while we looked for more information on Lazlow, and maybe to see if some other games or something were going on. Nothing much was happening except for a party in the mezzanine with horrible, horrible video game music. So many people actually dancing to repetitive loops of Mario dying and hitting mushrooms. It was just so bad. Somehow, that seemed to say a lot. New York City was right outside on a Saturday night, but dancing to video game sounds was preferable for many people. Aye.


Sunday was the last day of a conference that had so far been disappointing, but still had potential. I got there later than I expected…I think it was due to some problem with the trains. The first talk I saw was not until 13:00, and was “DMCA and ACTA vs Academic & Professional Research”. I was hoping this talk would give some insight into the ACTA treaty, as I had not kept up with it and it has largely been kept secret. There was nothing on ACTA except that it was often mentioned in the same breath as the DMCA as being evil. The DMCA has been around for about 10 years, so explaining it again made this yet another introductory talk.

The main problem I had with this talk is that the speakers would continually talk about the DMCA as being evil, when the problem is not with the legislation. The problems in almost all of the examples given came from people or organizations misusing the DMCA. There was no mention of companies who refuse to acknowledge counter-claims for fear of being sued. There are actions you can take against this to stop further abuse; failing to realize that and take action does not mean that the DMCA is evil.

The next talk I saw was “Into the Black: DPRK Exploration”, which was slightly entertaining but hardly informative. The first 20 or so slides were just meant to be humorous and were skimmed through, with the rest basically dismissing every claim on the grounds that North Korea doesn’t have its shit together. A fair argument, but it might have carried more weight if it weren’t presented as just mocking the country. It’s hard to tell whether the talk was meant to be considered authoritative or speculation, given the way it was presented.

While I was watching the DPRK talk I had no idea how significant the informants panel was and that Lamo would be there, so I only managed to catch the last 20 minutes of it or so. I never thought too highly of Lamo before, with his previous claims to fame for “hacking” seeming designed to give him as much media exposure as possible rather than actually contributing in any useful way. I didn’t know enough about the Wikileaks situation at the time to take sides, but managed to get the gist of things. The main thing I noticed here was that most people were angry at him and insulting him, having made up their minds before he even attempted to defend his actions.

Regardless of whether his actions were right or wrong, I do believe that in this case he thought he was doing the right thing. What he did is not a simple issue, and it made me sad to see everyone dismiss his actions as wrong without bothering to give the issue the thought it deserves. It was when question time came that I saw the real character of most of the attendees.

One girl asked him about restraining orders and his ex-girlfriend’s claims of abuse, which, true or not, have absolutely no relevance to the issue at hand. I expected more from this community than trying to discredit someone further because you disagree with their actions. Another attendee didn’t bother to ask a question but outright accused him of treason (which showed that person’s ignorance; by the definitions, Lamo’s actions are closer to patriotism), only for everybody to cheer and applaud. I am still undecided on the entire issue as it is so very complex.

What I do think personally is that Lamo showed a lot of courage by joining the panel and attempting to justify his actions, knowing the public opinion and abuse he would likely face. Most of the people in the audience would likely not have that courage or strength of will to do what they think is right, and are more comfortable criticizing from behind the scenes.

That finished at about 3:30 or 4, and then it was some more hanging around with people in the mezzanine or outside. I had wanted to see the talk on Sniper Forensics but didn’t bother because it was going to be at DEFCON. Of course, it wasn’t, but it didn’t end up being given at HOPE either. Luckily I have my slides on the CD from DEFCON.

Closing Ceremonies

The closing ceremonies were interesting. They started very annoyingly, as most of the seats were filled, and one fucking douchenozzle told me his seat was taken, when it was just his bag….and he wouldn’t let anyone sit there the entire conference. I was planning to do something about that, but he left before I could learn his douche name.

I saw one girl who was basically ignoring the ceremonies, playing Scrabble on Facebook or tweeting about shoes on her iPad the whole time…yet she was continuously moving toward the front….why? Why worry about being at the front if you aren’t paying attention? Then there was a guy who kind of crept up behind me when I was leaning against a pillar, only to somehow swoop in when I left it for a few seconds. He was filming the entire thing and would just keep tapping me to move out of his way….apparently unable to say please, despite being fully capable of speech. I just don’t understand what it is with this crowd and the bizarre lack of social skills. The ceremonies themselves were fine….just what could be expected. Some prizes were awarded and thanks were given, as well as talk of a possible next conference.

After that I didn’t feel like going home just yet, so I volunteered for a few hours. After I started, I heard talk of some prizes for the volunteers, which kept me going. Several hours of lifting heavy cables, crates, lights and such…all for nothing. It was at least four hours of hot, sweaty work and I was hoping for a copy of a book or something. Alas…nothing. Funnily enough, I don’t really feel guilty about downloading the ebook.


That was the end of The Next HOPE, my first ever conference. I had always been interested in computer security and the associated underground culture. Ever since I saw the movie Hackers, which was directly inspired by the 2600 community, I had wanted to see what it was all about. Growing up where I did I never really had the opportunity to join in, but as I became a computer security professional and learned more about the community I had always held in high esteem, I couldn’t wait to one day go to one of these conferences and participate.

It was then all the more disappointing to go to a conference and witness in most people an inability to think originally or creatively, and a readiness to accept popular ideas without giving them the critical analysis required to have an opinion worth anything. For most of the talks to be introductory, retreading well known topics, was also a disappointment. Where were the new discoveries, the innovative attacks, or just the passionate discussions on matters important to our community and society?

One of the interesting things to note was how amazed people were that I could make the badge remain blue, and just how incapable they were of figuring it out themselves. The badge had LEDs which blinked intermittently in no discernible pattern. When you touched a sensor on the back, it would remain blue. So, to keep it blue, you just had to maintain the connection somehow. Yet no one was able to figure out this very simple hack, with people being puzzled at how I accomplished it.

I should also mention the disappointing views that manifested in the talks about piracy, governments and Adrian Lamo. At some point in the informants talk someone commented that all nation states are a fundamentally bad thing, to which everyone cheered…loudly. Apparently many people in the audience are naïve fucking anarchists, not the caliber of person I would have expected at a HOPE conference.

Club-Mate also deserves a mention. This drink was hyped up as being from Germany and what hackers there were drinking, but it was disgusting. Having lived in Berlin for over a year recently, I had never seen it before, and it tasted like shit. At $4 a bottle it really wasn’t worth the price, yet people were buying it in droves. I could understand trying something new, but people kept buying it to be seen as cool. To be seen as fitting in and being part of the scene. There is just something very sad about that.

One of the highlights of the conference for me was seeing Kevin Mitnick and Captain Crunch in person…icons who in many ways shaped the culture and community. I have to wonder what they think of the current state of the community. Both Mitnick and Crunch did what they did by thinking originally and looking for solutions to problems, the very opposite of just accepting whatever they are told, which seems to characterize the current community.

Despite everything I did notice a very strong sense of community, a sense of unity that I had not seen before. What I wonder is whether a community once defined by solving problems in innovative ways and creating things never envisioned has simply become a hobby group for people with an interest in technology. Perhaps my expectations were too high or I was looking for the wrong thing, but somehow I don’t think that’s it. I didn’t see a few talks, such as the one on how fragile HTTPS is, which may have been technically interesting. Even so, seeing the views of the people attending was discouraging. However, for $50 it was worth it, and I would do it again hoping for a better turn of events.

Update 1 – February 8th 2011
Corrected one typo and fixed the description of the movie Get Lamp.

November 22, 2010

Adobe Reader X

Filed under: Security — Tags: , , , , , — allthatiswrong @ 11:20 am

A few days ago the long awaited Adobe Reader X was released. Given that Adobe Reader and Flash have been the primary attack vectors on PCs for the last few years (being responsible for over 80% of attacks in 2009 alone), a secure version of Reader is long overdue. It is a sad state of affairs that a PDF viewer needs a sandbox in the first place, but given the reality of the situation it is good to see Adobe finally stepping up. The question is, did they do a good job? Adobe have an atrocious track record when it comes to security, but going by their blog it seems they worked closely with experts, so hopefully it is as good as can be expected.

The initial impressions upon first using Reader X were not great. The setup file is quite a bit larger: 35MB compared to 26MB for 9.4. Nothing really seems to have changed except for the sandbox and the ability to comment on PDFs built into the reader, which I guess is nice. The toolbar seems to be using a different widget set and now looks more cartoonish, which I don’t like at all. I had originally thought the toolbar had disappeared from the browser plugin, which would make it harder to navigate pages, but it is actually a minimal toolbar on autohide at the bottom of the screen. While not intuitive, it is a big improvement. For some reason the installer still places a shortcut on your desktop as it has for years. I’ve never understood that, as I have no desire to stare at a grey screen.

The security changes seem interesting. The reader is now marked as a low integrity process in addition to the sandbox, as well as having full DEP and ASLR support. There are no customization options for the sandbox that I could find, but then none are really needed. The sandbox is only for the Windows version, so OS X, Linux and Android users are still left unprotected. As per the Adobe blog post above, all write attempts are sandboxed by default. This should effectively stop most drive by download attempts in their tracks. It isn’t terribly easy to tell if protected mode is on or not, requiring you to view the advanced properties of the PDF you are currently viewing. It seems however that Adobe is aware of this and other problems and will address them in future releases. I am actually having trouble finding any further detailed information on the new protected mode, as clicking on the link on the website simply shows me a nice generic image of Adobe Reader.

I often see the point come up that using an alternative PDF reader such as Foxit or Sumatra will provide better security. This is simply false. Neither Sumatra nor Foxit has DEP or ASLR support (which is trivial to implement), and they act buggy if forced to run as a low integrity process. They also lack an equivalent to the Enhanced Security Mode present in Adobe Reader since 9.3, which requires confirmation for certain actions. PDF exploits are often reader independent, in which case Adobe Reader actually has better mitigation techniques than any other reader. The gain in security through obscurity from using these other readers is far less than the mitigation techniques present in Reader X. With the introduction of a sandbox, Adobe Reader X is clearly the most secure choice at the moment. In addition to the security aspects, other readers are simply not good enough to be a replacement yet, as they have problems with overly large files or lack compatibility entirely for features such as forms.
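The DEP and ASLR opt-ins mentioned above are just two bits in a binary’s PE header, so whether a given reader enables them can be checked directly. A minimal Python sketch (the file path in the usage comment is purely illustrative, not a claim about any particular install):

```python
import struct

# PE optional header DllCharacteristics flags
DYNAMIC_BASE = 0x0040  # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE -> ASLR
NX_COMPAT    = 0x0100  # IMAGE_DLLCHARACTERISTICS_NX_COMPAT    -> DEP

def pe_mitigations(header: bytes):
    """Return (aslr, dep) for a PE image given its leading bytes."""
    # e_lfanew at offset 0x3C points at the "PE\0\0" signature
    pe_off = struct.unpack_from("<I", header, 0x3C)[0]
    # The optional header starts 24 bytes past the signature;
    # DllCharacteristics sits 70 bytes into it (PE32 and PE32+ alike).
    chars = struct.unpack_from("<H", header, pe_off + 24 + 70)[0]
    return bool(chars & DYNAMIC_BASE), bool(chars & NX_COMPAT)

# Usage (hypothetical path):
# with open(r"C:\...\AcroRd32.exe", "rb") as f:
#     print(pe_mitigations(f.read(4096)))
```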

I wonder when Flash will gain a similar sandbox, as it is another primary attack vector these days, if not more so than PDFs. Flash is still being targeted, such as in this recent attack, yet I have heard no plans for Adobe to make security a priority for Flash as they have for Reader, which is kind of strange.

What the last few years and various PDF and Flash exploits have shown is that DAC continues to be a poor access control framework for a modern desktop environment. There is simply no reason that a program started by a user should inherit the full rights of that user. If we had an easy to use MAC implementation that was mostly transparent, then most of these exploits would not be an issue; in fact they probably would not exist, as they would not have been possible in the first place. It seems the industry is slowly heading in that direction, and features like sandboxing and integrity levels for processes are a good start. At least they will suffice for the meantime, until operating systems allow us to easily sandbox risky or untrusted applications instead of relying on each program to implement its own version. In the meanwhile, for applications that are not sandboxed, it is possible to do so using Sandboxie; however, it is not as effective on 64-bit versions of Windows due to Kernel Patch Protection. I am not aware of any sandboxing applications on OS X, and of course on Linux you can use a jail or one of the main MAC implementations.

November 5, 2010

Thoughts on Firesheep

Filed under: Security — Tags: , , , , , , — allthatiswrong @ 3:25 am

In the last week there has been a lot of discussion over the release of the Firesheep addon for Firefox. Firesheep made the news because it allows anyone to impersonate someone on the same network on the vast majority of websites on the net. This is known as session hijacking or “sidejacking”. The problem occurs because most websites only encrypt the login process, which prevents people from sniffing your username and password, but then redirect to a non-SSL site, which allows people to steal your session cookie – the unique identifier that tells a site who you are after you have logged in. Someone who has hijacked your session doesn’t have your username and password, but will still be logged in as you.

There has been a bit of controversy over Firesheep because some people are convinced that the person who wrote the extension should be held accountable, or at the least did something wrong. Nay. Those people are simply misguided. Releasing a tool like Firesheep is the essence of responsible disclosure. The generally agreed process for dealing with exploits is to contact the developer privately to work on a fix, and reveal the exploit along with the fix so both the company and the researcher get credit. If the developer refuses to fix the flaw, then a proof of concept exploit is released to push the developer into doing the right thing. Firesheep is simply another proof of concept exploit for a problem that has been around for many, many years. It isn’t as if people had not been made aware at least once before.

Facebook seems to have been getting the most press, although most sites are vulnerable. Most web email services, Amazon, other social networking sites, forums….pretty much anything you can think of. The strange thing is that most people don’t seem to care. This is generally because people don’t understand the risk or think that they won’t be a target. This is why the release of something like Firesheep is a good thing, a fantastic attempt at actually illustrating the threat. No doubt its use will become more widespread and as more people start to get taken advantage of, there will be a greater push for security that will benefit everyone.

I found it interesting that somebody went to the trouble of writing FireShepherd. FireShepherd is a tool that exploits a bug in Firesheep to prevent it from working. It doesn’t accomplish anything, and will likely be rendered obsolete by the next version. If something like FireShepherd were to be useful it would pollute the airwaves with fake sessions, although even this would not work terribly well.

I wanted to clear up some misconceptions that have sprung up in the wake of Firesheep’s release, as a lot of bad advice seems to be circulating. First of all, logging out does not automatically make you safe. Many websites do not invalidate the session upon logout, which means whoever is hijacking your session can continue to do so even after you have logged out.

You are not automatically protected by using an encrypted wireless network. With WEP, all clients share the same key, so traffic is not protected from other authorized clients – and besides, the key can be cracked in seconds. WPA/2 PSK uses a Pre-Shared Key, meaning anyone with that key can decrypt traffic for the other clients. Firesheep may not work against it as-is, but it would not be difficult to adapt it to do so. Even on a wired network you may not be safe, due to ARP poisoning or MAC address overflow attacks.

Some people have recommended using a VPN or SSH tunneling, which are among the best solutions. They are not immune, but it is a whole lot less likely that someone is sniffing anything from your ISP’s connection upwards than that there will be some douchebag at Starbucks looking for someone to take advantage of. Either of these solutions is the best at present, as they allow you to encrypt all traffic up to a point where only employees or authorized personnel would have access to take advantage of your unencrypted sessions.

What most people have been suggesting is to use an extension that forces sites to use SSL all the time. The two most suggested addons are HTTPS Everywhere by the EFF and Force-TLS. NoScript also has this capability. There is a similar addon for Chrome called KB SSL Enforcer; however, it is quite insecure at present. Due to the subpar Chrome extensions framework, every site request will be HTTP first, so session cookies will still be leaked and can be abused.

Each of these addons makes use of HSTS, which relies on server support. If the server does have support, then the entire session can be encrypted. Unfortunately not many sites support this at present, and forcing an SSL session by rewriting HTTP requests is not ideal. Some websites will break if you try this, such as chat no longer working in Facebook. Some sites may not load at all, as they depend on HTTP content for various reasons, such as hardcoded links or content from another domain. Even if a website supports wholly SSL sessions, there may still be information leaks through AJAX requests or a fallback to HTTP, as happened to Google a few years ago.
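On the server side, HSTS support boils down to a single response header telling the browser to use HTTPS for all future requests. A minimal sketch as Python WSGI middleware, with illustrative names (real deployments would typically set this in the web server configuration instead):

```python
def add_hsts(app, max_age=31536000):
    """Wrap a WSGI app so every response carries an HSTS header."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            # One year of "HTTPS only", covering subdomains as well
            headers.append(("Strict-Transport-Security",
                            "max-age=%d; includeSubDomains" % max_age))
            return start_response(status, headers, exc_info)
        return app(environ, sr)
    return wrapped
```

Once a browser that understands HSTS has seen this header over HTTPS, it will rewrite plain HTTP requests to that site itself, so no cleartext request (and no cookie) ever hits the wire.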

The only decent solution is for websites to implement sitewide SSL: secure cookies, and SSL for everything from the login onwards. No fallback to plain HTTP, at all. Of course, this approach has drawn some criticism. People claim that SSL is too expensive, but this simply isn’t true. Google showed earlier this year that making SSL the default option for Gmail increased server load by only 1%. If Google can manage this, Facebook and Amazon sure as hell can. People say performance will suffer because you can’t cache with SSL, but this is also false; you just have to set Cache-Control: public. Then there are people who complain about needing a dedicated IP, which is likewise false. Basically, in this day and age, there is very little reason not to have an entirely encrypted session for anything remotely valuable. It is appalling that so many companies and sites have ignored this problem for so long, and I think it is great that Firesheep has brought attention to the problem, again.
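The “secure cookies” half of that fix is worth spelling out. A short sketch using Python’s standard library (the cookie value is made up): the Secure flag tells the browser never to send the cookie over plain HTTP, and HttpOnly keeps it out of reach of page JavaScript.

```python
from http.cookies import SimpleCookie

# Build the Set-Cookie value a site should emit for its session cookie
c = SimpleCookie()
c["session_id"] = "8f3a2c91"
c["session_id"]["secure"] = True    # only ever sent over HTTPS
c["session_id"]["httponly"] = True  # not readable via document.cookie

print(c["session_id"].OutputString())
```

Without the Secure flag, even a fully HTTPS site leaks the cookie the moment the browser makes a single plain HTTP request to the same domain, which is exactly what tools like Firesheep wait for.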

Of course, I don’t expect anything to change although in another few years there will probably be a similar tool released and the discussion will start up again, at which point there truly won’t be any excuse for default non-encrypted sessions to be so prevalent.

Update – November 11th 2010

A few days after I wrote the above, a tool called BlackSheep was released which aims to help people detect someone using Firesheep on a network. The tool does as I said above, sending fake sessions and reporting when one is accessed. I haven’t looked at it in detail, but I would think it could easily be mitigated, either by learning to identify the type of sessions BlackSheep sends out or by finding some other way to detect or disable it. This just gets into a never ending cat and mouse game, all the while ignoring the real problem.

I was made aware of an addon for Firefox, SSLPasswdWarning. This addon alerts you if your password or sensitive information is about to be transmitted over an insecure connection, and so is useful in helping to determine whether there is a risk.

October 11, 2010

Thoughts on the recent soft hyphen exploit

Filed under: Security — Tags: , , , , — allthatiswrong @ 3:00 am

Recently there has been discussion of crafting malicious URLs by making use of the soft hyphen character. The soft hyphen is only meant to be rendered if and when the text breaks onto a new line, which is almost never the case with URLs. The problem is not so much a security risk on an individual level; rather, by incorporating the soft hyphen (U+00AD) character in URLs, some spam catching software can be bypassed.

I think the real problem this issue highlights is that it is still unsafe in 2010 to trust website links. This issue actually reminded me of the Unicode URL attack which came to light in 2005, where it was possible to register a domain that looked like a common domain but used different characters. This soft hyphen attack could allow some of these malicious Unicode domains to be treated as legitimate.
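The trick can be demonstrated in a few lines of Python; the hostname here is purely illustrative. An embedded soft hyphen is invisible in many UIs, so the string looks identical to the real domain while defeating a naive string comparison – yet IDNA processing (RFC 3490 nameprep) maps the character to nothing, so both strings resolve to the same host.

```python
spoofed = "exam\u00adple.com"  # soft hyphen (U+00AD) hidden inside

print(spoofed)                       # renders like "example.com"
print(len(spoofed), len("example.com"))  # one character longer
print(spoofed == "example.com")      # False: a literal-match filter
                                     # or blacklist is bypassed
print(spoofed.encode("idna"))        # b'example.com' -- same host
```

This is exactly why the article argues for forcibly rendering such characters or flagging URLs that contain them: the human eye and the string comparison both fail while the network layer happily connects.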

Perhaps the first step is to educate people about SSL certificates, and have them check them. But it isn’t enough that people simply check that the domain is trusted, as it can be easy to get a domain automatically trusted by most browsers. Instead, we would have to educate users to examine the certificate details for every important site they visit. This is unlikely, and since it shifts responsibility to the user, it is not so great a solution.

An easy solution may be to allow only a very restrictive set of characters in URLs. At present a domain with soft hyphens encoded within it appears as a normal domain in Firefox 4.06b, Opera 10.62, IE 9 and Chrome 6.0.472.63. This could be easily solved by forcibly rendering the soft hyphen character, or by in some way indicating that the URL contains special characters. Likewise, there should be an indicator when a URL combines different character sets.

These types of simple exploits will continue because there is just so much to work with, and security is not considered until too late. Browsers (and any internet-aware program) should be designed with security in mind from the ground up, in which case something like a restricted character set or a warning would have been implemented, and both the soft hyphen exploit and the Unicode attack would not have been possible.

March 25, 2010

Facebook’s security check is anything but.

Filed under: Security, Tech — Tags: , , , , , — allthatiswrong @ 10:31 pm

When logging into Facebook from a different location, a security check will come up with an alert asking you to identify yourself. I tend to travel around a lot, so this is very annoying. I use Facebook quite infrequently; I can only imagine how annoying it would be for those who travel and use the site daily. What’s worse, a different location is not necessarily a different city or even country, but can be just a different computer. For example, if you log in at an internet café 10 minutes from your house for whatever reason, the warning will come up. This also adds to Facebook’s poor history of privacy, as for this check to work they must be maintaining a record of all the locations you use Facebook from. Logfiles are one thing, but actively maintaining a record of your location history for commercial gain is something else.

The fact that an account is signing on from a different location is in no way an indication of malicious activity. I don’t really understand the moronic reasoning that could have made this seem like a good idea. Perhaps if the account were active in two different locales within an unreasonably short time, but simply from a different location? As stupid as the security check may be in the first place, it is made worse by not being effective in any way. The only information it asks you to enter to authenticate yourself is your birthday – information that most people on Facebook make publicly available without a second thought. Even if they don’t, it’s not exactly the hardest info to find out. Why not ask the user to re-enter their password, which would help protect against many types of session stealing attacks, or to confirm the location they last logged in from? At least something that isn’t entirely security theater, because at present it accomplishes nothing and is just a frustration.

What if the attacker doesn’t know your birthday, or you used a fake birthday to sign up and don’t remember what it was? In this case Facebook will send a security code to one of your registered email addresses. This also allows for a breach of privacy, in that all email addresses will be exposed here, regardless of whether they are marked as private. If the attacker does not have access to one of these email accounts then this might work OK. However, even this security check is flawed, as the code never changes – i.e. every time you fail to correctly enter your birthday, the exact same security code is emailed out! This means you need at most one million attempts to successfully brute force the code. That would take several days, but for someone who doesn’t use their Facebook account often it would allow the account to be cracked. I have not investigated too deeply, but Facebook does not seem to have any preventative measures against brute forcing this security check.
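The “several days” figure can be sanity-checked with some back-of-the-envelope arithmetic, assuming a six-digit code (hence the million possibilities) and an unthrottled guess rate of five requests per second. Both figures are assumptions for illustration; any real rate limiting would change the numbers.

```python
codes = 10 ** 6          # six-digit numeric code -> one million values
rate_per_sec = 5         # assumed guess rate against the web form
worst_case_days = codes / rate_per_sec / 86400

print("worst case: %.1f days" % worst_case_days)  # roughly 2.3 days
```

Because the code never changes between attempts, the attacker can spread those guesses over as long a period as they like without ever restarting.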

I find it hard to believe the Facebook developers could be this stupid. It seems much more likely that this “security check” is actually a measure to make sure their location information for users is accurate, disguised as security theater. Then again: never attribute to malice that which can be adequately explained by stupidity.

February 27, 2010

Is making use of unprotected Wi-Fi stealing?

Filed under: Security, Tech — Tags: , , , , , , , — allthatiswrong @ 7:18 pm

Table of Contents

Does WEP count as unprotected?
The “unlocked door” analogy
Is it really stealing?
Whose responsibility?
There is no excuse
Legal issues


I have seen this issue pop up quite a lot and it is an interesting topic of discussion. There have been many interesting cases of people being arrested for accessing open Wi-Fi connections as legal systems adapt to the presence of this new technology. Unfortunately, most of the prominent court cases have been based around prosecuting the defendant for unauthorized computer access and in some cases theft. Certainly many people seem to share the opinion that making use of a connection you were not given access to is in some way stealing, or at least morally wrong.

In this article I argue that accessing an unprotected Wi-Fi network is not stealing, nor is it in any way morally wrong. I have chosen the term unprotected Wi-Fi to remove any ambiguity, so as to refer to all Wi-Fi networks that have encryption disabled, and offer a DHCP lease and network access to any device that requests it. This is in contrast to the term open Wi-Fi, which some people take to mean Wi-Fi networks that are intended to be open for anyone to access.

Does WEP count as unprotected?

It is important to distinguish between an unsecured Wi-Fi network and a Wi-Fi network with any form of security. Many consider a Wi-Fi network with only WEP encryption to be unsecured; however, for the purposes of this article a WEP protected Wi-Fi network is considered a secure network. The reason many people consider WEP equivalent to an unsecured network is that it is a very old form of security, and it can often be bypassed in just a few minutes these days.

The fact that WEP is all but useless technically is irrelevant here. What is important is that the owner made an attempt to secure their network, and by doing so made it clear that the network was not open for anyone to use. There is a big difference between leaving a Wi-Fi connection unsecured and making an attempt to secure it. An attempt to secure a wireless network, however weak, still makes the intention of the owner clear. This is equivalent to hanging a keep out or no trespassing sign on private property. While it can be easily bypassed, there can be no question that doing so goes against the owner’s wishes.

This is in contrast to an unsecured Wi-Fi network where the intention of the owner is ambiguous, and we only have limited information to go by. With an unsecured Wi-Fi network not even the most minimal of efforts have been made to keep people out, despite it often being trivial to do so. With a device that broadcasts its presence and allows anyone to join upon request it is fair to assume you are allowed to use this network. This situation is similar to trespassing law in many countries, whereby it cannot be considered trespassing if no effort was made to warn people of the private property.

The “unlocked door” analogy

One of the most common arguments against using an unsecured Wi-Fi connection is to compare the situation to someone leaving the door to their house unlocked, arguing that this is analogous to leaving a Wi-Fi connection unsecured. Unfortunately this analogy is specious at best.

A wireless AP broadcasts radio waves in all directions for hundreds of feet, announcing its presence and in many cases supplying the authorization and information necessary to join the network. An AP is set by its owner either to allow anyone to join the network, or to allow only people with the key. In this way the AP is a somewhat intelligent device, acting as a gatekeeper following the orders of its owner. Unlike a door, which is passive and awaits interaction, an AP is an active device, sending out broadcasts and responding authoritatively to requests for access.

This is what prevents any comparison to an unlocked door from being accurate. When people connect to an unsecured Wi-Fi network, they are doing so because the network advertised itself as available, and when they requested to join they were granted access. This is not the case with an unlocked door which can be physically identified as belonging to a particular residence, and which generally would require other boundaries to be broken before it can be accessed.

Unlike a locked door, a wireless access point has no physically distinguishing characteristics to aid in determining whether it was intended to be private or public – except for the fact that it is generally trivial to enable security, or even to change the SSID to reflect the owner’s intentions. Without even the smallest effort taken to communicate that a wireless network is intended to be private, it is only reasonable to assume it is intended to be public.

In summary, it would appear that such a wireless network has been deliberately configured to accept connections from anyone. If someone, whether through ignorance or disregard, configures their device so that it broadcasts and authorizes anyone who wishes to join, should another person really be arrested for then joining that network?

Understandably many people see this as a purely moral issue, and so wish to reduce the argument to its simplest form: simply because someone, for whatever reason, leaves something unlocked or available, that does not give an opportunist the right to take advantage of it. Unfortunately the situation with unsecured wireless APs is more complex than a simple unlocked door, and reducing it to such a simple form leaves us with an analogy that is no longer accurate.

Is it really stealing?

Many people are quick to label using an unsecured Wi-Fi network that the owner did not want used as stealing that person’s connection or service. Most of the time, however, this is simply inaccurate, for the same reasons that downloading a song or a movie is not stealing. The core concept of stealing, let alone any dictionary definition, requires that the owner be deprived of their property or service in some way for some amount of time. This is rarely the case with Wi-Fi ‘theft’.

In much of the world internet access is not metered, nor is there any finite usage limit. There is no theft of service in these situations, as the owner of the internet connection is not deprived of the use of their service in any way. They may not even be aware that someone is making use of the connection.

There are of course exceptions to this. In countries where internet access is heavily limited, such as Australia or the developing world, or where someone takes advantage of the network and saturates the connection, it is accurate to say that the owner is being deprived of something which is theirs, and an argument can be made that it is indeed stealing.

There is also the viewpoint that since the wireless AP is broadcasting onto your private property without any effort on your part, using it is not stealing. Just as accessing television or radio signals broadcast over public frequencies cannot be considered stealing, nor can accessing a publicly broadcast Wi-Fi signal. I am not convinced this argument is close enough to the Wi-Fi situation to have merit; however, I find it irrelevant, as accessing unsecured Wi-Fi cannot be said to be stealing when the deprivation criterion has not been met.

Some people also hold the view that accessing an unsecured Wi-Fi network for internet access without explicit permission from the owner is not just stealing from the owner, but also from the ISP who provides the internet service. This is simply incorrect. The ISP is irrelevant here unless the contract specifically disallows providing internet access over unsecured Wi-Fi. Most contracts simply contain a clause against reselling, not against making the connection freely accessible. Accessing the internet through an unsecured Wi-Fi connection is no more stealing from the ISP than watching a friend’s cable TV is stealing from the cable company.

For the most part, people who connect to an unsecured Wi-Fi connection are in all likelihood only going to visit some simple web pages and perhaps check email. There will always be some people who take unfair advantage of such a connection or use up a download quota on a metered connection, but these are exceptions to the rule. Generally, making use of an unsecured Wi-Fi connection does no harm to the owner and should not be considered stealing.

Whose responsibility?

There is also the question of who should bear the responsibility when someone connects to an unsecured Wi-Fi network. Most people are quick to blame the clients along with accusations of theft, but is this really fair? As I point out a bit further down, when someone installs a wireless access point most of the time they must consciously leave security disabled, or at least be aware that they have done so.

If someone leaves their car unlocked with the keys in the ignition and it gets stolen, many places will hold the owner of the car responsible. Many regions have explicit laws against this situation which find the owner at fault. As much as taking advantage of a car with the keys left in it is wrong, we live in a world where crime is inevitable, as such people must be responsible to a reasonable degree for their possessions.

Admittedly that is not a perfect example, as stealing a car is indisputably morally wrong. The same cannot be said of connecting to an unsecured Wi-Fi network. While an analogy can be drawn between leaving an AP unsecured and broadcasting, and leaving a car unlocked with the keys inside, it is obvious that the car belongs to somebody and is not yours to take. It is rarely as obvious that an unsecured Wi-Fi network is not intended to be used.

In most cities there are literally hundreds of networks; some may be provided by a city or council, some by an organization or business, and some by people genuinely happy to share their connection. With Wi-Fi being near ubiquitous, how are most people to know whether a network is intended to be free for use by anyone or intended to be private? The answer is they cannot, nor should they have to. The onus must be on the owner of the AP to read the manual that came with their device, or to have someone set it up for them if they lack the knowledge to do so.

It is also important to keep in mind that many laptops and devices will connect to an unsecured wireless network by default, without user intervention. Many people may not realize they are not authorized to be on such a network and will only note that their computer has picked up free internet. To prosecute people for using their devices as intended, when the network was left open only through the ignorance or laziness of its owner, is wrong.

A more accurate analogy than the unlocked car might be accessing a public website. If a family or individual decided to place some private photos on a website to share only with friends, yet made no effort to protect them, and someone stumbled across them, who is at fault? Websites are intended to be public unless locked down, so how could the person who stumbled across an apparently public website be expected to know any better?

The ignorance of the owner is not an excuse here. There is no need to know about .htaccess or the like, as hosting companies provide a simple interface to password protect directories and generally provide free support. Just as a person accessing a public website could not know the owner did not intend for it to be public, most people accessing an unsecured Wi-Fi network would have no reason to assume the network was not intended to be open. In all cases the owner must bear the responsibility for securing their property.
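For reference, password-protecting a directory on a typical Apache host takes only a few lines; the paths and realm name below are placeholders, and most hosting control panels generate the equivalent automatically:

```apache
# .htaccess in the directory to protect (AuthUserFile path is a placeholder)
AuthType Basic
AuthName "Private photos"
AuthUserFile /home/example/.htpasswd
Require valid-user
```

The matching password file is created with `htpasswd -c /home/example/.htpasswd username`. The point is that the barrier to locking down a "private" website is about as low as the barrier to enabling Wi-Fi security.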

There may be many reasons that people deliberately leave their wireless network unsecured, while being aware of the risks. Whether motivated by convenience, compatibility or any other reason is irrelevant. If you are aware of the risks and decide to leave your wireless network unsecured then you must accept the possibility of people connecting to and using it. Leaving your front door unlocked may be more convenient, but you could not expect any insurance payout in the event of a robbery.

There is no excuse

These days there really is very little excuse for leaving a wireless network unsecured out of ignorance. Every router or modem made in at least the last five years either stresses the importance of enabling security for your wireless AP or, more commonly, forces you to make a decision when first setting up the device.

There is no prerequisite of technical knowledge required to enable security. Wireless AP manufacturers are well aware that most people do not have even a basic technical understanding, and their devices are designed accordingly. When setting up the majority of devices, you must explicitly choose an option equivalent to “No security”, which is generally advised and warned against, or choose one of the security methods and provide a password or key. Indeed, many devices these days go so far as to ship with a default key or passphrase on a sticker on the box, making the devices secure by default while keeping it easy for clients to join.

Some of the most common consumer wireless APs are the Netgear WGT624, Netgear WNR350, Linksys WAG160N, D-Link DI-524 and the Belkin Wireless G router. All of these routers have been around for five years or more. They all provide strong, hard to miss advice against leaving security off, and all force you to explicitly disable or enable security during the quick setup.

Unfortunately the only argument I have ever encountered against this rather basic point was based on lies and misrepresentation. The fact of the matter is that it is difficult to avoid making a choice about security on any router from the last five years. If we acknowledge that the majority of people operating an unsecured Wi-Fi network explicitly chose not to enable security, it is only reasonable to assume they have no problem with people openly using it.

In the event that someone honestly has no technical knowledge at all and finds it all too confusing, it is likely they will have someone qualified set things up for them. That person will either enable security for them, or explain the choices, in which case the owner of the wireless network must again make a choice to disable security.

Even if someone happened to have an exceptionally old or uncommon router that does not advise or force you to make a choice regarding security, there are still many other avenues in which users will have been informed:

  • ISPs
  • Their Operating System
  • News/Media
  • Whoever is responsible for doing computer maintenance or repair

Additionally, in the event that someone owns a very old router and has somehow managed to avoid all the above avenues which make clear the ramifications of running an open Wi-Fi network, it is likely that the router firmware will be updated at some point, making it necessary to make a choice regarding security.

The only counterpoint I have seen to this was a rather idiotic argument comparing people who deliberately choose to have security disabled, for whatever reason, to people being taken advantage of in Nigerian mail scams. This is also the prevailing problem with the most popular view on this issue today: equating the willfully ignorant owner of a wireless AP with a hapless victim being taken advantage of.

Of course there are exceptions to my argument. There may be people with very old pre-2005 routers who never needed to update the firmware or read the manual. These people never needed to hire anyone more qualified because their setup already worked. They paid no attention to warnings in the media about leaving Wi-Fi networks unsecured and did not consider warnings from the OS to be relevant.

These people are the exception and as such do not negate what should be considered the general guiding rule. If someone left their Wi-Fi unsecured, it was most likely intentional, regardless of the reasons for doing so. It takes a very special case to claim true ignorance and to have missed all of the various ways people are made aware of the risks of running an unsecured Wi-Fi network.

Legal issues

The legal issues of connecting to an unsecured Wi-Fi network are complex and tend to differ substantially by country or state. Unfortunately many of the most public cases from the US and UK of people accessing an unprotected Wi-Fi network have resulted in people being prosecuted for stealing or theft. This is wrong on many levels, not least because it ignores any culpability attributable to the owner.

Much legislation against illegally accessing a computer or computer network makes use of the word authorization. In a purely technical context there should be no issue here, as when connecting to an unsecured Wi-Fi network you must be authorized by the AP to do so. Of course, the law is about the intent of people and not the actions of machines. Even so, should the fact that owners of Wi-Fi access points decline to explicitly protect their network not be taken as an implicit authorization?

In Australia, Canada, the UK and likely most countries it is illegal to access an unsecured Wi-Fi network without explicit permission from the owner. In contrast to the common sense assumption that an unprotected network is open, you must instead make sure you have the consent of the owner of the network you are connecting to. This creates a somewhat dangerous situation: what if a small café or business offered free Wi-Fi and decided they didn’t like a particular customer? Under these laws they could have that person prosecuted for accessing their open Wi-Fi network, despite his having done nothing wrong. Recently the UK announced it will attempt to practically outlaw all open or unsecured Wi-Fi networks with its Digital Economy Bill, treating network owners as ISPs, making them responsible for all content transmitted over their network and requiring them to keep detailed logs and access records.

In the US, at least, it would appear not to be a federal crime. According to Title 18 (Crimes and Criminal Procedure) of the United States Code, Part I (Crimes), Chapter 119 (Wire and electronic communications interception and interception of oral communications), it is not illegal to access electronic communications readily accessible to the general public, defined as communications that are not encrypted and are broadcast over public frequencies. However, I have not seen or heard of any case coming under federal jurisdiction, with most cases being handled under state legislation instead. In Texas and Florida the situation is similar to that of Canada and Australia: the owner bears no responsibility for operating an unsecured Wi-Fi network, and anyone accessing it without the owner’s permission has committed an illegal act.

The state of New York has a saner approach in that owners bear the responsibility for protecting their networks, and unauthorized access occurs only when protections are intentionally bypassed. The proposed House Bill 495 for New Hampshire was interesting, being similar to the New York legislation in which the responsibility was placed on the owner for securing their network. Unfortunately however this bill was ultimately not passed.

In Germany there does not seem to be any issue with connecting to an unsecured Wi-Fi network, as it is accepted that an unsecured AP is open and available for anybody to use. Instead the law targets the owners who fail to secure their networks, making them liable for certain actions carried out over them. Of all the legislation and cases I have seen, Germany and New York seem to have the most practical and rational laws. The UK seems to have the absolute worst, with people accessing an unsecured Wi-Fi network potentially facing heavy penalties, and network owners soon to be made responsible for all content transmitted over their networks. This is disappointing, although not surprising given the direction the UK has been heading in the last few years.

What about operating an unsecured Wi-Fi network for defensive reasons, similar to how Bruce Schneier advocates? Many people are of the opinion that leaving their Wi-Fi network accessible will be a valid defense in the event they are accused of some illegal act. This issue however is quite complex, and depends both on the region you are in, and if it is a civil or criminal case.

In a criminal case the prosecution has a higher burden of proof and must show you to be guilty beyond a reasonable doubt, or the equivalent in countries other than the US. In a civil trial, however, the plaintiff only has to prove liability on the preponderance of the evidence (or equivalent). The RIAA, for example, has successfully sued many people for filesharing based on nothing more than an IP address.

With some regions considering Wi-Fi owners to be liable for what is downloaded over their connection – such as the recent case in the UK where a pub owner was fined for a movie download – having an open Wi-Fi network is no defense at all. In other regions network operators are not responsible for what other people do on the network, provided there is nothing incriminating on their computer or in their residence. In such a case a person may be successful in establishing the equivalent of reasonable doubt; however, this is far from a sure thing.

An interesting parallel can be drawn between cordless phones and unsecured Wi-Fi networks. When cordless phones were introduced to the market there was no security, and it was easy for anyone within distance to eavesdrop on conversations. This actually went to court in the US with the judge ruling that since the owner was broadcasting over public airwaves there could be no expectation of privacy. Ten years later the situation has been completely reversed.

Rather than making it easy for innocent people to be prosecuted with vague and ambiguous “unauthorized access” legislation as a result of the ignorance and/or laziness of wireless network owners, I would much rather see some sort of computer trespassing law, putting the onus on the network owner to inform people they are not authorized.

Prosecuting people who connect to an unsecured Wi-Fi network to do nothing more than check their email or the news is wrong. It sets a dangerous precedent, shifting the responsibility for properly configuring equipment from the owner onto a user who saw a Wi-Fi network advertised as available and connected accordingly.


Technically there should be no problem here, as the hardware and software are working exactly as they were designed to do. Legally, it can depend on your country and jurisdiction. Morally, there is generally no issue, as it has been established that most users decide to leave such networks open. While there may be some people who take unfair advantage of an unsecured wireless network, these people are the exception, and taking advantage is distinct from simply accessing the network.

As noted, while there are exceptions to the rule, they are irrelevant to what should be the general attitude. Just as it is reasonable to assume a shop is open for business when its door is open during business hours, it is reasonable to assume an unprotected Wi-Fi network is free to be used. Entering a shop whose door was left open when it was actually closed should not result in prosecution for trespassing, just as connecting to an unsecured wireless network should not result in prosecution for theft.

It should be noted that my argument and opinion are limited to the English speaking countries and Europe. They would also apply in countries where the same AP manufacturers, such as D-Link and Linksys, have market dominance similar to that in the English speaking countries. If there are countries where a local or niche brand is more popular, that brand of AP may well not offer security as an explicit configuration step or enable it by default, and so would be an additional exception to my argument.

Anecdotally, it seems the people who consider accessing an unsecured Wi-Fi network to be stealing are the same people who do not think the owners of a Wi-Fi network should bear any responsibility for their own actions; that people should be free to set up their wireless network however they please, and anyone accessing it without permission is in the wrong. This view is incorrect for all the reasons outlined above, and yet it is unfortunately the view of the majority.

It is also worth mentioning that many APs allow multiple SSIDs to be set up. This means a secure network with encryption can be configured alongside a deliberately unsecured network for guests to access. It is quite probable that a significant portion of unsecured Wi-Fi networks are deliberately making use of this functionality.

Update 1 – April 4th 2011

A Dutch court recently ruled that deliberately hacking into a Wi-Fi AP is not a criminal offense, as it does not consider a router to be a computer. My understanding of the court’s logic is that since the router does not store personal information, it should not be classed the same as a PC. Interesting, but I don’t really see how that is relevant. By intentionally breaking into a network, you have direct access to other PCs, and can interfere with the network and do all sorts of nefarious deeds. To be clear, it is still considered a civil offense, so the owners may seek damages, and I would think that if the PCs behind the network were attacked in some fashion it would be a criminal offense. It’s an interesting take, and the more I think about it, the more sense it seems to make.


  1. A blog entry by Mike Egan, advocating the view that accessing unsecured Wi-Fi is not stealing.
  2. Bruce Schneier advocating leaving Wi-Fi networks open for the good of society.
  3. A BBC commentary asking if stealing Wi-Fi is wrong.
  4. John Dvorak advocating the view that the owner of a Wi-Fi network should bear responsibility, not the people who access it.
  5. An interesting SecurityFocus article on the subject of authorization in the context of unprotected Wi-Fi.
  6. A Wi-Fi Alliance survey from 2006, showing that 70% of Americans enable security on their wireless devices, and 83% consider stealing wrong.
  7. A Sophos survey from 2007, showing 54% of people see nothing wrong with accessing unsecured Wi-Fi connections. As with filesharing, when legislation makes an activity the majority of the population perform illegal, it’s time to change the law.
  8. According to WeFi statistics, approximately 30% of Wi-Fi APs in the world are unsecured. I wonder just how many were left unsecured intentionally?
  9. A newspaper hacks into a site by guessing the URL. Most people would consider the onus to be on the site owner to protect their content since it is a public website. Why is this position reversed so often for Wi-Fi networks, when joining an unsecured network takes significantly less effort than guessing a URL?
  10. Windows XP advising that a network is unsecured. You actually have to tick a box to acknowledge this and connect anyway.
  11. An Ars Technica article about a case where having an unsecured Wi-Fi network was no defense.
  12. A man arrested for stealing Wi-Fi in St. Petersburg, Florida.
  13. A man arrested for using free Wi-Fi from a café in Michigan.
  14. A man arrested in London for using an unsecured Wi-Fi network.
  15. Two people cautioned for accessing unsecured Wi-Fi networks.
  16. A German court ruling that Wi-Fi network operators are not responsible for what other users do on their network.
  17. A contrasting case in the UK, where a pub was fined £8000 and found responsible for downloads guests made on its Wi-Fi network.
  18. The UK is planning to outlaw open/unsecured Wi-Fi networks with its Digital Economy Bill.
  19. A Sixth Circuit court ruling on the expectation of privacy for cordless phones.
  20. A criticism of Michigan’s computer crime statute.
  21. An article in the CRi journal, arguing that Wi-Fi roaming should not be illegal.
  22. A Wired article on the proposed House Bill 495.
  23. Federal wiretap law from the USDOJ.
  24. Wi-Fi networks to be required to be protected in India, to prevent terrorists making use of them.
  25. A woman calling in to Leo Laporte who had been using unsecured Wi-Fi for over a year without realizing. Obviously ignore all of his nonsense about it being illegal and stealing.

January 20, 2010

The insecurity of OpenBSD

Filed under: Security — Tags: , , , , , , , , , , , , , , — allthatiswrong @ 11:29 pm

Table of Contents

Secure by default
Security practices and philosophy
No way to thoroughly lock down a system
The need for extended access controls
Extended access controls are too complex


Firstly, I would like to apologize for, and clarify, the title of this article. I wanted a title that would hold attention and encourage discussion while remaining true to the argument I make. I certainly don’t mean to imply that OpenBSD is a horribly insecure operating system – it isn’t. I do however need to highlight that OpenBSD is quite far removed from a secure operating system, and I will attempt to justify this position below.

To start, we must establish at a bare minimum what a secure operating system can be considered to be. Generally, this would be taken to mean an operating system that was designed with security in mind and provides various methods and tools to implement security policies and limits on the system. This definition cannot be applied to OpenBSD, as OpenBSD was not designed with security in mind and provides no real way to lock down and limit a system beyond standard UNIX permissions, which are insufficient.

Despite this OpenBSD is widely regarded as being one of the most secure operating systems currently available. The OpenBSD approach to security is primarily focused on writing quality code, with the aim being to eliminate vulnerabilities in source code. To this end, the OpenBSD team has been quite successful, with the base system having had very few vulnerabilities in "a heck of a long time". While this approach is commendable, it is fundamentally flawed when compared to the approach taken by various extended access control frameworks.

The extended access control frameworks that I refer to are generally implementations of MAC, RBAC, TE or some combination or variation of these basic models. There are many different implementations, generally written for Linux due to its suitability as a testing platform. The most popular implementations are summarized below.

  • SELinux is based on the FLASK architecture, is developed primarily by the NSA, and ships enabled by default with some Linux distributions, such as Fedora. SELinux implements a form of MAC known as Domain and Type Enforcement.
  • RSBAC is developed by German developer Dr. Amon Ott, and is an implementation of the GFAC architecture. RSBAC provides many models to choose from such as MAC, RBAC and an extensive ACL model. RSBAC ships with the Hardened Gentoo distribution.
  • GRSecurity is not primarily an access control framework, but a collection of security enhancements to the Linux kernel, such as JAIL support, PID randomization and similar things, as well as having an ACL and RBAC implementation.
  • AppArmor is a simple yet powerful MAC implementation which relies on pathnames to enforce policies. Relying on pathnames is a weaker approach than that used by the above frameworks; however, this is considered an acceptable trade-off because it is easier to use. AppArmor ships with, and is enabled in, versions of Ubuntu and openSUSE.

There are other simpler implementations such as SMACK and Tomoyo which are officially in the Linux kernel, as well as implementations for other platforms such as TrustedBSD and Trusted Solaris. Each of these access control frameworks provide for additional security to be setup when compared to what can be done with OpenBSD by default.
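To give a taste of what these frameworks express, here is a minimal AppArmor-style profile confining a web server. The binary name and paths are illustrative only; real profiles ship with the distribution and are considerably more complete.

```
# /etc/apparmor.d/usr.sbin.httpd -- illustrative sketch only
/usr/sbin/httpd {
  #include <abstractions/base>

  network inet stream,    # may open TCP sockets

  /var/www/** r,          # may read web content
  /var/log/httpd/* w,     # may append to its logs
  /etc/httpd/** r,        # may read its configuration
  deny /home/** rwx,      # explicitly denied access to user home directories
}
```

Even if the confined daemon is compromised, the policy limits what the attacker can touch. This is exactly the post-compromise damage containment that, as argued below, OpenBSD does not provide.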

Secure by default

OpenBSD is widely touted as being ‘secure by default’, something often mentioned by OpenBSD advocates as an example of the security focused approach the OpenBSD project takes. Secure by default refers to the fact that the base system has been audited and considered to be free of vulnerabilities, and that only the minimal services are running by default. This approach has worked well; indeed, leading to ‘Only two remote holes in the default install, in a heck of a long time!’. This is a common sense approach, and a secure default configuration should be expected of all operating systems upon an initial install.

An argument often made by proponents of OpenBSD cites the extensive code auditing performed on the base system to make sure no vulnerabilities are present. The goal is to produce quality code, as most vulnerabilities are caused by errors in the source code. This is a noble approach, and it has worked well for the OpenBSD project, with the base system having considerably fewer vulnerabilities than many other operating systems.

Used as an indicator to gauge the security of OpenBSD, however, it is worthless. The reason is that as soon as a service is enabled or software from the ports tree is installed, it is no longer the default install, and the possibility of introduced vulnerabilities is equal to that of any other platform. Much like software certified under the Common Criteria, as soon as an external variable is introduced, the certification, or in this case the claim, can no longer be considered relevant.

It is important to note also that only the base system is audited. The OpenBSD ports tree is not audited, and much of the software available in the ports tree is several releases behind current versions, meaning there is a strong possibility that software will be obtained from outside the ports tree. Given that a default install of OpenBSD has all network services disabled, it is very likely that software will be installed or a service enabled if the machine is going to provide any kind of service.

Since the majority of attacks are not against the base system but against software operating at a higher level actively listening over the network, it is likely that if an OpenBSD machine were attacked, it would be through such software. This is where OpenBSD falls down, as it provides no means to protect from damage in the event of a successful attack.

Providing a default secure configuration is an important practice, and one that is employed by the majority of operating systems these days. OpenBSD followed this practice in the early part of the last decade when most other operating systems did not bother, and for that the OpenBSD team should be praised. While it is a good practice it is specious at best to take this as a measure of the actual security OpenBSD provides.

It should also be noted that the OpenBSD team uses a different definition of security vulnerability, limited to vulnerabilities that allow remote arbitrary code execution. While most people would consider a DoS attack or local privilege escalation to be vulnerabilities, the OpenBSD team disagrees. If we use a more generally accepted definition of security vulnerability, OpenBSD suddenly has far more than two remote holes in the default install in a heck of a long time.

Security practices and philosophy

The OpenBSD team seems very reluctant to actually admit security problems and work towards fixing them. One example is this Core Security advisory from 2007. Instead of working to test the extent of the damage that could be caused by a particular vulnerability, they preferred to dismiss it and assume arbitrary code execution was impossible, until Core released proof of concept code showing otherwise. This is similar to behavior observed from many corporations. Unfortunately, judging by the various mailing list threads when a vulnerability is reported, this seems to be typical behavior rather than an exception.

OpenBSD was never designed with security in mind. OpenBSD was started when Theo de Raadt left the NetBSD project, with the goal of providing complete access to the source repositories. The focus on security came at a later stage, along with the “secure by default” slogan. As noted above, a secure operating system is not synonymous with a lack of vulnerabilities, and certainly not with a lack of vulnerabilities limited to the base install. This should be contrasted with the various extended access control frameworks, which despite being patches to an existing project, were designed from the ground up with a focus on security.

OpenBSD by itself provides a feature set comparable to the GRSecurity patch for Linux without the ACL or RBAC implementation. GRSecurity and the Openwall project actually pioneered many of the protections that appeared later in OpenBSD, such as executable space protection, chroot restrictions, PID randomization and attempts to prevent race conditions. OpenBSD is often credited with pioneering many advances in security when this is not the case. OpenBSD tends to add protections much later, and only when absolutely necessary, as the team continues to erroneously believe that eliminating vulnerabilities in the base system is sufficient.

It is also odd that for a project that claims to be focused on security, Sendmail is still the MTA of choice and BIND is still the DNS server of choice. Sendmail and BIND are old, and both have atrocious security records; looking through OpenBSD’s security history, many of the vulnerabilities can be attributed to one or the other. Why would anyone choose these programs for a security focused operating system, when far more secure alternatives designed from the ground up to be secure are available? Examples might include Exim or Postfix and MaraDNS or NSD.

It is interesting to compare OpenBSD to its cousin, FreeBSD. While FreeBSD does not claim to have a focus on security, it is in fact a far more secure operating system than OpenBSD due to its implementation of the TrustedBSD project’s work. FreeBSD implements proper access control lists, event auditing, extended file system attributes, fine-grained capabilities and mandatory access controls, which allow a system to be completely locked down and access controlled as necessary to protect against malicious users or break-in attempts.

Despite the TrustedBSD codebase being open and available for OpenBSD to adopt or improve, they reject it simply because they consider it too complex and unnecessary. Even if the OpenBSD team did not want to implement extended access controls, they could implement proper auditing through the OpenBSM project, which they also reject as unnecessary.
It is no wonder then that when governments or organizations look for a secure operating system, they look to systems that have proper access control lists and auditing, something OpenBSD is not concerned about. A good example of this is China choosing FreeBSD as the base of their secure operating system, as OpenBSD was considered insufficient to meet the criteria.

The library functions strlcpy and strlcat should also be mentioned here. These functions were developed by Todd Miller and Theo de Raadt as a way to eliminate buffer overflows by ensuring strings are always NUL-terminated. This approach is controversial, however, and can actually introduce more problems and security vulnerabilities than it solves. While these functions may have their place, they should certainly not be relied upon, and doing so shows a poor understanding of computer security.

No way to thoroughly lock down a system

This is the main problem with OpenBSD, and what prevents it from being considered a secure system. No matter how high quality the codebase or how free of vulnerabilities, there is no sufficient way to restrict access other than with standard UNIX permissions. OpenBSD project leader Theo de Raadt has openly stated that he is against anything more powerful, such as MAC, being implemented, which is a shame. There is no good reason to avoid implementing extended access controls when the greater security and control they provide is irrefutable.

OpenBSD does offer some basic protections for a running system, namely its chroot functionality, chflags and securelevels. The chroot implementation is a hardened version, much improved over the standard UNIX chroot, but it still falls far short of a proper jail implementation such as that provided by FreeBSD. The consensus among OpenBSD developers and the community is that you can achieve the same result using chroot and systrace, which means relying on a third-party tool to provide a secure design that is present by default in FreeBSD, NetBSD and numerous other unices.

Securelevels are an interesting concept, and they do help with security somewhat. Securelevels can only be increased, not decreased, on a running system. The higher levels prevent writing to /dev/mem and /dev/kmem, removing file immutable flags, loading kernel modules and changing pf rules. These all help to restrict what an attacker can do, but do nothing to prevent reading or changing database records, obtaining user information, running malicious programs and so on. These protections do nothing to stop information leakage. Making files immutable or append-only is a poor option when contrasted with the ability to restrict reading and writing/appending to specific users or processes.

The OpenBSD project and community did have access to a tool for policy enforcement named systrace. Systrace is a third-party tool developed by Niels Provos, and it has never been embraced by the OpenBSD team. Systrace lacks the versatility of a proper MAC implementation, and has weaknesses similar to AppArmor’s, since it relies on pathnames for enforcement. Systrace is a form of system call interposition, which has been shown to be insecure.

The only software even close to a MAC implementation is rejected by the OpenBSD team, and is insecure. Despite this, systrace is still maintained and offered/recommended by the community as the preferred way to sandbox and restrict applications. Given this obvious deficit, it would seem even more prudent for OpenBSD to make use of the TrustedBSD project.

This is the main reason why OpenBSD is unable to offer a secure environment in the event an attacker is successful. Instead of implementing a form of extended access controls and ensuring the system is secure even in the event of a successful attack, they prefer to remove as many vulnerabilities as possible. This approach is naïve at best and arrogant at worst.

The need for extended access controls

The main argument against OpenBSD is that it provides very limited access controls. OpenBSD attempts to remove the source of vulnerabilities by producing quality code, and has such faith in this approach that very little is provided to deal with the situation when a machine is exploited and root access obtained, as is perhaps inevitable. It is this lack of access controls and protection mechanisms that prevents OpenBSD from being the secure system it is often credited as being.

It is also the reason the aforementioned frameworks such as SELinux and RSBAC have an inherent security advantage over any OpenBSD machine. Through MAC, RBAC, TE or whichever other advanced access control model these frameworks use, a level of control is possible above that of traditional DAC systems. In a traditional DAC system, the user has complete ownership over their files and processes, along with the ability to change permissions at their discretion. This leads to many security concerns, and is the reason most attacks can succeed at all.

When a computer is compromised, whether by a drive by download targeting an insecure browser on a user’s computer or by a targeted attack exploiting a server process, the malicious code or user inherits the access of the browser or process that was attacked. The prevalence of the DAC architecture throughout most operating systems is still the primary cause of many security issues today, and with many server processes still running as a privileged user this is a large concern.

It is also something that is hard to fix without changing to a different design paradigm. Many of the technologies developed to help prevent attacks, such as privilege separation, executable space protection and process ID randomization, help, but are not sufficient in the majority of cases. This is why an extended access control framework is needed. With something like SELinux or RSBAC in use, the significance of individual user accounts or processes as an attack vector is decreased.

With these systems every single aspect of the system can be controlled at a fine-grained level. Every file, directory, device, process, user, network connection and so on can be controlled independently, allowing extremely fine-grained policies to be defined. This is simply not possible with current DAC systems, OpenBSD included.

As an example of what is possible with extended access controls, a web server process running as root could be given only append access (as opposed to the general write access available in a DAC system) to specific files in a specific directory, and only read access to specific files in another. If some files need to execute, then each such file (or the interpreter, if a script) can be restricted in a similar way. This alone would prevent web site defacement and arbitrary code execution in a great many cases.
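As a rough illustration, an SELinux policy fragment expressing that restriction might look like the following (the type names are hypothetical, chosen for clarity rather than taken from any real policy):

```
# Web server may only append to its log files, never rewrite them.
allow webserver_t weblog_t:file { append getattr };

# Content may be read but not modified, even by a root-owned server.
allow webserver_t webcontent_t:file { read getattr open };
```

Even if the process runs as root, anything not explicitly allowed by such a policy is denied, which is exactly the guarantee DAC cannot give.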

On present systems using DAC, if a targeted attack is successful and access to the root account is gained, there is nothing the attacker cannot do: run their own malicious executables, alter files, and so on. This is why OpenBSD is necessarily less secure than any system making use of advanced access control frameworks, and also why OpenBSD is not a secure system. While OpenBSD has many innovative technologies that make it harder for an attacker to gain access, it does not provide any way to sufficiently protect a system from an attacker who has gained access.

It is possible, for example, to restrict something like Perl or Python with extended access controls. On OpenBSD, if a user or an attacker has access to perl or python, they can run whichever scripts they like. With extended access controls, it is possible to allow only certain scripts access to the interpreter (and additionally make those scripts immutable), preventing the interpreter from running at all unless called by those specific scripts. There is no equivalent fine-grained control on OpenBSD.

Another way in which extended access controls can help is protecting against users themselves. Even on a desktop system there is a significant security advantage. At the moment most malware requires, or tries to obtain, root privileges to do damage or propagate. What most people don’t realize is that even malware running as a normal user can do significant damage, as it has complete access to the user’s files under the current DAC model. With some form of MAC, if a user decided to demonstrate the dancing pigs problem and run an untrusted piece of malware, it could be prevented from having any access to the user’s files or from making network connections.

Even Windows implements a form of MAC – Mandatory Integrity Control. While not terribly powerful, and not used for much at the moment, it still provides increased protection and allows for more security than an OpenBSD box can provide. If even Microsoft can understand the need for and significance of these technologies after their track record, why is OpenBSD the only project still vehemently rejecting them?

Extended access controls are too complex

Some people are of the view that extended access controls are simply added complexity, which increases the scope for vulnerabilities without providing any meaningful additional security. This position makes little sense. While it is true that adding extended access controls increases complexity, the meaningful increase in security cannot be denied. There are plenty of examples of exploits being contained due to such technology…exploits that would have allowed full access to the system if OpenBSD had been the targeted platform.

It has also been said such systems only serve to shift the attack point. Instead of attacking a particular piece of software, they simply have to attack the access control framework itself. This is simply a myth. While the frameworks themselves can be attacked and even possibly exploited, the increase in security far outweighs any risk. The extended access control framework can be extensively audited and made secure while allowing policies to be enforced. Having one relatively small section of code that is easily maintained and audited and responsible for maintaining security is not a decrease in security, but an increase.

Ideally, a proper extended access control framework would also be formally verified, as I believe is the case with SELinux and RSBAC, based on the FLASK and GFAC architectures respectively. This basically means that these systems have been mathematically proven to meet all of their specifications, making it extremely unlikely that it will be possible for the systems to fail in an unexpected way and be vulnerable to attack.

In almost 10 years, there have been no reported vulnerabilities in these major systems that allowed the framework itself to be bypassed. The times there has been a problem, it has been due to poor policy. The example everybody likes to mention is the Cheddar Bay exploit that Brad Spengler (author of GRSecurity) made public in July 2009. It’s true that this exploit allowed SELinux to be disabled, but that was due to a foolish policy that allowed page zero to be mmapped for the purpose of letting WINE work, and only the RHEL-derived distributions were affected. This is not a valid example of the framework being vulnerable, and it certainly does nothing to discredit the technology as a whole.

Due to limitations of certain hardware platforms, it is possible that with the right kernel-level vulnerability an extended access control framework could be subverted. Such cases are quite rare, however, and with the use of technologies like PaX they become even less likely to succeed. In fact, as of writing this article, I am not aware of any example of an extended access control framework being successfully subverted in this way. There are, however, examples of extended access controls successfully protecting against certain kernel vulnerabilities, such as SELinux preventing a /proc vulnerability that could lead to local root.

Some of these frameworks have been criticized for being too complex, in particular SELinux. While I don’t think this is entirely justified, as the SELinux team has made great progress in making things easier with tools such as setroubleshoot and learning mode, I can understand that it may be a valid concern. Even so, it only applies to a specific implementation. RSBAC, which is just as powerful as SELinux, has far clearer error messages and is much easier to craft a policy for, and other implementations such as GRSecurity’s are simpler still. The point is that the technology is powerful and should be embraced, as the added security advantage is undeniable.

If complexity and user unfriendliness were the main concerns the OpenBSD team had, they could still embrace the idea while making the implementation simple to use and understand. Instead, they flat-out reject it, believing antiquated UNIX permissions are more than enough. Unfortunately, in this day and age that is no longer the case. Security should not be grafted on; it should be integrated into the main development process. This does not mean patching in protections for specific attacks along the way, which is the approach favored by the OpenBSD team. The OpenBSD approach has resulted in a very impressive and stable fortress built upon sand.


While the implementation of various policy frameworks will mature and grow as needed, OpenBSD will remain stale. With a refusal to implement options for properly restricting users or a system in the event an attacker does gain access, the OpenBSD system will be considered a less reliable and trustworthy platform for use as a server or user operating system.

Extended access control frameworks should not be considered a perfect solution, or the be all and end all of security. There are still many situations where they are insufficient such as large applications that necessarily require wide ranging access to properly function. Even so, the level of control these frameworks provide are the best tools we have to secure systems as best we can.

It is interesting to note that even with Linux not really caring about security and having a non-disclosure policy, things still end up being more secure than OpenBSD because of the presence of extended access controls. The ability to restrict access in such a powerful way reinforces that simply trying to eliminate all bugs at the code level, while noble, is an inferior approach.

As much as I am disappointed with the fix-silently-without-disclosure approach to security the Linux kernel project has taken since Greg K-H took over, and with having to rely on other sites to learn about security problems that were fixed, Linux is the only real project making progress with testing and improving extended access control frameworks. With continued development and support, the implementations will become easier to use and the problems will be eradicated until such technology is common, as it should be.

OpenBSD cannot be considered a secure system until it makes some effort towards facilitating locking down a system with more than the standard UNIX permissions model, which has been shown to be insufficient, and stops counting on the possibility that a system will be secure because all bugs have been removed. While well intentioned and accurate to a small extent, that approach is ultimately meaningless if even one vulnerability is present.

The OpenBSD team consists of highly skilled programmers who have an interest in security and have shown excellent skill at auditing code and identifying and fixing vulnerabilities in software. Unfortunately, they have shown no interest in extending OpenBSD to implement extended access controls as almost all other operating systems have done, leaving their system inherently more vulnerable in the event of a successful intrusion. The OpenBSD team serves a useful role in the community, similar to dedicated security analysts or advisors, and for this they should be celebrated.

Note: I am aware that many people use OpenBSD for nothing more than a router, and for this it is indeed ideal; for that use, extended access controls would not provide much benefit. I wrote this argument because many people seem convinced that OpenBSD has superior security in all instances, including as a network server or user operating system. I became tired of reading these comments and of people simply dismissing extended access controls as too complex and as not providing any real security.



  1. SELinux –
  2. RSBAC –
  3. GRSecurity –
  4. AppArmor –
  5. The TrustedBSD Project –
  6. Core Security OpenBSD Advisory –
  7. Marc Espie talking about security complexity and calling MAC security theater –
  8. Theo de Raadt stating that MAC should not be included in OpenBSD –
  9. An older similar argument on the OpenBSD misc mailing list –
  10. A simple argument now out of date, that makes a similar argument without going into detail –
  11. Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools –
  12. Exploiting Concurrency Vulnerabilities in System Call Wrappers –
  13. Bob Beck talking about systrace –
  14. China chooses FreeBSD as basis for secure OS –
  15. An example of SELinux preventing an exploit on RHEL 5 –
  16. Dan Walsh talking about SELinux successfully mitigating vulnerabilities –
  17. The start of the thread where Brad Spender’s Cheddar Bay exploit is introduced and discussed –
  18. Details on the SELinux policy that allowed for the Cheddar Bay exploit –
  19. SELinux preventing a kernel vulnerability from succeeding –
  20. A second example of a vulnerability that SELinux prevented, due to the users not having the required socket access –
  21. A Phrack article detailing the ways current security frameworks can be exploited, and how to prevent against this –
  22. A primer on OpenVMS security, a highly secure OS designed with security in mind at every level –
  23. Presentation introducing Strlcpy and strlcat –
  24. Start of a mailing list thread where strlcpy and strlcat are discussed and criticized –

Update 1 – January 23rd 2010

I have updated the article to talk about the benefit of formal verification, and address the possibility of an EACL framework being bypassed with a kernel vulnerability.

Update 2 – April 23rd 2010

I have updated the article to reflect the correct status of the Openwall project (i.e. not abandoned), thanks to a comment by Vincent.

December 2, 2009

Keyloggers and virtual keyboards/keypads are not secure

Filed under: Security, Tech — Tags: , , , , , , — allthatiswrong @ 2:20 pm

There seems to be a common misconception that online keyboards or keypads are a useful tool in defeating keyloggers. This is only true in the case where the online keyboard is randomized or a one time password is used, which unfortunately is the exception rather than the rule. I am not aware of other people discussing this, so here goes.

Most modern software keyloggers will not only record keystrokes, but will also record the mouse coordinates each time the mouse is clicked. This is exactly why an online keyboard does nothing to negate a keylogger unless it is randomized. If I see a mouse click at x60,y60, and subsequent mouse clicks at x48,y60 and x52,y60, then I can likely work out which keys were clicked.

The keylogger will also record the site that was visited, and since the authentication page is necessarily open to anybody, an attacker can work out the distance between virtual keys and the starting location of the virtual keyboard. The mouse coordinates above can then be translated to mean that the ‘u’ key was clicked first, followed by the ‘q’ and ‘e’ keys.

Some people believe that using the Windows on-screen keyboard or another virtual keyboard program is secure and will protect against keyloggers. If anything, this is worse, as the attacker does not even have to use mouse coordinates to work out which keys were pressed. Virtual keyboard programs tend to send the same WM_KEYUP and WM_KEYDOWN events when a key is clicked as when a hardware key is pressed.

At present, relying on virtual keyboards or keypads for an extra layer of security is useless, unless they are randomized. The only way to be sure is to ensure your system is clean, by following good practices or perhaps using a virtual machine if you wish to be extra cautious.

Unfortunately most banks and secure services can’t be bothered to implement a proper system. Several of the largest banks across Australia, the USA and Europe that I have experience with have only a simple text password field. This is less secure, since it is directly vulnerable to keyloggers. The banks that do have some sort of online keypad tend not to have it randomized in any way, making it vulnerable to the attack described above. This is worse than a simple text field, due to instilling a false sense of security. Only a few banks, generally the smaller ones, have actually implemented a one time password or randomized keypad.

I’m not sure why the sites trying to build a secure authentication system are not aware of this; perhaps they simply don’t care. Perhaps, like so many others, they feel that giving an illusion of security is sufficient. Customers are already protected from fraud by most laws, so it would seem the incentive to increase security would favour the banks rather than the customers. Which means that apparently they are not being hurt enough by fraud (despite it being one of the fastest growing attacks against bank customers), which is interesting.
