All that is wrong with the world…

September 19, 2011

Another minor Facebook security issue

Filed under: Tech — allthatiswrong @ 11:38 pm

I recently noticed a flaw in Facebook's identity verification process. After being asked to confirm my identity simply because I was using a different computer, I apparently took too long to identify my friends in their photos. However, I was able to try two more times before being locked out, and each time Facebook provided the exact same photos with the same selection of people to name. What this means is that an attacker could conceivably attempt to log on to a victim's Facebook account from an unauthorized device to trigger such a prompt, and then take their time to research the answers.

Twenty minutes was the approximate time before my session expired, which across three attempts gives roughly one hour to come up with the answers. This should not prove terribly difficult given how readily people tag their friends or publish photos on blogs. It would be even easier if the victim and attacker had a mutual friend on Facebook, as the attacker would likely be able to see far more photos. In fact, simply searching each name on Facebook could show the corresponding face, allowing the questions to be answered directly.

This isn't a major flaw by any means, but it does seem quite possible that the process as currently implemented could be abused in conjunction with other vulnerabilities to gain access to someone's account. I hope that at the least this will foster some interesting discussion on why what I have described is a non-issue, or that it will result in a fix.

June 23, 2011

OS X – Safe, yet horribly insecure

Filed under: Security, Tech — allthatiswrong @ 2:48 am


I have had this article planned since the end of 2009, and it has existed as a skeleton since then. I wanted to point out the many problems with OS X security and debunk the baseless myth that OS X is somehow more secure. Despite 18 months passing before I managed to finish it, not much seems to have changed. I think I am publishing at an interesting time, however, just as malware for OS X is increasing and Apple are starting to put effort into securing OS X with the soon to be released Lion. There is no FUD in this article, just an analysis of the available evidence and some speculation. My motivation for writing it was the hordes of OS X users who are either blind or have been misled by false advertising into believing OS X is somehow immune to malware and attacks.

It is one of the most prevalent myths among the computer-purchasing public, and to a lesser extent those who work in IT, that Apple computers are far more secure than their Windows and perhaps Linux counterparts. The word myth is precisely accurate, as OS X and other Apple software is among the most vulnerable software on consumer devices today. Apple have an appalling attitude towards security, which often leaves their users highly vulnerable, while hyping their products as secure simply because they are rarely targeted. Before going further, it is important to note the difference between a distributed attack and a targeted attack. A distributed attack is not specific to any one machine or network, but will exploit as many machines as it can that are affected by a particular set of vulnerabilities, of which OS X has had many. An example of a distributed attack is a drive-by download, where the target is unknown, but if the target is vulnerable the exploit should work. Distributed attacks are used to infect large numbers of machines easily, which are then generally joined into a botnet to earn cash.

A targeted attack is more specific: a single machine or network is attacked. A targeted attack is not blind and is tailored to the machine being attacked. Distributed attacks such as drive-by downloads are impersonal by nature because they must compromise thousands of machines, while the motivation behind a targeted attack tends to be more personal, perhaps to steal confidential files or install some sort of backdoor. The argument always seems limited to distributed attacks, which admittedly are nowhere near the problem they are for Windows. This is more than likely because Apple has a very low share of the PC market, simply making it less than worthwhile to invest in writing software to attack as many machines as possible when money is the motivation. I go into this in further detail in a later section.

Using a Mac may certainly be a safer choice for a lot of people, as despite being vulnerable they are not targeted. However, this is not the same as Macs being secure, something Eric Schmidt erroneously advised recently. I may be able to browse impervious to malware on a Mac at the moment, however I personally would not be comfortable using a platform so easily compromised if someone had the motivation to do so. In this article I address just why OS X is so insecure, covering both the technical shortcomings of OS X and the policies Apple hold as a company that contribute to the situation.

A trivial approach to security

One of the most annoying claims made by OS X (and Linux) users is that the UNIX heritage results in a far more secure design, making the platform more immune to malware. Nothing could be further from the truth. The traditional UNIX permission model is significantly less granular than that of Windows, not even having a basic ACL. The UNIX design came from a time when security was less of an issue and not taken as seriously as it is now, and so it does the job merely adequately. Windows NT and its successors were actually designed with security in mind, and it shows. Windows did not become such a target for malware because of a poor security design; it became one because the security functionality was never used. When everybody runs as Administrator with no password, the included security features lose almost all meaning. Point for point, Windows has a more secure design than OS X, and if used properly the damage from an attack can be contained far more effectively on a Windows machine than on an OS X machine, UNIX heritage or not.

A lot of OS X users seem to have this idea that Apple hired only the best of the best when it came to programmers while Microsoft hired the cheapest and barely adequately skilled, resulting in OS X being a well designed piece of software completely free of vulnerabilities. In reality OS X machines have always been easily exploited and are among the first to be compromised at various security conferences and competitions. The vast majority of exploits that have been publicly demonstrated could have been used to write a successful virus or worm. Given how lax Apple is with security updates and any kind of proactive protection, any prospective attacker would have quite a field day. The only reason this has not happened yet is not because Apple is magically more secure; it is because no one has bothered to take the opportunity. It isn't as if no OS X viruses exist. Even setting aside the poor approach Apple takes to security, there would be no basis for claiming the design of OS X is more secure than that of other platforms.

Apple is generally months behind in fixing publicly disclosed vulnerabilities, often only doing so before some conference to avoid media reporting. They often share vulnerabilities in core libraries with other UNIX-like systems, Samba and Java being two examples. They are extremely difficult to deal with when trying to report a vulnerability, seemingly not having qualified people to accept such reports. Even if they do manage to accept a report and acknowledge the importance of an issue, they can take anywhere from months to a year to actually fix it properly.

People always get caught up in the hype surrounding viruses and how OS X is seemingly impervious, forgetting that viruses are not the only type of threat. Personally, malware is a minor threat to me, with the impact being negligible as long as you follow basic security practices and can recognize when something looks out of place. The idea of someone targeting me specifically on a network, either because the platform is so vulnerable that it is child's play or because they want something from my machine, is far more worrying. This is significantly harder to protect against on OS X, where you cannot rely on the manufacturer to issue patches in anything resembling a prompt timeframe, or even to acknowledge that vulnerabilities exist. Given that this is the Apple philosophy, it is hard to pretend to be safe on an Apple machine.

Examples and details

Every major OS except OS X has a full implementation of ASLR, stack canaries, executable space prevention, sandboxing and, more recently, mandatory access controls. OS X doesn't even try to implement most of these basic protections, and the ones it does, it does poorly. I don't understand why security folk use OS X at all, given its plethora of problems. Yes, the machines are pretty, yes it is UNIX, and yes you are very safe using it, but given that security folk tend to be working on various exploits and research that they would want to keep private, using a platform so vulnerable to targeted attacks would not seem to be the smartest move.

Apple to date do not have a proper DEP or ASLR implementation, two well known technologies that have been present in other OSes for the last five years. Apple did not bother to implement DEP properly except for 64-bit binaries, and even then there was no protection for the heap even if it was marked as non-executable. Apple technically implements ASLR, but in such a limited way that they may as well not have bothered: the implementation is restricted to library load locations, while the dynamic loader, heap, stack and application binaries are not randomized at all. Without randomizing anything except library load locations, the implementation is useless aside from perhaps preventing some return-to-libc attacks. Using the paxtest program from the PaX team (the same team who initiated ASLR protections on PCs) we can see that OS X fails most of these tests (Baccas P, Finisterre K, H. L, Harley D, Porteus G, Hurley C, Long J. 2008). Apple's decision not to randomize the base address of the dynamic linker DYLD is a major failing from a security point of view; Charlie Miller has demonstrated how a ROP payload can be constructed using only parts of the non-randomized DYLD binary. Snow Leopard unfortunately did not improve things much, except to add DEP protection to the heap, still only for 64-bit applications. This means that most of the applications that ship with OS X (including browser plugins) are far easier to attack than applications on pretty much any other platform.
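Library-load randomization, the one piece of ASLR OS X did implement, is easy to probe directly. The sketch below is Linux-specific (it reads /proc/self/maps; on OS X you would inspect vmmap output instead) and is only an illustrative probe, not a substitute for the paxtest suite cited above: it spawns two fresh interpreter processes and reports where libc was mapped in each. With ASLR active, the two base addresses will almost always differ between runs.

```python
import re
import subprocess
import sys

# Child snippet: print the base address of the first libc mapping.
SNIPPET = (
    "import re\n"
    "maps = open('/proc/self/maps').read()\n"
    "m = re.search(r'([0-9a-f]+)-[0-9a-f]+ .*libc', maps)\n"
    "print(m.group(1) if m else 'not-found')\n"
)

def libc_base() -> str:
    """Spawn a fresh interpreter and report where libc was mapped."""
    out = subprocess.run([sys.executable, "-c", SNIPPET],
                         capture_output=True, text=True)
    return out.stdout.strip()

if __name__ == "__main__":
    # Two separate runs; differing addresses indicate load randomization.
    print(libc_base(), libc_base())
```

Running the equivalent check against the OS X dynamic loader of the era shows the opposite: DYLD sits at the same address every time, which is exactly what makes Miller's ROP construction possible.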

The firewall functionality in OS X is impressive, but hardly utilized. The underlying technology, ipfw, is powerful and more than capable of protecting OS X from a wide variety of threats; however, Apple barely makes use of it. The OS X firewall is disabled by default and application-based, meaning it is still vulnerable to low level attacks. Even when the option to block all incoming connections was set it did not actually do so, still allowing incoming connections for anything running as the root user, with none of those listening services being shown in the user interface.

Apple introduced rudimentary blacklisting of malware in Snow Leopard via XProtect.plist, which works so that when files are downloaded via certain applications, those applications set an extended attribute which indirectly triggers scanning of the file. However, many applications such as IM or torrent clients do not set the extended attribute, thus never triggering the XProtect functionality. A fine example of this is the trojaned copy of iWork which was distributed through torrents and never triggered XProtect. At the moment XProtect can only detect very few malware items, although in response to the MacDefender issue it is now updated daily. Only hours after Apple's update to deal with MacDefender was released, a new variant that bypasses the protection was discovered, highlighting the shortcomings of the XProtect approach. Since it relies on an extended attribute being set in order to trigger scanning, any malware writer will target avenues of attack where this attribute will not be set, and for drive-by download attacks it is completely useless. Still, it is a good first step, with Apple acknowledging the growing malware problem on OS X and starting to protect their users.
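The gating logic described above can be sketched in a few lines: scanning keys entirely off the quarantine extended attribute, so anything downloaded by an application that never sets it is simply never checked. The function name here is illustrative, not Apple's actual API:

```python
# Sketch of the XProtect trigger: the blacklist scan only runs when the
# downloading application has stamped the file with the quarantine
# extended attribute.
QUARANTINE_ATTR = "com.apple.quarantine"

def should_scan(xattrs: set) -> bool:
    """Scan is triggered only if the quarantine attribute is present."""
    return QUARANTINE_ATTR in xattrs

browser_download = {QUARANTINE_ATTR}   # Safari etc. set the attribute
torrent_download = set()               # many clients never set it

assert should_scan(browser_download)       # blacklist check runs
assert not should_scan(torrent_download)   # file is never examined
```

This is why the torrent-distributed iWork trojan sailed straight past the protection: the design scans based on how a file arrived, not on what the file is.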

It has been a shame to see the sandboxing functionality introduced in Leopard not being utilized to anywhere near its full capacity. Apple are in a unique position: by controlling both the hardware and the operating system they have created a truly homogeneous base environment. It would be very easy to ship carefully crafted policies for every application in the base system, severely limiting the damage that could be caused in the event of an attack. They could go even further and import some of the work done by the SEDarwin team, allowing for even greater control over applications. They would not have to present any of this to the user, and would probably prefer not to, yet doing so would put them far ahead of all the other operating systems in terms of security at this point.

Security-wise, Apple is at the level Microsoft was at in the late 90's and early 2000's: continuing to ignore and dismiss the problems without understanding the risks, and not even bothering to implement basic security features in their OS. With an irresponsible number of setuid binaries, unnecessary services listening on the network with no default firewall, useless implementations of DEP and ASLR, and a very poor level of code quality, with many programs crashing under a trivial amount of fuzzing, Apple are truly inadequate at implementing security. This still doesn't matter much as far as distributed attacks go, at least not until Apple climbs higher in market share, but I really dislike the idea of someone being able to own my system just because I happened to click on a link. At least with Apple giving regular updates via XProtect and including a malware help page in Snow Leopard, we have evidence that they are starting to care.

An appalling record

A great example of Apple's typical approach to security is the Java vulnerability that, despite allowing remote code execution simply by visiting a webpage, Apple left unpatched for more than six months, only releasing a fix when media pressure necessitated that they do so. When OS X was first introduced, the system didn't even implement shadow file functionality, using the same password hashing AT&T used in 1979 and simply relying on a pretty interface to obscure the password. This is indicative of the attitude Apple continues to have to this very day: favoring convenience and aesthetics at the expense of security, and only changing when pressure necessitates it. One of the most interesting examples of this is that regularly, just before the pwn2own contests where Apple's insecurity is put on display, they release a ton of patches. Not when they are informed of the problem and users are at risk, but when there is a competition that gets media attention and may result in them looking bad.

Being notoriously hard to report vulnerabilities to does not help either. If a company does not want to hear about problems that put their machines, and thus their customers, at risk, it is hard to say that they are taking security seriously. As things stand, if you try to report a vulnerability to Apple it will likely be rejected with a denial; after retrying several times it may be accepted, and a patch may be released any number of weeks or months later. Apple still have a long way to go before demonstrating they are committed to securing OS X rather than maintaining an image that OS X is secure. Having a firewall enabled by default would be a start, something Windows has had since XP. Given the homogeneous nature of OS X this should be very easy to get off the ground, and it may well be the case with Lion.

The constant misleading commercials are another point against Apple, telling users that OS X is secure and does not get viruses (implying that it cannot) or have any security problems whatsoever. Not only do they exaggerate the problem on Windows machines, they completely ignore the vulnerabilities OS X has. The most recent evidence of Apple's aforementioned attitude can be seen in their initial response to the MacDefender malware. Rather than address the issue and admit that a problem existed, they kept their heads in the sand, even going so far as to instruct employees not to acknowledge the problem. To their credit, Apple did change their approach a few days later, issuing a patch and initiating a regularly updated blacklist of malware. Their blacklist implementation has flaws, but it is a start.

As much as users and fans of Apple may advocate the security of OS X it is very important to note that OS X has never implemented particularly strong security, has never had security as a priority and is produced by a company that has demonstrated over and over that security is a pain which they would rather ignore, leaving their users at risk rather than acknowledge a problem.

Malware for OS X increasing

Doomsday for OS X has long been predicted, although the predictions have lacked a precise time reference. An article by Adam O'Donnell used game theory to argue that market share is the main factor in malware starting to target a platform, the result of a tradeoff between the target's lack of protection and a high enough percentage of users to make the investment worthwhile. The article assumed that all PCs were running AV software, with an optimistic 80% detection success rate. If the PC defense rate were higher, then OS X would become an attractive target at a much lower market share: at around 90% accuracy, OS X would be a target at roughly 6% market share. The threshold estimated in the article is just under 17%, and just as some countries have reached around that number, we are starting to see an increase in malware for OS X. It may be a coincidence, but I will not be surprised if the trend continues. Given Apple's horrid security practices, it is going to increase quite noticeably unless Apple changes their act. Aside from market share, another important factor is the homogeny of the platform, making OS X an ideal target once the market share is high enough.
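A simplified version of that tradeoff can be written down directly: attacking Macs becomes rational once the unprotected Mac share exceeds the fraction of PCs that AV fails to catch. Solving M = (1 - d)(1 - M) for the threshold share M gives the function below. This is my toy restatement, not O'Donnell's actual model; it happens to reproduce the just-under-17% figure for 80% detection, while his fuller model is what yields the lower ~6% threshold at 90% detection.

```python
def mac_threshold(detection_rate: float) -> float:
    """Mac market share above which Macs become the rational target,
    under the toy model M = (1 - d) * (1 - M)."""
    leak = 1.0 - detection_rate   # fraction of attacks PC AV misses
    return leak / (1.0 + leak)

print(round(mac_threshold(0.80) * 100, 1))  # → 16.7 (% market share)
```

The qualitative point survives any refinement of the model: the better PCs are defended, the smaller the Mac market share needs to be before Macs become worth attacking.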

A lot of people say they will believe the time for OS X has come when they see an equivalent of a Code Red type of worm, except that this is never going to happen. Worms have shifted from being motivated by fame to being motivated by money, with the most recent OS X malware being linked to crime syndicates. With the security protections available in most OSes these days (aside from OS X) being more advanced, it takes more skill to write a virus that infects at the scale of something like Code Red, and the people who do have that skill are not motivated to draw attention to themselves. These days malware is purely about money, with botnets that go out of their way to hide themselves from users. Botnets on OS X have been spotted since 2009, and OS X is going to be an increasing target for these types of attacks without ever making the headlines the way Windows did in the 90's.

Another contributing factor that should not be overlooked is the generally complacent attitude of OS X users towards securing their machines. Never having faced malware as a serious threat, and being shoveled propaganda convincing them that OS X is secure, most OS X users have no idea how to secure their own machines, with many unable to grasp the concept that they may be a target for attack. The MacDefender issue already showed how easy it is to infect a large number of OS X users. Windows users are at least aware of the risk and will know to take their computer in to get fixed or to run an appropriate program, whereas OS X users seem to simply deny the very possibility. As Apple's market share increases, the ratio of secure to vulnerable users slides further apart: with more and more people buying Apple machines and having no idea how to secure them, or that they even should, there are that many more easy targets. Given the insecurity of OS X and the naivety of its users, I do think it is only a matter of time before OS X malware becomes prevalent, although not necessarily in a way that will make the news. The problem is going to get worse as users keep getting infected without realizing it, all while believing their machines are clean and impervious to risk.

People also have to get over the idea that root access is needed for malware to be effective. Root access is only needed if you want to modify the system in some way so as to avoid detection. Doing so is by no means necessary, however, and a lot of malware is more than happy to operate as a standard user, never once raising an elevation prompt while silently infecting or copying files, sending out data, doing processing, or whatever malicious thing it may do.
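The point is easy to demonstrate harmlessly: without any elevation at all, a process running as the logged-in user can already enumerate (and therefore read, copy, or exfiltrate) everything that user actually cares about. A trivial sketch:

```python
import getpass
import os

# No elevation prompt, no root: a process running as the logged-in user
# already has full access to that user's own files - documents, mail,
# browser profiles - which is everything most malware wants.
home = os.path.expanduser("~")
visible = os.listdir(home)
print(f"running as {getpass.getuser()!r}; "
      f"{len(visible)} entries readable in {home}")
```

Root only buys stealth and persistence; the data worth stealing was never protected from the user's own processes in the first place.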

Macs do get malware, even if in significantly smaller amounts than Windows. Given the emergence of exploit creation kits for OS X, malware is inevitably going to increase on the platform. Even if it never gets as bad as it was for Windows in the 90's, it is important not to underestimate the threat of a targeted attack. Rather than encouraging a false sense of security, Apple should be warning users that malware is a potential risk and teaching them how to look for the signs and deal with it. The malware entry in the Snow Leopard help is a small step in the right direction. There isn't much Apple can do to prevent targeted attacks, except fixing their OS and being proactive about security in the first place.

Much room for improvement

One thing OS X did get right was making it harder for keyloggers to work. As of 10.5 only the root user can intercept keyboard input, so any app making use of EnableSecureEventInput should theoretically be immune to keylogging. This requires the developer to specifically make use of that function, although it is automatic for Cocoa apps using a secure text field (NSSecureTextField). It does not completely prevent keyloggers, as applications not making use of the functionality remain vulnerable, as was the case with Firefox and anything else not using a secure text field. And of course, if remote code execution is possible that is a very minor obstacle, and given the propensity of privilege escalation attacks on OS X it would not be hard to install a keylogger as root. Still, this is a great innovation and something that I would like to see implemented in other operating systems.

Apple asked security experts to review Lion, which is a good sign, as long as they actually take the advice and implement protections from the ground up. Security is a process which needs to be implemented from the lowest level, not just slapped on as an afterthought as Apple have tended to do in the past. I think the app store in Lion will be interesting. If Apple can manage to control the distribution channels for software, then they will greatly reduce the risk of malware spreading. At the moment most software is not obtained via the app store and I don't ever expect it to be; still, the idea of desktop users being in a walled garden would be one solution to the malware problem.

Lion is set to have a full ASLR implementation (finally), including 32-bit applications and the heap. Along with more extensive use of sandboxing, it looks like Apple is starting to actually lock down their OS, which suggests they understand the threat is growing. It will be interesting to see if Apple follows through on the claims made for Lion, or if they fall short much as happened with Snow Leopard. Personally I think Lion is going to fall short while the malware problem for OS X gets serious, and that it won't be until 10.8 that Apple takes security seriously.

Update 1 – June 28th 2011

Updated minor grammatical mistakes.

It is amazing the knee-jerk response I have seen to this article, with people saying how there are no viruses for OS X, which is something I acknowledge above. I guess people don't care if they are vulnerable as long as there are no viruses? Then people attack the claim that OS X has no ACL, which is a claim I never made. I guess the truth hurts, and attacking men made of straw helps to ease the pain.


  1. – A list of OS X vulnerabilities.
  2. – Eric Schmidt on OS X.
  3. – A list of OS X viruses from Sophos.
  4. Baccas P, Finisterre K, H. L, Harley D, Porteus G, Hurley C, Long J, 2008. OS X Exploits and Defense, p. 269-271.
  5. – Charlie Miller's talk on Snow Leopard security.
  6. – Apple releases an update to deal with MacDefender.
  7. – A variant of MacDefender appeared hours after Apple's update was released.
  8. – Charlie Miller talking about setuid programs in OS X.
  9. – Apple taking 6 months to patch a serious Java vulnerability.
  10. – Apple using password hashing from 1979 in lieu of a shadow file.
  11. – Misleading commercial 1.
  12. – Misleading commercial 2.
  13. – Misleading commercial 3.
  14. – Apple representatives told not to acknowledge or help with OS X malware 1.
  15. – Apple representatives told not to acknowledge or help with OS X malware 2.
  16. Adam O'Donnell's article – When Malware Attacks (Anything but Windows).
  17. – OS X market share by region.
  18. – MacDefender linked to crime syndicates.
  19. – Many users hit by MacDefender.
  20. – The first exploit creation kits for OS X have started appearing.
  21. – First OS X botnet discovered.
  22. – A Firefox bug report about a vulnerability to keylogging.
  23. – Apple letting security researchers review Lion.

Update 2 – August 17 2011

A delayed update, but it is worth pointing out that this article is now largely out of date. Apple has indeed fixed most of the security problems with the release of Lion. At the least this article is an interesting look back, and shows why Mac users should upgrade to Lion and not trust anything before it. Despite Lion being technically secure, it is interesting to note that Apple's security philosophy is still lackluster. Here is an interesting article on the lessons Apple could learn from Microsoft, and an article showing just how insecure Apple's DHX protocol is, and why the fact that it is deprecated doesn't matter.

November 22, 2010

Adobe Reader X

Filed under: Security — allthatiswrong @ 11:20 am

A few days ago the long awaited Adobe Reader X was released. Given that Adobe Reader and Flash have been the primary attack vector on PCs for the last few years (with them being responsible for over 80% of attacks in '09 alone), a secure version of Reader is long overdue. It is a sad state of affairs that a PDF viewer needs a sandbox in the first place, but given the reality of the situation it is good to see Adobe finally stepping up. The question is, did they do a good job? Adobe have an atrocious track record when it comes to security, but going by their blog it seems they worked closely with experts, so hopefully it is as good as can be expected.

The initial impressions upon first using Reader X were not great. The setup file is quite a bit larger, 35MB compared to 26MB for 9.4. Nothing really seems to have changed except for the sandbox and the ability to comment on PDFs built into the reader, which I guess is nice. The toolbar seems to be using a different widget set and now looks cartoonier, which I don't like at all. I had originally thought the toolbar had disappeared from the browser plugin, which would make it harder to navigate pages, but it is actually a minimal toolbar on autohide at the bottom of the screen. While not intuitive, it is a big improvement. For some reason the installer still places a shortcut on your desktop as it has for years. I've never understood that, as I have no desire to stare at a grey screen.

The security changes seem interesting. The reader is now marked as a low-integrity process in addition to being sandboxed, as well as having full DEP and ASLR support. There are no customization options for the sandbox that I could find, but then none are really needed. The sandbox is only for the Windows version, so OS X, Linux and Android users are left unprotected. As per the Adobe blog post above, all write attempts are sandboxed by default. This should effectively stop most drive-by download attempts in their tracks. It isn't terribly easy to tell whether protected mode is on or not, requiring you to view the advanced properties of the PDF you are currently viewing. It seems, however, that Adobe is aware of this and other problems and will address them in future releases. I am actually having trouble finding any further detailed information on the new protected mode, as clicking on the link on the website simply shows me a nice generic image of Adobe Reader.

I often see the point come up that using an alternative PDF reader such as Foxit or Sumatra will provide better security. This is simply false. Neither Sumatra nor Foxit have DEP or ASLR support (which is trivial to implement), and both act buggy if forced to run as a low-integrity process. They also lack an equivalent to the Enhanced Security Mode present in Adobe Reader since 9.3, which requires confirmation for certain actions. PDF exploits are often reader-independent, in which case Adobe Reader actually has better mitigation techniques than any other reader. The gain in security through obscurity from using these other readers is far less than the gain from the mitigation techniques present in Reader X. With the introduction of a sandbox, Adobe Reader X is clearly the most secure choice at the moment. Beyond security, the other readers are simply not good enough to be a replacement yet, as they have problems with overly large files or lack compatibility entirely for features such as forms.

I wonder when Flash will gain a similar sandbox, as it is another primary attack vector these days, if not more so than PDFs. Flash is still being targeted, such as in this recent attack, yet I have heard no plans for Adobe to make security a priority for Flash as they have for Reader, which is kind of strange.

What the last few years of PDF and Flash exploits have shown is that DAC continues to be a poor access control framework for a modern desktop environment. There is simply no reason that a program started by a user should inherit the full rights of that user. If we had an easy to use MAC implementation that was mostly transparent, then most of these exploits would not be an issue; in fact, they probably would not exist, having never been possible in the first place. The industry seems to be slowly heading in that direction, and features like sandboxing and integrity levels for processes are a good start. At least they will suffice until operating systems allow us to easily sandbox risky or untrusted applications instead of relying on each program to implement its own version. In the meantime, applications that are not sandboxed can be confined using Sandboxie, although it is not as effective on 64-bit versions of Windows due to Kernel Patch Protection. I am not aware of any sandboxing applications on OS X, and of course on Linux you can use a jail or one of the main MAC implementations.

November 5, 2010

Thoughts on Firesheep

Filed under: Security — allthatiswrong @ 3:25 am

Over the last week there has been a lot of discussion about the release of the Firesheep addon for Firefox. Firesheep made the news because it allows anyone to impersonate someone on the same network on the vast majority of websites on the net. This is known as session hijacking, or "sidejacking". The problem occurs because most websites only encrypt the login process, which prevents people from sniffing your username and password, but then redirect to a non-SSL site, which allows people to steal your session cookie – the unique identifier that tells a site who you are after you have logged in. Someone who has hijacked your session does not have your username and password, but will still be logged in as you.
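The fix on the website side is to serve the whole session over SSL and mark the session cookie Secure, so the browser never sends it in the clear where Firesheep can sniff it. A minimal sketch of checking a Set-Cookie header for the flag that matters (the cookie names and values here are made up):

```python
from http import cookies

def cookie_is_sniffable(set_cookie_header: str) -> bool:
    """True if the session cookie would be sent over plain HTTP,
    i.e. the Secure flag is missing from the Set-Cookie header."""
    jar = cookies.SimpleCookie()
    jar.load(set_cookie_header)
    morsel = next(iter(jar.values()))
    return not morsel["secure"]

print(cookie_is_sniffable("sid=abc123; HttpOnly"))          # → True
print(cookie_is_sniffable("sid=abc123; Secure; HttpOnly"))  # → False
```

Encrypting only the login page protects the password while leaving the cookie exposed, which is exactly the gap Firesheep exploits.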

There has been a bit of controversy over Firesheep because some people are convinced that the person who wrote the extension should be held accountable, or at the least did something wrong. Nay. Those people are simply misguided. Releasing a tool like Firesheep is the essence of responsible disclosure. The generally agreed process for dealing with exploits is to contact the developer privately to work on a fix, and to reveal the exploit along with the fix so both the company and the researcher get credit. If the developer refuses to fix the flaw, a proof of concept exploit is released to push the developer into doing the right thing. Firesheep is simply another proof of concept exploit for a problem that has been around for many, many years. It isn’t like people were not made aware at least once before.

Facebook seems to have been getting the most press, although most sites are vulnerable: most web email services, Amazon, other social networking sites, forums… pretty much anything you can think of. The strange thing is that most people don’t seem to care, generally because they don’t understand the risk or think they won’t be a target. This is why the release of something like Firesheep is a good thing, a fantastic attempt at actually illustrating the threat. No doubt its use will become more widespread, and as more people are taken advantage of there will be a greater push for security that will benefit everyone.

I found it interesting that somebody went to the trouble of writing FireShepherd, a tool that exploits a bug in Firesheep to prevent it from working. It doesn’t accomplish anything, and will likely be rendered obsolete by the next Firesheep release. For something like FireShepherd to be useful it would have to pollute the airwaves with fake sessions, although even that would not work terribly well.

I wanted to clear up some misconceptions that have sprung up in the wake of Firesheep’s release, as a lot of bad advice seems to be circulating. First of all, logging out does not automatically make you safe. Many websites do not invalidate the session upon logout, which means that even after you have logged out, whoever hijacked your session can continue to use it.
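To make the point concrete, here is a minimal, hypothetical sketch of what proper logout handling looks like on the server side. The function names and the dict-based store are invented for illustration; a real site would use its session database, but the key idea is the same: logout must delete the session on the server, not just clear the browser cookie.

```python
import secrets

sessions = {}  # token -> username; stands in for a real session store

def login(username):
    # Issue an unguessable session token and record it server-side.
    token = secrets.token_hex(16)
    sessions[token] = username
    return token

def logout(token):
    # Invalidate server-side: any hijacked copy of the cookie dies too.
    sessions.pop(token, None)

def whoami(token):
    # None means the token is no longer (or never was) a valid session.
    return sessions.get(token)

tok = login("alice")
print(whoami(tok))  # alice
logout(tok)
print(whoami(tok))  # None -- a sniffed copy of the cookie is now useless
```

A site that merely deletes the cookie in the browser, while leaving the server-side entry alive, gives the attacker a session that keeps working after the victim "logs out".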

You are not automatically protected by using an encrypted wireless network. WEP uses a single shared key, so it does not protect authorized clients from each other, and besides, it can be cracked in seconds. WPA/WPA2-PSK uses a Pre-Shared Key, which means anyone with that key can decrypt the traffic of the other clients. Firesheep may not work out of the box, but it would not be difficult to adapt it to do this. Even on a wired network you may not be safe, due to ARP poisoning or MAC flooding attacks.

Some people have recommended using a VPN or SSH tunneling, which are among the best solutions. They are not immune, but it is a whole lot less likely that someone is sniffing anything from your ISP’s connection upwards than it is that some douchebag at Starbucks is looking for someone to take advantage of. Either of these solutions is the best option at present, as they encrypt all traffic up to a point where only employees or authorized personnel would be in a position to take advantage of your unencrypted sessions.

What most people have been suggesting is to use an extension that forces sites to use SSL all the time. The two most suggested addons are HTTPS Everywhere by the EFF and Force-TLS. NoScript also has this capability. There is a similar addon for Chrome called KB SSL Enforcer, however it is quite insecure at present. Due to limitations of the Chrome extensions framework, every request goes out over HTTP first, so session cookies will still be leaked and can be abused.

Each of these addons can make use of HSTS, which relies on server support. If the server does support it, then the entire session can be encrypted. Unfortunately not many sites support this at present, and forcing an SSL session by rewriting HTTP requests is not ideal. Some websites will break if you try it, such as chat no longer working in Facebook. Some sites may not load at all because they depend on HTTP content for various reasons, such as hardcoded links or content from another domain. Even if a website supports wholly SSL sessions there may still be information leaks, such as AJAX requests or a fallback to HTTP, as happened to Google a few years ago.

The only decent solution is for websites to implement sitewide SSL: secure cookies, and SSL for everything from the login onwards. No fallback to plain HTTP, at all. Of course, this approach has drawn some criticism. People claim that SSL is too expensive, but this simply isn’t true. Google showed earlier this year that making SSL the default option for Gmail increased server load by only 1%. If Google can manage this, Facebook and Amazon sure as hell can. People say performance will suffer because you can’t cache with SSL, but this is false; you just have to set Cache-Control: public. Then there are people who complain about needing a dedicated IP, which is also false. Basically, in this day and age, there is very little reason not to encrypt the entire session for anything remotely valuable. It is appalling that so many companies and sites have ignored this problem for so long, and I think it is great that Firesheep has brought attention to the problem, again.
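As a concrete illustration of the “secure cookies” part, here is a sketch using Python’s standard library; the cookie name and token value are hypothetical. The `Secure` attribute keeps the cookie off plain HTTP entirely, and `HttpOnly` keeps it out of reach of page scripts, which together close off the sidejacking avenue for that cookie.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"          # hypothetical session token
cookie["session_id"]["secure"] = True    # never sent over plain HTTP
cookie["session_id"]["httponly"] = True  # not readable from JavaScript

# This string is what the server would emit in a Set-Cookie header.
print(cookie["session_id"].OutputString())
```

Of course, `Secure` only helps if the whole session actually runs over SSL; a site that sets the flag but still serves pages over HTTP will simply break, which is why the flag and sitewide SSL go together.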

Of course, I don’t expect anything to change, although in another few years there will probably be a similar tool released and the discussion will start up again, at which point there truly won’t be any excuse for unencrypted sessions by default to be so prevalent.

Update – November 11th 2010

A few days after I wrote the above, a tool called BlackSheep was released which aims to help people detect someone using Firesheep on a network. The tool does what I suggested above, sending out fake sessions and reporting when one is accessed. I haven’t looked at it in detail, but I would think it could easily be mitigated, either by learning to identify the type of sessions BlackSheep sends out or by finding some other way to detect or disable it. This just gets into a never ending cat and mouse game, all the while ignoring the real problem.

I was also made aware of a Firefox addon, SSLPasswdWarning. It alerts you when your password or other sensitive information is about to be transmitted over an insecure connection, which is useful in helping to determine whether there is a risk.

October 11, 2010

Thoughts on the recent soft hyphen exploit

Filed under: Security — Tags: , , , , — allthatiswrong @ 3:00 am

Recently there has been discussion of crafting malicious URLs by making use of the soft hyphen character (U+00AD). The soft hyphen is only meant to be rendered if and when text breaks onto a new line, which is almost never the case with URLs. The problem is not so much a security risk on an individual level; rather, incorporating the soft hyphen character into URLs allows some spam catching software to be bypassed.
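The effect is easy to demonstrate. The following sketch uses Python’s built-in IDNA handling (whose nameprep step maps the soft hyphen to nothing): a hostname with an embedded soft hyphen compares as different from the plain one, so naive string-matching filters miss it, yet both map to the same ASCII domain.

```python
plain = "example.com"
shy = "exam\u00adple.com"  # contains an invisible soft hyphen (U+00AD)

# The strings differ, so a blacklist keyed on "example.com" won't match...
print(plain == shy)          # False
print(len(plain), len(shy))  # 11 12

# ...yet IDNA's nameprep maps U+00AD to nothing, so both strings
# encode to the very same ASCII domain on the wire.
print(plain.encode("idna"))  # b'example.com'
print(shy.encode("idna"))    # b'example.com'
```

This is exactly the gap the exploit lives in: filters compare the raw string, while resolvers and browsers operate on the normalized form.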

I think the real problem this issue highlights is that it is still unsafe in 2010 to trust website links. This issue actually reminded me of the Unicode URL attack which came to light in 2005, where it was possible to register a domain that looked like a common domain using different characters. This soft hyphen attack could allow for some of these malicious Unicode domains to be treated as legitimate.

Perhaps the first step is to educate people about SSL certificates, and have them actually check. But it isn’t enough that people simply check that the domain is trusted, as it can be easy to get a domain automatically trusted by most browsers. Instead, we would have to get users to examine the certificate details for every important site they visit. This is unlikely to happen, and since it shifts responsibility to the user, it is not so great a solution.

An easy solution may be to allow only a very restrictive set of characters in URLs. At present a domain with soft hyphens encoded within it appears as a normal domain in Firefox 4.06b, Opera 10.62, IE 9 and Chrome 6.0.472.63. This could be easily solved by forcibly rendering the soft hyphen character or by indicating in some way that the URL contains special characters. Likewise, there should be an indicator when a URL combines different character sets.
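A browser-side version of this restrictive check is simple to sketch. This hypothetical validator just refuses any hostname outside a conservative ASCII whitelist; real browsers would need to handle legitimate internationalized domains more gracefully, but the principle is the same.

```python
import re

# Accept only ASCII letters, digits, hyphens and dots; anything else
# (including an invisible soft hyphen) gets flagged for the user.
def looks_suspicious(hostname):
    return re.fullmatch(r"[A-Za-z0-9.-]+", hostname) is None

print(looks_suspicious("example.com"))        # False
print(looks_suspicious("exam\u00adple.com"))  # True: hidden soft hyphen
```

A check this cheap, applied before rendering the address bar, would have neutralized both the soft hyphen trick and the look-alike Unicode domains.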

These types of simple exploits will continue because there is just so much to work with, and security was not considered until too late. Browsers (and any internet aware program) should be designed with security in mind from the ground up, in which case something like a restricted character set or a warning would have been implemented, and neither the soft hyphen exploit nor the Unicode attack would have been possible.

March 25, 2010

Facebook’s security check is anything but.

Filed under: Security, Tech — Tags: , , , , , — allthatiswrong @ 10:31 pm

When you log into Facebook from a different location, a security check will come up asking you to identify yourself. I tend to travel around a lot, so this is very annoying, and I use Facebook quite infrequently; I can only imagine how annoying it would be for those who travel and use the site daily. What’s worse, a different location is not necessarily a different city or country, but can just be a different computer. For example, if you log in at an internet café 10 minutes from your house for whatever reason, the warning will come up. This also adds to Facebook’s poor history of privacy, as for this check to work they must be maintaining a record of all the locations you use Facebook from. Logfiles are one thing, but actively maintaining a record of your location history for commercial gain is something else.

The fact that an account is signing on from a different location is in no way an indication of malicious activity. I don’t really understand the moronic reasoning that could have made this seem like a good idea. Perhaps if the account was active in two distant locations within an unreasonably short time, but simply from a different location? As stupid as the security check may be in the first place, it is made worse by being ineffective. The only information it asks you to enter to authenticate yourself is your birthday – information that most people on Facebook make publicly available without a second thought. Even if they don’t, it’s not exactly the hardest information to find out. Why not ask the user to re-enter their password, which would help protect against many types of session stealing attacks, or to confirm the location they last logged in from? At least something that isn’t entirely security theater, because at present it accomplishes nothing and is just a frustration.

What if the attacker doesn’t know your birthday, or you signed up with a fake birthday and don’t remember what it was? In this case Facebook will send a security code to one of your registered email addresses. This is also a breach of privacy, in that all of your email addresses are exposed here, regardless of whether they are marked as private. If the attacker does not have access to one of these email accounts then this might work OK. However, even this security check is flawed, as the code never changes. That is, every time you fail to correctly enter your birthday, the exact same security code is emailed out! This means you need at most one million attempts to successfully brute force the code. That would take several days, but for someone who doesn’t use their Facebook account very often it would allow the account to be cracked. I have not investigated too deeply, but Facebook does not seem to have any preventative measures against brute forcing this security check.
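Back-of-envelope numbers make the "several days" estimate concrete. The sketch below assumes, purely hypothetically, a six-digit code (hence the one-million search space) and a modest, unthrottled guessing rate; both figures are mine, not Facebook's.

```python
code_space = 10 ** 6        # one million possible six-digit codes (assumed)
guesses_per_second = 5      # deliberately conservative, assumed rate
worst_case_seconds = code_space / guesses_per_second

# At 5 guesses/sec an attacker exhausts the space in roughly 2.3 days,
# and on average finds the static code in about half that.
print(f"worst case: {worst_case_seconds / 86400:.1f} days")
```

Since the code never rotates between attempts, the attacker can spread those guesses over weeks without ever restarting the search, which is what makes a static code so much weaker than a fresh one per request.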

I find it hard to believe the Facebook developers could be this stupid. It seems much more likely that this “Security Check” is actually a measure to make sure their location information for users is accurate, disguised as security theater. Then again, never attribute to malice that which can be adequately explained by stupidity.

January 20, 2010

The insecurity of OpenBSD

Filed under: Security — Tags: , , , , , , , , , , , , , , — allthatiswrong @ 11:29 pm

Table of Contents

Secure by default
Security practices and philosophy
No way to thoroughly lock down a system
The need for extended access controls
Extended access controls are too complex


Firstly, I would like to apologize for, and clarify, the title of this article. I wanted a title that would hold attention and encourage discussion while remaining true to the argument I make. I certainly don’t mean to imply that OpenBSD is a horribly insecure operating system – it isn’t. I do however need to highlight that OpenBSD is quite far removed from a secure operating system, and I will attempt to justify this position below.

To start, we must clarify at a bare minimum what a secure operating system can be considered to be. Generally, this would be taken to mean an operating system that was designed with security in mind and provides various methods and tools to implement security policies and limits on the system. This definition cannot be applied to OpenBSD, as OpenBSD was not designed with security in mind and provides no real way to lock down and limit a system beyond standard UNIX permissions, which are insufficient.

Despite this, OpenBSD is widely regarded as one of the most secure operating systems currently available. The OpenBSD approach to security is primarily focused on writing quality code, with the aim of eliminating vulnerabilities in source code. To this end the OpenBSD team has been quite successful, with the base system having had very few vulnerabilities in "a heck of a long time". While this approach is commendable, it is fundamentally flawed when compared to the approach taken by the various extended access control frameworks.

The extended access control frameworks that I refer to are generally implementations of MAC, RBAC, TE or some combination or variation of these basic models. There are many different implementations, generally written for Linux due to its suitability as a testing platform. The most popular implementations are summarized below.

  • SELinux is based on the FLASK architecture and is developed primarily by the NSA. It ships with some Linux distributions by default, such as Debian and Fedora. SELinux implements a form of MAC known as Domain and Type Enforcement.
  • RSBAC is developed by German developer Dr. Amon Ott and is an implementation of the GFAC architecture. RSBAC provides many models to choose from, such as MAC, RBAC and an extensive ACL model. RSBAC ships with the Hardened Gentoo distribution.
  • GRSecurity is not primarily an access control framework, but a collection of security enhancements to the Linux kernel, such as jail support, PID randomization and similar features, as well as an ACL and RBAC implementation.
  • AppArmor is a simple yet powerful MAC implementation which relies on pathnames to enforce policies. Relying on pathnames is a weaker approach than that used by the above frameworks; however, this is considered acceptable because it is easier to use. AppArmor ships with, and is enabled in, versions of Ubuntu and OpenSUSE.

There are other, simpler implementations such as SMACK and Tomoyo, which are officially in the Linux kernel, as well as implementations for other platforms such as TrustedBSD and Trusted Solaris. Each of these access control frameworks allows additional security to be set up beyond what can be done with OpenBSD by default.

Secure by default

OpenBSD is widely touted as being ‘secure by default’, something often mentioned by OpenBSD advocates as an example of the security focused approach the OpenBSD project takes. Secure by default refers to the fact that the base system has been audited and considered free of vulnerabilities, and that only minimal services are running by default. This approach has worked well, indeed leading to ‘Only two remote holes in the default install, in a heck of a long time!’. It is a common sense approach, and a secure default configuration should be expected of all operating systems upon an initial install.

An argument often made by proponents of OpenBSD is the extensive code auditing performed on the base system to make sure no vulnerabilities are present. The goal is to produce quality code, as most vulnerabilities are caused by errors in the source code. This is a noble approach, and it has worked well for the OpenBSD project, with the base system having considerably fewer vulnerabilities than many other operating systems.

Used as an indicator to gauge the security of OpenBSD, however, it is worthless. The reason is that as soon as a service is enabled or software from the ports tree is installed, it is no longer the default install, and the possibility of introduced vulnerabilities is equal to that of any other platform. Much like software certified against the Common Criteria, as soon as an external variable is introduced the certification, or in this case the claim, can no longer be considered relevant.

It is important to note also that only the base system is audited. The OpenBSD ports tree is not audited, and much of the software available in the ports tree is several releases behind the current versions, meaning there is a strong possibility that software will be obtained from outside the ports tree. Given that a default install of OpenBSD has all network services disabled, it is very likely that software will be installed or a service enabled if the server is going to provide any kind of service at all.

Since the majority of attacks are not against the base system but against software operating at a higher level, actively listening over the network, it is likely that if an OpenBSD machine were attacked, it would be through such software. This is where OpenBSD falls down, as it provides no means to limit the damage in the event of a successful attack.

Providing a secure default configuration is an important practice, and one that is employed by the majority of operating systems these days. OpenBSD followed this practice in the early part of the last decade when most other operating systems did not bother, and for that the OpenBSD team should be praised. While it is a good practice, it is specious at best to take it as a measure of the actual security OpenBSD provides.

It should also be noted that the OpenBSD team uses a different definition of security vulnerability, limited to vulnerabilities that allow remote arbitrary code execution. While most people would consider a DoS attack or a local privilege escalation to be a vulnerability, the OpenBSD team disagrees. If we use a more generally accepted definition of security vulnerability, OpenBSD suddenly has far more than two remote holes in the default install in a heck of a long time.

Security practices and philosophy

The OpenBSD team seems very reluctant to actually admit security problems and work towards fixing them. One example is the Core Security advisory from 2007. Instead of working and testing to see the extent of the damage that could be caused by a particular vulnerability, they preferred to dismiss it and assume arbitrary code execution was impossible, until Core released proof of concept code to show otherwise. This is similar to the behavior of many corporations. Unfortunately, going by the various mailing list threads when a vulnerability is reported, this seems to be typical behavior rather than the exception.

OpenBSD was never designed with security in mind. OpenBSD was started when Theo de Raadt left the NetBSD project, with the goal of providing complete access to the source repositories. The focus on security came at a later stage, along with the “secure by default” slogan. As noted above, a secure operating system is not synonymous with a lack of vulnerabilities, and certainly not with a lack of vulnerabilities limited to the base install. This should be contrasted with the various extended access control frameworks which, despite being patches to an existing project, were designed from the ground up with a focus on security.

OpenBSD by itself has a feature set similar to the GRSecurity patch for Linux without the ACL or RBAC implementation. GRSecurity and the Openwall project actually pioneered many of the protections that appeared later in OpenBSD, such as executable space protection, chroot restrictions, PID randomization and attempts to prevent race conditions. OpenBSD is often credited with pioneering many advances in security when this is not the case. OpenBSD tends to add protections much later, and only when absolutely necessary, as they continue to erroneously believe that eliminating vulnerabilities in the base system is sufficient.

It is also odd that, for a project that claims to be focused on security, sendmail is still their MTA of choice and BIND is still their DNS server of choice. Sendmail and BIND are old, and both have atrocious security records. Looking through OpenBSD’s security history, many of the vulnerabilities can be attributed to BIND or sendmail. Why would anyone choose these programs for a security focused operating system when far more secure alternatives, designed from the ground up to be secure, are available? Examples include Exim or Postfix, and MaraDNS or NSD.

It is interesting to compare OpenBSD to its cousin, FreeBSD. While FreeBSD does not claim to have a focus on security, it is in fact a far more secure operating system than OpenBSD due to its implementation of the TrustedBSD project’s work. FreeBSD implements proper access control lists, event auditing, extended file system attributes, fine-grained capabilities and mandatory access controls, which allow a system to be completely locked down and access controlled as necessary to protect against users or break-in attempts.

Despite the TrustedBSD codebase being open and available for OpenBSD to implement or improve, they reject it simply because they consider it too complex and unnecessary. Even if the OpenBSD team did not want to implement extended access controls, they could implement proper auditing via the TrustedBSD project’s work, which they also reject as unnecessary.

It is no wonder then that when governments or organizations look for a secure operating system, they look to systems that have proper access control lists and auditing, something OpenBSD is not concerned with. A good example of this is China choosing FreeBSD as the base of their secure operating system, as OpenBSD was considered insufficient to meet the criteria.

The library calls strlcpy and strlcat should also be mentioned here. These functions were developed by Todd Miller and Theo de Raadt as a way to eliminate buffer overflows by ensuring strings are always null terminated. However this approach is controversial, and can actually cause more problems and security vulnerabilities than it solves. While these functions may have their place, they should certainly not be relied upon, and doing so shows a poor understanding of computer security.

No way to thoroughly lock down a system

This is the main problem with OpenBSD, and what prevents it from being considered a secure system. No matter how high the quality of the codebase or how free of vulnerabilities it is, there is no sufficient way to restrict access other than with standard UNIX permissions. OpenBSD project leader Theo de Raadt has openly stated that he is against anything more powerful, such as MAC, being implemented, which is a shame. There is no good reason to avoid implementing extended access controls when the greater security and control they provide is irrefutable.

OpenBSD does offer some basic protections for a running system, namely its chroot functionality, chflags and securelevels. The chroot implementation is a secure version, much improved over the standard UNIX chroot, but still far short of a proper jail implementation such as that provided by FreeBSD. The consensus among OpenBSD developers and the community is that you can achieve the same result using chroot and systrace, which means they rely on a third party tool to approximate a secure design that is present by default in FreeBSD, NetBSD and numerous other unices.

Securelevels are an interesting concept and they do help with security somewhat. The securelevel can only be increased, never decreased, on a running system. The higher levels prevent writing to /dev/mem and /dev/kmem, removing file immutable flags, loading kernel modules and changing pf rules. These all help to restrict what an attacker can do, but do absolutely nothing to prevent reading or changing database records, obtaining user information, running malicious programs and so on. These protections do nothing to stop information leakage. Making files immutable or append-only is a poor option when contrasted with the ability to restrict reading and writing/appending to specific users or processes.

The OpenBSD project and community did have access to a policy enforcement tool named systrace. Systrace is a third party tool developed by Niels Provos, and has never been embraced by the OpenBSD team. Systrace lacks the versatility of a proper MAC implementation, and has weaknesses similar to AppArmor’s, since it relies on pathnames for enforcement. Systrace is a form of system call interposition, which has been shown to be insecure.

The only software even close to a MAC implementation is rejected by the OpenBSD team, and is insecure. Despite this, systrace is still maintained and offered/recommended by the community as the preferred way to sandbox and restrict applications. Given this obvious deficit, it would seem even more prudent for OpenBSD to make use of the TrustedBSD project.

This is the main reason why OpenBSD is unable to offer a secure environment in the event an attacker is successful. Instead of implementing a form of extended access controls to ensure the system stays secure even after a successful attack, they prefer to remove as many vulnerabilities as possible. This approach is naïve at best and arrogant at worst.

The need for extended access controls

The main argument against OpenBSD is that it provides very limited access controls. OpenBSD attempts to remove the source of vulnerabilities by producing quality code, and has such faith in this approach that very little is provided to deal with the situation when a machine is, perhaps inevitably, exploited and root access obtained. It is this lack of access controls and protection mechanisms that prevents OpenBSD from being the secure system it is often credited as being.

It is also the reason the aforementioned frameworks such as SELinux and RSBAC have an inherent security advantage over any OpenBSD machine. Through some form of MAC, RBAC, TE or other advanced access control, these frameworks make possible a level of control beyond that of traditional DAC systems. In a traditional DAC system, the user has complete ownership over their files and processes, and the ability to change permissions at their discretion. This leads to many security concerns, and is the reason most attacks can succeed at all.

When a computer is hacked, regardless of whether it is a drive-by download targeting an insecure browser on a user’s computer or a targeted attack exploiting a server process, the malicious process or user inherits the access of the browser or process that was attacked. The prevalence of the DAC architecture throughout most operating systems is still the primary cause of many security issues today, and with many server processes still running as a privileged user this is a large concern.

It is also something that is hard to fix without changing to a different design paradigm. Many of the technologies developed to help prevent attacks, such as privilege separation, executable space protection and process ID randomization, help, but are not sufficient in the majority of cases. This is why the need for an extended access control framework exists. With the use of something like SELinux or RSBAC, the significance of individual user accounts or processes as an attack vector is decreased.

With these systems every single aspect of the system can be controlled at a fine grained level. Every file, directory, device, process, user, network connection and so on can be controlled independently, allowing for extremely fine grained policies to be defined. This is something that simply is not possible with current DAC systems, which include OpenBSD.

As an example of what is possible with extended access controls, a web server process running as root could be given only append access (as opposed to the general write access available in a DAC system) to specific files in a specific directory, and only read access to specific files in another. If some files need to execute, then the file itself (or the interpreter, if a script) can be restricted in a similar way. This alone would prevent website defacement and arbitrary code execution in a great many cases.
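The rule table behind such a policy can be modelled in a few lines. This is only a toy illustration of default-deny MAC semantics, not any real framework's policy language, and the process names and paths are made up: the point is that the check consults an explicit grant table on every access, regardless of what the DAC owner/mode bits would permit.

```python
# Toy MAC policy: (process, path) -> set of allowed operations.
# Anything not listed is denied, even for a process running as root.
POLICY = {
    ("httpd", "/var/log/httpd/access.log"): {"append"},
    ("httpd", "/var/www/index.html"): {"read"},
}

def allowed(process, path, op):
    # Default deny: an operation succeeds only if explicitly granted.
    return op in POLICY.get((process, path), set())

print(allowed("httpd", "/var/www/index.html", "read"))   # True
print(allowed("httpd", "/var/www/index.html", "write"))  # False: no defacement
print(allowed("httpd", "/etc/passwd", "read"))           # False
```

Under DAC, a root-owned httpd would pass all three checks; under the MAC table above, a compromised httpd can append to its log and read its content, and nothing else.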

On present systems using DAC, if a targeted attack is successful and access to the root account is gained, there is nothing the attacker cannot do: run their own malicious executables, alter files, and so on. This is why OpenBSD is necessarily less secure than any system making use of advanced access control frameworks, and also why OpenBSD is not a secure system. While OpenBSD has many innovative technologies that make it harder for an attacker to gain access, it does not provide any way to sufficiently protect a system from an attacker who has gained access.

It is possible, for example, to restrict something like Perl or Python with extended access controls. On OpenBSD, if a user or an attacker has access to Perl or Python, they can run whichever scripts they like. With extended access controls, it is possible to allow only certain scripts access to an interpreter (and additionally make those scripts immutable), and to prevent the interpreter from running at all unless called by those specific scripts. There is no equivalent fine-grained control on OpenBSD.

Another way in which extended access controls can help is protecting against users. Even on a desktop system there is a significant security advantage. At the moment most malware requires or tries to obtain root privileges to do damage or propagate. What most people don’t realize is that even malware running as a normal user can do significant damage, as it has complete access to the user’s files under the current DAC model. With some form of MAC, if a user decided to demonstrate the dancing pigs problem and run an untrusted piece of malware, it could be prevented from having any access to the user’s files or making network connections.

Even Windows implements a form of MAC – Mandatory Integrity Control. While not terribly powerful, and not used for much at the moment, it still provides increased protection and allows for more security than an OpenBSD box can provide. If even Microsoft can understand the need for and significance of these technologies after their track record, why is OpenBSD the only project still vehemently rejecting them?

Extended access controls are too complex

Some people are of the view that extended access controls are simply added complexity, which increases the scope for vulnerabilities without providing any meaningful additional security. This position makes little sense. While it is true that adding extended access controls increases complexity, the meaningful increase in security cannot be denied. There are plenty of examples of exploits being contained by such technology – exploits that would have allowed full access to the system if OpenBSD had been the targeted platform.

It has also been said that such systems only serve to shift the attack point: instead of attacking a particular piece of software, an attacker simply attacks the access control framework itself. This is simply a myth. While the frameworks themselves can be attacked and even possibly exploited, the increase in security far outweighs any risk. The extended access control framework can be extensively audited and made secure while still enforcing policies. Having one relatively small section of code, easily maintained and audited, responsible for maintaining security is not a decrease in security but an increase.

Ideally, a proper extended access control framework would also be formally verified, as I believe is the case with SELinux and RSBAC, based on the FLASK and GFAC architectures respectively. This means the design has been mathematically proven to meet its specification, making it extremely unlikely that the system can fail in an unexpected way and be left vulnerable to attack.

In almost 10 years, there have been no reported vulnerabilities in these major systems that allowed the framework itself to be bypassed. The times when there has been a problem, it has been due to poor policy. The example everybody likes to mention is the Cheddar Bay exploit that Brad Spengler (author of grsecurity) made public in July 2009. It’s true that this exploit allowed SELinux to be disabled, but this was due to an ill-considered policy that allowed page zero to be mmapped for the purpose of letting WINE work. Only the RHEL-derived distributions were affected. This is not a valid example of the framework being vulnerable, and it certainly does nothing to discredit the technology as a whole.

Due to limitations of certain hardware platforms, it is possible that with the right kernel-level vulnerability an extended access control framework could be subverted. Such cases are quite rare, however, and with the use of technologies like PaX they become even less likely to succeed. In fact, as of writing this article, I am not aware of a single example of an extended access control framework being successfully subverted in practice. There are, however, examples of extended access controls successfully protecting against certain kernel vulnerabilities, such as SELinux preventing a /proc vulnerability that could have led to local root.

Some of these frameworks have been criticized for being too complex, in particular SELinux. While I don’t think this is entirely justified, as the SELinux team has made great progress in making things easier with tools such as setroubleshoot and a learning mode, I can understand that it may be a valid concern. Even so, it only applies to a specific implementation. RSBAC, which is just as powerful as SELinux, has far clearer error messages and is much easier to craft a policy for. Other implementations, such as that of grsecurity, are simpler still. The point here is that the technology is powerful and should be embraced, as the added security advantage is undeniable.

If complexity and user unfriendliness were the main concerns the OpenBSD team had, they could still embrace the idea while making the implementation simple to use and understand. Instead, they flat-out reject it, believing antiquated Unix permissions are more than enough. Unfortunately, in this day and age this is no longer the case. Security should not be grafted on; it should be integrated into the main development process. This does not mean patching in protections for specific attacks along the way, which is the approach favored by the OpenBSD team. That approach has produced a very impressive and stable fortress built upon sand.


While the various policy frameworks will mature and grow as needed, OpenBSD will remain stale. With its refusal to implement options for properly restricting users, or the system itself in the event an attacker does gain access, OpenBSD will come to be considered a less reliable and trustworthy platform for use as a server or user operating system.

Extended access control frameworks should not be considered a perfect solution, or the be-all and end-all of security. There are still many situations where they are insufficient, such as large applications that necessarily require wide-ranging access to function properly. Even so, the level of control these frameworks provide is the best tool we have to secure systems as well as we can.

It is interesting to note that even with Linux not really caring about security and having a non-disclosure policy, things still end up more secure than OpenBSD because of the presence of extended access controls. The ability to restrict access in such a powerful way reinforces that simply trying to eliminate all bugs at the code level, while noble, is an inferior approach.

As much as I am disappointed with the fix-silently-without-disclosure approach to security the Linux kernel project has taken since Greg K-H took over, and with having to rely on third-party sites to learn about security problems that were fixed, Linux is the only real project making progress with testing and improving extended access control frameworks. With continued development and support, the implementations will become easier to use and their problems eradicated, until such technology is common, as it should be.

OpenBSD cannot be considered a secure system until it makes some effort towards facilitating locking down a system with more than the standard UNIX permissions model, which has been shown to be insufficient, and stops counting on a system being secure because all bugs have been removed. That goal, while well intentioned and achieved to a small extent, is ultimately meaningless if even one vulnerability is present.

The OpenBSD team consists of highly skilled programmers who have an interest in security and have shown excellent skill at auditing code and at identifying and fixing vulnerabilities in software. Unfortunately, they have shown no interest in extending OpenBSD to implement extended access controls as almost all other operating systems have done, leaving their system inherently more vulnerable in the event of a successful intrusion. The OpenBSD team serves a useful role in the community, similar to dedicated security analysts or advisors, and for this they should be celebrated.

Note: I am aware that many people use OpenBSD as nothing more than a router, and for this it is indeed ideal. For use as a router, extended access controls would not provide much benefit. I wrote this argument, however, because many people seem convinced that OpenBSD has superior security in all instances, including as a network server or user operating system. I became tired of reading these comments, and of people simply dismissing extended access controls as too complex and not providing any real security.



  1. SELinux –
  2. RSBAC –
  3. GRSecurity –
  4. AppArmor –
  5. The TrustedBSD Project –
  6. Core Security OpenBSD Advisory –
  7. Marc Espie talking about security complexity and calling MAC security theater-
  8. Theo de Raadt stating that MAC should not be included in OpenBSD –
  9. An older similar argument on the OpenBSD misc mailing list –
  10. A simple argument now out of date, that makes a similar argument without going into detail –
  11. Traps and Pitfalls: Practical Problems in System Call Interposition Based Security Tools –
  12. Exploiting Concurrency Vulnerabilities in System Call Wrappers –
  13. Bob Beck talking about systrace –
  14. China chooses FreeBSD as basis for secure OS –
  15. An example of SELinux preventing an exploit on RHEL 5 –
  16. Dan Walsh talking about SELinux successfully mitigating vulnerabilities –
  17. The start of the thread where Brad Spengler’s Cheddar Bay exploit is introduced and discussed –
  18. Details on the SELinux policy that allowed for the Cheddar Bay exploit –
  19. SELinux preventing a kernel vulnerability from succeeding –
  20. A second example of a vulnerability that SELinux prevented, due to the users not having the required socket access-
  21. A Phrack article detailing the ways current security frameworks can be exploited, and how to prevent against this –
  22. A primer on OpenVMS security, a highly secure OS designed with security in mind at every level –
  23. Presentation introducing Strlcpy and strlcat –
  24. Start of a mailing list thread where strlcpy and strlcat are discussed and criticized –

Update 1 – January 23rd 2010

I have updated the article to talk about the benefit of formal verification, and address the possibility of an EACL framework being bypassed with a kernel vulnerability.

Update 2 – April 23rd 2010

I have updated the article to reflect the correct status of the Openwall project (i.e. not abandoned), thanks to a comment by Vincent.

December 2, 2009

Keyloggers and virtual keyboards/keypads are not secure

Filed under: Security, Tech — Tags: , , , , , , — allthatiswrong @ 2:20 pm

There seems to be a common misconception that online keyboards or keypads are a useful tool for defeating keyloggers. This is only true where the online keyboard is randomized or a one-time password is used, which unfortunately is the exception rather than the rule. I am not aware of other people discussing this, so here goes.

Most modern software keyloggers will not only record keystrokes, but will also record the mouse coordinates each time the mouse is clicked. This is exactly why an online keyboard does nothing to negate a keylogger unless it is randomized. If I see a mouse click at x60,y60, and subsequent clicks at x48,y60 and x52,y60, then I can likely work out which keys were clicked.

The keylogger will record the site that was visited, and since the authentication page is necessarily open to anybody, an attacker can work out the distance between virtual keys and the starting location of the virtual keyboard. The mouse coordinates above can then be translated to mean that the ‘u’ key was clicked first, followed by the ‘q’ and ‘e’ keys.
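To illustrate the attacker’s side, here is a hedged Python sketch: given the layout and pixel geometry of a static on-screen keyboard, logged click coordinates translate straight back into keys. The origin and key pitch below are invented values for a hypothetical login page, chosen so that they match the example coordinates above.

```python
# Illustrative sketch: recovering typed keys from logged mouse clicks,
# assuming the attacker has measured the virtual keyboard's geometry.
KEY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
ORIGIN_X, ORIGIN_Y = 48, 60   # centre of the 'q' key (assumed values)
PITCH_X, PITCH_Y = 2, 10      # spacing between key centres (assumed values)

def key_at(x, y):
    """Translate one logged click coordinate into the key it landed on."""
    row = round((y - ORIGIN_Y) / PITCH_Y)
    col = round((x - ORIGIN_X) / PITCH_X)
    if 0 <= row < len(KEY_ROWS) and 0 <= col < len(KEY_ROWS[row]):
        return KEY_ROWS[row][col]
    return "?"  # click fell outside the keyboard

clicks = [(60, 60), (48, 60), (52, 60)]  # coordinates from the keylogger
print("".join(key_at(x, y) for x, y in clicks))  # prints "uqe"
```

Because the layout never changes between sessions, this translation only has to be worked out once per site.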

Some people believe that using the Windows on-screen keyboard or another virtual keyboard program is secure and will protect against keyloggers. If anything, this is worse, as the attacker does not even have to use mouse coordinates to work out which keys were pressed. Virtual keyboard programs tend to generate the same WM_KEYUP and WM_KEYDOWN events when a key is clicked, sending the same signals as if a hardware key had been pressed.

At present, relying on virtual keyboards or keypads for an extra layer of security is useless unless they are randomized. The only way to be sure is to keep your system clean by following good practices, or perhaps to use a virtual machine if you wish to be extra cautious.
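The server-side fix is not difficult. Here is a minimal Python sketch of the idea, not any particular bank’s implementation: shuffle the digit layout on every login, so that a logged click position no longer identifies the digit behind it.

```python
import random

def randomized_keypad(seed=None):
    """Return a fresh per-session layout mapping grid positions to digits.
    Because the assignment changes every login, a recorded click at, say,
    position (0, 0) reveals nothing about which digit was entered."""
    rng = random.Random(seed)
    digits = list("0123456789")
    rng.shuffle(digits)
    # 10 keys laid out as two rows of five
    return {(row, col): digits[row * 5 + col]
            for row in range(2) for col in range(5)}

# Two sessions: the same click position maps to independent digits.
session_a = randomized_keypad()
session_b = randomized_keypad()
print(session_a[(0, 0)], session_b[(0, 0)])
```

The same idea extends to full alphanumeric keyboards, at some cost in usability, which may be why the few deployments I have seen are digit-only keypads.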

Unfortunately, most banks and secure services can’t be bothered to implement a proper system. Several of the largest banks throughout Australia, the USA and Europe that I have experience with have only a simple text password field. This is less secure, since it is directly vulnerable to keyloggers. The banks that do have some sort of online keypad tend not to have it randomized in any way, making it vulnerable to the attack described above. This is worse than a simple text field, due to instilling a false sense of security. Only a few banks, generally the smaller ones, have actually implemented a one-time password or randomized keypad.

I’m not sure why the sites trying to build a secure authentication system are not aware of this; perhaps they simply don’t care. Perhaps, like so many others, they feel that giving an illusion of security is sufficient. Customers are already protected from fraud by most laws, so it would seem the incentive to increase security would favour the banks rather than the customers. Which means that apparently they are not being hurt enough by fraud (despite it being one of the fastest growing attacks against bank customers), which is interesting.