All that is wrong with the world…

March 4, 2013

A comparison of books for learning assembly language.

Filed under: Tech — allthatiswrong @ 2:31 am

I had wanted to learn assembly for a long time, at least a year, but was very slow getting into it. Part of that, I think, was because I imagined it to be a lot more difficult than it was. The second issue was finding a good guide. Random tutorials I encountered on the internet seemed to jump in without much explanation and I was quickly lost. It wasn’t until I realized how easy it was to download eBooks that I started taking my goal more seriously.

There were many different assembly textbooks out there; it was just a matter of choosing the right one. My requirements were simple: Intel syntax, Linux and Windows compatible, not assuming too much prerequisite knowledge and exercises with solutions as a way to test knowledge.

I thought I would quickly write this for people who are in a similar position to the one I was in, hopefully narrowing down the choices. All the titles reviewed here were the latest editions as of the date of this post.

Guide to Assembly Language: A Concise Introduction – James T. Streib

This is the book I found to be the perfect guide. It was exactly what I was looking for. The book assumes a basic knowledge of programming and doesn’t go over architecture at all, or if it does, it does so briefly and only when pertinent. The book assumes knowledge of things like libraries, arrays and functions, nothing too complex.

The book shows concepts in C with code examples and then shows how they would be implemented in assembly. It uses Intel syntax with the MASM assembler. Some of the more complicated material is left out, although I think it is a great starting point, making it much easier to pick up one of the other assembly textbooks. The book gives solutions to about half the exercises, which is significantly better than the rest. I couldn’t recommend this book enough for someone looking to learn assembly. I found it very approachable and was able to learn the language in a short amount of time.
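
To give a feel for that style, here is a rough sketch of my own (not an example from the book): a one-line C statement, with roughly how the same logic would look in MASM-style Intel syntax shown in the comments.

    #include <stdio.h>

    int main(void) {
        int a = 2, b = 3;
        int sum;

        /* The C statement a chapter might start from: */
        sum = a + b;

        /* Roughly the same logic in MASM-style Intel syntax:
         *     mov  eax, a      ; load a into a register
         *     add  eax, b      ; add b
         *     mov  sum, eax    ; store the result
         */
        printf("%d\n", sum);
        return 0;
    }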

The Art of Assembly Language – Randall Hyde

This was the first book I tried for learning assembly, as I had heard good things about the publisher and this book in particular. It was disappointing to say the least. The biggest gripe I have with this book is that it teaches “High Level Assembly”, essentially an entirely different language from assembly, built using assembly macros. It’s great maybe if you have no programming experience whatsoever…although even then I don’t know why you would learn HLA instead of a real language.

I have a good knowledge of concepts like OO and arrays and functions, although I’m not much of a developer. Randall’s book assumed no prior knowledge, which I thought was fine as it would give me a chance to brush up on what I already knew. Learning HLA instead of assembly just became frustrating, and I soon looked for a replacement. It’s hard to recommend this book at all, as well written as it is, unless you are OK with learning a different language to help learn assembly.

Assembly Language for x86 Processors – Kip Irvine

This book seemed to be high on everyone’s list, and for good reason. It covers a lot of architecture and then gets into the programming. I didn’t find it as approachable as some of the other books, in particular the one by Streib. I was somewhat put off that he uses his own library for input and output, although looking at it now it seems OK. I was probably just put off by the HLA stuff. He does eventually teach input and output without his library, so it’s a minor point.

I would not recommend this for a beginner, as he delves into things like the EQU directive and symbols in the introductory chapter! The main reason I didn’t go for this was that it seemed a little too complex, and that there were no solutions made available for the exercises unless you are an instructor. I find that quite frustrating. Now that I’ve finished with the Streib book, I may choose this to augment my existing knowledge.

Introduction to Assembly Language Programming: From 8086 to Pentium Processors – Sivarama P. Dandamudi

This book is overcomplicated and, I think, not suitable for a beginner. In fact I’m not really sure where it would find a niche, given there seem to be better books out there. The book devotes quite a lot to architecture, focusing on the Intel Pentium and going into RISC architectures with the MIPS processor as well. The book uses the NASM assembler, which is nice, as it uses Intel syntax and is cross-platform.

There is a lot of info on debugging, interrupts and similar things, which makes it seem useful as a reference. Chapter 3 is devoted to explaining the Pentium processor, with Chapter 4 starting to teach assembly. If you just want to learn assembly and can do without deep explanations of processor mechanics, this probably isn’t the book for you. Also, no answers to exercises are provided.

Introduction to 80×86 Assembly Language and Computer Architecture – Richard C. Detmer

This book seems similar to Kip Irvine’s book in the sense that it covers a lot of architecture. There almost seems to be more about architecture than programming. As my goal was only to learn assembly and not architecture, I found this book too broad. When it gets into teaching assembly I thought it was too complex, introducing jmp in the same paragraph as add, for example.

It uses a special library for input and output similar to how Irvine does, but also seems to teach input and output sans library later on, which is fine. Staying consistent with the trend, solutions to exercises are not provided.

Programming from the Ground Up – Jonathan Bartlett

I think the main reason I didn’t use this book is because of the use of AT&T syntax. Intel is more popular and seems easier to pick up. It also uses Linux-specific tools, and since Windows is my everyday operating system, I didn’t want to reboot or use a VM just to learn assembly.

The book seems kind of odd to me…explaining basic architecture in the first 2 chapters but not getting deep into it. It starts off with some basic programs, but then gets into buffers and system calls very early on. This looks like a fine book if you are fine with learning AT&T syntax in a Linux environment, and want a very gentle introduction to the concepts. There are no answers to exercises, which is frustrating.
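
To show what the syntax difference actually looks like, here is a small sketch of my own (not from the book) using GCC inline assembly, which defaults to the AT&T syntax Bartlett uses on Linux; the Intel-syntax equivalent is noted in a comment.

    #include <stdio.h>

    int main(void) {
        int value;

        /* AT&T syntax (GAS default): source before destination,
         * registers prefixed with %, immediates with $, and the
         * operand size given as a suffix (the "l" in movl). */
        __asm__("movl $42, %0" : "=r"(value));

        /* The same instruction in Intel syntax (MASM/NASM style)
         * would read:  mov eax, 42   -- destination first, no sigils. */
        printf("%d\n", value);
        return 0;
    }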

February 13, 2013

Thoughts on Fedora 18

Filed under: Tech — allthatiswrong @ 6:57 pm

I recently decided to remove my long-running Slackware install with Fluxbox and Tint2 and install Fedora 18 instead. I chose Fedora because I know that it is well maintained and up to date, and I wanted to start delving deeper into SELinux. I thought it would be a good opportunity to see how Linux on the desktop has progressed.

I was sadly disappointed. Fedora may not be the first name people think of when they think of a polished desktop Linux experience, but it had to be better than the eternally buggy Ubuntu, and I didn’t think it would be too different from Debian, so I had high expectations.

I should disclose that I may have been at a disadvantage, as I did a full install from the 900MB live CD as opposed to the DVDs; however, I noticed no warning that my experience would be hampered in any way by installing this way.

The first thing I noticed was that there was no apparent easy way to minimize or close windows, short of right-clicking. This was quite frustrating, and did not seem to make much sense. I decided to install other desktops such as MATE and Cinnamon, to see how they compared to the horrible Gnome 3.

The problems I had then were with the package manager. Using the package manager I thought I would search for the MATE desktop, using mate as my search term. I found a bunch of packages, with it not being clear which ones I had to install or which dependencies were needed. After dealing with a lot of nonsense about waiting in a queue before I could actually install packages, it turned out I had not managed to install the MATE desktop.

I did manage to install Cinnamon, although I had the same problem of trying to guess which packages were actually for the desktop environment. Isn’t package management on Linux now meant to be easy to use and intuitive? That was not at all the experience I had. Compiling from source would have been significantly easier.

I also noticed that there was no way to easily disconnect from a wireless network. I had connected to the wrong network by mistake and when wanting to rectify this, I found no way in the GUI to disconnect from the current network.

Finally I decided to log off to test out my newly installed Cinnamon (and, I had thought, MATE) environments. This was the kicker…there was no logoff option. Apparently this is a well-known bug (feature?) in Fedora, in that if there is only one user account the logoff option is not displayed. Sigh. Even ctrl+alt+backspace didn’t work to restart the X server.

I still plan to use Fedora a lot more, but as far as initial impressions go, I am not at all impressed. Year of the Linux Desktop? Not this year, or at least not with Fedora.

November 19, 2011

Wikipedia needs an overhaul

Filed under: Tech — allthatiswrong @ 2:21 am

Wikipedia is an interesting tech phenomenon, perhaps the most interesting of the last 5 years or so. What started as just another wiki with rather ambitious goals actually ended up succeeding, becoming one of the largest and most popular sites on the internet. I never liked Wikipedia simply because anyone could edit it, which I saw as a negative rather than a positive. Of course, now that they have migrated to an “all submissions considered” model it has become far more reliable. There are still problems of course, such as the inconsistency in what kinds of sources to accept, people “owning” articles and denying constructive edits and all sorts of bias and opinion passed off as fact. This is getting better what with the feedback system on some articles but it still isn’t as good as it could be.

One of the most annoying aspects is that the English Wikipedia is USA-centric. This isn’t surprising considering most submitters are probably American, but given that most English speakers are not, it would be nice if the wiki entries were more neutral. I would imagine this is a similar problem for the Spanish and French language Wikipedias. I also find it somewhat sad that many Wikipedia articles simply copy text verbatim from other sources with a reference, meaning the original author’s site is no longer a main source for information. I’m sure such a trend will only have negative consequences. Also of note is the problem of new users who join trying to contribute to issues they care about, only to be accused of being meatpuppets and essentially driven away from the community, or of experts having their contributions refused. Is it any wonder they are losing contributors? Not to mention ridiculous and inconsistent notability guidelines for a project attempting to be “a repository of all human knowledge”.

Of course, now it is that time of year again for Jimmy Wales and co to personally ask for money to financially support Wikipedia. Despite a firm no-ads policy, this appeal is advertised via a banner which cannot easily be removed, ignores the user’s preference to dismiss it, and changes often enough to use up quite a bit of bandwidth. In fact, I wouldn’t be surprised if the cost of running the ads for so long offsets a good portion of the donations they receive. Of course for those of you who like the banner, you can now have it on every page you visit.

The biggest problem I would have with donating any money to Wikipedia at the moment, aside from the fact I don’t think enough policing and cleaning up is going on, is the lack of transparency. Where are Wikipedia’s budget and needs published? Why do they need as much as they do? The appeal doesn’t really give any reasons why Wikipedia actually needs money. The server costs per year are not going to be that much, certainly not enough to warrant receiving several hundred thousand dollars each year in funding. How do we know this money is not going to fund the lavish lifestyles of the Wikipedia staff? Jimmy Wales already makes quite a bit of money through Wikia; how much of that goes back into Wikipedia? Last year Google ended up giving $2 million to Wikipedia; why have they gone through that already? Where are the records showing how it was spent?

I would like to support Wikipedia but it simply doesn’t make sense until there is more transparency and until they put the costs towards quality control. If I were on the fence about donating, the constant begging certainly wouldn’t make me lean towards donating. I use Wikipedia frequently but I can’t be motivated to donate or contribute with the many problems that still exist. Wikipedia needs an overhaul to ensure their money is being properly spent, that people’s contributions are not simply being deleted or caught up in edit wars, that new users are welcomed and that some level of consistency is enforced.

November 8, 2011

Why is Notepad never improved?

Filed under: Tech — allthatiswrong @ 4:28 pm

I don’t understand how, over the course of 15 years, Notepad has never managed to be improved. Most of the basic Windows utilities have improved over the years, primarily Calc and to a lesser extent mspaint and WordPad. I am not aware, however, of any improvements ever made to Notepad. It can’t handle word wrap properly, and will create newlines based on the size of the window. It can’t handle Unix end-of-line characters, requiring you to open the text file in WordPad or an alternative editor and resave it. It requires using double quotes to save with a non-.txt extension, which shouldn’t be necessary. Even the view menu has a grayed-out option for a status bar, which I can’t seem to activate. It is also unacceptable for an operating system’s basic text editor not to have support for regular expressions in 2011.
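
For what it’s worth, the line-ending problem is trivial to work around outside of Notepad. Here is a minimal sketch of my own (the filenames are made up) that converts Unix LF endings into the CRLF endings Notepad expects.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical filenames, purely for illustration. */
        FILE *in = fopen("notes_unix.txt", "rb");
        FILE *out = fopen("notes_windows.txt", "wb");
        if (!in || !out) {
            perror("fopen");
            return 1;
        }

        int c, prev = 0;
        while ((c = fgetc(in)) != EOF) {
            if (c == '\n' && prev != '\r')
                fputc('\r', out);   /* insert the CR Notepad wants */
            fputc(c, out);
            prev = c;
        }

        fclose(in);
        fclose(out);
        return 0;
    }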

The only advancement made in Notepad seems to be the algorithm for detecting Unicode files, which was introduced with Windows 2000. While it may have been improved since then, no actual features that would make the program useful have been introduced. I’m not asking for more advanced features like code folding or syntax coloring, although they would be nice as well. Just enough basic functionality to ensure text files don’t break while transferring between platforms or because you resized the window. There should be no requirement to open text files in WordPad these days just to be able to read them properly.

September 19, 2011

Another minor Facebook security issue

Filed under: Tech — allthatiswrong @ 11:38 pm

I recently noticed a flaw in Facebook’s security resolution process. After being asked to confirm my identity simply because I was using a different computer, I apparently took too long to identify my friends in their photos. However, I was able to try two more times before being locked out, and each time Facebook provided the exact same photos with the same selection of people to name in order to confirm my identity. What this means is that I could conceivably attempt to log on to a victim’s Facebook account from an unauthorized device to get such a prompt, and then take my time to research the answers.

Twenty minutes was the approximate time before my session expired, which across the three attempts gives roughly one hour to come up with the answers. That may not be terribly difficult given the proclivity with which people tag their friends or publish photos on blogs. It would be even easier if the victim and attacker had a mutual friend on Facebook, as the attacker would likely be able to see a lot more photos. In fact, perhaps even searching each name on Facebook could show the face, which would allow the questions to be answered correctly.

This isn’t a major flaw in any sense of the word; however, it does seem quite possible that the process as it is now implemented could be abused in conjunction with other vulnerabilities to gain access to someone’s account. I hope that at the least this will foster some interesting discussion on why what I have described is a non-issue, or result in a fix.

August 7, 2011

Linux – The worst platform for video editing

Filed under: Tech — allthatiswrong @ 6:16 pm

Proponents of Linux often tend to misrepresent Linux as a leader in video editing. Whether this is intentional or just due to being misinformed, nothing could be further from the truth, with Linux potentially being one of the worst platforms for video editing. There are no decent software packages for editing video on Linux, and it is one of many prime examples of why Linux is nowhere near ready to be a replacement for OS X or Windows on a large scale. The idea that Linux is the most popular platform for video editing seems to come from the fact that it is used on server farms to do 3D rendering. However, this is not video editing. For actually editing film, doing post-processing such as adding sound or visual effects, or even just playing with scenes, there is no software on Linux that can do the job reliably.

Often you’ll see people quoting statistics such as 95% of the computers in Hollywood running Linux or some crap. Even if that’s true, it doesn’t mean Linux is being used as the platform that film is edited on. As it stands there are no decent video editing tools for Linux, or for any OSS platform. Kdenlive, PiTiVi, OpenShot, LiVES, Cinelerra and Kino have all been in development for many years, ever languishing or forking. Either way, at the moment there is not a single OSS video editor that provides anywhere near the functionality of, say, Final Cut Pro. Not only are all the available video editors lacking in basic functionality, many of them have horrendous stability issues, segfaulting when trying to import video for example.

Many people bring up stuff like Maya or Pixar’s RenderMan, but ultimately this is 3D development software whose use is not limited to film at all. The fact that films can be made with such software is incidental and does nothing to detract from the point that there is no quality video editing software for Linux. This shouldn’t be surprising, as it is the nature of the beast. The way OSS works is that people develop out of personal motivation or because they are paid to. There are very few people working in video editing who are also programmers, so there is a lack of quality video editing OSS available. Until more people contribute or a company funds development, that’s how it will stay.

Hopefully this will help to dispel the propaganda that Linux is prominent in the film industry. It certainly is as far as CPU hours go, since it is ideal for rendering. If you expect to see it being used by the people actually editing the films on desktops, then think again. Perhaps one day….probably before Linux gets decent audio applications at least. Also, this isn’t an anti-Linux article. It’s an anti-bullshit-propaganda article. Linux is great; there is no need to misrepresent what it is capable of or how it is used.

July 10, 2011

Mandatory IT licensing

Filed under: Tech — allthatiswrong @ 9:04 pm

I really don’t understand why the IT industry is not regulated. As it stands, in most countries you don’t need any kind of certification or experience to work in IT or run an IT business. Most other fields where people need to rely on someone to fix things or do specific work are regulated. Lawyers have to be admitted to the bar, electricians and mechanics generally have to be licensed, engineers need a degree before they can call themselves as such, and even plumbers tend to need a certification before they can practice. Given just how often PCs are critical in everyday life, why are IT workers not held to the same standard?

At the moment, anyone can simply open a PC repair shop with only cursory knowledge and start charging money to fix people’s computers, without knowing what they are doing. A machine could have various malware and the PC repair guy might charge $80 just to run an AV scan without finding or fixing anything, or faulty memory could lead to an unnecessary new CPU, and so on. All too often these shops are run by people who know more than most, but simply do it as a hobby and don’t have anywhere near the prerequisite knowledge that they should have, when people are paying for a service and trusting them to provide that service competently and reliably. Here is an interesting article with some of the horror stories of people getting ripped off and displays of incompetence.

There are far too many 40+ guys who don’t know enough to be doing this work, as they don’t keep up with changes in the field, and often refuse to admit they are wrong, simply relying on their 30 years of experience as evidence of qualification. Then there are enthusiastic people in their 20’s who may or may not be in college, and again, while they know more than most, they too often don’t know enough and make the wrong decisions. There are so many “professional” shops or consultants at the moment that just screw people over. Either by illegally selling software at OEM prices, recommending shitty AV software (AVG anyone?), charging money for useless registry cleaning tools or whatever.

When they can’t solve a problem they tend to reinstall Windows for an absurd price, acting as though it were a favor. The problem is that there are no consequences for any of this. There may be the odd lawsuit, but if it is the equivalent of a small claims court case, you can only get back the money you lost. Got fired from your job because the unlicensed, under-qualified PC repairman fucked you over? Tough luck. I find it frustrating to work in an industry where there is almost no barrier to entry, so the field is full of people who have no idea what they are talking about yet are convinced that they do. It takes away from those of us who do know what we’re talking about and worked hard to gain such knowledge and experience.

Just look on any PC tech forum and look at the people disagreeing among themselves over solutions. Each of these people, or someone like them, works in or runs a shop, yet clearly they cannot all be right. Such a thing should not be allowed. Then look at the HBGary hack. Selling themselves as security experts, they were devastated by bad passwords, unpatched servers, SQL injection and more. Things that non-experts can avoid, so experts should certainly be able to. Yet people were paying these guys as professionals and relying on them for security, often in critical situations.

So, why does it make sense to allow this to continue? What argument can be made to keep things the way they are? What about the economy? It’s true that this may put a lot of people out of work or be an additional cost for small businesses, but this isn’t a problem to me. The people that can’t demonstrate knowledge shouldn’t be working in this field in the first place and legitimate businesses and consultants can recoup costs from the decrease in competition.

Another important reason I think some sort of mandatory regulation or certification should be required is for privacy reasons. At the moment the law is somewhat of a grey area on what rights you have to privacy when you hand your computer over to repair to someone else. An interesting example is this case, where a guy handed in his PC for repair and the technicians found child porn and promptly reported him to the police. It’s great helping to stop child porn production, but what were they doing looking through his files in the first place? Maybe they saw thumbnails, but why were they ever in his pictures or photos folder? You should be able to trust that your privacy is intact when getting a PC repaired and you should be able to collect substantial damages if it is compromised.

The whole situation at the moment is a problem and it needs a solution. As far as problems go it isn’t so major, but considering the sheer number of people getting ripped off and/or getting poor service every day, it should be addressed. The problem is just how to address it. A one-off certification isn’t useful as the industry changes too fast. Vendor-specific certifications are not useful due to their very nature. One possible solution might be a multi-tiered license that has to be renewed on a set basis. You could be licensed at the first tier, which would cover just simple hardware and software, with the tiers increasing in complexity as they go up.

The problem is that this has to be mandatory, if it’s voluntary it loses any value and the situation would not be any different than it is now. If it is mandatory only people who can demonstrate knowledge and are licensed to practice would do so, which would fix things a lot. No more bullshit repairs or faulty advice, substantially decreased privacy risks, accountability and trust of IT workers and businesses and fair pricing. Is there any reason not to head in this direction? If not, how long will it take governments to catch up?

June 23, 2011

OS X – Safe, yet horribly insecure

Filed under: Security, Tech — allthatiswrong @ 2:48 am

Introduction

I have had this article planned since the end of 2009 and have had it as a skeleton since then. I wanted to point out the many problems with OS X security and debunk the baseless myth that OS X is somehow more secure. Despite 18 months passing by before I managed to finish it, not much seems to have changed. I think I am publishing at an interesting time, however, just as malware for OS X is increasing and Apple is starting to put effort into securing OS X with the soon-to-be-released Lion. There is no FUD in this article, just an analysis of the available evidence and some speculation. My motivation to write this article was the hordes of OS X users who are either blind or have been misled by false advertising into believing OS X is somehow immune to malware and attacks.

It is one of the most prevalent myths among the computer purchasing public, and to a lesser extent those who work in IT, that Apple computers are far more secure than their Windows and perhaps Linux counterparts. The word myth is precisely accurate, as OS X and other Apple software is among the most vulnerable software on consumer devices today. Apple have an appalling attitude towards security which often leaves their users highly vulnerable, while hyping their products as secure simply because they are rarely targeted. It is important before going further to note the difference between a distributed attack and a targeted attack. A distributed attack is one not specific to any one machine or network, but will exploit as many machines as it can that are affected by a particular set of vulnerabilities, of which OS X has had many. An example of a distributed attack is a drive-by download, where the target is unknown, but if the target is vulnerable the exploit should work. Distributed attacks are used to infect large numbers of machines easily, which are then generally joined into a botnet to earn cash.

A targeted attack is more specific, where a single machine or network is attacked. A targeted attack is not blind and is specific to the machine being attacked. Distributed attacks such as drive-by downloads are impersonal by nature because they must compromise thousands of machines, while the motivation behind a targeted attack tends to be more personal, perhaps to steal confidential files or install some sort of backdoor. The argument always seems limited to distributed attacks, which admittedly are nowhere near the problem they are for Windows. This is more than likely because Apple has a very low market share of PCs, simply making it less than worthwhile to invest in writing software to attack as many machines as possible when money is the motivation. I go into this in further detail in a later section.

Using a Mac may certainly be a safer choice for a lot of people, as despite being vulnerable they are not targeted. However this is not the same as Macs being secure, something Eric Schmidt erroneously advised recently. I may be able to browse impervious to malware on a Mac at the moment, however I personally would not be comfortable using a platform so easily compromised if someone had the motivation to do so. In this article I address just why OS X is so insecure, including the technical shortcomings of OS X as well as Apple’s policies as a company that contribute to the situation.

A trivial approach to security

One of the most annoying claims made by OS X (and Linux) users is that the UNIX heritage results in a far more secure design, making it more immune to malware. Nothing could be further from the truth. The UNIX design is significantly less granular than that of Windows, not even having basic ACLs. The UNIX design came from a time when security was less of an issue and not taken as seriously as it is now, and so it does the job only adequately. Windows NT (and later OSes) were actually designed with security in mind, and this shows. Windows was not such a target for malware because of a poor security design; it was a target because the security functionality was never used. When everybody runs as Administrator with no password, the included security features lose almost all meaning. Point for point, Windows has a more secure design than OS X, and if it is used properly the damage from an attack can be minimized significantly more on a Windows machine than on an OS X machine, UNIX heritage or not.

A lot of OS X users seem to have this idea that Apple hired only the best of the best when it came to programmers, while Microsoft hired the cheapest and barely adequately skilled, which somehow resulted in OS X being a well designed piece of software completely free of vulnerabilities. In reality, OS X machines have always been easily exploited and are among the first to be compromised at various security conferences and competitions. The vast majority of exploits that have been publicly demonstrated could have been used to write a successful virus or worm. Given how lax Apple is with security updates and any kind of proactive protection, any prospective attacker would have quite a field day. The only reason this has not happened yet is not because Apple is magically more secure; it’s because no one has bothered to take the opportunity. It isn’t as if no OS X viruses exist. Even without the poor approach Apple takes to security, there would be no basis for claiming the design of OS X is more secure than that of other platforms.

Apple is generally months behind in fixing publicly disclosed vulnerabilities, often only doing so before some conference to avoid media reporting. They often share vulnerabilities in core libraries with other UNIX-like systems, with Samba and Java being two examples. They are extremely difficult to deal with when trying to report a vulnerability, seemingly not having qualified people to accept such reports. Even if they do manage to accept a report and acknowledge the importance of an issue, they can take anywhere from months to a year to actually fix it properly.

People always get caught up in the hype surrounding viruses and how OS X is seemingly impervious, while forgetting that that is not the only type of threat. Personally, malware is a minor threat with negligible impact as long as you follow basic security practices and can recognize when something looks out of place. The idea of someone targeting me specifically on a network, either because it is so vulnerable that it is child’s play or because they want something from my machine, is far more worrying. This is significantly harder to protect against on OS X when you can’t rely on the manufacturer to issue patches in anything resembling a prompt timeframe, or even to acknowledge that vulnerabilities exist. Given that this is the Apple philosophy, it is hard to pretend to be safe on an Apple machine.

Examples and details

Every OS except OS X has a full implementation of ASLR, stack canaries, executable space prevention, sandboxing and, more recently, mandatory access controls. OS X doesn’t even try to implement most of these basic protections, and the ones it does, it does poorly. I don’t understand why security folk use OS X at all, given its plethora of problems. Yes, they are pretty, and yes it is UNIX, and yes you are very safe using it, but given security folks tend to be working on various exploits and research that they would want to keep private, using a platform so vulnerable to targeted attacks would not seem to be the smartest move.

Apple to date do not have a proper DEP or ASLR implementation, two well-known technologies that have been implemented in other OSes for the last five years. Apple did not bother to implement DEP properly except for 64-bit binaries, and even then there was no protection on the heap even if it was marked as non-executable. Apple technically implements ASLR, but in such a limited way that they may as well not have bothered. The OS X ASLR implementation is limited to library load locations. The dynamic loader, heap, stack and application binaries are not randomized at all. Without bothering to randomize anything except library load locations, their implementation is useless aside from perhaps preventing some return-to-libc attacks. We can see using the paxtest program from the PaX team (the same team who initiated ASLR protections on PCs) that OS X fails most of these tests (Baccas P, Finisterre K, H. L, Harley D, Porteus G, Hurley C, Long J. 2008). Apple’s decision not to randomize the base address of the dynamic linker DYLD is a major failing from a security point of view. Charlie Miller has demonstrated how a ROP payload can be constructed using only parts of the non-randomized DYLD binary. Snow Leopard unfortunately did not improve on things much, except to add DEP protection to the heap, still only for 64-bit applications. This means that most of the applications that ship with OS X (including browser plugins) are far easier to attack than applications on pretty much any other platform.
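
An easy way to see what is and isn’t randomized is to print a few addresses and run the program more than once; this is a small sketch of my own, not taken from the references (paxtest does the same thing far more thoroughly). On a platform with full ASLR every line should change between runs, while anything that stays fixed is a stable target for an attacker.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int global_var;                        /* data segment */

    int main(void) {
        int stack_var;                     /* stack */
        void *heap_ptr = malloc(16);       /* heap */

        printf("main (binary)   : %p\n", (void *)main);
        printf("memcpy (libc)   : %p\n", (void *)memcpy);
        printf("global (data)   : %p\n", (void *)&global_var);
        printf("stack variable  : %p\n", (void *)&stack_var);
        printf("heap allocation : %p\n", heap_ptr);

        free(heap_ptr);
        return 0;
    }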

The firewall functionality in OS X is impressive, but hardly utilized. The underlying technology, ipfw, is powerful and more than capable of protecting OS X from a wide variety of threats; however, Apple barely utilizes it. The OS X firewall is disabled by default and application-based, meaning it is still vulnerable to low-level attacks. Even when the option to block all incoming connections was set it didn’t do this, still allowing incoming connections for anything running as the root user, with none of the listening services being shown in the user interface.

Apple introduced rudimentary blacklisting of malware in Snow Leopard via XProtect.plist, which works so that when files are downloaded via certain applications they set an extended attribute which indirectly triggers scanning of the file. However, many applications such as IM or torrent clients do not set the extended attribute, thus never triggering the XProtect functionality. A fine example of this is the iWork trojan, which was distributed through torrents and never triggered XProtect. At the moment it can only detect very few malware items, although as a response to the MacDefender issue this is now updated daily. Only hours after Apple’s update to deal with MacDefender was released, a new version that bypasses the protection was discovered, highlighting the shortcomings of the XProtect approach. Since it relies on an extended attribute being set in order to trigger scanning, any malware writer will target avenues of attack where this attribute will not be set, and for drive-by download attacks it is completely useless. Still, it is a good first step for Apple in acknowledging the growing malware problem on OS X and starting to protect their users.
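
The trigger XProtect relies on is just an extended attribute on the downloaded file. This sketch (mine, not Apple’s code) uses OS X’s getxattr call to check whether a file carries the com.apple.quarantine attribute; a file written without it will never be scanned.

    #include <stdio.h>
    #include <sys/xattr.h>

    int main(int argc, char *argv[]) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        /* On OS X, getxattr takes extra position/options arguments. */
        ssize_t len = getxattr(argv[1], "com.apple.quarantine", NULL, 0, 0, 0);
        if (len >= 0)
            printf("%s: quarantine attribute present (%ld bytes)\n",
                   argv[1], (long)len);
        else
            printf("%s: no quarantine attribute, so XProtect will never scan it\n",
                   argv[1]);
        return 0;
    }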

It has been a shame to see the sandboxing functionality introduced in Leopard not being utilized to anywhere near its full capacity. Apple are in a unique position whereby, controlling both the hardware and the operating system, they have created a truly homogeneous base environment. It would be very easy to have carefully crafted policies for every application that ships with the base system, severely limiting the damage that could be caused in the event of an attack. They could go even further and import some of the work done by the SEDarwin team, allowing for even greater control over applications. They would not have to present this to the user, and would probably prefer not to, yet doing so would put them far ahead of all the other operating systems in terms of security at this point.

Security-wise, Apple is at the same level as Microsoft in the early 90’s and early 2000’s, continuing to ignore and dismiss the problems without understanding the risks and not even bothering to implement basic security features in their OS. With an irresponsible number of setuid binaries, unnecessary services listening on the network with no default firewall, useless implementations of DEP and ASLR, and a very poor level of code quality with many programs crashing under a trivial amount of fuzzing, Apple are truly inadequate at implementing security. This still doesn’t matter much as far as distributed attacks go, at least not until Apple climbs higher in market share, but I really dislike the idea of someone being able to own my system just because I happened to click on a link. At least with Apple giving regular updates via XProtect and including a malware help page in Snow Leopard we have evidence that they are starting to care.

An appalling record

A great example of Apple’s typical approach to security is the Java vulnerability that, despite allowing for remote code execution simply by visiting a webpage, Apple left unpatched for more than six months, only releasing a fix when media pressure necessitated that they do so. When OS X was first introduced the system didn’t even implement shadow file functionality, using the same password hashing AT&T used in 1979 and simply relying on obscuring the password via a pretty interface. This is indicative of the attitude Apple continues to have to this very day, sacrificing security for convenience and aesthetics and only changing when pressure necessitates it. One of the most interesting examples of this is that regularly, before the pwn2own contests where Apple’s insecurity is put on display, they release a ton of patches. Not when they are informed of the problem and users are at risk, but when there is a competition that gets media attention and may result in them looking bad.

Being notoriously hard to report vulnerabilities to does not help either. If a company does not want to hear about problems that put their machines and thus customers at risk, it is hard to say that they are taking security seriously. As it is at the moment, if you try to report a vulnerability to Apple it will likely get rejected with a denial, and after retrying several times it may be accepted, with a patch released any number of weeks or months later. Apple still have a long way to go before demonstrating they are committed to securing OS X rather than maintaining an image that OS X is secure. Having a firewall enabled by default would be a start, something Windows has had since XP. Given the homogeneous nature of OS X this should be very easy to get off the ground, and it may well be the case with Lion.

The constant misleading commercials are another point against Apple, constantly misleading users into believing that OS X is secure and does not get viruses (implying that it cannot) or have any security problems whatsoever. Not only do they exaggerate the problem on Windows machines, they completely ignore the vulnerabilities OS X has. The most recent evidence of Apple’s aforementioned attitude can be seen in their initial response to the MacDefender malware. Rather than address the issue and admit that a problem exists, they kept their heads in the sand, even going so far as to instruct employees not to acknowledge the problem. To their credit, Apple did change their approach a few days later, issuing a patch and initiating a regularly updated blacklist of malware. Their blacklist implementation has flaws, but it is a start.

As much as users and fans of Apple may advocate the security of OS X it is very important to note that OS X has never implemented particularly strong security, has never had security as a priority and is produced by a company that has demonstrated over and over that security is a pain which they would rather ignore, leaving their users at risk rather than acknowledge a problem.

Malware for OS X increasing

It’s true that doomsday for OS X has long been predicted, although the predictions have lacked a precise time reference. An article by Adam O’Donnell has used game theory to speculate that market share is the main factor in malware starting to target a platform, the result of a tradeoff between a lack of protection and a high enough percentage of users to take advantage of to make the investment worthwhile. The article made the assumption that all PCs were using AV software and assumed an optimistic 80% detection success rate. If the PC defense rate were higher, then OS X would become an attractive target at a much lower market share. According to the article, if PC defenses were at around 90% accuracy, then OS X would be a target at around 6% market share. The estimated percentage from the article is just under 17%, and just as some countries have reached around that number we are starting to see an increase in malware for OS X. It may be a coincidence, but I will not be surprised if the trend continues. Given Apple’s horrid security practices and insecurity, it’s going to increase quite noticeably unless Apple changes their act. Aside from market share, another important factor is the homogeneity of the platform, making OS X an ideal target once the market share is high enough.

A lot of people are saying they will believe the time for OS X has come when they see an equivalent to a Code Red type of worm, except that this is never going to happen. Worms have shifted from being motivated by fame to having a financial motivation, with the most recent OS X malware being linked to crime syndicates. With the security protections available in most OSes these days (aside from OS X) being more advanced, it takes more skill to write a virus to infect at the scale of something like Code Red, and the people who do have that skill are not motivated to draw attention to themselves. These days malware is purely about money, with botnets going out of their way to hide themselves from users. Botnets on OS X have been spotted since 2009, and OS X is going to be an increasing target for these types of attacks without ever making the headlines as Windows did in the 90’s.

Another contributing factor that should not be overlooked is the generally complacent attitude of OS X users towards securing their machines. Never faced with malware as a serious threat and shoveled propaganda convincing them that OS X is secure, most OS X users have no idea how to secure their own machines, with many unable to grasp the concept that they may be a target for attack. The MacDefender issue already showed how easy it is to infect a large number of OS X users. Windows users are at least aware of the risk and will know to take their computer in to get fixed or to run an appropriate program, whereas it seems OS X users simply deny the very possibility. As Apple’s market share increases, the ratio of secure users to vulnerable users continues to slide further apart. With more and more people buying Apple machines and not having any idea how to secure them, or that they even should, there are that many more easy targets. Given the insecurity of OS X and the naivety of the users, I do think it is only a matter of time before OS X malware becomes prevalent, although not necessarily in a way that will make the news. This means the problem is going to get worse as users keep getting infected and don’t realize it, while believing their machines are clean and impervious to risk.

People also have to get over the idea that root access is needed for malware to be effective. Root access is only needed if you want to modify the system in some way so as to avoid detection. Doing so is by no means necessary however, and a lot of malware is more than happy to operate as a standard user, never once raising an elevation prompt while silently infecting or copying files, sending out data, doing processing, or whatever malicious thing it may do.

Macs do get malware, even if it is a significantly smaller amount than what exists for Windows. Given the emergence of exploit creation kits for OS X, malware is inevitably going to increase for OS X. Even if it never gets as bad as it was for Windows in the 90’s, it is important not to underestimate the threat of a targeted attack. Rather than encouraging a false sense of security, Apple should be warning users that it is a potential risk and teaching users how to look for signs and deal with it. The malware entry in the Snow Leopard help is a small step in the right direction. There isn’t much Apple can do to prevent targeted attacks, except maybe fixing their OS and being proactive about security in the first place.

Much room for improvement

One thing OS X did get right was making it harder for keyloggers to work. As of 10.5 only the root user can intercept keyboards, so any app making use of EnableSecureEventInput should theoretically be immune to keylogging. Of course, if remote code execution is possible then that is a very minor concern. This requires the developer to specifically make use of that function, which is automatic for Cocoa apps using a secure text field (NSSecureTextField). Of course this does not completely prevent keyloggers from working, as applications not making use of that functionality will be vulnerable to keylogging, such as was the case with Firefox and anything not using a secure text field. Of course, given the propensity of privilege escalation attacks on OS X, it would not be hard to install a keylogger as root. However, this is a great innovation and something that I would like to see implemented in other operating systems.
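
As a rough illustration of the API (my own sketch, not code from this post), a non-Cocoa program can opt in to the same protection by bracketing its password prompt with the EnableSecureEventInput call mentioned above and its companion Carbon calls; NSSecureTextField does the equivalent automatically. The file name below is made up, and a real prompt would also disable terminal echo; this only illustrates the secure input calls.

    /* Build (hypothetical file name):  cc secureinput.c -framework Carbon */
    #include <Carbon/Carbon.h>
    #include <stdio.h>

    int main(void) {
        char password[128];

        EnableSecureEventInput();           /* keystrokes shielded from other processes */

        printf("Password: ");
        fflush(stdout);
        if (fgets(password, sizeof(password), stdin) == NULL)
            password[0] = '\0';

        Boolean active = IsSecureEventInputEnabled();

        DisableSecureEventInput();          /* always balance the enable call */

        printf("secure input was active during the read: %s\n",
               active ? "yes" : "no");
        return 0;
    }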

Apple asked security experts to review Lion, which is a good sign, as long as they actually take advice and implement protections from the ground up. Security is a process which needs to be implemented from the lowest level, not just slapped on as an afterthought as Apple have tended to do in the past. I think the app store in Lion will be interesting. If Apple can manage to control the distribution channels for software, then they will greatly reduce the risk of malware spreading. At the moment most software is not obtained via the app store and I don’t ever expect it to be; still, the idea of desktop users being in a walled garden would be one solution to the malware problem.

Lion is set to have a full ASLR implementation (finally), including all 32-bit applications and the heap. With more extensive use of sandboxing as well, it looks like Apple is starting to actually lock down their OS, which means they understand the threat is growing. It will be interesting to see if Apple follows through on the claims made for Lion, or if they fall short much like what happened with Snow Leopard. Personally I think Lion is going to fall short while the malware problem for OS X gets serious, and it won’t be until 10.8 that Apple takes security seriously.

Update 1 – June 28th 2011

Updated minor grammatical mistakes.

It is amazing the knee-jerk response I have seen to this article, where people start saying how there are no viruses for OS X, which is something I acknowledge above. I guess people don’t care if they are vulnerable as long as there are no viruses? Then people start attacking the claim that OS X has no ACL, which is a claim I never made. I guess the truth hurts, and attacking men made of straw helps to ease the pain.

References

  1. http://secunia.com/advisories/product/96/?task=statistics – A list of OS X vulnerabilities.
  2. http://www.telegraph.co.uk/technology/apple/8550005/Eric-Schmidt-get-a-Mac-if-you-want-to-be-secure.html – Eric Schmidt on OS X.
  3. http://www.sophos.com/en-us/Search-Results.aspx?search=OSX&refine=1a1e9ea6979a493dba64e1b2ced03044 – A list of OS X viruses from Sophos.
  4. Baccas P, Finisterre K, H. L, Harley D, Porteus G, Hurley C, Long J, 2008. OS X Exploits and Defense, p. 269-271.
  5. http://securityevaluators.com/files/papers/SnowLeopard.pdf – Charlie Miller’s talk on Snow Leopard security.
  6. http://www.computerworld.com/s/article/9217163/Mac_OS_update_detects_deletes_MacDefender_scareware_ – Apple releases an update to deal with MacDefender.
  7. http://news.yahoo.com/s/livescience/20110601/sc_livescience/newmacdefenderdefeatsapplesecurityupdate – A variant of MacDefender appeared hours after Apple’s update was released.
  8. http://news.cnet.com/8301-10784_3-9759132-7.html – Charlie Miller talking about setuid programs in OS X.
  9. http://www.zdnet.com/blog/security/mac-os-x-vulnerable-to-6-month-old-java-flaw/3433 – Apple taking 6 months to patch a serious Java vulnerability.
  10. http://www.dribin.org/dave/blog/archives/2006/04/28/os_x_passwords_2/ – Apple using password hashing from 1979 in lieu of a shadow file.
  11. http://www.youtube.com/watch?v=CHFy6egYcUg – Misleading commercial 1.
  12. http://www.youtube.com/watch?v=iPc0NCIZz8s – Misleading commercial 2.
  13. http://www.youtube.com/watch?v=cLVS3QVxhDg – Misleading commercial 3.
  14. http://www.zdnet.com/blog/bott/an-applecare-support-rep-talks-mac-malware-is-getting-worse/3342 – Apple representatives told not to acknowledge or help with OS X malware 1.
  15. http://www.msnbc.msn.com/id/43101276/ns/technology_and_science-security/ – Apple representatives told not to acknowledge or help with OS X malware 2.
  16. http://www.securitymetrics.org/content/attach/Metricon2.0/j3attAO.pdf – Adam O’Donnell’s article, When Malware Attacks (Anything but Windows).
  17. http://royal.pingdom.com/2011/03/16/the-10-most-mac-friendly-countries-on-the-planet/ – OS X market share by region.
  18. http://www.pcworld.com/article/228961/beware_of_malware_apple_users_even_as_mac_defender_details_emerge.html – MacDefender linked to crime syndicates.
  19. http://www.zdnet.com/blog/bott/crying-wolf-apple-support-forums-confirm-malware-explosion/3351 – Many users hit by MacDefender.
  20. https://threatpost.com/en_us/blogs/crimeware-kit-emerges-mac-os-x-050211 – The first exploit creation kits for OS X have started appearing.
  21. http://www.networkworld.com/news/2009/041709-first-mac-os-x-botnet.html – First OS X botnet discovered.
  22. http://www.apple.com/macosx/whats-new/features.html#security – Apple’s Lion security features page.
  23. https://bugzilla.mozilla.org/show_bug.cgi?id=394107 – A Firefox bug report about a vulnerability to keylogging.
  24. http://www.computerworld.com/s/article/9211599/Apple_invites_bug_researchers_to_scrutinize_Lion_OS?taxonomyId=85 – Apple letting security researchers review Lion.

Update 2 – August 17 2011

A delayed update, but it is worth pointing out that this article is basically out of date. Apple has indeed fixed most of the security problems with their release of Lion. At least this article is an interesting look back, and shows why Mac users should upgrade to Lion and not trust anything before it. Despite Lion being technically secure, it is interesting to note that Apple’s security philosophy is still lackluster. Here is an interesting article on the lessons Apple could learn from Microsoft, and an article showing just how insecure Apple’s DHX protocol is, and why the fact that it is deprecated doesn’t matter.

June 1, 2011

Why I don’t like StackExchange sites

Filed under: Tech — allthatiswrong @ 6:50 am

The StackExchange series of sites seem like a great idea. I first discovered the Stack Overflow beta, which was great…a community of peers and students learning from and helping each other. The design of the page was excellent, very simple and easy to use, with a simple voting and reputation system in place. Due to the popularity of the format, other sites with the same design sprang up, including Super User for user problems and Server Fault for networking stuff, with a whole host of additional sites in beta.

For me these sites have largely replaced forums when I need a quick answer I can’t find elsewhere; however, there are quite a few problems with the format that prevent me from contributing in any serious manner. I should stress that the problems I have are not related to the design or implementation of the technology, but rather are problems intrinsic to any community-moderated site.

The most annoying problem is that if you ask a specific question, people will make assumptions and try to answer with what they think is best for you, ignoring the actual question asked. This can be frustrating, and people should not need to explain their entire situation just to get a technical answer. Often the excuse for this is that it is a community-orientated site, so they don’t want to give an answer that could mislead or harm people, despite questions often being extremely specific. The community rationale is also used as an excuse for editing questions away. If you ask a specific question it may be edited to “better serve the community”, which is just annoying if you need a specific answer to a specific question. The only recourse you then have is to ask your question again, or to delete your original post.

The other problem is that all too often emotion and/or politics come into play, affecting the answers selected as correct. Sometimes it doesn’t matter if an answer is technically correct so long as it is popular. Windows is technically and factually more secure than OS X at present, yet any answer saying that in response to a question regarding OS security would get voted down, while a template response about how Windows is horribly insecure would likely get voted up. It’s frustrating to deal with a might-is-right community, but there also isn’t much that can be done about it while maintaining the freedom the community enjoys.

Lastly, some of the moderators/long time users are far too eager to mark questions as duplicates. Sathya on Super User is especially guilty of this. Sometimes questions may have similar or even identical titles, but often with computing questions the devil is in the details. A person asking the question may want a different solution, may have different needs, may have different circumstances causing the problem, whatever. Sathya simply marks anything similar as a dupe and the sheep follow. Since it only takes 4 votes to close a question it can happen a lot. It can be even more frustrating when you may want an answer in a programming context and your question gets migrated to Super User because people didn’t read it properly. Gah.

I love the technology and continue to use it, but it just isn’t worth committing a lot of time to with this kind of idiocy going on, unless you are fine with the idiocy and partake in it. It’s a shame as the technology is excellent, but people have a long way to go before we use it to its full potential.

April 18, 2011

Thoughts on Slackware

Filed under: Tech — allthatiswrong @ 11:40 am

Slackware is an interesting distribution known for being stable and having a minimal approach, two of the reasons why for a long time it was my favorite distribution. It is also the oldest still surviving distribution, which should speak for itself. When I started learning about Linux and getting into the low level details of computing, it was through Slackware that I learned a lot of what I know today, or at least the foundation. I had played with Red Hat, Debian and Caldera to varying extents, but they all needlessly obfuscated simple details. It was with Slackware that I had direct access to my system and the native utilities to interact with it. It was with Slackware that I was encouraged to read the documentation and learn how things work rather than use a poorly written GUI tool that tried and failed to make configuration seamless.

One thing I always liked about Slackware was that I always had the feeling of total control over my system. I knew where all my system scripts were that did everything, I knew where every file on my system was, and if I didn’t I could find it. I knew how everything worked and if something was out of place, I would know it. This was less true of other distributions, where there may be varying levels of obfuscation or several different locations where things may be placed. Slackware has somewhat of a reputation for being difficult to learn, but nothing could be farther from the truth. Everything is well documented and it is easy to do anything. Editing a configuration file is not any harder than having a GUI tool do it for you, and those who think it is should probably get a Mac.

After traveling around and not having a PC for a few years I hadn’t used Slackware much. It isn’t used in production very often so I didn’t encounter it at work, and I wasn’t using it personally, so it wasn’t until I got a PC again in 2009 that I came back to it. I knew I wanted to get up to speed on the Linux developments I had missed, and Slack seemed the obvious choice, especially given the alternatives. The core distributions had not changed much in philosophy, while newer distributions such as Gentoo or Ubuntu offered no real advantages and many disadvantages. After installing and setting up 13.0 (which I will cover in a later post; the 64-bit version is also very nice!) I was glad to see that things seemed mostly familiar. Yet I also noticed a lot of changes, most of which were not positive. I noticed these changes mainly in the community, but they were apparent in the design of the distribution as well.

The current community seems to be full of rabid Microsoft-hating zealots who are unwilling to reason and happy to jump on any bandwagon criticizing Microsoft or Windows. This was not the community I remembered, the one I had learned so much from. The community I remembered tended to be more mature than that, realizing that Windows and the various Linux distributions each have strengths and weaknesses and that one is not automatically and always superior to the other. It isn’t surprising that the community is like this given Slackware’s relatively recent focus on being a desktop distribution; however, I wonder which came first? It seems like a chicken-and-egg problem… did Slackware change first and attract these new users, or did they come to Slackware and cause it to adapt? Either way, it’s slightly disappointing.

Another major change I saw in the community was in the attitude towards learning and problem solving. Back in the day, if I wanted to know how to do something there would always be people willing to explain things to me, or I would be pointed towards the relevant documentation. Nowadays it seems more common that people will question why you want to do something in the first place, assuming they know better or disapproving if it doesn’t meet their highly biased standards. An example of this: I had an issue with my sound card module occasionally causing problems. In the days of old I would have been offered help in troubleshooting the issue and perhaps learned something from it. These days the most anyone would offer is a suggestion to reboot.

Another example of something I felt had changed was the recommendation, and near requirement, to do a full install. This is completely contrary to how I use Slackware, as I like to have quite a minimal install with just the programs I need and use, and Fluxbox for a minimal GUI. However, the current version of Slackware assumes that everyone is going to do a full install, which leads to stupid dependencies such as MPlayer requiring Samba just in case that niche functionality might be needed. Aside from being a waste of space and increasing the attack surface, it encourages bad habits. Slackware was famous for adhering to the KISS principle, but when a full install is the simplest way to satisfy dependencies, that no longer seems to hold true. I don’t know that Bob would be happy with how things have changed. It might be OK if this were simply the recommendation from the Slackware team, except the community has blindly followed suit. When asking which package might contain a certain library, you inevitably get bombarded with questions asking for justification as to why you didn’t do a full install.

I can’t help but feel that these changes are due to a desire to keep Slackware in the race for desktop Linux, and that as a consequence it has forgotten its roots. The community and distribution now seem closer to Ubuntu or Mandrake in terms of target audience and community, which is a shame. It seems that many Slackware users of the past have migrated to Gentoo or Arch or various other distributions. I would not be surprised, as the Arch community and wiki are exactly the sort of community and advice I used to associate with Slackware. If it wasn’t for the cutting-edge/rolling-release aspect I would probably adopt it as my primary distribution.

One of the things I always liked about Slackware was that it took a minimalistic approach. There was only ever what was necessary, or what would actually make things easier; nothing unnecessary or in the way. This no doubt helped Slackware maintain its reputation for stability as well. It was easy to install specific programs without having to install all the related programs you didn’t actually need. The packages tended to be quite vanilla, which was useful when you were compiling your own software. It was easy to add third-party packages without interfering with the rest of the system. There were no kernel updates, as Slackware just used the vanilla kernel, which most people knew how to build and update themselves (roughly as sketched below).
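For anyone who hasn’t done it, building and installing a vanilla kernel on a LILO-based Slackware system goes roughly like this. This is only a sketch, and the kernel path and label are placeholders:

    # build a vanilla kernel the traditional way
    cd /usr/src/linux
    make menuconfig            # configure
    make bzImage modules       # build the kernel image and modules
    make modules_install       # install modules under /lib/modules
    cp arch/x86/boot/bzImage /boot/vmlinuz-custom
    # add an image entry for /boot/vmlinuz-custom to /etc/lilo.conf, then:
    lilo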

One aspect of Slackware that had always received constant criticism was its package management. When compared to offerings such as RPM and apt-get it may seem lackluster, yet at a time when those systems suffered from a dependency hell worse than any Windows version had been afflicted with, Slackware was a breath of fresh air. Originally a Slackware package was simply a collection of compiled binaries and config files inside a gzipped tarball, with an additional text file containing a package description and some basic instructions. The installpkg command would extract the files into the right places and add an entry in /var/log/packages, where you could easily see which packages you had installed and the files belonging to each package.
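Just to illustrate the workflow (the package name and version here are made up):

    # install a traditional Slackware package (just a gzipped tarball)
    installpkg foo-1.0-i486-1.tgz
    # see everything that is installed
    ls /var/log/packages
    # each entry is a plain text file listing the package contents
    cat /var/log/packages/foo-1.0-i486-1
    # removal is just as simple
    removepkg foo-1.0-i486-1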

Of course most software was not offered in Slackware package format, although this was not a concern. Using the program checkinstall you would just grab the source tarball and run checkinstall instead of make install. This would install the program while creating the appropriate entries in /var/log/packages, allowing for an easy uninstall if desired. This is probably the area in which Slackware had matured the most during the few years I had not kept up with it. checkinstall was no longer being developed; however, the much more versatile src2pkg had appeared to take its place. Slackpkg had been introduced and is now an official tool in the distribution. It is an awesome tool that allows for installing packages and checking for updates all from one interface. It also makes it easy to create templates, which allow you to set up other machines the same way with very little effort. One of the nicest features is that you can search all packages for a particular file, which makes satisfying dependencies trivial. I had a script to do this, but having such functionality available natively is much better.
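To give a rough idea of the two approaches (the package and library names are only examples, and slackpkg needs a mirror enabled in /etc/slackpkg/mirrors before first use):

    # the checkinstall approach: build from source, but register the result
    ./configure && make
    checkinstall                    # run this instead of make install

    # slackpkg, the official tool
    slackpkg update                 # refresh the package lists
    slackpkg install pidgin         # install an official package
    slackpkg upgrade-all            # check for and apply updates
    slackpkg file-search libogg.so  # find which official package provides a file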

Probably the biggest change was the appearance of SlackBuilds. SlackBuilds are simple build scripts for pretty much any software not officially included with Slackware. You simply grab the source and the associated SlackBuild, run the SlackBuild script, and then install the resulting package. They are easy to configure and understand, and available for most software you will encounter. They don’t always tend to be kept up to date, but since the compilation process rarely changes much, that isn’t an issue most of the time. For the few packages that don’t have SlackBuilds, src2pkg works like a charm.
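The typical workflow looks something like this (the package name is made up, and the exact output filename depends on the script):

    # download foo.tar.gz (the SlackBuild) from SlackBuilds.org along with
    # the source tarball it expects, then:
    tar xvf foo.tar.gz
    cd foo
    # drop the source tarball into this directory and run the script as root
    ./foo.SlackBuild
    # the finished package usually ends up in /tmp
    installpkg /tmp/foo-1.0-i486-1_SBo.tgz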

The last thing I would note about Slackware is that it has suffered from some odd decisions preventing it from moving forward. Things like a non-graphical boot, or sticking with stable versions of programs rather than newer cutting-edge versions, are all fine. Other decisions seem odd, such as still sticking with LILO over GRUB. It’s true that LILO works fine; however, GRUB has a number of distinct advantages over LILO, so it is worth asking why not switch to GRUB? Then there is PAM. Slackware is the only modern distribution without support for PAM. Back in the day Linux-PAM was a mess and the justification somewhat made sense; however, it no longer does. The option to use PAM should at least be included, as without it it is hard to use many programs, or even things like SELinux, which is now native kernel functionality.
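A small example of the kind of thing I mean: LILO writes a static boot map, so every time the config or the kernel changes you have to remember to rerun lilo, whereas GRUB reads its configuration at boot time. Something like the following (a sketch only; the kernel path and label are placeholders):

    # /etc/lilo.conf (fragment)
    image = /boot/vmlinuz-custom
      root = /dev/sda1
      label = custom
      read-only
    # forget this step and the new entry simply isn't there at the boot prompt
    lilo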

Despite everything I still have a lot of love for Slackware and it is still my distribution of choice. I like the hands-on, simple approach and the easy configuration. I like the simple yet powerful package management, which, despite the FUD being spread, has no disadvantages compared to more complex package management systems. I like that most of the packages are compiled from vanilla source without various buggy patches incorporated. Slackware remains a simple and reliable distribution that is easy to use, learn and maintain. For my needs, which consist of a stable and minimal install without having to update constantly, Slackware is the only option. It is getting harder to keep a minimal install; however, after doing the initial work the configuration can be kept with tagfiles. As packages continue to have unnecessary dependencies it is becoming easier, and sometimes necessary, to compile my own versions. This is only true for very few packages at the moment and hopefully that won’t change.
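For anyone who hasn’t used them, a tagfile is just a plain text list telling the installer what to do with each package in a series, which setup can read instead of asking interactively; ADD installs a package and SKP skips it. A fragment might look like this (the package names are only illustrative):

    mc: ADD
    vim: ADD
    joe: SKP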

Slackware has changed from a distribution that was plain and simple to understand, and easy to configure and slim down for your needs, to a distribution that now recommends a full install and tries to accommodate the typical needs of a desktop user. It is now more difficult to remove unwanted functionality or to integrate third-party programs into the distribution. I find Slackware less convenient for my needs, and find myself less willing to interact with the new community; however, this doesn’t mean it is worse than it was before. It just means that it is catering to the needs of its users as they have changed and adapted. Still, it is possible to learn a lot with Slackware, and I hope much of the community can one day move past their rabid anti-MS hate and mature, learning something in the process.

I don’t know if I would still recommend Slackware to new users wanting to learn the ins and outs of Linux, as the community and distribution no longer seem suited to that, whereas something like Arch does. I still use it daily and suggest it to people; I was just disappointed to see it change from how I remembered it, in a direction I felt was less true to its user base and principles. I want to give a lot of thanks to Patrick Volkerding, who has put so much work into Slackware, which allowed and encouraged me to learn so much so many years ago. I also want to stress that my opinions should not be taken as a slam on Slackware; I will continue to use and support it, or at the least remember it fondly.
