Tuesday, March 5, 2013

Programming Rant... Sorry.

For those of you who are not technical, or who don't care about programming, feel free to skip this post.  I just have to get this out of my system.

I'm currently slogging through a book on JavaScript.  I have a project I want to work on that needs to run in a browser and perhaps be turned into a standalone smartphone application, so given what I've read, JavaScript seems to be the way to go.  I could be wrong, but it's where I am starting.

The book in question is an O'Reilly JavaScript tome, and as an introduction to the language it isn't too bad, at least if you have some programming background.  But the language itself is reinforcing my belief that OO programming is a disaster.

I get the basic idea behind objects and methods.  I am certain that some percentage of programming problems benefit from a system in which objects are available, but I suspect the number of such problems is pretty small overall.  Pick your percentage... I really don't care.  What matters to me is the complexity increase and efficiency decrease that come with OO.  Most programmers have no clue just how their code actually works at the lowest levels anymore, and most schools certainly aren't teaching it.  OO techniques just magnify those problems in enormous ways.

By way of example, here's a bit of code from the book I am reading, reformatted a bit to look OK in this post. It's only an example, and the author does mention that it will be slower than other approaches, but, well... just take a peek:

function Range(from, to) {
    // Don't store the endpoints as
    // properties of this object.
    // Instead define accessor functions
    // that return the endpoint values.
    // These values are stored in the
    // closure.
    this.from = function() { return from; };
    this.to = function() { return to; };
}

// The methods on the prototype can't
// see the endpoints directly: they have
// to invoke the accessor methods just
// like everyone else.
Range.prototype = {
    constructor: Range,
    includes: function(x) {
        return this.from() <= x &&
               x <= this.to();
    },
    foreach: function(f) {
        for(var x = Math.ceil(this.from()),
            max = this.to(); x <= max;
            x++)
            f(x);
    },
    toString: function() {
        return "(" + this.from() +
               "..." + this.to() + ")";
    }
};

and with that code defined, he shows how it can be used:

// An "immutable" range
var r = new Range(1,5);
// Mutate by replacing the method
r.from = function() { return 0; };
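To make the punch line concrete, here is a small sketch of my own (not from the book) showing what that replacement actually does to the "immutable" range:

```javascript
// A trimmed-down Range, using the same closure technique as above.
function Range(from, to) {
    this.from = function() { return from; };
    this.to = function() { return to; };
}
Range.prototype.includes = function(x) {
    return this.from() <= x && x <= this.to();
};

var r = new Range(1, 5);
console.log(r.includes(0));  // false: 0 is below the range

// Replacing the accessor silently turns the range into (0...5).
r.from = function() { return 0; };
console.log(r.includes(0));  // true
```

So much for immutability: nothing stops a caller from swapping out an accessor, which is exactly what the book's usage example demonstrates.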

The first thing - for the uninitiated - is that this code is implementing an object called a range, which is nominally just two integers. The range (1...5) means the numbers 1, 2, 3, 4, and 5 are in the range, and all other integers are not. Simple enough. And obviously ranges have two endpoints, right? So where, exactly, are those endpoints stored in that code?

The author has an earlier version of this code that uses two variables to store the start and end of the range, but in this version they are not obviously present.  I read this code several times, trying to figure it out, before the very last line in the example - the one starting "r.from =" finally tipped me off.
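For comparison, the earlier version he mentions - the one that stores the endpoints as plain properties - looks roughly like this (my reconstruction, so treat the details as approximate):

```javascript
function Range(from, to) {
    // The endpoints are ordinary properties:
    // easy to see, easy to inspect in a debugger.
    this.from = from;
    this.to = to;
}
Range.prototype.includes = function(x) {
    return this.from <= x && x <= this.to;
};

var r = new Range(1, 5);
console.log(r.from);         // 1: the endpoint is right there
console.log(r.includes(3));  // true
```

Nothing is hidden in a closure here; anyone reading the constructor can answer "where are the endpoints stored?" in about two seconds.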

I'm an experienced programmer, and a reasonably good one.  Not the best, but above average in my professional experience.  I've worked with some really brilliant people over the years, and know where at least a few of my limitations are.  Given what I know, this sample code can only be described as ugly and unmaintainable.

The use of a closure is enough to drive some programmers to drink.  (I know plenty who never understood recursion.  Closures are much, much worse.)  Code of this kind is intrinsically difficult to read, difficult to follow, difficult to edit, and so on.  And for those brilliant programmers out there who think this is easy to read and maintain, I cannot stress strongly enough how wrong you are.  You're only thinking of it from your point of view, not that of the poor sod who is going to add something new to this code two years after you've changed jobs.
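For readers who have never met one, here is about the smallest honest closure example I can write (my own illustration, nothing to do with the book): an inner function that keeps access to a variable after the outer function has returned:

```javascript
function makeCounter() {
    var count = 0;  // lives on in the closure after makeCounter returns
    return function() {
        count += 1;
        return count;
    };
}

var next = makeCounter();
console.log(next());  // 1
console.log(next());  // 2

// "count" is not stored on any object you can inspect; it exists
// only inside the closure, just like the Range endpoints.
```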

Once, years ago, I saw code like this in some C code my employer was maintaining:

int f( char *a, char *b )
{
    char *temp;

    /* ... lots of code that doesn't refer */
    /* to the variable "temp" in any way ... */

    strcpy( a, temp );

    /* ... code that doesn't matter ... */
}
I was doing some porting work and found that cruft.  Digging into the change history of the file in question showed me that a support engineer had "fixed" a bug by inserting the temp variable and making use of it in that way.  The fact that he hadn't allocated space to copy into and was instead writing over who-knows-what on the stack didn't even occur to him.  He'd tested his code and it worked just fine, so what did it matter?  And yes, I tracked him down and talked to him personally.  He simply didn't get it.


The world is full of cases - and people - like that.  As a result, the best code for the real world is, sadly, the most readable and maintainable code possible, not the fastest, not the most clever, not the shortest.  Fancy programming techniques - like the vast majority of OO - simply make things slower, harder to understand, and vastly increase the "go wrong" space in which programs can fail.

What I am learning about JavaScript - and about OO in general - is that my gut feel was right.  These languages are disasters.  Inexperienced programmers are creating things that should never see the light of day using idiomatic programming techniques they should never even try to use.

Sure, if you're writing some one-off bit of code that will never be reused, or will only be maintained by you, fine, write it however you want.  I don't care.  But if you're working on something that will outlast your time with it, or (more likely) your time with the employer who owns it, you have an obligation to write it in such a way that the next guy that looks at it can quickly and easily figure out what you were doing, why you were doing it, and make changes as needed without breaking the universe.

OO was supposed to help that, and within limits it may.  But if JavaScript is any example (or C++, for that matter), the languages themselves have an amazing ability to make the code harder to read and maintain.

If we were all brilliant programmers, that wouldn't matter, but we're not, and it does.


  1. I like OO a lot--I've done C++ and Java. I think objects and interfaces are extremely handy (my favorite use at the moment is dependency injection for being able to test glue code w/o having to instantiate the things you are glueing together).

    However, I am a serial programmer at heart. (Go Fortran 77!). I like objects until they become too deep--it is impossible to figure out what bit of what object does what. And if you don't need an object, just write some serial code. It's so much easier to read!

    My impression is that JS isn't really an OO language, but you can do OO stuff with it, and conventions vary a lot. I've been using it a little lately and find it mostly yucky ... but then like Perl, it's handy when you need the flexibility.

    My pet peeve is helper functions. If it's just three lines of code and you don't reuse it somewhere, leave it there! I know how to read a for loop. When looking at the code, I don't want to have to find some function LoopOverThings() to see what it does.

  2. Why Wendy... I had no idea you were lurking about out there. :)

    I did a bit of C++ some years back, and Perl has some vaguely OOish stuff in it as well. Based on those experiences, I would put JS as being a lot closer to "real" OO than Perl is. YMMV, of course, but that's what it looks like to me, given the apparent continuum.

    I suspect that we agree about the usefulness of objects. Some problems actually work well with that as the model, just like some problems can readily be solved in Lisp. But not all problems are amenable to an OO solution, just like (too (many (stinking parentheses) can (really really) screw up) some people's ability to write code). And no, I have no clue if those are balanced or not. :)

    When I wrote C for a living, I wrote code that was easy to read, extensible, and intended to be maintained. When I moved to working mostly in Perl, I did the best I could to keep those habits in place, and I have had people tell me later that I managed it in at least some cases.

    In both languages, though, the trick to achieving that is to avoid the ugly, idiomatic expressions and write simple stuff. If you do that, the fact that a language is OO hardly matters... it's just a tool in your kit. The minute someone starts using all the fancy features - closures, nesting, etc. - then all bets are off, and the code is probably only easily maintainable by the person that wrote it originally, or someone who thinks exactly like the original author did. *sigh*
