I'm currently slogging through a book on JavaScript. I have a project I want to work on that needs to run in a browser and perhaps be turned into a standalone smartphone application, so JavaScript seems to be the way to go given what I've read. I could be wrong, but it's where I am starting.
The book in question is an O'Reilly JavaScript tome, and as an introduction to the language it isn't too bad, at least if you have some programming background. But the language itself is reinforcing my belief that OO programming is a disaster.
I get the basic idea behind objects and methods. I am certain that some percentage of programming problems benefit from a system in which objects are available, but I suspect the number of such problems is pretty small overall. Pick your percentage... I really don't care. What matters to me is the complexity increase and efficiency decrease that come with OO. Most programmers have no clue just how their code actually works at the lowest levels anymore, and most schools certainly aren't teaching it. OO techniques just magnify those problems in enormous ways.
By way of example, here's a bit of code from the book I am reading, reformatted a bit to look OK in this post. It's only an example, and the author does mention that it will be slower than other approaches, but, well... just take a peek:
function Range(from, to) {
    // Don't store the endpoints as properties of this object.
    // Instead define accessor functions that return the endpoint
    // values. These values are stored in the closure.
    this.from = function() { return from; };
    this.to = function() { return to; };
}
// The methods on the prototype can't see the endpoints directly:
// they have to invoke the accessor methods just like everyone else.
Range.prototype = {
    constructor: Range,
    includes: function(x) {
        return this.from() <= x && x <= this.to();
    },
    foreach: function(f) {
        for (var x = Math.ceil(this.from()), max = this.to(); x <= max; x++) {
            f(x);
        }
    },
    toString: function() {
        return "(" + this.from() + "..." + this.to() + ")";
    }
};
With that code defined, he shows how it can be used:
// An "immutable" range
var r = new Range(1,5);
// Mutate by replacing the method
r.from = function() { return 0; };
The first thing - for the uninitiated - is that this code implements an object called a range, which is nominally just two integers. The range (1...5) means the numbers 1, 2, 3, 4, and 5 are in the range, and all other integers are not. Simple enough. And obviously a range has two endpoints, right? So where, exactly, are those endpoints stored in that code?
The author has an earlier version of this code that uses two variables to store the start and end of the range, but in this version they are not obviously present. I read this code several times, trying to figure it out, before the very last line in the example - the one starting "r.from =" - finally tipped me off.
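For contrast, that earlier version stores the endpoints right where you'd expect. This is only a rough sketch of that style from memory, not the book's exact code:

function Range(from, to) {
    // Endpoints stored as ordinary, visible properties.
    this.from = from;
    this.to = to;
}
Range.prototype.includes = function(x) {
    return this.from <= x && x <= this.to;
};

In the closure-based version above, there is nothing like this to point at: the numbers exist only as arguments captured by the two accessor functions.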
I'm an experienced programmer, and a reasonably good one. Not the best, but above average in my professional experience. I've worked with some really brilliant people over the years, and know where at least a few of my limitations are. Given what I know, this sample code can only be described as ugly and unmaintainable.
The use of a closure is enough to drive some programmers to drink. (I know plenty who never understood recursion. Closures are much, much worse.) Code of this kind is intrinsically difficult to read, difficult to follow, difficult to edit, and so on. And for those brilliant programmers out there who think this is easy to read and maintain, I cannot stress strongly enough how wrong you are. You're only thinking of it from your own point of view, not that of the poor sod who is going to add something new to this code two years after you've changed jobs.
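To make that concrete, here's a quick demonstration of my own (not from the book) of why this design trips people up:

var r = new Range(1, 5);
r.includes(0);                       // false: 0 is outside (1...5)
r.from = function() { return 0; };   // replace the accessor, as in the book's example
r.includes(0);                       // true: the "immutable" range now starts at 0
// The original endpoint value 1 still exists, but only inside the
// closure created by the constructor; nothing outside can see it.

The only way to find the endpoints is to call the accessors, and the only way to "change" them is to replace the accessors wholesale.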
Once, years ago, I ran across something like this in C code my employer was maintaining:
int f( char *a, char *b )
{
    char *temp;
    /* ... lots of code that doesn't refer */
    /* to the variable "temp" in any way ... */
    strcpy( temp, a );
    /* ... code that doesn't matter ... */
}
I was doing some porting work and found that cruft. Digging into the change history of the file in question showed me that a support engineer had "fixed" a bug by inserting the temp variable and making use of it in that way. The fact that he hadn't allocated space to copy into and was instead writing over who-knows-what on the stack didn't even occur to him. He'd tested his code and it worked just fine, so what did it matter? And yes, I tracked him down and talked to him personally. He simply didn't get it.
Really.
The world is full of cases - and people - like that. As a result, the best code for the real world is, sadly, the most readable and maintainable code possible, not the fastest, not the most clever, not the shortest. Fancy programming techniques - like the vast majority of OO - simply make things slower and harder to understand, and they vastly increase the "go wrong" space in which programs can fail.
What I am learning about JavaScript - and about OO in general - is that my gut feeling was right. These languages are disasters. Inexperienced programmers are using idiomatic techniques they should never even attempt, and with them creating things that should never see the light of day.
Sure, if you're writing some one-off bit of code that will never be reused, or will only ever be maintained by you, fine, write it however you want. I don't care. But if you're working on something that will outlast your time with it, or (more likely) your time with the employer who owns it, you have an obligation to write it in such a way that the next guy who looks at it can quickly and easily figure out what you were doing and why, and can make changes as needed without breaking the universe.
OO was supposed to help that, and within limits it may. But if JavaScript is any example (or C++, for that matter), the languages themselves have an amazing ability to make the code harder to read and maintain.
If we were all brilliant programmers, that wouldn't matter, but we're not, and it does.