Javascript OOP

I thought it would be nice to have yet another person trying to explain how object-oriented programming works in JS, why it looks so different and what the pitfalls and benefits are.

Let’s start off with the basics. A class in most other programming languages usually looks something like this:

class Foo{
 constructor Foo(){}
 private attribute Type a
 public attribute Type b
 private method ReturnType c(){}
 public method ReturnType d(){}
}

Coming from a conventional object-oriented language and having been taught the usual explanation, you expect this to provide a kind of contract that the compiler will enforce: if a required type is specified somewhere, it will only accept instances of classes descended from that type; you expect it to pick the right method based on the signature of the arguments, and so on and so forth. On the other hand, you don’t really expect the class itself to be accessible to your code. The instances and methods, sure, but changing the class at run-time? Impossible.

Well, as usual, Javascript is very different. It’s a lot more powerful, since any class can actually be modified on the fly… because there are no real classes. Instead, JS has a concept of constructors and inheritance. Nothing more. Everything else you may want, you can implement on top of that, but that’s all the language itself provides.
So let’s have a look at how constructors work. In other languages, constructors are optional. In JS, they are the only way to create an instance (aside from the deprecated __proto__ attribute or [sg]etPrototypeOf in current browsers, but I’d stay away from those, since they can have unintended consequences).
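Since I just mentioned [sg]etPrototypeOf, here’s a quick sketch of what those accessors do (Foo and Bar are made-up names):

```javascript
//Inspecting and re-wiring the prototype link
var Foo=function(){};
var instance=new Foo();

//Reading the link is harmless
console.log(Object.getPrototypeOf(instance)===Foo.prototype); //true

//Re-wiring it afterwards works, but engines tend to de-optimize
//such objects, hence the warning above
var Bar=function(){};
Object.setPrototypeOf(instance,Bar.prototype);
console.log(instance instanceof Bar); //true
console.log(instance instanceof Foo); //false
```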

A constructor is just a common function. ANY function can become a constructor; all you have to do is call it with “new”:

var x=new (function(){})();

will create an instance of our anonymous constructor.

All that “new” really changes is this: it creates a new object, runs the supplied function in the context of this object and, if the function returns nothing, automatically returns that object.
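To make that concrete, here is a hand-written model of those steps (fakeNew is a made-up helper, and the model is simplified: it ignores constructor arguments, and a real “new” also ignores primitive return values):

```javascript
var fakeNew=function(constructor){
 //1. create a new object linked to the constructor's prototype attribute
 var obj=Object.create(constructor.prototype);
 //2. run the function in the context of that object (the "." lookup sets "this")
 obj.__tempConstructor=constructor;
 var result=obj.__tempConstructor();
 delete obj.__tempConstructor;
 //3. if nothing was returned, automatically return the new object
 return result===undefined?obj:result;
};

var Foo=function(){this.a=1;};
console.log((new Foo()).a);  //1
console.log(fakeNew(Foo).a); //1, same behavior
```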

So why don’t we do

var x=(function(){ return {}; })();
//or better yet
var x={};
//instead of
var x=new (function(){})();

The basic answer is that the special object created when you call something with “new” has some very special traits. Specifically, its prototype is set to the prototype attribute of the function you used to create it.

So what the heck is a prototype? A prototype is Javascript’s model for inheritance. It’s something like a class: if you access a trait of an object, you can access anything that’s provided by the prototype. But it’s a lot more dynamic, because the prototype of an object is itself nothing but an object, which you can access and modify at any time.

The logic of Javascript is simple: if something tries to access some trait of an object, look whether the object itself has it; if not, check its constructor’s prototype and access that instead. But since the prototype itself is nothing but an object, it has a prototype of its own, so the chain can be extended endlessly.
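We can actually watch that chain with Object.getPrototypeOf; a small sketch (all names are made up):

```javascript
//Walking the chain by hand until it ends at null
var GrandParent=function(){};
GrandParent.prototype.a="from GrandParent";
var Parent=function(){};
Parent.prototype=new GrandParent();
var child=new Parent();

var current=child,length=0;
while(current!==null){
 length++;
 current=Object.getPrototypeOf(current);
}
//child -> Parent's prototype -> GrandParent.prototype -> Object.prototype
console.log(length);  //4
console.log(child.a); //"from GrandParent", found two links up the chain
```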

So how do we access the prototype object? Pretty simple: any function has a prototype attribute, which is a standard object.

So where any normal programming language would have us declare a method or attribute, in JS we just assign them. We can even change them AFTER we have created an instance.

//Define our constructor
var MyConstructor=function(){};
//Create an attribute on it
MyConstructor.prototype.magicNumber=666;
//Create an instance
var myObject=new MyConstructor();
//Output the magicNumber, which at this point is found on the prototype, namely 666
console.log(myObject.magicNumber);
//Change the magicNumber on the prototype
MyConstructor.prototype.magicNumber=42;
//Since the instance still has no own magicNumber attribute, inspecting it will give the changed magicNumber of the prototype: 42
console.log(myObject.magicNumber);

I know what you’re saying: so now we can change an attribute by accessing it in an even more annoying way. Great.
But there’s a method to this madness. And that’s the fact that only reading accesses the prototype chain. Writing on the other hand does not and always modifies the object you’re targeting.
So you can read from the prototype object, but write to the target object, giving you a template mechanism which is not unlike that of a class.

var MyConstructor=function(){};
MyConstructor.prototype.magicNumber=666;
var myObject1=new MyConstructor();
var myObject2=new MyConstructor();
myObject1.magicNumber=42;
//From prototype: 666
console.log(MyConstructor.prototype.magicNumber);
//From myObject1: 42
console.log(myObject1.magicNumber);
//Since myObject2 has no own magicNumber, from prototype: 666
console.log(myObject2.magicNumber);

See, we changed the magicNumber, but it didn’t affect the prototype or any other instance. We haven’t really defined a class and the mechanism is entirely different, but we now have an object that behaves as if we had classes (modern JS even has a class syntax… I’d stay away from it, because while it looks even more like a class in a statically typed programming language, it’s still just an alias for creating a prototype, leading to much unnecessary confusion).
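For the curious, a quick sketch of why I call the class syntax an alias (MyClass is a made-up name; this ignores minor differences, e.g. class constructors refuse to be called without “new”):

```javascript
class MyClass{
 sayHello(){return "Hello";}
}
console.log(typeof MyClass);                   //"function" - a class IS a constructor function
console.log(MyClass.prototype.sayHello());     //"Hello" - methods land on the prototype object
console.log(new MyClass() instanceof MyClass); //true, via the normal prototype chain
```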

You can check whether an attribute resides on the object itself or further up the prototype chain by using hasOwnProperty. toString, for example, is inherited, so hasOwnProperty returns false, even though we can see it.

var o={"a":"attributeA"};
console.log(o.hasOwnProperty("a"));        //true, "a" resides on the object itself
console.log(o.hasOwnProperty("toString")); //false, toString is inherited

So now we know that we can create an object that has a prototype link to another one, and that we can use this for attributes. It actually works the same way for functions:

var MyConstructor=function(){};
MyConstructor.prototype.sayHello=function(){console.log("Hello");};
var myObject=new MyConstructor();
myObject.sayHello(); //Hello, found on the prototype

So why do I say functions, not methods? Because methods are, again, a concept that doesn’t really exist in Javascript. What we’d usually refer to as a method is in JS just a function attribute of an object. We can copy them, we can overwrite them, we can even move them between objects. That’s because while an execution scope exists in JS, it’s not determined during the creation of the function, but once again at run time.
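A tiny sketch of that (donor and receiver are made-up names):

```javascript
//Function attributes can be copied between objects like any other value
var donor={name:"donor",greet:function(){return "Hi, "+this.name;}};
var receiver={name:"receiver"};

receiver.greet=donor.greet;    //just copying an attribute
console.log(donor.greet());    //Hi, donor
console.log(receiver.greet()); //Hi, receiver - same function, different scope
```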
We can access the current scope of a function using the “this” keyword, the same way we did in the constructor:

var MyConstructor=function(){this.magicNumber=42;};
MyConstructor.prototype.sayMagicNumber=function(){console.log(this.magicNumber);};
var myObject=new MyConstructor();
myObject.sayMagicNumber(); //42

Looks an awful lot like a normal method, doesn’t it? But we can see that it doesn’t work this way if we store a reference to that function and call that instead.

var MyConstructor=function(){this.magicNumber=42;};
MyConstructor.prototype.sayMagicNumber=function(){console.log(this.magicNumber);};
var myObject=new MyConstructor();
var myFunction=myObject.sayMagicNumber;
myFunction(); //undefined

Suddenly all it says is “undefined”, because without being called directly on an object, the function doesn’t know which object it belongs to. The magic happens in the actual “.” or [“”] lookup: here, “this” is set to the correct object. This is also why functions assigned to the prototype don’t get the prototype object as “this”, but the one on which they were called, namely the instance object.
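A minimal sketch of that last point:

```javascript
//A prototype function sees the instance as "this", not the prototype
var MyConstructor=function(){};
MyConstructor.prototype.self=function(){return this;};
var myObject=new MyConstructor();

console.log(myObject.self()===myObject);                //true
console.log(myObject.self()===MyConstructor.prototype); //false
```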

So let’s review for a moment:
1. There are no classes in Javascript
2. However, there is a prototype chain, which allows access to an object’s attributes to bubble up to another object
3. There are no methods
4. But functions are called in the scope of the object on which they are invoked.
5. Everything is dynamic

Understanding these fundamental principles is really all it takes to understand object-oriented programming in Javascript. So let’s look at how we can use them to emulate even more of the things we are used to from class-based languages. Inheritance, for example. We know that accessing something on an instance looks at the prototype if that attribute doesn’t exist on the instance itself. But since the prototype is nothing but an object itself, we can use this to link multiple prototypes to a single instance.

var MyOriginalConstructor=function(){};
MyOriginalConstructor.prototype.a="a from the original prototype";
var MyDerivedConstructor=function(){};
MyDerivedConstructor.prototype=new MyOriginalConstructor();
MyDerivedConstructor.prototype.b="b from the derived prototype";

var myInstance=new MyDerivedConstructor();
console.log(myInstance.a+" "+myInstance.b);

See what we did there? We made the prototype of MyDerivedConstructor an instance of MyOriginalConstructor. So now every time we access an attribute of the instance, it will first look at MyDerivedConstructor’s prototype, and if it can’t find anything there, it will look further and see if it can find anything on MyOriginalConstructor’s prototype. There is a slight pitfall here: creating an instance of MyOriginalConstructor may have unintended side-effects. So what we usually do is create a temporary constructor:

var MyOriginalConstructor=function(){
 console.log("We don't want to run this just for inheritance");
};
MyOriginalConstructor.prototype.a="a from the original prototype";

var MyDerivedConstructor=function(){};

var MyTemporaryConstructor=function(){};
MyTemporaryConstructor.prototype=MyOriginalConstructor.prototype;
MyDerivedConstructor.prototype=new MyTemporaryConstructor();
MyDerivedConstructor.prototype.b="b from the derived prototype";

var myInstance=new MyDerivedConstructor();
console.log(myInstance.a+" "+myInstance.b);

However, we may want to run the original constructor when the derived constructor is called. We can actually already do this, and even make sure it runs in the correct scope, since we know that all that determines what “this” refers to is the object on which the function is called. So we can just make the original constructor an attribute of our derived prototype and it will work:

var MyOriginalConstructor=function(){
 this.message="set by MyOriginalConstructor";
};

var MyDerivedConstructor=function(){
 this.parentConstructor();
};

var MyTemporaryConstructor=function(){};
MyTemporaryConstructor.prototype=MyOriginalConstructor.prototype;
MyDerivedConstructor.prototype=new MyTemporaryConstructor();
MyDerivedConstructor.prototype.parentConstructor=MyOriginalConstructor;

var myInstance=new MyDerivedConstructor();
//set by MyOriginalConstructor
console.log(myInstance.message);

You see how just making it an attribute changed everything. There’s also another way that’s often more convenient: we can specify what to set “this” to by using the functions “call” and “apply” on the Function prototype. They are basically the same, except that “call” requires you to specify arguments one by one, while “apply” takes an array:

var myFunction=function(msg1,msg2){console.log(this+" "+msg1+" "+msg2);};
//Bar Hello Call
myFunction.call("Bar","Hello","Call");
//Bar Hello Call
myFunction.apply("Bar",["Hello","Call"]);

These are identical, as you can see. There’s a third function that’s relatively recent (I won’t explain how to emulate it for now, since that touches an entirely unrelated topic: closures): bind. Bind gives you a way to get a proxy function which will always run the original one in the given scope.

var myFunction=function(msg1,msg2){console.log(this+" "+msg1+" "+msg2);};
var myProxy=myFunction.bind("Bound");
//Bound Hello Bind
myProxy("Hello","Bind");
//Still Bound Hello Bind
myProxy.call("SomethingElse","Hello","Bind");

You see how, no matter what we do, “this” always points to the string “Bound”. This is something you will have to deal with due to the callback-based nature of Javascript. Often a function, for example addEventListener, only allows you to specify a callback. A callback is just a function reference, so it doesn’t know anymore in which scope to run. Using bind, you can make sure it still does.
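A minimal sketch of the callback problem (runCallback is a made-up stand-in for an API like addEventListener):

```javascript
var counter={
 count:0,
 increment:function(){this.count++;}
};
var runCallback=function(callback){callback();};

try{
 runCallback(counter.increment); //"this" is NOT counter here (and this even throws in strict mode)
}catch(e){}
console.log(counter.count); //still 0

runCallback(counter.increment.bind(counter)); //the bound proxy keeps the scope
console.log(counter.count); //1
```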

So, what else can we do? What would we expect to be able to do? An often requested feature is method overloading. Again, Javascript can’t do that, since it doesn’t even have the concept of classes. Everything you create is still of type object, so how would this even work? Again, JS has no concept that directly mimics this, but it has something that’s reasonably similar: you can check whether a function’s prototype is in an object’s prototype chain. For this, there’s a special operator: the instanceof operator. You may already have encountered it when researching how to tell a plain object from an array. Since an array is not a type of its own, here too you have to check the prototype chain.

console.log([] instanceof Array);
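A few more checks along the same lines (Array.isArray is the newer built-in alternative, assuming an ES5 environment):

```javascript
console.log([] instanceof Array);  //true
console.log([] instanceof Object); //true as well - Object.prototype is further up the chain
console.log({} instanceof Array);  //false
console.log(Array.isArray([]));    //true, and unlike instanceof this also works
                                   //for arrays coming from another frame/context
```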

So while we cannot overload a function, we can create a proxy function that will run different code based on type and prototype of the arguments:

var MyConstructor=function(){};

var myProxy=function(unknownTypeArg1){
 if(typeof(unknownTypeArg1)=="string")
  console.log("Called with a string");
 else if(unknownTypeArg1 && typeof(unknownTypeArg1)=="object" && unknownTypeArg1 instanceof MyConstructor)
  console.log("Called with an instance of MyConstructor");
};

myProxy("Hello");
myProxy(new MyConstructor());

There’s another useful object in JS that can make this even more versatile and that’s the arguments object. Every time you call a function an arguments object is created, containing all the parameters in an Array-like format. Checking this way is a lot more convenient than naming all arguments.

var myProxy=function(){
 if(arguments.length==1 && typeof(arguments[0])=="string")
  console.log("Called with a string");
 else if(arguments.length==1 && typeof(arguments[0])=="object" && arguments[0] instanceof MyConstructor)
  console.log("Called with an instance of MyConstructor");
};

myProxy("Hello");
myProxy(new MyConstructor());

And we can even make it dynamic. You should be able to understand most of this, but if you don’t, don’t fret; it’s just an example of how you can pass the arguments object along and analyze it.

Function.checkOverload=function(arrayOfOverloads,args){
 for(var i=0;i<arrayOfOverloads.length;i++){
  var typeMismatch=arrayOfOverloads[i].args.length!=args.length;
  for(var j=0;j<arrayOfOverloads[i].args.length && !typeMismatch;j++)
   if(arrayOfOverloads[i].args[j]!==null && !(args[j] instanceof arrayOfOverloads[i].args[j]))
    typeMismatch=true;
  if(!typeMismatch)
   return arrayOfOverloads[i].callback;
 }
 throw new Error("No overload found");
};

Using this function is pretty simple. Just pass in a list of callbacks, each with a signature. checkOverload will return the matching function, so you can execute it with apply (or it will throw an error if there is no match).

var overloadedProxy=function(){
 return Function.checkOverload([
  {args:[String],callback:function(s){console.log("String "+s);}},
  {args:[Array],callback:function(a){console.log("Array "+a);}},
  {args:[String,Array],callback:function(s,a){console.log("String "+s+" and Array "+a);}}
 ],arguments).apply(this,arguments);
};
//Primitive strings are not instanceof String, so we pass String objects
overloadedProxy(new String("Hello"));
overloadedProxy([1,2,3]);


Last but not least, we have the old conundrum of private vs. public. The usual way to handle it is purely by convention: private attributes are usually prefixed with “_” by most JS developers. While this doesn’t really change their accessibility, it gives anybody accessing your code a strong indicator that this field is supposed to be private. Most IDEs will even go so far as to hide these attributes. There’s also a way to make attributes truly hidden, but this exploits closures and as such is beyond the scope of this document.

So there you have it. JS is, once again, very different from what most developers are used to, but I hope this article gave you a glimpse of the flexibility that Javascript offers versus the traditional, much less flexible class model.

Node-SQLite-NoDep : SQLite for node.js without NPM

Seeing as how Mozilla is slowly phasing out anything that’s not part of Firefox (sorry Mozilla, I think that’s the wrong call… we don’t need YACC… yet another Chrome clone), including XUL, XULRunner, JSShell and so on I’m slowly trying to replace these technologies on our servers. JSShell in particular has been invaluable in automating simple tasks in a sane language.

The obvious replacement is node.js… but as great as node.js is, it has a few shortcomings. In particular, it relies heavily on NPM to provide even basic functionality, and as any admin knows, something that can break will eventually break if it’s too complex. An admin wants a solution that’s as complex as necessary, but as simple as possible. So installing node.js along with npm on a machine is a liability. Luckily node.js itself is portable, but since its library interface is unstable, depending on anything from npm is a big no-no.

One thing I frequently need is a little database. Simple text files work too, but eventually you’ll end up duplicating what any database already does. SQLite is an obvious choice for applications with very few users, for example reftest results. But connecting SQLite to node.js without anything from NPM is a pretty ugly task. Luckily, there’s also a commandline version of SQLite and while it may not be as fast as a binary interface, it can get the job done… with a little help.

Node-SQLite-NoDep does exactly that. It launches sqlite3.exe and sends data back and forth between your node.js application and the child_process, converting the INSERT statements produced by sqlite3.exe into buffers and providing basic bind parameter support. The documentation is not entirely complete yet, but you can find a quick introduction here, along with the jsdoc documentation available here.

Basically, all you have to do is grab SQLite.js, drop it into a folder, add a bin folder, drop in node.exe and sqlite3.exe and you’re good to go.

const SQLite = require('./SQLite.js');

//Note: the callback nesting below is assumed for illustration;
//check the jsdoc for the exact signatures
var testdb=new SQLite('.\\mytest.sqlite',[
    {name:"testTable",columns:"X int,LABEL varchar[64],data blob"}
],function(){
    testdb.sql("INSERT INTO ?testTable VALUES(#X,$LABEL,&DATA)",{
        X:1,
        LABEL:"Hello World",
        DATA:new Buffer("DEADBEEF","hex")
    },function(){
        testdb.sql("SELECT * from ?testTable",{},function(rows){
            console.log(rows);
        });
    });
});

is really all you need. Just put it into a test.js file and launch it with

bin\node.exe test.js

to see it in action.




Commandline: Changing resolution

Just had a common issue this morning that would usually require installing an application, but is very easy to solve using the batch file (GIST) from Thursday’s post:

Changing the resolution from a batch file. Specifically, I wanted to lower my display’s resolution whenever I connect via VNC. The first part is simple: Attach a task to the System Event generated by TightVNC Server (Ok, not that easy… this actually involves using Microsoft’s bizarre XPath subset, since TightVNC’s events are not quite as specific as they should be), then set this task to run a batch file.

Now, for some reason, Microsoft doesn’t include anything to do something as simple as setting the resolution by any other means than calling into USER32.DLL directly… and that call is too complex for little old RunDLL32.exe. .NET can’t do it either without calling into USER32.dll. But at least it makes doing so pretty straightforward.

Declare a struct that matches Windows display properties (no need to declare all fields, I just use dummy byte arrays for any fields that I’m not interested in), then call EnumDisplaySettings to retrieve the current settings into that struct. Change the resolution of the retrieved information and pass it back to ChangeDisplaySettings and voilà.

This is also a good example of how to use arguments with C#.CMD. Just don’t. Save them to environment variables instead and retrieve them via System.Environment.GetEnvironmentVariable . SETLOCAL/ENDLOCAL will keep these environment variables from leaking into other parts of your script.


@echo off
setlocal
rem SETLOCAL keeps HRES/VRES from leaking; the variable names are my choice
set HRES=%1
set VRES=%2
rem DispSet mirrors the ANSI DEVMODE struct: 108 padding bytes up to
rem dmPelsWidth/dmPelsHeight, then 40 bytes to the end of the struct
@echo ^
    using System.Runtime.InteropServices;^
    public struct DispSet {^
        [MarshalAs(UnmanagedType.ByValArray, SizeConst = 108)]^
        byte[] padding0;^
        public int width, height;^
        [MarshalAs(UnmanagedType.ByValArray, SizeConst = 40)]^
        byte[] padding1;^
    }^
    public class App {^
        [DllImport("user32.dll")] public static extern^
        int EnumDisplaySettings(string a, int b, ref DispSet c);^
        [DllImport("user32.dll")] public static extern^
        int ChangeDisplaySettings(ref DispSet a, int b);^
        public static void Main() {^
            var disp = new DispSet();^
            if ( EnumDisplaySettings(null, -1, ref disp) == 0) return;^
            disp.width = int.Parse(System.Environment.GetEnvironmentVariable("HRES"));^
            disp.height = int.Parse(System.Environment.GetEnvironmentVariable("VRES"));^
            ChangeDisplaySettings(ref disp, 1);^
        }^
    } ^
 |c#full.cmd
endlocal

Assuming you have C#.CMD somewhere in your path, you can now simply call this batch file with horizontal resolution as first argument and vertical as second.

Using C# in Batch files

Batch Files

People usually smile when I say that some parts of our network like the filtering of system events are held together by batch files. It just seems so arcane, but there are some big benefits:

You don’t have to compile anything, you always know where the source code is and you can simply copy them between machines without having to set up anything. And since it’s more or less a legacy technology, Microsoft isn’t really changing a lot anymore, so there’s little chance of an upgrade breaking a batch file.

The only problem is: cmd.exe shell syntax is a horrible, horrible mess, and even the most basic string functions can take ages to implement… plus, the code you write will look like gibberish to anybody else, no matter what you do. Then there’s the horrible, horrible string escaping behavior and the very strange behavior of variables.



So, Microsoft started developing a replacement: PowerShell.exe. And functionality-wise it’s wonderful… it can be run interactively, it doesn’t need compilation, it has useful variables, it can access the system’s .NET libraries… it all sounds wonderful… until you try to run the darn thing. Let’s just say: the syntax is frighteningly bad, never mind the documentation, plus for some bizarre reason you’re allowed to run batch files or EXE files, but you need to set an additional policy before you’re allowed to run PowerShell scripts!


The C# Compiler

But enough ranting. Thankfully, there’s an alternative that’s preinstalled on all modern Windows systems: the C# compiler. Yes, it’s there, even if you don’t have VisualStudio installed. Just enter

dir "%WINDIR%\Microsoft.NET\Framework\v4*"

on the commandline and you’ll see the directory of all installed .NET 4 frameworks, each containing CSC.EXE, which is the C# compiler.

Now, you could just use that directly, but that means a whole lot of temp files, since you can’t pipe to CSC.EXE and you can’t run the code immediately. However, there’s another way to access it: through .NET itself, via System.CodeDom.Compiler.CodeDomProvider.


Using PowerShell to access the C# Compiler

Thankfully, there’s one thing that PowerShell gets right: giving you access to .NET. It’s not a pleasant experience, but it is possible. And there’s another thing PowerShell gets right: it allows piping anything to it. So we can use a little PowerShell script that invokes CodeDomProvider.CreateProvider to compile our code on the fly and run it immediately.

It’s really pretty simple:


$opt = New-Object System.CodeDom.Compiler.CompilerParameters;
$opt.GenerateInMemory = $true;
$cr = [System.CodeDom.Compiler.CodeDomProvider]::CreateProvider("CSharp").CompileAssemblyFromSource($opt,
   "public class App { public static void Main() { "+ $input+" } }");
if($cr.CompiledAssembly) {
    $obj = $cr.CompiledAssembly.CreateInstance("App");
    $obj.GetType().GetMethod("Main").Invoke($obj, $null);
} else { $cr.errors; }

It’s really very straightforward: take STDIN, wrap it in a Main function, compile it, run it, report an error if there was one during compilation. Through the magic of horrible cmd.exe parameter escaping, this looks a bit different when passed directly to PowerShell.exe (3 quotes), but you should still be able to recognize it. Just put it in any old batch file (I’m using c#.cmd, which I also added to my system’s PATH variable so that I don’t have to enter the whole path each time), but be sure to put it on a single line, because even escaping the linebreak with “^” won’t work for arguments of PowerShell.exe:


@PowerShell -Command " $opt = New-Object System.CodeDom.Compiler.
 CompilerParameters; $opt.GenerateInMemory = $true; $cr = [System.
 CodeDom.Compiler.CodeDomProvider]::CreateProvider("""CSharp""").
 CompileAssemblyFromSource($opt,"""public class App { public
 static void Main() { """+ $input+""" } }"""); if(
 $cr.CompiledAssembly) {$obj = $cr.CompiledAssembly.
 CreateInstance("""App"""); $obj.GetType().GetMethod("""Main""").
 Invoke($obj, $null);}else{ $cr.errors; } "

Horrible, I know. But it works.


Including C# inline in batch files

Now, if you want to actually include any C# inline in your batch file, it’s surprisingly straightforward, since the cmd.exe ECHO command has very straightforward escaping rules. Well, except for | and &, which you can best avoid by using .Equals() instead of the corresponding operators. New lines just need to be escaped with a “^” at the end of each line, and there must be a space before the final pipe character. OK, that sounds way worse than it actually is:

@echo ^
var a="Hello";^
var b="World";^
var foo=(a+" "+b).ToUpper();^
System.Console.WriteLine(foo);^
if(foo.Equals("HELLO WORLD"))^
    System.Console.WriteLine("Hey, you named it C#.cmd too :)");^
 |c#.cmd
That’s what a typical call would look like. Again, note the “^” at the end of each line and the space before “|c#”. Remember this and you will be fine. Of course, you can also put the C# code in a separate file and use @TYPE to pipe it directly to C#.CMD, so it won’t need any escaping.



Well, there’s obviously the issue of escaping your code if you use ECHO to include it inline, but I really don’t think there’s any way to avoid it.

There are some issues which are mostly due to the C# code running inside the PowerShell process, rather than the CMD.EXE process. Most importantly: You cannot set environment variables without setting them user- or system-wide. You can set the environment variables of the PowerShell process, but these won’t be visible to the parent CMD.EXE process either. Your only way out is to use STDOUT and STDERR and FOR /F to move it to a variable. If that doesn’t work (which may be the case if you want to include the code inline, because escaping inside a CMD.EXE FOR call is incredibly difficult), you’ll need to transport the information using the filesystem.

And since we’re piping the code to PowerShell, STDIN will obviously not be available… so no ReadLine().



Well, obviously support for commandline arguments would be nice at some point, but I haven’t needed it so far.

It would also be nice if the PowerShell script could add the class/Main wrapper only if no method is given in the source code. For now I’m simply using two different batch files: c#.cmd and c#full.cmd.


Hopefully this will make your life a bit easier 🙂


JSDoc for Mozilla Firefox Components.interfaces

I freely admit, I’ve been spoiled by VisualStudio and .NET. But right now I need to write some JS code for XULRunner and it’s getting painful:

All the information one needs is available on the Wiki, but I want auto-complete, I want argument descriptions and all the little niceties I’ve come to expect from a development environment. WebStorm does an admirable job at allowing me to document my code in a way that makes all this possible, but it needs JSDoc, not a set of Wiki pages in order to do this.

My solution is less than perfect. I wrote a little parser that tries to scrape the content from the Wiki and transforms it into JSDoc. But since a Wiki is not a structured database, this means interpreting the data. Usually my little parser gets it right, but not always. It’s also a terrible bit of code with lots of little fixes every time I encountered a new style that somebody was using. I’ll release it in time, but right now it’s just too ugly.

However, the result it produced is still apparently the best thing we have available right now, so I’m putting it up here. If there’s any interest, particularly in a permanent solution (which would probably involve keeping the documentation in a standardized format and occasionally syncing it with the Wiki), I’ll be happy to help.

Just add this as a reference to your code in order to use it:

Components.interfaces JSDOC

On the iPad, don’t try to fix scrolling

I have to admit that this really bothers me. Yesterday, I decided to write a little tool to let my boss create his presentations on an iPad by sorting a set of pre-created images. Nothing fancy, but I needed two separately scrolling viewports that are NOT operated with the two-finger-dragging-gesture. Seemed pretty straightforward. Make the elements in the last-touched element position:absolute, so that they scroll with the document, while keeping the rest position:fixed so that they stay where they are. Tried it on various browsers on both Android and Windows, and it’s so brain-dead simple that even IE can cope.

The thing that I was worried about was that the iPad would smooth-scroll to the new scroll position that’s needed when you return to a previously-scrolled element (you can’t just move the element to the correct position, because that would usually be negative and that means that you wouldn’t be able to scroll to the left-most parts of the element). Turns out that part worked, but everything else fell to pieces. Switching between fixed and absolute a couple of times with big elements almost always crashes Safari straight away. Plus, after switching a couple of times, the iPad would usually get confused and move the scrollable area to some arbitrary rectangle.

I’m sorry, but I really don’t know how to say this nicely: Apple, get your act together. The iOS browser was great when it came out, but having to worry about scrolling feels like the nineties all over again.

Streaming videos to a MK802 Android system-on-a-stick

I got my MK802 Friday and immediately tried to stream videos… first via SMB shares (works, but the little decoder chip usually isn’t able to keep up with my movies) and then via TravelDevel’s VLC Stream and Convert, which I’m using on my LG P920. Unfortunately, VLC S&C is not quite up to the job of working on a landscape device, and the developer has apparently dropped off the face of the earth. So I’ve decided to re-implement it via a tiny web-interface. It’s still very bare-bones, doesn’t work with newer VLC versions, doesn’t manage playlists and is missing any kind of configuration panel (you have to edit the source to change the settings), but it works (at 1024×600@24, 2048kbps). If there’s any interest I’ll release it under the GPL and set up a project, but for now you are not allowed to redistribute it; you can only install it on your own machines, and only if you accept that I’m not responsible for anything that happens.

Here’s how to get it working:

  • This was only tested on VLC 1.1.0. You can get it here. Install it.
  • Open it and go to Tools/Preferences and choose Show Settings/All at the bottom left.
  • Open Interface/Main interfaces and enable “HTTP remote control interface”. Press Save.
  • Close VLC.
  • Open the directory where you installed VLC and open the http directory.
  • Use your favorite text editor to open .hosts .
  • If you’re using static IPs in your home network, which is highly recommended since it allows you to bookmark VLC’s location on your MK802 and will make your network a LOT more secure, add a new line and write down the IP of your MK802.
  • Otherwise, uncomment (remove the “#”). Note that this is very insecure, since it means that anybody on your network will be able to control VLC, which can do a lot of damage to your system. NEVER do it on a public or unencrypted network (in fact, if you’re using an unencrypted network, now may be a good moment to finally enable encryption).
  • Save .hosts
  • Extract this file to your http directory.
  • Open VLC.
  • On your MK802, open the browser and point it to your PC’s IP (if you don’t know it, press Win+R, enter cmd /C “ipconfig & pause” and look for the IPv4 address), prefixed with http:// and followed by :8080/vlc.html# .
  • In my case that’s (Yes, the # is important due to a bug in Android’s URL handler… for some reasons it will never display the hash code, which is what my script uses to keep track of the current directory, unless you enter the “#” manually first; otherwise you won’t be able to bookmark it). Open it and navigate to your desired root folder. Now you can bookmark it. Opening this bookmark will always return you to that folder.
  • Click on the file you want to play, or click Play All next to a folder to add its whole content to the playlist.
  • Now, you might expect the video to play inside the webpage… sadly that’s not possible due to Android’s broken HTML5/video implementation. In order to see (or return to) the video, you have to press the Video button at the top left.
  • Note that the video will keep on playing on the server even if you leave the page, so don’t forget to press STOP if you value your CPU cycles.

Hope this helps. Thanks to TravelDevel for posting his VLC commandlines, particularly the parts that speed up h264 encoding.

Off-Topic: That’s the way online video should work: Indie Game – The Movie

I don’t really buy any movies online… rather, I buy them on DVD or BluRay and save them to my external hard-drive from there. It’s inconvenient and not how I would like things to work, but it’s the best I can get. Movie studios are paranoid and constantly require newer and stricter copy-protections for online services or they won’t allow them to sell their movies, but disk-based formats are released with a certain copy-protection which is usually quickly broken and cannot be easily upgraded. This way, I can watch my movies when I want to watch them: on the bus with my cellphone (MoboPlayer/Android), streamed via VLC (server) / browser(client) during my lunch break, on my projector/PS3 (PS3 Media Server) with friends or on my net-book (Samba / Mplayer) when I’m in bed. I also get the best quality available with (at least) German and English audio tracks and I can order anywhere. If it’s not available yet in Germany, I can get it from the UK or the US, no problem.

Compare that to the online situation. First, I need to find a service that’s available in my country… which means reading about all the better and cheaper services that I can’t get in Germany because, well: I’m in Germany. Then I have to see which one has the movie I want, which usually means not finding one, because the movie/series isn’t released in Germany yet (and may never be). Fast forward 6 to 8 months (if I’m lucky, haven’t forgotten about it, and it’s not just available as part of a package). Now I can find the movie, but there is no audio track besides the German one (which is usually frighteningly bad) and it’s only available in stereo. If I’m lucky, I can get an HD version in 720p at a bitrate well below that of a typical DVD, but usually that “HD” version isn’t available due to licensing issues.

If I still decide to buy it, I get my choice of streaming via either some arcane browser plugin (Silverlight comes to mind) or a proprietary one which does nothing besides adding security holes to an otherwise secure browser, duplicating the browser’s own streaming functionality and making sure that I’m not running a screen-capture program while I’m watching the movie, inevitably using up much of my CPU power and introducing stuttering into the movie. If they feel generous, I may get a dedicated client which does the same thing, but may allow me to buffer more than 3 minutes so that I don’t have to pause in the middle of the movie.

Of course, if I want to watch it off-line or on anything but a Windows-PC that’s still my problem.

To sum it up: The studios are not making the money from me that they could be making, because they do not offer the quality of service that would make it easy for me to buy something. I have to order the DVD/BluRay, wait for it to arrive, copy it and create converted versions for my mobile devices. You can imagine that I don’t do that nearly as often as I would click a buy/download button that produced the same result (and that’s not just a theory: I can watch the effect of such an offering by comparing my buyer’s history on Amazon with the one on GOG, which sells DRM-free games). Plus, they have to pay for the disc, shipping, Amazon’s cut and so on and so forth, which all comes out of their margins, so even the little that I do order doesn’t make them as much money as a download would. It just seems so stupid.

Enter Indie Game – The Movie. This is probably the first time that I bought from somebody who got it right (besides Kookie in the Humble Bundle, but it wouldn’t be fair to count that, because it was part of a bundle). VHX is handling the distribution and, in short, everything is as it should be: I get a streaming version right away plus the most common formats for playback on non-connected devices, with no DRM preventing me from making additional copies in other formats. I bought it as soon as I saw it.

I respect the studios’ desire to protect their work. In fact, I rely on copyright for my work as much as anybody else, but it’s depressing to see an industry self-destruct because of paranoia and a misplaced sense of entitlement. Sure, the people pirating your product are an annoyance and you do not owe them anything, but making your regular customers pay for it is not the way to fix it. There are a thousand and one methods they could employ that would protect them from piracy as well as what they’re doing now (which isn’t working terribly well, as it’s currently easier to pirate than to buy) without alienating their customers. They could add signatures to files (either as metadata or, even better, via steganography) which would identify the origin of a file if it started showing up on P2P networks, or they could provide a unified DRM scheme as open source with a free certification program, refusing to license to anybody who wants to sell the movies with an incompatible DRM solution. That would address pretty much every single issue that I have with current DRM systems. And that’s just the stuff off the top of my head, but one thing is certain: a licensing jungle combined with proprietary DRM systems that are incompatible between any two services is not the way to go.

I just hope offers like the ones from VHX catch on, so that I don’t continue to get strange looks when I say that I want to pay for my movies and TV series…


This doesn’t have anything to do with code or Javascript, but I just love playing the games I played when I was younger. The Settlers, Incubation and Sam and Max are at the top of the list, but in the racing section Carmageddon is just below Screamer and Ignition and even pulls ahead of Twisted Metal 2 and Destruction Derby.

And now they’re making a new game. In case you do not know: Carmageddon is a mixture of car racing and arena fighting, much like the better-known Twisted Metal series. It’s also a very tasteless game where you get bonus points for running over pedestrians… you’ve got to realize that this game was made when the discussion about how violence in games influences people in real life was at an all-time high. Carmageddon made fun of the whole discussion by making it clear that games have nothing to do with reality; ironically, it was censored in Germany, with flying bombs replacing the pedestrians, because the censors were unable to see that.

Personally I think the discussion about violence in games was (and still is) heading in entirely the wrong direction: Instead of educating people and teaching them that a computer is not a magic box, we still act as if any of the stuff happening on screen were real.
The pixels we see on the screen have no more to do with real persons than children playing cops and robbers have to do with a real gunfight. It’s just polygons, pixels and a bit of math… nothing more. That’s what we should be getting across, and Carmageddon was a not-so-subtle nudge in that direction.

Anyway, they’re still trying to get the funding over at and while I think the concept of Kickstarter will have to evolve at some point beyond the non-monetary-reward thing they’re doing now, I think in this case there’s enough talent (and they have invested enough of their own money) that it’s likely that they’ll get the game done. They’re not asking for much… $15 for what amounts to preordering the game and I think that’s fair. I’ve given $25 because I want access to the beta, which I still think is very reasonable.

Kickstarter Project Page

Scripting the Windows commandline with SpiderMonkey made easy

I frequently have to automate really simple tasks, like moving files with a certain filename to another directory, and the SpiderMonkey shell that now comes with XULRunner (thank you for that, Mozilla; building it yourself was time-consuming and annoying) has become an invaluable tool.

Few people know how easy it is to use any Mozilla-JS based program (yes, that includes XULRunner and Firefox) to work with commandline programs. As with any other programming environment you just need to be able to call the popen function of the OS, which runs any command and returns the output as a stream.

Mozilla-JS does not include POPEN. However it does support CTYPES, a system for calling libraries. On Windows POPEN and everything else you need is in MSVCRT.

Opening MSVCRT with CTYPES is easy:

var msvcrt=ctypes.open("msvcrt");

Now you just need to declare what functions you need (_popen, _pclose, feof and fgetc in our case) and what types these require (first parameter is the name, second the interface type, third is the return type and everything else is the type of each argument):

var popen=msvcrt.declare("_popen",ctypes.winapi_abi,ctypes.void_t.ptr,ctypes.char.ptr,ctypes.char.ptr);
var pclose=msvcrt.declare("_pclose",ctypes.winapi_abi,ctypes.int,ctypes.void_t.ptr);
var feof=msvcrt.declare("feof",ctypes.winapi_abi,ctypes.int,ctypes.void_t.ptr);
var fgetc=msvcrt.declare("fgetc",ctypes.winapi_abi,ctypes.int,ctypes.void_t.ptr);

With this you can very easily build something like C’s SYSTEM call, just with the difference that it will return everything the program outputs through STDOUT:

function system(cmd,raw){
/* Open the program (Windows automatically
   uses cmd.exe for that), use raw mode
   if requested. */
    var file=popen(cmd,"r"+(raw?"b":""));

    var o=""; // STDOUT content

/* Loop fetching one character from STDOUT
   until feof informs us that the end of the
   stream has been reached */
    while(!feof(file)){
        var c=fgetc(file);
        if(c!==-1) // fgetc returns -1 once the stream ends
            o+=String.fromCharCode(c); // Append current char
    }

    pclose(file); // Close pipe
    return o; // Return captured output
}

And that’s really all you need (in this case to call DIR, split by newline and output the first result):

var dirs=system('DIR /B /S').split("\r\n");
print(dirs[0]);
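
Once system() has returned the command’s output, the rest is plain string processing. As a hedged sketch of the “move files with a certain filename” task mentioned at the start (the sample listing and the C:\archive target below are made up for illustration), the DIR output could be turned into MOVE commands like this:

```javascript
// Hypothetical output of system('DIR /B /S') -- the real
// listing depends on your directory tree.
var listing="C:\\logs\\app.log\r\n"+
            "C:\\logs\\readme.txt\r\n"+
            "C:\\logs\\old.log\r\n";

// Keep only the *.log files from the listing
var logs=listing.split("\r\n").filter(function(f){
    return /\.log$/i.test(f);
});

// Turn each match into a MOVE command that could then be
// executed one by one via system()
var cmds=logs.map(function(f){
    return 'MOVE "'+f+'" "C:\\archive"';
});
```

Each entry in cmds could then be handed back to system() to perform the actual move.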