A JavaScript refresh

 

We will cover here some of the key concepts of JavaScript to get us started. If you have not looked at JavaScript in the past few years, or if you are new to it, I hope you find this useful.

We will start by covering the language basics like variables, functions, scope, and the different types, but we will not spend much time on the absolute basics like operators, or what a function or a variable is; you probably already know all that as a developer. We will discover JavaScript by going through simple examples and, for each of these, highlight specific behaviors, approaching the language from the standpoint of an interactive developer coming from other technologies like Flash (ActionScript 3), Java, C#, or native code (C++).

Like other managed languages, JavaScript runs inside a JavaScript VM (virtual machine). One key difference to note is that instead of executing bytecode, JavaScript VMs are source based: they translate JavaScript source code directly to native code using what is called a JIT (just-in-time) compiler when available. The JIT performs optimizations at runtime (just in time) to leverage platform-specific optimizations depending on the architecture the code is being run on. Of course, most browsers available today run JavaScript; the list below highlights the most popular JavaScript VMs in the industry today:

  • V8 (Google Chrome)
  • SpiderMonkey (Mozilla Firefox)
  • JavaScriptCore, also known as Nitro (Apple Safari)
  • Chakra (Microsoft Internet Explorer)
  • Carakan (Opera)

JavaScript provides some serious advantages over low-level languages, like automatic memory allocation and garbage collection. This, however, comes at the cost of speed. But managed languages provide so much value in terms of productivity and platform reach that developers today tend to favor them over low-level languages, despite the loss of performance, given how costly it is to target multiple platforms with low-level languages.

Before we get started, it is important to differentiate between the two sides of what browsers expose. On one side we have the core language, JavaScript, and on the other side we have the browser APIs. Historically, JavaScript and the DOM used to be tightly coupled and most tutorials would cover both at the same time, but they have evolved into strongly separated entities. So we will cover JavaScript first, then deep dive into the DOM and browser APIs later on. Everything in this first article could run in a shell without any interaction with the browser APIs, just pure core JavaScript. The list below represents the general-purpose core objects defined in JavaScript:

  • Array
  • Boolean
  • Date
  • Function
  • Number
  • Object
  • RegExp
  • String
  • Error

For a complete list of the global objects, check the Global Objects page from Mozilla. Other objects you might have seen in JavaScript, like Window, CanvasRenderingContext2D, XMLHttpRequest, or any other, have nothing to do with the JavaScript language itself; they are just objects that leverage specific browser capabilities, like network access, audio, rendering and more. We will cover all of these APIs in future articles.

Versions

JavaScript today is implemented across browsers following the ECMAScript specification. As of today, five editions of ECMA-262 have been published; “Harmony”, the latest revision, is a work in progress:

Edition       Date Published
1             June 1997
2             June 1998
3             December 1999
4             Abandoned
5             December 2009
5.1           June 2011
6 (Harmony)   In progress

Most browsers today support ECMAScript 5.1. The table below lists, for each browser, the version tested and the version of the ECMAScript conformance test suite used:

Product              Version                    Test Suite Version
Chrome               24.0.1312.57 m             ES5.1 (2012-12-17)
Firefox              19                         ES5.1 (2013-02-07)
Internet Explorer    10.0 (10.0.9200.16384)     ES5.1 (2012-12-17)
Maxthon              3.4.2.3000                 ES5.1 (2012-08-26)
Opera                12.14 (build 1738)         ES5.1 (2013-02-07)
Safari               6.0.2 (8536.26.17)         ES5.1 (2012-12-17)

There is also a separate versioning scheme for JavaScript itself, with some additional features not necessarily part of ECMAScript. The latest JavaScript version is 1.8. The list below summarizes the correspondence between JavaScript and ECMAScript versions:

  • JavaScript 1.1: ECMA-262 Edition 1 is based on JavaScript 1.1.
  • JavaScript 1.2: ECMA-262 was not complete when JavaScript 1.2 was released. JavaScript 1.2 is not fully compatible with ECMA-262 Edition 1.
  • JavaScript 1.3: Fully compatible with ECMA-262 Edition 1. JavaScript 1.3 resolved the inconsistencies that JavaScript 1.2 had with ECMA-262, while keeping all the additional features of JavaScript 1.2 except == and !=, which were changed to conform with ECMA-262.
  • JavaScript 1.4: Fully compatible with ECMA-262 Edition 1. The third edition of the ECMAScript specification was not finalized when JavaScript 1.4 was released.
  • JavaScript 1.5: Fully compatible with ECMA-262 Edition 3.

Source (Mozilla)

Throughout this article, we will stick to the ECMAScript 5.1 feature set, which is the latest revision implemented by all major browsers. From time to time, we will take a quick look at specific features outside the ECMAScript scope, just for general information. When such features are covered, it will be explicitly mentioned.

Assembly language of the web

In the past few years, more and more languages have been targeting JavaScript. Recently, projects like Emscripten have proved that it is even possible to take native C/C++ code and compile it to JavaScript. Some people have floated the idea of JavaScript being the assembly language of the web, which is indeed an interesting analogy. Languages like TypeScript or Dart have demonstrated this too by compiling down to JavaScript.

TypeScript aligns with the ECMAScript 6 proposals and adds optional static typing, whereas Dart is a more aggressive approach: a different language that would ideally run in its own Dart VM inside Chrome. Recently, efforts like asm.js from Mozilla have pushed the idea even further by proposing a low-level subset of JavaScript that compilers can target more efficiently. Other initiatives like CoffeeScript (often called transpilers) help developers by providing syntactic sugar not available in JavaScript today.

No Compiling

As we just mentioned, one of the beauties of JavaScript is that you do not have to pre-compile your code to get it to run. Your source code is loaded directly, then compiled to native code at runtime by the VM using a JIT. As a developer writing JavaScript code, probably coming from a compiled language like C#, Java or ActionScript, you have to remember at all times that no optimizations will be done ahead of time by a static compiler. Everything will be figured out at runtime, when you hit refresh.

VMs like V8 introduced optimizing compilers like Crankshaft, which perform key optimizations at runtime, such as constant folding, inlining, or loop-invariant code motion, and these help with performance. However, always keep in mind what you can do in your own code to help performance too. Throughout this article, we will cover key optimizations you can rely on to help your code perform better.

Memo

  • JavaScript is based on the ECMA-262 standard.
  • The latest revision implemented by most browsers is ECMAScript 5.1.
  • The latest revision of the ECMAScript specification is revision 6 called « Harmony ».
  • JavaScript does not rely on a static compiler.
  • JavaScript source code is directly compiled to native code by the JIT (a component of the virtual machine).
  • Other languages, like TypeScript, Dart and CoffeeScript, also target JavaScript.
Tools

Feel free to use any text editor you want to write your JavaScript code. Examples in this article will use the Chrome console for small code samples, to test things quickly; for bigger projects, WebStorm will be used. You can follow along using these tools or use your own favorite editor.

Let’s get started and write some code now.

REPL

As a developer reading this, you will probably want to test small things quickly and iteratively. Traditionally, when using bytecode-based languages, you would type your code, hit compile, bytecode would be generated and then executed. For every modification, you would change the source code, hit compile again and observe the changes. With JavaScript, you can use a REPL and have a much more natural and flexible way to test things, compiled on the fly. So what does REPL stand for? It stands for read-eval-print loop. Some bytecode-based languages like C# and F# offer a similar functionality, letting you type in a command line and evaluate pieces of your code quickly and naturally.

With Chrome Developer Tools or any other console available like Firebug with Firefox, you can use the console and just start typing some code. The figure below shows the Chrome console:

[Figure: the Chrome console]

When pressing Enter, your code is injected and executed. In our first example, when declaring a variable, the console just returns undefined, as variable declarations return no value. Simply referencing the variable we declared will return its value:

[Figure: declaring a variable and referencing it in the console]

If we want to retrieve the string length, we also get auto-completion directly from the console:

[Figure: console auto-completion on a string]

In the figure below, we retrieve the string length:

[Figure: retrieving the string length in the console]

This provides a very nice way to quickly test pieces of your code. Note that in the Chrome Developer Tools console, multiline entries require Shift+Enter, since Enter triggers code execution. In the next figure, we define a foo function then execute it:

[Figure: defining and executing a foo function in the console]

Note that with Firebug in Firefox, the console supports multiline code in an expanded editor view, where Enter creates a new line and Shift+Enter executes it. Firefox also provides Scratchpad as part of its developer tools. In Scratchpad, multiline JavaScript code can be entered and tested interactively by selecting the lines needed and pressing Shift+F4 (Mac OS) or Ctrl+R (Windows). The figure below shows the Scratchpad window and the console displaying the result:

[Figure: the Scratchpad window and the console output]

Memo

  • The JavaScript console allows you to test code interactively inside the browser.
  • Anything defined on the page can be overridden or added through the console.
  • This interactive mode is called a « read-eval-print loop », known as REPL.
  • Scratchpad in Firefox offers a great REPL workflow.
Getting started

If you come from other languages like C#, C++ or ActionScript, you may be used to installing lots of tools, compilers and debuggers. One of the very cool things with JavaScript is that all you really need is a text editor and a browser. When embedded in HTML, JavaScript code can either be inlined inside a script tag:

<script>

// some javascript code

</script>

Or referenced and placed in external .js files, which is a better practice:

<script src="js/main.js"></script>

By default, our JavaScript code is executed synchronously, in the order the browser parses the page, from top to bottom. As a result, placing your JavaScript code at the beginning of the page is discouraged; it would cause the browser to wait until the script is downloaded and executed before displaying anything on the page. For now, we will stick to a general good practice and place our code just before the end of the body tag:

...

<script src="js/main.js"></script>

</body>

That way, content on the page will be displayed first, and once the display list (DOM) is loaded, our code will be executed. This also ensures that our scripts have access to the DOM and that all objects can be scripted. We will spend some time on the order of execution of JavaScript in a future article about the DOM. HTML5 also introduced some interesting new capabilities regarding the sequencing and order of JavaScript execution, which we will cover too.
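
As a quick preview, the defer attribute tells the browser not to block page parsing while the script downloads and to execute it only once the document has been parsed, while async executes the script as soon as it has been downloaded:

<script src="js/main.js" defer></script>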

A dynamically typed language

JavaScript is a dynamically typed language, meaning that any variable can hold any data type. If you are coming from a statically typed language, this may sound scary to you. In JavaScript there is no static typing and there probably never will be. Sadly, it is often assumed that explicit type annotations are required to get type checking; that is not necessarily the case. Types can be inferred automatically (type inference) and propagated everywhere, providing solid code completion and type checking even for languages like JavaScript. TypeScript from Microsoft is a good example of this.

Given the absence of a static type system, JavaScript will never enforce the type of a variable, so you cannot rely on declared types to catch mistakes or drive conversions. Remember, JavaScript does not rely on a static compiler; the VM directly consumes the source code and compiles it to native code at runtime using the JIT. To create an array in JavaScript, you could use the new keyword with the Array function constructor:

var scores = new Array();

Or simply (using the literal syntax):

var scores = [];

Note that no types are specified. At runtime the scores variable will be evaluated as an array:

var scores = [];

// outputs: true
console.log ( scores instanceof Array );

Because types cannot be enforced, variables can hold any type at anytime:

// store an array
var scores = [];

// store a number
scores = 5;

// store an object
scores = {}; 

// store a string
scores = "Hello";

If we try to call an undefined API, no errors will be captured at compile time given that there is no compilation happening statically:

var scores = []; 

scores.foo();

Because the foo method is not available on Array, we will get a runtime exception:

Uncaught TypeError: Object  has no method 'foo'

Same thing for even the simplest object:

var myObject = {};

myObject.foo();

Which would trigger the following runtime exception:

Uncaught TypeError: Object  has no method 'foo'

Browsers report errors like this one through the JavaScript console, and usually include the line that triggered the exception. The exception was uncaught in the example above, but we can update the code to catch it using a try catch clause:

var scores = [];

try {
  scores.foo();

} catch (e) {
  console.log ('API not available!');
}

We will get back to error handling soon, but for now let's move on to some more essential concepts like variable declaration.

Memo

  • JavaScript is a dynamically typed language.
  • No typing is required, types are evaluated at runtime.
  • Types cannot be enforced.
  • Therefore, a variable can hold any type at anytime.
  • If an object does not have an API available, the error will be triggered at runtime.
Variables and scope

Variables are declared using the var keyword:

var score = 12;

But omitting the var keyword will also work and declare the variable as global:

// declare a global variable
score = 12;

As you can imagine, this is not recommended and you should always use var when declaring variables. So why is that? The var keyword actually dictates the scope. In the code below we use a local variable inside the foo function, making it inaccessible from outside:

function foo() {
    // declare the variable locally
    var score = 12;

    console.log ( score );
}

function foo2() {
    console.log ( score );
}

// triggers: Uncaught ReferenceError: score is not defined
foo2();

Notice that when running the code above, the error surfaces at runtime; remember that there is no static compiler involved that would catch this error ahead of time. Omitting the var keyword would make the variable global and visible to all functions:

function foo() {
    // define the variable globally
    score = 12;

    console.log ( score );
}

function foo2() {
    console.log ( score );
}

// outputs: 12
foo();

// outputs: 12
foo2();

Another important behavior when working with variables is hoisting, which allows you to reference a variable before it is defined. Trying to reference a nonexistent variable triggers an exception:

// triggers: Uncaught ReferenceError: a is not defined
console.log ( a );

But referencing a variable declared later with var works, returning its default value, undefined:

// outputs: undefined
console.log ( a );

// variable a declared later
var a = 'Hello';

What happens behind the scenes is that all variable declarations are moved (hoisted) to the top of their enclosing scope and declared first, while initialization happens where the variables are assigned by our code. We will see in an upcoming section that the same behavior applies to function declarations too.
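
In other words, the engine treats the snippet above roughly like the following sketch:

// declaration hoisted to the top of the scope, no value assigned yet
var a;

// outputs: undefined
console.log ( a );

// initialization stays where we wrote it
a = 'Hello';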

JavaScript 1.5 introduced the concept of constants, which also exist in other languages, and they should arguably be the default in most of your programs: mutability is a common source of bugs, and some languages, such as functional programming languages, rely on immutability by default. Using the const keyword guarantees that a value cannot be changed after initialization. In the code below, we define a constant named LIMIT:

// define a constant
const LIMIT = 512;

// outputs: 512
console.log ( LIMIT );

Note that our constant is uppercase, which is a best practice to easily spot immutability. If you try to change the value at runtime, the original value is preserved:

// define a constant
const LIMIT = 512;

// outputs: 512
console.log ( LIMIT ); 

// try to overwrite
LIMIT = 45;

// outputs: 512
console.log ( LIMIT );

You may be surprised that no runtime exception is triggered. Actually, some browsers do throw one, like Firefox since version 13. Unfortunately, as of today, the const keyword is supported in Firefox and Chrome but not in Safari or IE 9 and 10, which dramatically reduces the reach of this feature. As a result, the const keyword should not be used if you intend to reach a broad audience across a wide variety of browsers. ECMAScript 6 defines const, but with different semantics, similar to variables declared with the let statement: constants declared with const will be block scoped (a concept we will cover in the Functions section).
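
As a quick preview of those ES6 semantics (not available in most browsers at the time of writing), a constant declared with const inside a block is not visible outside of it:

// ES6 semantics (preview): const is block scoped
{
    const LIMIT = 512;

    // outputs: 512
    console.log ( LIMIT );
}

// triggers: ReferenceError: LIMIT is not defined
console.log ( LIMIT );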

Memo

  • The var keyword defines the scope.
  • Omitting the var keyword will make the variable global.
  • It is always recommended to use local variables inside functions to prevent conflicts and the introduction of global state.
  • The const keyword, introduced in JavaScript 1.5, provides immutability but is not widely supported yet.
  • Constants are part of the Harmony proposal (ECMAScript 6).
Type conversions

As we saw earlier, because of JavaScript's dynamic nature, types cannot be enforced. This can be a limitation when debugging: because any variable can hold any type, you may be taken by surprise when, for instance, no runtime exception is triggered and an implicit runtime conversion silently produces an unexpected result. JavaScript performs implicit type conversions at runtime on several occasions. First, when using numeric and string values with the + operator, the String type has precedence and concatenation is always performed:

// gives: "3hello";
var a = 3 + "hello";

// gives: "hellotrue"
var b = "hello" + true;

When using other arithmetic operators, the Number type has precedence:

// outputs: 9
var a = 10 - "1";

// outputs: 20
var b = 10 * "2";

// outputs: 5
var c = 10 / "2";

Implicit conversions to Number will also happen when using the == or != operators with the Number, String and Boolean types:

// outputs: true
console.log ("1" == 1); // equals to 1 == 1

// outputs: true
console.log ("1" == true); // equals to 1 == 1

// outputs: false
console.log ("1" != 1); // equals to 1 != 1

// outputs: false
console.log ("1" != true); // equals to 1 != 1

// outputs: false
console.log ("true" == true); // equals to NaN == 1

To avoid implicit conversions and verify that both types and values are equal, you can rely on the strict equality (===) or strict inequality (!==) operators, which will never perform automatic conversion implicitly:

// outputs: false
console.log ("1" === 1);

// outputs: false
console.log ("1" === true);

// outputs: true
console.log ("1" !== 1);

// outputs: true
console.log ("1" !== true);

It is therefore a good practice to use the strict operators to reduce the risk of ambiguity. If we need to convert data explicitly, we can use the appropriate type conversion functions:

// convert a string to a number
var a = Number ("3");

// convert a boolean to a number
var b = Number (true);

// tries to convert a non numeric string to a number
var c = Number ("Hello");

// outputs: 3 1 NaN
console.log ( a, b, c );

In the same way, numbers can be extracted from strings using the parseInt and parseFloat functions:

// parse an integer from a string (base 10)
var a = parseInt ( "4 chicken", 10 );

// parse a floating point number from a string
var b = parseFloat ( "1.5 pint" );

// outputs: 4 1.5
console.log ( a, b );

We saw earlier that JavaScript can throw runtime exceptions, let’s spend a few minutes on this now.

Memo

  • Implicit conversion to String is performed when using the + operator.
  • Implicit conversion to Number is performed when using other arithmetic operators.
  • Implicit conversion to Number is performed when using the equality and inequality operators with the Number, Boolean and String types.
  • To avoid implicit conversions, it is recommended to use the strict equality and inequality operators.
  • Explicit conversion can be done using the proper conversion functions.
Runtime exceptions

In all projects we have to deal with runtime exceptions. In JavaScript, as we saw earlier, these can be triggered by the runtime itself or explicitly by our code. For example, if you try to call a method not available on an object, this will trigger a runtime exception:

var scores = [];

// triggers: Uncaught TypeError: Object  has no method 'foo'
scores.foo();

At any time, if we need to throw an exception ourselves, we can use the throw keyword with an Error object:

throw new Error ('Oops, there is a problem');

As expected, a runtime exception needs to be handled otherwise the console will output the following message:

Uncaught Error: Oops, there is a problem

To handle errors, we can use the try catch statement. The message property of a thrown Error contains the error message:

try {
    throw new Error ('Oops, there is a problem');

} catch ( e ) {
   // outputs: Oops, there is a problem caught!
   console.log ( e.message + ' caught!');
}

If we need some logic to be executed whether or not an error is thrown in the try block, we can use the finally statement:

try {
    throw new Error ('Oops, there is a problem');

} catch ( e ) {
    // outputs: Oops, there is a problem caught!
    console.log ( e.message + ' caught!');

} finally {
    // outputs: Code triggered at all times
    console.log ( 'Code triggered at all times' );
}

Note that conditional catch cannot be done the same way as in languages like ActionScript or C#, where placing the appropriate type in the catch clause redirects the exception automatically. In JavaScript we use a single catch block and perform the appropriate type test inside that block (BufferError and ParseError below are custom error types, not built-ins):

try {
    throw new Error ('Oops, there is a problem');

} catch ( e ) {
    if ( e instanceof BufferError ) {
           // handle buffer error

    } else if ( e instanceof ParseError ) {
           // handle parse error
    }
} finally {
    // outputs: Code triggered at all times
    console.log ( 'Code triggered at all times' );
}
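
Note that BufferError and ParseError above are not built-in types; a minimal sketch of how such a custom error type could be defined (hypothetical name) might look like this:

// a minimal custom error type (hypothetical)
function BufferError ( message ) {
    this.name = 'BufferError';
    this.message = message;
}

// inherit from Error so instanceof Error also works
BufferError.prototype = Object.create ( Error.prototype );
BufferError.prototype.constructor = BufferError;

try {
    throw new BufferError ('buffer overflow');

} catch ( e ) {
    // outputs: true true
    console.log ( e instanceof BufferError, e instanceof Error );
}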

Note that Mozilla's JavaScript implementation also supports a conditional catch extension, where the if test can be inlined directly inside the catch clause:

try {
    throw new Error ('Oops, there is a problem');

} catch ( e if e instanceof BufferError ) {
    // handle buffer error

} catch ( e if e instanceof ParseError ) {
    // handle parsing error

} finally {
    // outputs: Code triggered at all times
    console.log ( 'Code triggered at all times' );
}

Unfortunately, this feature is not part of the ECMAScript specification and will not work in most browsers except Firefox, which again has excellent support for the latest JavaScript features. You can rely on Firefox to experiment with it, but don't rely on it for a real project. What about performance? In JavaScript, exception handling does not have much impact on performance, except if you place a try catch inside a hot function, which can prevent the VM from optimizing that function. So make sure you don't do this:

function test() {
    try {
        var s = 0;
        for (var i = 0; i < 10000; i++) s = i;
        return s;
    } catch ( e ) {};
}

But instead, move the try catch outside of the function:

function test() {
    var s = 0;
    for (var i = 0; i < 10000; i++) s = i;
    return s;
}

try {
    test();
} catch ( e ) {};

Next, let's have a look at the different kinds of data we will be working with in JavaScript: composite and primitive data types.

Memo

  • Runtime exceptions can be triggered from user code or by the runtime.
  • Conditional catch cannot be done implicitly.
  • Conditional catch has to be done explicitly using an if statement.
Primitive and composite data types

JavaScript defines six data types, and just like in most languages, you can divide these into two categories:

  • Primitive
    • Number
    • String
    • Boolean
    • Null
    • Undefined
  • Composite
    • Object

As expected, primitives are copied by value:

var a = "Sebastian";

var b = "Tinic";

var c = a;

a = "Chris"; 

// outputs: Sebastian
console.log ( c );

Composite types (objects) are everything else, like a Window object, a RegExp, a function, etc., and are passed by reference. The example below illustrates the idea:

// create an Array
var a = ['Sebastian', 'Alex', 'Jason'];

// create an Array
var b = ['Sebastian', 'Alex', 'Jason'];

// outputs: false
console.log ( a == b );

Even though the two arrays contain the exact same values, we are actually comparing two different pointers here, not two similar values. In the code below, we illustrate this differently:

// create an Array
var a = ['Sebastian', 'Alex', 'Jason'];

// pass by reference (nothing is copied here)
// b points now to a
var b = a;

// modifying b modifies a
b[1] = 'Scott';

// outputs: ["Sebastian", "Scott", "Jason"]
console.log ( a );

Before jumping into the specific behaviors of JavaScript, let’s have a look at the Boolean type now.

Memo

  • There are six data types in total in JavaScript.
  • Primitives are number, string, boolean, null and undefined.
  • Composite (objects) are everything else.
  • Primitives are passed by value, whereas composite types are passed by reference.
Boolean

The concept of Booleans is probably the easiest part of any language. However, it is worth noting a few things when it comes to JavaScript. The code below highlights how primitives behave when tested as Booleans:

var a = true;

var b = "true";

var c = 1;

var d = false;

// outputs: true
console.log ( a == true );

// outputs: false
console.log ( b == true );

// outputs: true
console.log ( c == true );

// outputs: false
console.log ( d == true );

Did you notice the implicit conversion performed here? Like in most languages, you can convert anything to a Boolean by using the Boolean conversion function or the Boolean function constructor:

var a = new Boolean ( true );

var b = new Boolean ( "true" );

var c = new Boolean ( "false" );

var d = new Boolean ( 1 );

var e = new Boolean ( false );

var f = new Boolean ( undefined );

var g = new Boolean ( null );

// outputs: true
console.log ( a == true );

// outputs: true
console.log ( b == true );

// outputs: true 
console.log ( c == true );

// outputs: true
console.log ( d == true );

// outputs: false
console.log ( e == true );

// outputs: false
console.log ( f == true );

// outputs: false
console.log ( g == true );

Note that we are using the Boolean constructor here, which produces Boolean objects, not the Boolean conversion function, which produces primitives. Finally, any type placed inside a conditional expression will be converted to a Boolean. In the code below, our test succeeds and displays the message in the console:

// the non-empty string "false" converts to true
if ( "false" ) {
    console.log ("This will be triggered!")
}
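
To make the difference between the Boolean conversion function and the Boolean constructor explicit, here is a quick comparison:

// the conversion function returns a primitive
// outputs: boolean true (any non-empty string converts to true)
console.log ( typeof Boolean ("false"), Boolean ("false") );

// the constructor returns an object, which is always truthy in a condition
// outputs: object true
console.log ( typeof new Boolean (false), Boolean ( new Boolean (false) ) );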

Memo

  • Always keep implicit conversions in mind.
  • Any type placed inside a conditional expression will be converted to Boolean.
Number

In JavaScript, there is no concept of int, uint or float: everything is simply a number. The number type is used for all numeric values, integers and decimals alike, and is represented in memory using the 64-bit floating-point format:

var a = 4;

var b = 6.9;

var c = -5;

// outputs: number number number
console.log ( typeof a, typeof b, typeof c);
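
One practical consequence of the 64-bit floating-point representation is the classic rounding artifact below:

// outputs: 0.30000000000000004
console.log ( 0.1 + 0.2 );

// outputs: false
console.log ( 0.1 + 0.2 == 0.3 );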

Division by zero, overflow and underflow do not trigger exceptions; they just happen silently:

// division by zero (0/0 is indeterminate)
var a = 0 / 0;

// the value produced on positive overflow
var b = Number.POSITIVE_INFINITY;

// the value produced on negative overflow
var c = Number.NEGATIVE_INFINITY;

// outputs: NaN Infinity -Infinity
console.log ( a, b, c );

It is worth noting that NaN is never equal to anything, not even to itself:

// outputs: false
console.log (Number.NaN == Number.NaN);

But the isNaN function can be used:

// outputs: true
console.log ( isNaN (Number.NaN) );
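
Keep in mind that isNaN first converts its argument to a number, so non-numeric strings also report true:

// outputs: true ("hello" converts to NaN)
console.log ( isNaN ("hello") );

// outputs: false ("12" converts to 12)
console.log ( isNaN ("12") );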

It is now time to talk about one of the most important core objects in JavaScript, the Object type.

Memo

  • At the core, we need to distinguish between primitive and composite data types.
  • Number is used for all numeric values, integers and decimals alike.
  • Every number is represented in memory using the 64-bit floating-point format.
Object and properties

To create a simple object, we rely on the Object type. Object is in fact pretty much the core of everything in JavaScript; we will come back to this in a few minutes. It is sometimes useful to quickly define an object that holds a few properties. Note that both syntaxes are possible, literal and non-literal (function constructor):

// custom object with literal syntax
var person = { name: 'Bob', lastName: 'Groove' };

// custom object with new (object constructor)
var person = new Object();

// create some properties
person.name = 'Bob';
person.lastName = 'Groove';

Obviously, the first syntax is shorter and is usually preferred.

Note that we can also use the Object.create() API to create an object, which allows us to specify the prototype to use for that object. As a dynamic language, JavaScript lets us access properties using multiple syntaxes, the most common one being the dot operator:

// custom object
var person = {};

// create a new property name
person.name = 'Bob';

Using the bracket notation also works, in case you need to evaluate the property name dynamically. Keep in mind that this syntax is slightly slower and should not be used by default; it also makes refactoring harder.

// custom object
var person = {};

var prop = "name";
person [prop] = 'Bob';

Which is equivalent to the dot operator syntax:

// custom object
var person = {};

person.name = 'Bob';

// outputs: true
console.log ( person.name == person['name'] );

Note that property access in general tends to be slow and should be minimized. V8 is known to provide fast property access thanks to the way it stores object properties: where other VMs use a hash map to look up properties, V8 uses hidden classes. A hidden class is shared by objects of the same shape and stores the offsets used to access property values, instead of searching through a hash table.
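
As a practical consequence, here is a simplified sketch (not actual V8 internals) of the kind of code that benefits from this: objects whose properties are always initialized in the same order can share the same hidden class.

// initializing properties in the same order lets instances share a hidden class
function Point ( x, y ) {
    this.x = x;
    this.y = y;
}

var p1 = new Point ( 0, 0 );
var p2 = new Point ( 10, 20 ); // same hidden class as p1

// adding a property later forces a new hidden class for p2 only
p2.z = 5;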

To test if a property is defined on an object, we can use the in operator:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// outputs: true
console.log ( "name" in person );

Note that the in operator also finds inherited properties through the prototype chain:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// outputs: true
console.log ( "toString" in person );

The hasOwnProperty() API will not look through the prototype chain and only checks for direct (own) properties:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// outputs: false
console.log ( person.hasOwnProperty ("toString") );

// outputs: true
console.log ( person.hasOwnProperty ("name") );

By default, all instance properties are dynamic and can be deleted using the delete keyword:

// custom object
var person = {};

// add a new property
person.age = 40;

// delete it
delete person.age;

// try to retrieve it
// outputs: undefined
console.log(person.age);

Because of the underlying mechanics of some VMs like V8 (Chrome), it is not recommended to delete an object property. Doing so alters the structure of the hidden class and incurs a performance hit. If you don't need a property anymore, set it to null, but don't delete it. ECMAScript 5 introduced a set of APIs that allow finer control over object extension and property attributes. If we want to make sure no properties get added to an object, we can use the Object.preventExtensions() API:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// prevents any extensibility
Object.preventExtensions(person);

// set the age
person.age = 25;

// outputs: undefined
console.log ( person.age );

// delete the name property
delete person.name;

// outputs: undefined
console.log ( person.name );

Given that methods on objects are in fact properties referencing functions, our object cannot be augmented with new methods either. However, our object can still have its properties deleted. At any time, we can test whether the object is extensible by using the Object.isExtensible() API:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// prevents any extensibility
Object.preventExtensions(person);

// outputs: false
console.log ( Object.isExtensible ( person ) );

If we want to prevent both extension and deletion, we can also seal the object through the Object.seal() API. Once an object is sealed, existing properties can still be changed and retrieved, but new ones cannot be added and deletion is forbidden:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// we seal our object
Object.seal ( person );

// outputs: Bob
console.log ( person.name );

// attempt to null the property
person.name = null;

// outputs: null
console.log ( person.name );

// change the name
person.name = "David";

// outputs: David
console.log ( person.name );

// attempt to create a new property
person.age = 30;

// outputs: undefined
console.log ( person.age );

// attempt to delete the property
delete person.name;

// outputs: David
console.log ( person.name );

Finally, we can resort to the Object.freeze() API if we truly want to ensure immutability at all levels:

// custom object
var person= { name: 'Bob', lastName: 'Groove' };

// we freeze our object
Object.freeze ( person );

// attempt to change the name value
person.name = "David";

// outputs: Bob
console.log ( person.name );

// attempt to delete the property
delete person.name;

// outputs: Bob
console.log ( person.name );

// attempt to null the property
person.name = null;

// outputs: Bob
console.log ( person.name );

To test if any object is either sealed or frozen we can rely on the Object.isSealed() and Object.isFrozen() APIs:

// outputs: true
console.log ( Object.isSealed (person) );

// outputs: true
console.log ( Object.isFrozen (person) );

Note that an object can be sealed and frozen at the same time, and that once sealed or frozen, you cannot undo it. To summarize:

  • Object.preventExtensions() prevents any properties or new capabilities (methods) from being added to the object.
  • Object.seal() prevents any properties or new capabilities (methods) from being added to the object and also forbids deletion. However, existing properties can still be changed.
  • Object.freeze() prevents any properties or new capabilities (methods) from being added to the object, forbids deletion, and prevents properties from being changed. The object is completely immutable.

If we need to go to a more granular level, we can learn more about each property through the Object.getOwnPropertyDescriptor() API, which returns a property descriptor. In the code below, we retrieve the descriptor of the name property of our person object:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// retrieve the property descriptor for ‘name’
var desc = Object.getOwnPropertyDescriptor(person, 'name');

// outputs: true
console.log(desc.writable);

// outputs: true
console.log(desc.configurable);

// outputs: true
console.log ( desc.enumerable );

// outputs: "Bob"
console.log(desc.value);

A property descriptor has the following attributes:

  • writable: Indicates if the property value can be changed.
  • configurable: Indicates if the property attributes can be changed and if the property can be deleted.
  • enumerable: Indicates if the property shows up during enumeration.
  • value: The value of the property.
  • get: A function acting as a getter for the property.
  • set: A function acting as a setter for the property.

By default, properties created through simple assignment are configurable, enumerable and writable, but configurability changes if we seal our object:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// we seal the object
Object.seal ( person );

// retrieve the property descriptor for ‘name’
var desc = Object.getOwnPropertyDescriptor(person, 'name');

// outputs: true
console.log(desc.writable);

// outputs: false
console.log(desc.configurable);

// outputs: true
console.log ( desc.enumerable );

// outputs: "Bob"
console.log(desc.value);

If we freeze it, then everything is locked except enumeration:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// we freeze the object
Object.freeze ( person ); 

var desc = Object.getOwnPropertyDescriptor(person, 'name');

// outputs: false
console.log(desc.writable);

// outputs: false
console.log(desc.configurable);

// outputs: true
console.log ( desc.enumerable );

// outputs: "Bob"
console.log(desc.value);

Sealing or freezing an object can be useful in scenarios where you want to ensure some level of immutability in parts of your program. As we just saw, the Object.getOwnPropertyDescriptor() API returns the attributes of any property; you may wonder if it is possible to define a new property and its attributes at the same time. Yes, that is possible too. Up until now, we defined new properties using the dot operator:

myObject.foo = myValue;

Using this syntax is actually a shortcut that makes properties enumerable, configurable and writable by default. ECMAScript 5 defines a more granular API called Object.defineProperty(), which allows you to define the property attributes using a property descriptor. The API has the following signature:

Object.defineProperty(obj, prop, descriptor)

In the code below, we create a name property and make it writable, enumerable and configurable:

// custom object
var myObject = {};

Object.defineProperty(myObject, "name", {value : 'Bob',
                               writable : true,
                               enumerable : true,
                               configurable : true});

// outputs: Bob
console.log ( myObject.name );

Given that we set all attributes to true, we can enumerate the property, modify it and even delete it:

// custom object
var myObject = {};

// we create the property name with specific attributes
Object.defineProperty(myObject, "name", {value : 'Bob',
                               writable : true,
                               enumerable : true,
                               configurable : true});

// outputs: Bob
console.log ( myObject.name );

// outputs: name
for ( var p in myObject ) {
    console.log ( p );
}

// we update the name
myObject.name = 'Stevie';

// outputs: Stevie
console.log ( myObject.name );

// we delete the property
delete myObject.name;

// outputs: undefined
console.log ( myObject.name );

If we change the attributes, we can be very granular and prevent any reconfiguration or update while still allowing enumeration:

// custom object
var myObject = {};

// we create the property name with specific attributes
Object.defineProperty(myObject, "name", {value : 'Bob',
                               writable : false,
                               enumerable : true,
                               configurable : false});

// outputs: Bob
console.log ( myObject.name );

// outputs: name
for ( var p in myObject ) {
    console.log ( p );
}

// write access fails silently
myObject.name = 'Stevie';

// deletion fails silently
delete myObject.name;

// outputs: Bob
console.log ( myObject.name );

Even more powerful: if we use the get and set attributes as part of the descriptor object, we can define the implementation of getters and setters using the same API:

// custom object
var myObject = {};

// we define the getter
function getter() {
    return this.nameValue;
}

// we define the setter
function setter(newValue) {
    this.nameValue = newValue;
}

// we create the property name with specific attributes
Object.defineProperty(myObject, "name", {
                                   get: getter,
                                   set: setter});

// we change the value
myObject.name = 'Stevie';

// outputs: Stevie
console.log ( myObject.name );

We define the getter and setter for our name property. Note that we store the actual value in a separate backing property, nameValue, so that the getter and setter do not end up calling themselves recursively; the name of that backing property is arbitrary, and using foo would work just fine.

// we define the getter
function getter() {
    return this.foo;
}

// we define the setter
function setter(newValue) {
    this.foo = newValue;
}

We now have control over how our value is read and written. In the code below, we make sure that any string read from the name property is always capitalized correctly:

// we define the getter
function getter() {
    return this.nameValue.charAt(0).toUpperCase()+this.nameValue.substr(1).toLowerCase();
}

In the code below, we set the property with inconsistent casing; when we retrieve the value, the string comes back correctly formatted:

// custom object
var myObject = {};

// we define the getter
function getter() {
    return this.nameValue.charAt(0).toUpperCase()+this.nameValue.substr(1).toLowerCase();
}

// we define the setter
function setter(newValue) {
    this.nameValue = newValue;
}

// we create the name property with an accessor descriptor (getter and setter)
Object.defineProperty(myObject, "name", {
                                   get: getter,
                                   set: setter});

// we change the value
myObject.name = 'stevie';

// outputs: Stevie
console.log ( myObject.name );

Pretty powerful, right? Note that the Object class also defines a defineProperties() API, allowing you to define multiple properties all at once (see the sketch after the next example). Finally, to enumerate the properties of an object, we can rely on the Object.keys() API:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// outputs: ["name", "lastName"]
console.log ( Object.keys ( person ) );
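
Going back to the defineProperties() API mentioned above, here is a minimal sketch defining several properties and their descriptors at once:

// custom object
var myObject = {};

// define several properties and their descriptors in one call
Object.defineProperties ( myObject, {
    name: { value: 'Bob', writable: true, enumerable: true, configurable: true },
    age: { value: 30, writable: false, enumerable: true, configurable: false }
});

// outputs: Bob 30
console.log ( myObject.name, myObject.age );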

You may wonder if immutability makes property access faster. Unfortunately, no. It is important to note that the APIs to seal, freeze or prevent extensions of objects have historically been slow and have made property access slower. Recent benchmarks show that performance has improved a lot recently, but keep an eye on their performance impact.

Memo

  • All objects are mutable by default but can be sealed or frozen using the appropriate APIs of the Object class.
  • Object.preventExtensions() prevents any properties or new capabilities (methods) from being added to the object.
  • Object.seal() prevents any properties or new capabilities (methods) from being added to the object and also forbids deletion. However, existing properties can still be changed.
  • Object.freeze() prevents any properties or new capabilities (methods) from being added to the object, forbids deletion, and prevents properties from being changed. The object is completely immutable.
  • The dot operator and the bracket notation can be used to read and write properties.
  • The bracket notation can be useful but performs more slowly than the dot operator.
Almost everything is an Object

In JavaScript, almost everything is an Object. Let’s try the code below to illustrate this:

function foo(){};

// outputs: true
console.log ( foo instanceof Object );

var countries = ['USA', 'FRANCE'];

// outputs: true
console.log ( countries instanceof Object );

var person = { name: "Bob", lastName: "Groove" };

// outputs: true
console.log ( person instanceof Object );

As expected, these composite types are of type Object. Even functions are objects; we will get back to that later in this article. But what about primitives, like a string, a number or a Boolean?

var result = true;

// outputs: false
console.log ( result instanceof Object );

var name = 'Bob';

// outputs: false
console.log ( name instanceof Object );

var score = 190;

// outputs: false
console.log ( score instanceof Object );

So you may wonder how these types can have properties and methods defined on them, like length on a String:

var name = 'Bob';

// outputs: 3
console.log ( name.length );

Behind the scenes, a wrapper object is created whenever a property or method is accessed on a primitive. This concept is known as 'boxing'. At runtime, the code above will actually behave like the following internally:

// outputs: 3
console.log ( (new String (name)).length );

That wrapper object (acting as a box here) is then discarded and garbage collected once the length property has been read. That is why trying to store data on a primitive fails: the temporary object we are writing into is immediately discarded, and trying to retrieve our property later on will create another box, which as a result returns undefined:

var name = 'Bob'; 

// write some data
name.foo = 'Some Data'; // equals to (new String (name)).foo = 'Some Data';

// outputs: undefined
console.log ( name.foo ); // equals to console.log ( (new String (name)).foo );

But storing data on a String created with the function constructor is possible, because it is a real object and no implicit boxing/unboxing occurs:

// create the string with new (function constructor)
var name = new String("Bob");

// write some data
name.foo = "Some Data";

// outputs: Some Data
console.log(name.foo);

Today's JavaScript VMs are pretty fast at boxing/unboxing, so it is not worth worrying too much about it performance-wise.

Two other types are not of type Object, null and undefined. The code below demonstrates this:

// outputs: false
console.log ( null instanceof Object );

// outputs: false
console.log ( undefined instanceof Object );

Memo

  • Accessing a property or calling a method on a primitive causes boxing and unboxing.
  • Therefore, it is not possible to store data on a primitive or augment its capabilities.
Null and undefined

As we just saw, null and undefined are special kinds of types. In JavaScript, any variable that is not initialized is undefined:

var myObject;
var i;

// outputs: undefined undefined
console.log ( myObject, i );

Same for undefined properties:

var myObject = { name: "Bob" };

// outputs: undefined
console.log ( myObject.firstName );

Some typed languages initialize primitive values or object references to null, which allows developers to use null for initialization testing. This automatic initialization to null by the runtime does not happen in JavaScript; keep this in mind at all times. If you forget, you may be tempted to write lots of null checks like in the example below:

var myArray;

// if the Array is not initialized, then initialize it
if ( myArray == null ) {
  myArray = new Array();
  console.log ('Array initialized');
} else console.log ('already created');

The issue is that our test for null here is not truly reliable. Remember, our Array is not initialized, hence it is undefined. On top of that, null and undefined are two different types, but comparing them without the strict equality operator resolves to true:

// outputs: object undefined
console.log ( typeof null, typeof undefined );

// outputs: true
console.log ( undefined == null );

Remember implicit conversions? We are using the == operator here. If we were to use the strict equality operator (===), which tests for both type and value, our test would fail:

// outputs: false
console.log ( undefined === null );

Remember, we used the strict operators earlier to resolve ambiguity. Here again, they prove useful. In our previous code, we would now be entering the else block:

var myArray;

// if the Array is not initialized, then initialize it
if ( myArray === null ) {
  myArray = new Array();
  console.log ('Array initialized');
} else console.log ('already created');

You could also just rely on a Boolean evaluation and do a simple if not:

var myArray;

// if the Array is not initialized, then initialize it
if ( !myArray ) {
  myArray = new Array();
  console.log ('Array initialized');
} else console.log ('already created');

Note that internally, that test will evaluate to:

if ( !Boolean(undefined) )

But in order to stay consistent with what we initially intended, it is a good practice to explicitly set our variable to null, to emphasize that it is not initialized yet but is expected to be later on:

// initialize to null
var myArray = null;

// if the Array is not initialized, then initialize it
if ( myArray == null ) {
  myArray = new Array();
  console.log ('Array initialized');
} else console.log ('already created');

Our code now becomes truly reliable, even when using strict equality:

// initialize to null
var myArray = null;

// if the Array is not initialized, then initialize it
if ( myArray === null ) {
  myArray = new Array();
  console.log ('Array initialized');
} else console.log ('already created');

If we now run our code, it outputs:

Array initialized

Let’s have a look at loops now.

Memo

  • In JavaScript, anything not defined or initialized is undefined.
  • null will never be set automatically by the runtime.
  • null and undefined are two different types.
  • Only strict equality (===) differentiates null and undefined.
  • It is a best practice to always explicitly set uninitialized variables to null.
Loops

Loops are an essential part of any programming language. JavaScript supports all the basic kinds of loops you would expect, like for, for in, while or do while:

var lng = 200;

// classic for
for ( var i = 0; i < lng; i++ )  { 
}

var myObject = { name: "Bob", age : 30 };

// object enumeration
for ( var p in myObject ) {
    /* outputs:
    name : Bob
    age : 30
    */
    console.log ( p, " : ", myObject[p] );
}

var i = 0; 

// while loop
while ( i < lng ) {
    console.log ( i );
    i++;
}

// do while loop
do { 
    console.log ( i );
    i++;
} while ( i < lng )

Some other languages also provide support for a for each loop. In JavaScript, we will use a more functional approach and rely on the Array.forEach() API, which we will cover shortly. It is important to note that when using the for in loop to enumerate object properties, ECMA-262 does not specify the enumeration order; it is implementation dependent, and the general behavior across most browsers is to match the definition order.

// custom object
var myObject = { name: "Bob", age: 20 };

// enumerate
for ( var p in myObject ) {
    /*
    // outputs:
    name
    age
    */
    console.log ( p );
}

If we change the order of definition, it gets reflected in the enumeration:

// custom object
var myObject = { age: 20, name: "Bob" };

// enumerate
for ( var p in myObject ) {
    /*
    // outputs:
    age
    name
    */
    console.log ( p );
}

There is one exception though: properties with numeric names (even when written as strings) will be listed ahead of the non-numeric ones.

// custom object
var myObject = { age: 20, name: "Bob", "12":"2343" };

// enumerate
for ( var p in myObject ) {
    /*
    // outputs:
    12
    age
    name
    */
    console.log ( p );
}

In terms of performance, the for in loop tends to be slow; make sure you don't rely on it heavily when performance is a key requirement.

Memo

  • JavaScript provides all the classical loops like for, for in, while and do while.
  • When iterating over an object, the order of enumeration generally follows the order in which the properties were set.
  • There is one exception: properties with numeric names (even as strings) are listed ahead of the non-numeric ones.
Array

As developers, we probably use arrays all the time in the content we develop: to store, reference, loop and iterate, pretty much everywhere. In the code below, we create an array and initialize its length through the constructor:

// create an Array
var myArray = new Array(5);

// outputs: 5
console.log ( myArray.length );

Note that when passing more than one argument, the Array constructor treats them as values to add to the array instead:

// create an array
var myArray = new Array(5, 10, 30, 20);

// outputs: [5, 10, 30, 20]
console.log ( myArray );

If we set the length to a number larger than the current size of the array, it is padded with undefined values:

// create an Array
var myArray = new Array(5);

// outputs: 5
console.log ( myArray.length );

// increase the array size
myArray.length = 7;

// outputs: undefined
console.log ( myArray[6] );

In the same way, adding extra commas will add undefined values to the array:

// create an Array
var myArray = ["Bob", "James", , "Tom"];

// outputs: 4
console.log ( myArray.length );

// outputs: undefined
console.log ( myArray[2] );

Array is also an object and can act like a map. In the code below, we create a dynamic property on it. Named, non-numeric properties do not affect array length:

var myArray = [];

// create a custom dynamic property
myArray['name'] = 'Bob';

// outputs: 0
console.log(myArray.length);

// outputs: Bob
console.log(myArray['name']);

At this point, such an array is almost identical to a plain object. The APIs available on Array are very similar to what you would find in other languages. You can expect all the mutator and accessor APIs you may have used in the past, like pop(), push(), splice(), etc.:

// create an Array
var myArray = new Array(200);

// outputs: 200
console.log ( myArray.length );

// push one element
myArray.push ( 50 );

// outputs: 201
console.log ( myArray.length );

But the list does not stop here. ECMAScript 5 introduced a set of iteration APIs on the Array object. The list below describes each of them, with examples of how they can be used:

  • forEach: Calls a function for each element in the array.
// some values
var values = [1, 2, 3]

/*
outputs:
1
2
3
*/
values.forEach(function(item) { console.log ( item ) });
  • every: Returns true if every element in this array satisfies the provided testing function.
var data = [ 12, "bobby", "willy", 58, "ritchie" ];

function every ( element, index, source ) {
    // use typeof: primitive numbers are not instances of Number
    return ( typeof element === "number" );
}

// is this an Array containing numbers only?
var onlyNumbers = data.every ( every );

// outputs : false
console.log( onlyNumbers );
  • some: Returns true if at least one element in this array satisfies the provided testing function.
var users = [ { prenom : "Bobby", age : 18, sexe : "H" },
            { prenom : "Linda", age : 18, sexe : "F" },
            { prenom : "Ritchie", age : 16, sexe : "H"},
            { prenom : "Stevie", age : 15, sexe : "H" } ]

function some ( element, index, source ) {
    return ( element.sexe == "F" );
}

// is there a female in this Array?
var result = users.some ( some );

// outputs : true
console.log ( result );
  • filter: Creates a new array with all of the elements of this array for which the provided filtering function returns true.
var users = [ { name : "Bobby", age : 18 },
                        { name : "Willy", age : 21 },
                        { name : "Ritchie", age : 16 },
                        { name : "Stevie", age : 21 } ];

function filter ( element, index, source ) {
    return ( element.age >= 21 );
}

var legalUsers = users.filter ( filter );

function browse ( element, index, source ) {
    console.log ( element.name, element.age );
}

/* outputs :
Willy 21
Stevie 21
*/
legalUsers.forEach( browse );
  • map: Creates a new array with the results of calling a provided function on every element in this array.
var names = ["bobby", "willy", "ritchie"];

function map ( element, index, source ) {
    return element.charAt(0).toUpperCase()+element.substr(1).toLowerCase();
}

// we create an array from the result of the map function
var formattedNames = names.map ( map );

// outputs : Bobby Willy Ritchie
console.log ( formattedNames );
  • reduce: Applies a function against an accumulator and each value of the array (from left to right) so as to reduce it to a single value.
var values = [20, 30, 40, 50];

function reduce ( previousValue, currentValue, index, source ) {
    return previousValue + currentValue;
}

// we compute the reduced value
var reduced = values.reduce ( reduce );

// outputs : 140
console.log ( reduced );
  • reduceRight: Applies a function against an accumulator and each value of the array (from right to left) so as to reduce it to a single value.
var values = [20, 30, 40, 50];

function reduce ( previousValue, currentValue, index, source ) {
    return previousValue + currentValue;
}

// same reduction, but reduceRight starts from the right
var reduced = values.reduceRight ( reduce );

// outputs : 140
console.log ( reduced );

Even though these APIs are really powerful, they tend to be slow. Here again, if performance is a key requirement, do not rely on them too heavily. For instance, the map function could be replaced by the following code:

var names = ["bobby", "willy", "ritchie"];

var lng = names.length; 

for (var i = 0; i < lng; i++) {
  var item = names[i];
  names[i] = item.charAt(0).toUpperCase() + item.substr(1).toLowerCase();
}

This version, even though less pretty, will perform much faster. Why is that? A couple of reasons:

  • The iteration APIs rely on a callback, which means one additional function call per element.
  • Each function call requires an execution context, which creates additional objects in the scope chain.
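
If you want to verify this on your own data, here is a rough micro-benchmark sketch comparing map to a manual loop (the variable names are just for illustration and the absolute numbers will vary per browser and VM); we come back to Date.now() based timing later in this article:

// build a large array of numbers
var size = 1000000;
var source = new Array(size);

for (var i = 0; i < size; i++) {
    source[i] = i;
}

// time the map version
var start = Date.now();
var doubledWithMap = source.map(function (item) { return item * 2; });
console.log('map: ' + (Date.now() - start) + ' ms');

// time the manual loop version
start = Date.now();
var doubledWithLoop = new Array(size);

for (var j = 0; j < size; j++) {
    doubledWithLoop[j] = source[j] * 2;
}

console.log('loop: ' + (Date.now() - start) + ' ms');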

JavaScript does not have a dedicated dictionary type. In the code below, we try to use an empty object as a key to map to another object containing positioning information:

// create an Array
var myArray = [];

// an object key
var key = {}

// map the key to a custom literal object
myArray[key] = { x : 400, y : 400};

// outputs: Object {x: 400, y: 400}
console.log(myArray[key]);

This example could lead you to the conclusion that arrays can be used as dictionaries. What actually happens behind the scenes is that the key object is converted to a string, which amounts to creating an "[object Object]" property:

// map the key to a custom literal object
myArray[key] = { x : 400, y : 400}; // equivalent to: myArray["[object Object]"] = { x : 400, y : 400};
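
A consequence worth noting (a small sketch to illustrate, not from the original example) is that two different object keys end up colliding, since both are converted to the same "[object Object]" string:

var map = [];

var keyA = {};
var keyB = {};

map[keyA] = { x : 100, y : 100 };
map[keyB] = { x : 400, y : 400 };

// outputs: Object {x: 400, y: 400} for both, the second write overwrote the first
console.log( map[keyA] );
console.log( map[keyB] );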

When dealing with arrays, performance is a common topic. In the following section, we are going to spend some time on performance optimizations for arrays. In the code below, we loop over the array elements:

// some names
var names = ["Daniel", "Divya", "Veronique"];

// store the length
var lng = names.length;

/* outputs:
Daniel
Divya
Veronique
*/
for (var i = 0; i < lng; i++) {
    var name = names[i];
    console.log ( name );
}

Note that we cached the length of the array in the lng variable instead of re-evaluating it on each iteration. This is a good practice you may want to follow, since re-evaluating the length every time can have an impact on performance. Also, because of its succinctness, the literal syntax is usually favored when creating objects like arrays:

// empty array
var data = [];

for (var i = 0; i < 1000000; i++) {
    data[i] = i;
}

Note that our array is empty; we do not pre-allocate it with a specific length. However, some VMs like V8 or SpiderMonkey prefer non-pre-allocated arrays, and the code above will actually perform faster than the code below:

var lng = 500000;

// slower than using [] or new Array()
var data = new Array(lng);

for (var i = 0; i < lng; i++) {
    data[i] = i;
}

We are only talking about a 5 to 10% performance difference, but it can add up. It is also not recommended to use mixed types inside an array. Even though this is pretty uncommon, be aware that it forces the VM to spend time handling the unexpected types and converting them. In the code below we calculate the sum of the values:

// some values of the same type
var values = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

var sum = 0;

var lng = values.length;

for ( var i = 0; i< lng; i++ ) {
    sum += values [ i ];
}

Now, if we were to have multiple types inside the array, this code would perform slower:

// some mixed types
var values = [ 1, 2, "3", 4, 5, 6, 7, 8, 9, "10"];

var sum = 0;

var lng = values.length; 

for ( var i = 0; i< lng; i++ ) {
    sum += values [ i ];
}

Also, avoid having holes in your arrays, which can be caused by the use of delete or by setting a specific index to null:

// some scores
var scores = [200, 100, 456, 231, 800, 453];

// create a hole
scores [2] = null;

// create another hole
delete scores [4];
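
If you need to remove an element, a hole-free alternative (a small sketch) is to use splice, which shifts the remaining elements and keeps the array dense:

// some scores
var scores = [200, 100, 456, 231, 800, 453];

// remove one element at index 2 without leaving a hole
scores.splice(2, 1);

// outputs: [200, 100, 231, 800, 453]
console.log(scores);

// outputs: 5
console.log(scores.length);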

Note that the length property is also an efficient way to empty an array. A typical scenario is inside a loop, where you could be tempted to allocate a new array to reset it. Rather than recreating an empty instance every time, which would be costly in time and memory, we can just set its length to 0:

// create an Array
var myArray = new Array(5);

// outputs: 5
console.log ( myArray.length );

// empties the array
myArray.length = 0;

// outputs: 0
console.log ( myArray.length );

Finally, when accessing the same element of an array several times, store it in a local variable rather than repeating the bracket notation, which is slower. So instead of writing the following code:

for ( var i = 0; i< lng; i++ ) {
    myArray[i].x = Math.random()*500;
    myArray[i].y = Math.random()*500;
    myArray[i].friction = Math.random();
}

It is preferred to write:

for ( var i = 0; i< lng; i++ ) {
    // store a reference in a local variable
    var element = myArray[i];
    element.x = Math.random()*500;
    element.y = Math.random()*500;
    element.friction = Math.random();
}

In addition to iteration APIs, JavaScript 1.7 introduced proper iterators. Let’s have a look at them.

Memo

  • Iterator APIs provide useful mechanisms for array processing.
  • These APIs are not as fast as manual iteration.
  • V8 (Chrome) and SpiderMonkey (Firefox) prefer non pre-allocated arrays.
  • Nitro (Safari) prefers pre-allocated arrays.
  • When looping, store the array length in a variable for reuse.
  • Avoid holey and mixed type arrays.
Iterator

JavaScript 1.7 introduced iterators, through the Iterator object. Iterators are really useful and allow developers to reduce the amount of state involved in iteration. Originally, without any iterators, looping through an array in JavaScript looks like this:

var countries = [ 'France', 'USA', 'Japan' ];

var lng = countries.length; 

/* outputs:
France
USA
Japan
*/
for ( var i = 0; i< lng; i++) {
  console.log ( countries[i] );
}

Notice the amount of state involved here: a variable i is incremented, tested against a maximum length, and used to index our array, which leaves plenty of places where things could go wrong. How could we reduce this and write safer code? This is where iterators come to the rescue:

// some data
var countries = [ 'France', 'USA', 'Japan' ];

// create the Iterator through the Iterator function
var it = Iterator(countries);

// outputs: [0, "France"]
console.log ( it.next() );

// outputs: [1, "USA"]
console.log ( it.next() );

// outputs: [2, "Japan"]
console.log ( it.next() );

Thanks to iterators, fewer variables are defined; we simply query the next item from our Iterator object, and that's all. But remember, iterators are for now only available as part of JavaScript 1.7, not ECMAScript. As a result, Chrome and IE do not support this feature, only Firefox does, which basically prevents you from using iterators in projects where reach matters.
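
For completeness, here is a sketch (Firefox-only at the time, since it also relies on JavaScript 1.7 destructuring) of how such an iterator can be consumed in a loop; note that calling next() past the last element throws a StopIteration exception:

var countries = [ 'France', 'USA', 'Japan' ];

/* outputs:
0 France
1 USA
2 Japan
*/
for ( var [index, country] in Iterator(countries) ) {
  console.log ( index, country );
}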

Memo

  • Iterators allow arrays to be traversed in an implicit manner, without manually managing index variables.
  • Iterators are part of JavaScript 1.7 and not ECMAScript 5, therefore not largely available, except in Firefox.
Date

The Date object provides information about dates and the current time, and is also very commonly used for benchmarking purposes through the Date.now() API. The example below illustrates the idea:

var start = Date.now();

var iterations = 50000000;

var buffer = new Array(iterations);

for(var i = 0; i < iterations; i++) {
  buffer[i] = i;
}

// outputs: 2703
console.log(Date.now() - start);

We capture the time (in milliseconds) before the code we want to benchmark runs; once it is finished, we compare the two values. Note that filling this array takes around 3 seconds today in Chrome 24 and around 2 seconds in Safari 6 and Firefox 18 on MacOS. Using a Date object to measure performance is the most popular way to benchmark JavaScript code, but more advanced techniques are being used today too. Here are some of the limitations of the Date.now() approach:

  • Very fast executions could just return 0 ms, which makes your test unusable.
  • The allocation of objects could also trigger the GC and impact the general performance.

To get more granular performance metrics, we can rely on the new performance.now() API, defined on the window.performance property. This API returns a floating-point number of milliseconds elapsed since the page started loading, with microsecond precision in the fractional part, which can be very useful for benchmarking. Let's have a quick look at the difference:

// outputs: 1359349137666
console.log( Date.now() );

// outputs: 22.16799999587238
console.log ( window.performance.now() );

If we change our previous benchmark to use the performance.now() API, we get a more granular number:

var start = window.performance.now();

var iterations = 50000000;

var buffer = new Array(iterations); 

for(var i = 0; i < iterations; i++) {
  buffer[i] = i;
}

// outputs: 3303.2060000114143
console.log(window.performance.now() - start);

This API is part of the navigation timing feature, available now in Chrome, Firefox and IE10, but unfortunately not in Safari or Opera. We will come back to this feature in a future article to study sequencing around loading and initialization.

Memo

  • For granular performance benchmarking, when possible, rely on the Performance API: window.performance.now().
  • Such an API is not available yet in Safari and Opera.
Function

JavaScript has support for first-class functions, which means they can be passed as parameters to other functions, returned, assigned to variables, and stored in arrays. Functions can be declared in three ways: named, as an expression, or anonymous:

// named function
function sayHello () {
  console.log('Hello');
}

// function expression
var sayHello = function() {
  console.log ('Hello');
}

// anonymous function used as the listener
button.addEventListener ("click", function (e) {
  console.log ( e.currentTarget );
})

A first difference between these forms is hoisting. Remember the hoisting behavior we saw previously with variables. With functions, named definitions are always interpreted first, so if we call sayHello() before defining it, it will just work:

// outputs: First definition
sayHello();

// named function
function sayHello () {
  console.log('First definition');
}

// function expression
var sayHello = function() {
  console.log ('Second definition');
}

// outputs: Second definition
sayHello();

Because named functions are interpreted first (hoisted), the assignment of a function expression happens afterwards at runtime, so a function expression assigned to the same name always takes precedence over the named function, regardless of the order in which they appear:

// function expression
var sayHello = function() {
  console.log ('First definition');
}

// named function
function sayHello () {
  console.log('Second definition');
}

// outputs: First definition
sayHello();

Finally, note that a function expression is just a value held by a variable: the reference can be copied to another variable or reassigned to a different function at any time, as shown below.
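
A short sketch to illustrate (the names greet and casualGreet are just for the example):

// function expression assigned to a variable
var greet = function () {
  console.log('Hello');
}

// the reference can be copied to another variable
var casualGreet = greet;

// and the original variable can be reassigned to a different function
greet = function () {
  console.log('Hello, nice to meet you');
}

// outputs: Hello
casualGreet();

// outputs: Hello, nice to meet you
greet();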

Note that we can also inline the call right after the definition. This technique, called an immediately invoked function expression (or IIFE), allows us to define the function and trigger it at the same time:

// named function with call inlined
(function sayHello() {
  console.log('First definition');
}());

Note that this syntax can also be useful to prevent polluting the global scope (Window). By declaring variables with var inside a function, these definitions are scoped to the function only. In the code below, we only expose a global entry point through the getUsers function; the other functions behave as private:

(function () {

  getUsers = function () {
    // entry point exposed to the global namespace
    return 'getting users';
  }

  var checkTime = function () {
    // logic to check time
  }

  var authenticate = function () {
    // logic for authentication
  }
}());

// outputs: getting users
console.log ( getUsers() );

// triggers: Uncaught ReferenceError: checkTime is not defined
console.log ( checkTime() );

Note that the getUsers definition is now global. If we want even better control, we can rely on the revealing module pattern:

var module = (function () {

  var getUsers = function () {
    // entry point exposed to the global namespace
    return 'getting users';
  }

  var checkTime = function () {
    // logic to check time
  }

  var authenticate = function () {
    // logic for authentication
  }

  return {
    getUsers:getUsers
  }

}());

// outputs: getting users
console.log ( module.getUsers() );

// triggers: Uncaught ReferenceError: checkTime is not defined 
checkTime();

Through this pattern, the only global definition left is the module object itself. We safely choose which definitions we want to make public, without polluting the global scope any further.

Let’s have a look now at a function itself and what we can do with it:

function calculate(a, b, c){};

// grab the number of parameters
// outputs: 3
console.log (calculate.length);

// store data
calculate.x = 12;

// retrieve it
// outputs: 12
console.log (calculate.x);

As we saw earlier, functions are objects, and just like with arrays, we can use the bracket syntax to read or write properties, which can be useful if we need to evaluate property names dynamically:

function calculate(a, b, c){};

// grab the number of parameters
// outputs: 3
console.log (calculate.length);

// store data
calculate['x'] = 12;

// retrieve it
// outputs: 12
console.log (calculate['x']);

We can also easily access all the parameters passed to a function using the array-like arguments object:

function average() {
    var total = 0;
    var lng = arguments.length;

    for ( var i = 0; i< lng; i++ ) {
        total += arguments [ i ];
    }
    return total / lng;
} 

// outputs: 263.5
console.log ( average ( 10, 29, 893, 122 ) );

Every function, when called, exposes a local arguments variable in its body which holds all the parameters passed. It is also worth noting that variables defined in a parent function are accessible inside a nested function (a closure):

function foo() {
    var a = 'From parent function';

    var innerFoo = function() {
        console.log ( a );
    }();
}

// outputs: From parent function
foo();

On the other hand, the parent function has no way to access local variables of the inner function:

function foo() {
    var a = 'From parent function';

    var innerFoo = function() {
        var b = 'From inner function';
        console.log ( a );
    }();
    console.log ( b );
}

// outputs: Uncaught ReferenceError: b is not defined
foo();

Memo

  • Functions can be anonymous or named.
  • Hoisting is performed on named functions.
  • Named functions can be called before being defined.
  • Function expressions cannot be called before being defined.
Context of execution

One of JavaScript's difficulties resides in understanding how the this keyword behaves. If you have been developing with ActionScript, this behaves like it did in ActionScript 1 and 2. You may think that this always points to the context the function was defined on, but it actually points to the context of execution:

function foo() {
  console.log ( this );
}

// outputs: Window {top: Window, window: Window, location: Location, external: Object, chrome: Object…}
foo();

By default, the global scope is the Window object, the global object of the browser. In our example, the function effectively becomes a method of the Window object:

// outputs: Window {top: Window, window: Window, location: Location, external: Object, chrome: Object…}
window.foo();

In this example, two variables x and y are defined on the global scope, therefore properties of the Window object:

// some position
var x = 200;
var y = 300;

function foo() {
  console.log ( this.x, this.y, x, y );
}

// outputs: 200 300 200 300
foo();

In this case, this points to the global context (Window), where our properties are defined. By omitting this, we implicitly target the global scope and can also access the x and y properties. At runtime, the VM first looks for variables defined in the local scope of the foo function; if they are not found there, it looks for them in the parent context. If local variables are defined inside the function, these are chosen first. The this keyword therefore allows us to resolve the ambiguity and always target the current context of execution rather than the local scope:

// some position
var x = 200;
var y = 300;

function foo() {
    // local variables
    var x = 500;
    var y = 500;
    console.log ( this.x, this.y, x, y );
}

// outputs: 200 300 500 500
foo();

Keep in mind that the context of execution may vary. In the code below, even though the foo function is defined on the Window object, copying its reference onto myObject makes it execute in the context of myObject:

// some position
var x = 200;
var y = 300;

function foo() {
    console.log ( this.x, this.y, x, y );
}

var myObject = { x : 400, y : 400 }; 

// pass a reference to the foo function
myObject.foo = foo;

// outputs: 400 400 200 300
myObject.foo();

Now this points to the object the function is executed on. We can also decide which context should be used through the call API, available on any function. In the following code, we execute our foo function back on the Window scope:

// some position
var x = 200;
var y = 300;

function foo() {
    console.log ( this.x, this.y, x, y );
}

var myObject = { x : 400, y : 400 };

// pass a reference to the foo function
myObject.foo = foo;

// outputs: 200 300 200 300
myObject.foo.call ( this );
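
As a side note (a sketch that is not part of the original example, using a hypothetical move function), call has a sibling API, apply, which does exactly the same thing but takes the function arguments as an array:

function move ( dx, dy ) {
    console.log ( this.x + dx, this.y + dy );
}

var myObject = { x : 400, y : 400 };

// outputs: 410 420
move.call ( myObject, 10, 20 );

// apply takes the arguments as an array
// outputs: 410 420
move.apply ( myObject, [10, 20] );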

We can also new a function using its name and treat it like a function constructor:

function Point(x, y) {
  this.x = x;
  this.y = y;
}

var p = new Point(30, 30);

// outputs: Point {x: 30, y: 30}
console.log ( p );

// outputs: 30, 30
console.log ( p.x, p.y );

In this scenario, this allows us to reference the newly created instance and define its properties. If we do not use this, we simply reassign the x and y parameters (which are local to the function) and fail to define any property on the object:

function Point(x, y) {
  x = x;
  y = y;
}

var p = new Point(30, 30);

// outputs: Point {}
console.log ( p );

// outputs: undefined undefined
console.log ( p.x, p.y );

We can also use the this keyword to attach behavior to the instance, here by defining a distance method through a distance property:

function Point(x, y) {
  this.x = x;
  this.y = y;

  this.distance = function (point) {
    var dx = Math.abs ( this.x - point.x );
    var dy = Math.abs ( this.y - point.y );
    return Math.sqrt (dx*dx+dy*dy);
  }
}

var p1 = new Point(30, 30);
var p2 = new Point(50, 90);

// outputs: 63.245553203367585
console.log ( p1.distance( p2 ) );

To augment the capabilities of a custom or native object, and perform subclassing, we would have to rely on the prototype object.

Memo

  • Functions can be anonymous or named.
  • Named functions can be called before being defined.
  • Anonymous functions cannot be called before being defined.
  • Functions are objects and can be newed.
  • Using this is essential to always refer to the current context and resolve ambiguity with local variables.
  • this points to what is on the left of the dot operator.
Prototype, the good old friend

JavaScript is built on top of the concept of prototypes, so let's have a quick look at how this works. If we wanted to define a distance method on our Point object, we could write:

function Point(x, y) {
    this.x = x;
    this.y = y;

    this.distance = function (point) {
      var dx = Math.abs ( this.x - point.x );
      var dy = Math.abs ( this.y - point.y );
      return Math.sqrt (dx*dx+dy*dy);
    }
}

var p1 = new Point(30, 30);
var p2 = new Point(50, 90);

// outputs: 63.245553203367585
console.log ( p1.distance( p2 ) );

Now, if we wanted the Point object to be extended (subclassed) at some point, or simply wanted the method to be shared by all instances instead of being recreated for each one, we could define it on the prototype object:

function Point(x, y) {
  this.x = x;
  this.y = y;
}

Point.prototype.distance = function (point) {
  var dx = Math.abs ( this.x - point.x );
  var dy = Math.abs ( this.y - point.y );
  return Math.sqrt (dx*dx+dy*dy);
}

var p1 = new Point(30, 30);
var p2 = new Point(50, 90);

// outputs: 63.245553203367585
console.log ( p1.distance( p2 ) );

Or use the call function, to execute the Point constructor in the context of our Point3D object:

function Point(x, y) {
  this.x = x;
  this.y = y;
}

function Point3D(x, y, z) {
    Point.call(this, x, y);
    this.z = z;
}
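
To complete the subclassing, one possible approach (a sketch, assuming the distance method defined on Point.prototype earlier) is to also link the prototype chain so that Point3D instances inherit Point's methods; ES5's Object.create makes this straightforward:

// here we assume Point.prototype.distance is defined as shown earlier
Point.prototype.distance = function (point) {
  var dx = Math.abs ( this.x - point.x );
  var dy = Math.abs ( this.y - point.y );
  return Math.sqrt (dx*dx+dy*dy);
}

// link the prototype chain: Point3D inherits from Point
Point3D.prototype = Object.create(Point.prototype);
Point3D.prototype.constructor = Point3D;

var p3a = new Point3D(30, 30, 0);
var p3b = new Point3D(50, 90, 0);

// outputs: 63.245553203367585 (distance ignores z here)
console.log ( p3a.distance( p3b ) );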

Remember to always use the this keyword to access the current instance's properties; otherwise, the identifier is resolved through the scope chain, ending at the global scope (Window), not on our instance. The code below illustrates this:

function Point(x, y) {
  this.x = x;
  this.y = y;
  this.enabled = false;
}

Point.prototype.distance = function (point) {
  var dx = Math.abs ( this.x - point.x );
  var dy = Math.abs ( this.y - point.y );
  console.log ( enabled );
  return Math.sqrt (dx*dx+dy*dy);
}

var p1 = new Point(30, 30);
var p2 = new Point(50, 90);

// throws: Uncaught ReferenceError: enabled is not defined 
console.log ( p1.distance( p2 ) );

To augment native classes, we can use the same technique. In the code below we augment the Array class by adding a new shuffle API:

Array.prototype.shuffle = function() {
    var lng = this.length;

    for ( var i = 0; i< lng; i++ ) {
        var tmp = this[i];
        var randomNum = Math.floor(Math.random()*this.length);
        this[i] = this[randomNum];
        this[randomNum] = tmp;
    }
}

// create an array to be shuffled
var myArray = ["a","b","c","d","e"];

// shuffle it
myArray.shuffle();

// outputs: ["b", "a", "d", "c", "e", shuffle: function]
console.log(myArray);
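
Because we are touching a native prototype here, a defensive habit worth sketching (not part of the original example; this variant uses a Fisher-Yates style swap) is to only define the method when the runtime does not already provide one:

// define shuffle only if it does not already exist
if ( typeof Array.prototype.shuffle !== 'function' ) {
    Array.prototype.shuffle = function() {
        for ( var i = this.length - 1; i > 0; i-- ) {
            var j = Math.floor ( Math.random() * (i + 1) );
            var tmp = this[i];
            this[i] = this[j];
            this[j] = tmp;
        }
    }
}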

Extending native classes this way is very powerful and efficient: in a few lines, we augmented the capabilities of the core Array class, pretty neat. But, as the guard above anticipates, there is also a major risk in doing this: a future version of JavaScript could implement the same functionality, collide with your custom implementation and break your code. We just covered in the previous section how the keyword this behaves. In the example below, we are using an inner anonymous function to check the distance:

function Point(x, y) {
  this.x = x;
  this.y = y;
}

Point.prototype.distance = function (point) {
  var dx = Math.abs ( this.x - point.x );
  var dy = Math.abs ( this.y - point.y );

  (function checkDistance () {
    console.log ( this.x, this.y );
  })();

  return Math.sqrt (dx*dx+dy*dy);
}

var p1 = new Point(30, 30);
var p2 = new Point(50, 90);

// outputs: undefined undefined
// outputs: 63.245553203367585
console.log ( p1.distance( p2 ) );

Given that the parent function executes in the context of the Point object, we might expect the same for the inner function. Instead, its context is the global object Window, which returns undefined for both x and y. Fortunately, we can save a reference to this in the parent function and access it from the inner function, which then becomes a closure:

function Point(x, y) {
  this.x = x;
  this.y = y;
}

Point.prototype.distance = function (point) {
  var dx = Math.abs ( this.x - point.x );
  var dy = Math.abs ( this.y - point.y );

  var ref = this;

  (function checkDistance () {
    console.log ( ref.x, ref.y );
  })();

  return Math.sqrt (dx*dx+dy*dy);
}

var p1 = new Point(30, 30);
var p2 = new Point(50, 90);

// outputs: 30 30
// outputs: 63.245553203367585
console.log ( p1.distance( p2 ) );
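
As an alternative sketch (not in the original), since we covered the call API earlier, the inner function can also simply be invoked with an explicit context instead of capturing a reference; here we redefine the distance method from the previous snippet:

Point.prototype.distance = function (point) {
  var dx = Math.abs ( this.x - point.x );
  var dy = Math.abs ( this.y - point.y );

  // invoke the inner function with the current context passed explicitly
  (function checkDistance () {
    console.log ( this.x, this.y );
  }).call ( this );

  return Math.sqrt (dx*dx+dy*dy);
}

// outputs: 30 30
// outputs: 63.245553203367585
console.log ( p1.distance( p2 ) );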

Memo

  • When invoked as plain functions, inner functions have the global context (Window) as this.
Closure

The concept of closure can be a little tricky to understand at first, but there is real power behind it. A closure is a nested function that captures non-local variables from containing scopes and is exported outside of its original scope. The code below illustrates the idea:

function increment () {
  var x = 0;

  return function () {
    return x++;
  }
}

// grab a reference
var ref = increment();

// outputs: 0
console.log ( ref() );

// outputs: 1
console.log ( ref() );

// outputs: 2
console.log ( ref() );

// outputs: 3
console.log ( ref() );

The parent function increment defined a local variable x, which is accessible from the inner function. Once returned, the inner function is exported to a different scope and has captured the x variable with it. The beauty is that the x variable is protected because it is out of reach and cannot be overwritten from the outside. In the code below, we define another variable x to see if it collides:

function increment () {
  var x = 0;
  return function () {
    return x++;
  }
}

// grab a reference
var ref = increment();

// outputs: 0
console.log ( ref() );

// outputs: 1
console.log ( ref() );

// outputs: 2
console.log ( ref() );

var x = 0; 

// outputs: 3
console.log ( ref() );

The original x variable is unaffected, captured inside the closure. Also, you have to be mindful of garbage collection when using closures. In the code below, we have a variable MAX_VALUE:

function increment () {
  var x = 0;
  var MAX_VALUE = 100;

  return function () {
    return x++;
  }
}

Because the closure does not capture the MAX_VALUE variable, it is lost and immediately eligible for garbage collection.

Memo

  • Functions defined inside functions have their own scope.
  • A closure is a nested function that captures non-local variables (coming from the parent function) and is exported outside of its original scope.
Garbage Collection

Like any managed language, JavaScript relies on a garbage collector (GC) to reclaim memory. It is very important to understand that garbage collection is triggered by memory allocation, not object disposal, and cannot be controlled by JavaScript developers. If you do not understand the mechanics behind garbage collection, you may write memory-inefficient code or, worse, create applications that leak memory and consume way too much of it.

Writing inefficient code may also put pressure on the garbage collector, which can lead to collection happening synchronously on the UI thread. This could cause the UI to lock up and make your application unresponsive. You should always pay attention to the GC to ensure that your content stays as responsive as possible. Garbage collectors in JavaScript are mark-and-sweep based and will collect objects that are no longer reachable, i.e. objects with no remaining references; it is therefore very important to null the references of the objects you want to be collected. In the code below, we null the single reference we have:

// custom object
var myObject = {};

// null the only reference available
myObject = null;

Remember that nulling the reference has no direct impact on the GC. Later on, at some point, when the GC will be requesting more memory, objects without remaining references will be collected and memory will be reclaimed. In the code below, we have another reference to our object in an array:

// custom object
var myObject = {};

// an Array holds one reference
var arrayReferences = [ myObject ];

// we null one of the two references
myObject = null;

In this scenario, one reference still remains. This reference will prevent our object from being collected. As expected, to completely dispose our object, we need to clean all references:

// custom object
var myObject = {};

// an Array holds one reference
var arrayReferences = [ myObject ];

// we null one of the two references
myObject = null;

// we remove the other reference
arrayReferences[0] = null;

// or simpler
arrayReferences.length = 0;

The following general good practices will help you write more GC-friendly code:

  • Always remove event listeners when done with an object (see the sketch right after this list).
  • Try to avoid instantiating too many objects. Cache and reuse them when possible.
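
Regarding the first point, here is a minimal sketch (it uses the browser's DOM API and assumes a button element reference, as in the earlier listener example):

// keep a named reference to the handler so it can be removed later
function onClick ( e ) {
  console.log ( e.currentTarget );
}

button.addEventListener ( 'click', onClick );

// later, when done with the button
button.removeEventListener ( 'click', onClick );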

Let's spend some time on that last point and see how object pooling can help write GC-friendly code.

Memo

  • Garbage collection is triggered by memory allocation, not object nulling.
  • Garbage collection cannot be controlled or triggered explicitly.
  • It is a best practice to limit the pressure on the garbage collector.
  • When done with an object, it is a good practice to null all its references to make it eligible for garbage collection.
Object pooling

Even though some GC experts state that newing objects should not be costly with an optimized GC, the reality is that, still today in most languages, heavy instantiation puts pressure on the GC. Over the years, developers have come up with techniques to minimize the number of allocations performed in their content. The idea is simple: the objects are allocated when the application initializes and made available from a pool. Once done with an object, it is placed back into the pool for later use.

Note that this is only valuable when the objects you are pooling are expensive enough to create that instantiating them costs more than retrieving them from and storing them back into the pool. Below is an example of a pool class; note that we rely here on the prototype object:

function ObjectPool (cls) {
    this.cls = cls;
    this.MAX_VALUE = 0;
    this.GROWTH_VALUE = 0;
    this.counter = 0;
    this.pool = new Array();
    this.currentSprite = null;
}

ObjectPool.prototype.initialize = function(maxPoolSize, growthValue) {
    this.MAX_VALUE = maxPoolSize;
    this.GROWTH_VALUE = growthValue;
    this.counter = maxPoolSize;

    var i = maxPoolSize;

    this.pool = new Array(this.MAX_VALUE);

    while( --i > -1 )
           this.pool[i] = new this.cls();
}

ObjectPool.prototype.getInstance = function() {
    if ( this.counter > 0 )
        return this.currentSprite = this.pool[--this.counter];

    var i = this.GROWTH_VALUE;

    while( --i > -1 )
            this.pool.unshift ( new this.cls() );

    this.counter = this.GROWTH_VALUE;

    return this.getInstance();
}

ObjectPool.prototype.disposeSprite = function (disposedSprite) {
    // put the instance back into the pool for later reuse
    this.pool[this.counter++] = disposedSprite;
}

To use it, we would be using this simple code:

function Enemy (){}

Enemy.prototype.sayHello = function() {
    return 'Hello from an Enemy';
}

// a simple dispose method to reset the object's state before pooling
Enemy.prototype.dispose = function() {
    // reset state here
}

// create the pool
var pool = new ObjectPool(Enemy);

// initialize the pool
pool.initialize(200, 10);

// retrieve the instance
var myEnemy = pool.getInstance();

// use the object
console.log ( myEnemy.sayHello() );

// call dispose to reset the object's state
myEnemy.dispose();

// once disposed, put it back into the pool for later use
pool.disposeSprite ( myEnemy );

At initialization time, we allocate the required number of enemies. Every time a new enemy is needed, we grab one from the pool. Once done with it, we return it to the pool, where it sits deactivated until it is needed again.

Memo

  • To help reduce the cost of instantiation, it is possible to rely on a pool helper class.
  • The purpose of the pool class is to pre-allocate objects needed later on and provide them when needed. Once done with them, objects can be put back into the pool for later use.

I hope you enjoyed this refresh!
