In ActionScript 3, memory is managed with the help of a garbage collector that allocates and deallocates objects throughout the application's lifecycle. The garbage collector (GC) allocates memory when a new object is created, periodically scans the object graph, detects unreferenced objects and deallocates them. Pretty useful. AS3 is not the only language that relies on garbage collection: C# with Mono, JavaScript and Java all rely on it. On paper it sounds great, but any developer who has built complex content on a GC-based platform will tell you that it's not all rosy. Even though the GC makes the developer's life easy at first, the ActionScript 3 developer's worst enemy today is actually the GC. So why is that?

Unpredictability

First, garbage collection is completely unpredictable. I remember, when teaching ActionScript 3 to students, that the idea that objects would be deallocated "at some point" in time, but nobody knows when, was pretty hard to grasp. Actually, it was even possible for objects to never get deallocated if the garbage collector never decided to kick in. So how do you test this? In ActionScript 3, to test/profile an application, it is possible to trigger the GC from the Flash Builder profiler, from AS3 with the System.gc() API, or even better with Adobe Scout, which also provides information on which objects are eligible for collection and which just got deallocated.

In the code below, we set the sprite reference to null. Note that this does not trigger anything; our sprite is still alive:

import flash.display.Sprite;
import flash.events.Event;

var mySprite:Sprite = new Sprite();

mySprite.addEventListener ( Event.ENTER_FRAME, onFrame );

function onFrame ( e: Event ):void
{
	trace ('I am alive!');
}

// we dereference the object
// collection is not triggered, sprite is still alive and talking
mySprite = null;

At this point, our sprite is eligible for garbage collection, but it still remains in memory and still dispatches Event.ENTER_FRAME. To test whether our sprite will eventually be garbage collected, we can trigger the GC using the System.gc() API:

import flash.display.Sprite;
import flash.events.Event;
import flash.system.System;

var mySprite:Sprite = new Sprite();

mySprite.addEventListener ( Event.ENTER_FRAME, onFrame );

function onFrame ( e: Event ):void
{
	trace ('I am alive!');
}

// we dereference the object
// collection is not triggered, sprite is still alive and talking
mySprite = null;

// collection is triggered, object is killed
System.gc();

Remember that the System.gc() API is a debug-only feature, so you cannot rely on it in production. This GC unpredictability can be pretty sneaky. Typically, you don't want garbage collection to happen in the middle of something. In a game, where best performance is crucial, you don't want a collection to kick in right in the middle of gameplay, but rather before the next level gets loaded; in other words, at a time when the experience and performance are not impacted.

In Flash Player 11, we introduced a new API, System.pauseForGCIfCollectionImminent(), which helps developers influence when garbage collection kicks in. You still cannot control the GC directly, but it was an improvement.
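Because the API takes an imminence threshold, the call can be placed at a natural pause point in the experience. A minimal sketch of the idea (the loadNextLevel() routine is hypothetical):

```actionscript
import flash.system.System;

// called at a natural pause point, e.g. right before loading the next level
function prepareNextLevel ():void
{
	// a lower imminence threshold makes a pause more likely:
	// if a collection looks even slightly imminent, let it run now
	// rather than in the middle of gameplay
	System.pauseForGCIfCollectionImminent ( 0.25 );
	loadNextLevel(); // hypothetical level-loading routine
}
```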

Synchronous (UI lock)

The reason you don't want garbage collection to happen at moments you don't control is that GC in Flash runs on the UI thread, and therefore locks the UI while collection happens. The more complex your scene becomes, and the bigger the object graph gets, the longer the pause will be. In a game, this is a showstopper, because a UI lock ruins the experience and frustrates users.

That's why AS3 developers have developed workarounds over the years to prevent the GC from being triggered, object pooling being one such strategy. The idea behind object pooling is that instead of constantly allocating new objects and pressuring the GC, a set of objects is allocated during app initialization, and once objects are done with their tasks, they are placed back inside a pool for later reuse. Keep in mind that this will do the job but will consume more memory, as the pooled objects never get deallocated: you win on the performance side but lose on memory footprint. You can find more details about object pooling here.
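As a sketch of the idea, here is a minimal sprite pool (the class and method names are illustrative, not a standard API):

```actionscript
import flash.display.Sprite;

// allocate a fixed set of sprites up front, then reuse them
// instead of allocating new ones and pressuring the GC
class SpritePool
{
	private var _pool:Vector.<Sprite> = new Vector.<Sprite>();

	public function SpritePool ( size:uint )
	{
		for ( var i:uint = 0; i < size; i++ )
			_pool.push ( new Sprite() );
	}

	// hand out a pooled sprite, or grow the pool if it is empty
	public function acquire ():Sprite
	{
		return _pool.length > 0 ? _pool.pop() : new Sprite();
	}

	// place the sprite back in the pool for later reuse
	public function release ( sprite:Sprite ):void
	{
		_pool.push ( sprite );
	}
}
```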

With Swift?

In Swift, thanks to ARC (Automatic Reference Counting), you also don't need to manage memory manually by allocating and releasing objects, as you did with Objective-C before ARC was introduced. ARC counts the number of references pointing to each object, and when that number reaches zero, the object is deallocated. Pretty similar to ActionScript 3, you may say, but with a few notable differences.

Compile time vs runtime

In AS3, all this GC work happens at runtime; the bytecode generated by the ActionScript 3 compiler does not emit any specific calls to allocate or release memory. If you were to disassemble the binary generated by the Swift compiler, you would see the calls to allocate and release objects, as if you had written them manually: in Swift, the compiler does all that work for you.

Synchronous deallocation

In AS3, as we have seen before, setting the last remaining reference to an object to null won't kill the object; it will only make it eligible for garbage collection. In Swift, it will immediately deallocate the object, synchronously, and that is a big difference. You can track initialization and deinitialization through the init and deinit methods:

class Hero {
    
    let name: String
    
    init ( name: String ) {
        self.name = name
        print("\(self.name) got initialized")
    }
    
    deinit {
        print("\(self.name) got deinitialized")
    }
}

// we create our hero
// note the use of the optional (?) operator
// using this operator allows the var bob to be set to nil
var bob: Hero? = Hero(name: "Bob")
        
// we set the only reference to nil (equivalent of null)
// the object is immediately destroyed and the deinit method is called
bob = nil

If we run our application, we see in the output window:

Bob got initialized
Bob got deinitialized

Because the developer has full control over when objects are deallocated, objects are killed sooner, which optimizes memory consumption (there is no pool of eligible objects sitting in memory, waiting to be disposed of). It is also a more incremental approach that prevents the UI from locking. Most GCs, on most platforms, as beautiful and complex as they are, will always impact the UI thread.

Circular references (retain cycles)

In AS3, if two objects were unreachable from the roots of the application (Stage, Loader) but still referenced each other, they would still be garbage collected.

In Swift, if two objects are unreachable from the roots of your application but still reference each other, you have what is called a circular reference, or retain cycle, and these two objects will never get deallocated, causing a memory leak. The GC in Flash Player/AIR solved this through a combination of deferred reference counting and conservative mark-and-sweep. The mark-and-sweep piece is what handles circular references (retain cycles), and that is a big advantage of garbage collection in Flash Player/AIR.
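To illustrate, the Hero example from earlier can be extended with a hypothetical Sidekick class, so that the two objects hold strong references to each other:

```swift
class Hero {
    let name: String
    var sidekick: Sidekick?
    init(name: String) { self.name = name }
    deinit { print("\(name) got deinitialized") }
}

class Sidekick {
    let name: String
    var hero: Hero?
    init(name: String) { self.name = name }
    deinit { print("\(name) got deinitialized") }
}

var bob: Hero? = Hero(name: "Bob")
var rob: Sidekick? = Sidekick(name: "Rob")

// each object now holds a strong reference to the other
bob!.sidekick = rob
rob!.hero = bob

// clearing our own references leaves the cycle intact:
// neither deinit is ever called, both objects leak
bob = nil
rob = nil
```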

To deal with this in Swift, you use weak references: a weak reference does not keep the object it points to alive, so if only weak references to an object remain, the object is deallocated. Swift also introduces the concept of unowned references, which are non-optional. This brings more granularity to object dependencies; you can read more about it here.
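Applied to a Hero/Sidekick retain cycle, marking one side of the relationship as weak breaks the cycle (the Sidekick class is hypothetical, introduced here for illustration):

```swift
class Hero {
    let name: String
    var sidekick: Sidekick?
    init(name: String) { self.name = name }
    deinit { print("\(name) got deinitialized") }
}

class Sidekick {
    let name: String
    // weak: this reference does not keep the hero alive,
    // so the cycle is broken and both objects can be deallocated
    weak var hero: Hero?
    init(name: String) { self.name = name }
    deinit { print("\(name) got deinitialized") }
}

var bob: Hero? = Hero(name: "Bob")
var rob: Sidekick? = Sidekick(name: "Rob")
bob!.sidekick = rob
rob!.hero = bob

// when the last strong references go away, both deinit methods run
bob = nil
rob = nil
```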

I hope you enjoyed this quick overview of differences between the Flash memory model (GC) and Swift (ARC).