React Beyond The DOM: Experiments In Applying React To Pixi.js, Three.js, and Web Audio

In this post I describe some experiments I performed in which React drives things that are not the browser DOM. This includes the visual scene graphs used by pixi.js and three.js, as well as the audio graph used by the Web Audio API.


Note that these libraries I’m presenting are experimental and unfinished, serving as a proof-of-concept rather than a finished product. These projects were an experiment to see how straightforward it would be to use React on things besides the DOM. React itself is still changing fairly rapidly and so is Om, which is the ClojureScript layer I use on top of React.

React is “a Javascript library for building user interfaces” built and used by Facebook. The general idea behind React is that a developer describes how to map from application state to GUI state in a declarative style, and leaves the rest to React. Whenever you tell React about changes to your application state, it figures out how to properly modify the GUI to reflect this new application state. In vanilla React this GUI is an HTML DOM tree that gets modified to properly reflect your application state. Changes in the DOM tree then cause new stuff to get displayed in your browser window.

After reading a few posts about React I realized that the general ideas could work in a more general manner, giving me a way to “render” application state to a mutable scene graph or tree structure that wasn’t the DOM. After perusing the React source, it appeared that the browser-specific parts of React were actually separated quite well from the rest of React’s functionality as self-contained mixins. I could instead provide my own mixins that used a non-DOM render target. The result is a React-like library that looks like React but ultimately renders to something else, such as the scene graph of three.js or the audio graph used by Web Audio.

The relevant projects on github are:

First I’ll explain why I was interested in using React for general scene graph manipulation. Then I’ll go over the three libraries I targeted with React, describing the various quirks and surprises I found when applying React to each one.

Initial Motivation

What sort of need would motivate a person to control pixi.js using React?

I really wanted some way to draw 2D and 3D scenes in the browser using ClojureScript. There were some good graphics libraries floating around like pixi.js, createJS, and three.js, but I found that building and manipulating the scene graphs from ClojureScript was a hassle. This stems from the mismatch between Clojure’s immutable data structures and the mutable scene graphs used by a library like pixi.js or createJS.

Later I was made aware of React when I read several posts on David Nolen’s blog where he talked about Om, a library written in and for ClojureScript. With Om you can use the native data structures of ClojureScript (immutable maps and vectors) to control the mutable browser DOM tree. Since the DOM isn’t too different from any other scene graph, I figured React would be a good general solution to let me manipulate mutable scene graphs from ClojureScript.

And it works pretty well. Using this library you can build and manipulate a 2D pixi stage using ClojureScript. It’s still experimental but I find it way more useful than directly controlling the scene graph via hand-written ClojureScript!

React and Pixi.js

Pixi.js is a 2D graphics library that lets you describe your graphics using a scene graph, with apparent strong inspiration from the ‘display list’ approach used in Flash. Pixi.js uses WebGL when available and can draw a lot of stuff very fast. I felt it was a good choice for my initial attempt at retargeting React.

React itself is pretty big, with dozens of files containing various components and managers and mixins. A good way to get a head start is to look at how someone else approached the same problem, and luckily there was already another project floating around: react-art, which uses React to target the ART library and render graphics to a canvas among other things. A lot of my initial code was made by copying and rewriting parts of react-art.

Roll Your Own Components

To make my custom components I found that I needed only a few mixins from React. This was for React 0.11, so YMMV:

  • ReactComponent.Mixin which handles the basic React lifecycle
  • ReactMultiChildComponent.Mixin which is used by nodes that can contain children
  • ReactBrowserComponentMixin and ReactDOMComponent.Mixin if you’re creating any DOM nodes
  • Starting in React 0.11, you use ReactDescriptor to create descriptor factories

I merged together the mixins I needed for a particular node and implemented all the relevant methods like mountComponent and updateChildren for pixi.js, my target library. Most of the effort involves figuring out which methods need to be implemented and when they occur in React’s lifecycle. Because of things like update batching and timing functions the program flow is not always straightforward. But starting from an example such as react-art I can kind of muddle around and figure out where things go eventually.

Now instead of creating and manipulating DOM nodes or HTML markup, I wrote code to create and manipulate a pixi.js DisplayObject:

mountComponent: function(rootID, transaction, mountDepth) {
    /* jshint unused: vars */ 
    ReactComponentMixin.mountComponent.apply(this, arguments);
    this.displayObject = this.createDisplayObject(arguments);
    this.applyDisplayObjectProps({}, this.props);
    this.applySpecificDisplayObjectProps({}, this.props);
    this.mountAndAddChildren(this.props.children, transaction);
    return this.displayObject;
},

I wrote similar code that adds and attaches child nodes. The Stage component is a little more complicated since it’s both a DOM element (the canvas) and a pixi.js Stage object. Therefore I had to include the browser and DOM mixins to properly handle attaching the Stage into the rest of the DOM. The Stage also ended up being the bridge between “DOM-land” which contains DOM nodes and “pixi-land” which contains pixi.js DisplayObjects.
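The child-mounting code can be sketched in plain JavaScript. This is a hedged, minimal sketch, not the library’s actual implementation: the function name mountAndAddChildren comes from the text above, but the child components and the parent display object here are stand-in stubs for React internals and PIXI.DisplayObjectContainer.

```javascript
// Minimal sketch of mounting children onto a pixi-like display object.
// Each child component is assumed to return its underlying display
// object from mountComponent, mirroring the mountComponent shown above.
function mountAndAddChildren(parentDisplayObject, children) {
  children.forEach(function (child) {
    // The child mounts itself and hands back its display object...
    var childDisplayObject = child.mountComponent();
    // ...which the parent attaches to the mutable scene graph.
    parentDisplayObject.addChild(childDisplayObject);
  });
}

// Stubs standing in for pixi.js and React component internals:
var parent = { children: [], addChild: function (c) { this.children.push(c); } };
var childA = { mountComponent: function () { return { name: 'sprite-a' }; } };
var childB = { mountComponent: function () { return { name: 'sprite-b' }; } };

mountAndAddChildren(parent, [childA, childB]);
console.log(parent.children.length); // 2
```

The real version also has to thread React’s transaction object through each child mount, but the shape of the traversal is the same.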

Since the pixi.js scene graph is structured a lot like the DOM most of the other algorithms in React like diffing, batching, and createClass all just work. I added proper hooks for pixi’s mouse callbacks but I didn’t integrate it into React’s event system because I’m a lazy bum.
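The prop-diffing idea behind applyDisplayObjectProps can be illustrated with a small sketch. This is an assumption-laden illustration, not the library’s real code: the property whitelist is invented for the example, and the sprite is a plain object rather than a real DisplayObject.

```javascript
// Sketch of diff-based prop application: only properties whose value
// actually changed get written onto the mutable display object, which
// is the same minimal-update idea React applies to DOM attributes.
var DISPLAY_PROPS = ['x', 'y', 'alpha', 'rotation']; // illustrative list

function applyDisplayObjectProps(displayObject, oldProps, newProps) {
  DISPLAY_PROPS.forEach(function (key) {
    if (newProps[key] !== undefined && newProps[key] !== oldProps[key]) {
      displayObject[key] = newProps[key]; // write only what changed
    }
  });
}

var sprite = { x: 0, y: 0, alpha: 1 };
applyDisplayObjectProps(sprite, { x: 0, y: 0 }, { x: 10, y: 0, alpha: 0.5 });
console.log(sprite.x, sprite.y, sprite.alpha); // 10 0 0.5
```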

The only other gotcha was that React composite components (those created via createClass) have some code in updateComponent that directly manipulates HTML and the DOM. This has to be monkey-patched to modify pixi.js objects instead, or you’ll get strange errors in certain rarely occurring situations.

Now I could create pixi.js scenes using React-style code:

return ReactPIXI.DisplayObjectContainer(
    {x:xposition, y:100 },
    ReactPIXI.Sprite({image:creamimagename, y:-35, anchor: new PIXI.Point(0.5,0.5), key:'topping'}, null),
    ReactPIXI.Sprite({image:'cupCake.png', y:35, anchor: new PIXI.Point(0.5,0.5), key:'cake'}, null)
);
And since all this is done using normal function calls, I can extend this to Om pretty easily and build scenes in ClojureScript:

(defn simplestage [cursor]
  (pixi/stage #js {:width 400 :height 300}
    (pixi/text #js {:x 100 :y 100 :text "argh!"})))

No Obvious Speedup Here

Finally note that here we’re just managing raw Javascript objects and not the (relatively) slow DOM. This means that driving pixi.js through React is not going to be any faster than just manipulating the DisplayObjects yourself. The virtual tree diffing still avoids changing things that don’t need it, and you also gain the declarative style of React’s render methods which I find advantageous.

Also, unlike the standard DOM it’s not unusual for a single node to have thousands of children as shown in the pixi.js bunnymark. React is still less than ideal when used with thousands of children even with a custom version of shouldComponentUpdate. This is an even bigger problem when I try to drive thousands of bunnies using Om, since the bunny data stored in ClojureScript has to be converted into a Javascript array and then passed to React. So if I want thousands of bouncing bunnies I’ll have to somehow roll my own ReactMultiChild mixin that handles this issue.
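A custom shouldComponentUpdate of the kind mentioned above typically reduces to a shallow identity comparison of props. Here is a hedged sketch, not the actual code from these libraries; the cheapness of the !== check is exactly why this works well with ClojureScript’s immutable data, where unchanged values keep the same identity.

```javascript
// Sketch of a shallow shouldComponentUpdate for components with many
// children: compare each prop by identity and skip the whole subtree
// when nothing changed.
function shallowShouldComponentUpdate(currentProps, nextProps) {
  var keys = Object.keys(currentProps).concat(Object.keys(nextProps));
  for (var i = 0; i < keys.length; i++) {
    if (currentProps[keys[i]] !== nextProps[keys[i]]) {
      return true; // some prop changed by identity: re-render
    }
  }
  return false; // nothing changed: skip this subtree entirely
}

var bunnies = [{ x: 1 }, { x: 2 }];
// Same array instance: no update needed.
console.log(shallowShouldComponentUpdate({ bunnies: bunnies }, { bunnies: bunnies })); // false
// A freshly built array fails the identity check even if contents match.
console.log(shallowShouldComponentUpdate({ bunnies: bunnies }, { bunnies: [{ x: 1 }] })); // true
```

Note the second call also shows the bunnymark problem: converting ClojureScript data to a fresh JavaScript array on every frame defeats the identity check.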

React And Three.js

Three.js is a 3D rendering library built around a scene graph; it renders via WebGL to an HTML canvas, which allows for high-performance 3D in the browser. Over the last few years three.js has gotten pretty capable, with support for shaders, skeletal animations, and tons of other cool effects. Since three.js uses a hierarchical scene graph like pixi.js does, hooking up React to three.js was very similar, with the obvious wrinkle of a three-dimensional coordinate system. There is a Scene instead of a Stage, and I create Object3D nodes instead of DisplayObjects, but otherwise the managing of components and children is almost identical to pixi.js.

The big difference is that three.js contains a lot of ancillary data structures like meshes, materials, and a camera. Three.js uses these data structures but leaves it up to the user to manage them. I toyed with the idea of making them components and automatically managing them but decided against it. There are several valid ways to store and access 3D assets which vary by application so it didn’t make sense to force the user into a specific method.

Also, three.js doesn’t handle mouse clicks by default, so I put in some basic picking functionality in order to get an interactive demo working.

React And Web Audio

This experiment was a little more out there than the previous two. I had looked at the Web Audio API and was a little unhappy with the thought of manually building and maintaining an audio processing graph in JavaScript or ClojureScript. But hey, my previous two attempts went pretty well, so why not try to apply React to Web Audio?

Each stage in the audio graph is basically an AudioNode, so I came up with a system where each AudioNode is basically a React node. An AudioNode feeds audio data ‘upward’ to the parent node. Thus audio flows upward in the React tree until it hits a destination (usually the AudioContext) at which point the audio gets pushed to the speakers. Thus your audio sources are at the bottom of the tree producing data that flows upward through filters or amplifiers and ultimately stream out of the top as actual sound.
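The “audio flows upward” idea can be sketched with stubs. This is a simplified illustration under stated assumptions: the node objects here are plain stand-ins for real Web Audio AudioNodes (whose actual connect method takes a real destination node), and the tree-walking code is invented for the example rather than taken from the library.

```javascript
// Sketch of mounting an audio tree where each node connects its
// (stubbed) AudioNode to its parent, so leaf sources end up streaming
// into the destination at the root.
function makeNode(name) {
  return {
    name: name,
    inputs: [], // records which nodes feed into this one
    connect: function (dest) { dest.inputs.push(this.name); }
  };
}

function mountAudioTree(spec, parentNode) {
  var node = makeNode(spec.name);
  node.connect(parentNode); // audio feeds 'upward' to the parent
  (spec.children || []).forEach(function (child) {
    mountAudioTree(child, node);
  });
  return node;
}

// A gain node under the destination, with an oscillator source below it:
var destination = makeNode('destination'); // stands in for AudioContext.destination
mountAudioTree({ name: 'gain', children: [{ name: 'oscillator' }] }, destination);
console.log(destination.inputs); // [ 'gain' ]
```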

It turns out that Web Audio has several extras that make it a slight mismatch to React’s render model, so I only supported the bare minimum of functionality to demonstrate the concept. Most notably there is no support for multiple outputs from a single AudioNode and no support for feeding dynamic waveforms into audio parameters.

Another wrinkle is that the AudioBufferSourceNode can only play once. If you wanted to play a given audio clip multiple times you basically have to create and throw away a new node each time you play the clip. This meant that using React you would have to repeatedly destroy and create new React nodes via setProps or updateComponent. Yuck. So I inserted a hack: there is an extra property called triggerkey that gets watched by the AudioBufferSourceNode component. Whenever the value of triggerkey changes (as defined by Javascript !==) the underlying AudioNode is replaced and restarted. As an example of use, the “playbuffer” example sets triggerkey to an integer that gets incremented every time the relevant sound needs to be restarted.
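The triggerkey mechanism boils down to a !== check during a props update. The sketch below is illustrative only: the function names and the component shape are invented, and createSourceNode is a stub for the real AudioContext.createBufferSource call.

```javascript
// Sketch of the triggerkey hack: when the triggerkey prop changes
// (by !==), the one-shot AudioBufferSourceNode is discarded and a
// fresh one is created and started.
var nodesCreated = 0;
function createSourceNode() {
  nodesCreated++;
  return { started: true }; // stands in for a fresh one-shot source node
}

function updateBufferSource(component, newProps) {
  if (component.props.triggerkey !== newProps.triggerkey) {
    component.audioNode = createSourceNode(); // replace and restart
  }
  component.props = newProps;
}

var comp = { props: { triggerkey: 0 }, audioNode: createSourceNode() };
updateBufferSource(comp, { triggerkey: 0 }); // same key: node untouched
updateBufferSource(comp, { triggerkey: 1 }); // key changed: node replaced
console.log(nodesCreated); // 2
```

Incrementing an integer, as in the “playbuffer” example, is a convenient way to guarantee the !== check fires on every retrigger.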

Finally, the main AudioContext often needs to be referenced in contexts outside of React, so I allowed the user to create their own AudioContext and pass that in for React to use. If the user doesn’t specify an AudioContext then React creates one.

Conclusion

The use of mixins within the React library makes extending it to other render targets pretty easy once you understand how the various lifecycle methods are called. Any library that uses a tree-like representation can probably be controlled via React components. You won’t get the performance boost that React gets manipulating the DOM, but all the other niceties like the declarative style still work.

These libraries are still pretty messy and experimental, but I think they serve as good examples of rendering to non-DOM targets with React. There were a few gotchas along the way for each specific library but it’s hard to tell ahead of time unless you’re familiar with both React and the library in question.

In any case I hope that React continues to evolve. React provides a useful way to create and manage scene graphs, even without the obvious performance win that is seen for DOM rendering. From what I’ve seen so far the virtual diffing method is great for immutable-by-default languages such as ClojureScript.