Rust WebAssembly — Sharing data between WebWorkers

Julien de Charentenay
5 min read · Jan 16, 2022

In the post Implementing a WebAssembly WebGL viewer in Rust, the transfer of information between the solver WebWorker and the rendering thread is done by serializing and deserializing the simulation data using wasm-bindgen JSON serialization — see JsValue::from_serde. This is a very slow process.

Mihail Malostanidis responded with very relevant suggestions. This post looks into his suggestion of “using shared memory to communicate between the simulation and the render thread”. The implementation described here improved the rendering speed from circa 7fps to 25fps on my laptop.

Image by Gerd Altmann from Pixabay

The views/opinions expressed in this story are my own. This story relates my personal experience and choices, and is provided for information in the hope that it will be useful but without any warranty.

The objective of the exercise is to speed up the transfer of information between the simulation and render threads and, in the process, speed up the simulation by reducing the time taken to serialize the simulation data.

The screenshot below establishes the initial benchmark. The benchmark is based on running the simulation with the default parameters — i.e. 1000 vortons — using Firefox and a locally hosted webserver. Under these conditions, the simulation runs and is rendered at circa 7 frames per second.

Benchmark prior to implementation — 1000 vortons with locally hosted webserver

Exposing the WebAssembly module memory

The general idea is that the memory used in the simulation thread could be shared with the rendering thread to avoid having to transfer information between them.

This github discussion, particularly this post, indicates how the memory used by the wasm module can be retrieved and reused when a new instance of the wasm module is created. As I progressed along that line of thought — exposing the memory by creating a memory function, changing the wasm-bindgen target to expose the init function, compiling Rust with shared memory flags, adding a webpack loader, adding request headers — I was unable to (a) get the wasm module to use shared memory and (b) inject memory at the initialisation of another wasm module.
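
For context, the sketch below shows the direction I attempted. It is illustrative rather than a working recipe: the function name wasm_memory is mine, and the flags and headers listed in the comments indicate the kind of settings involved in enabling shared wasm memory.

// Sketch of the "expose the module memory" direction (the one I abandoned):
// wasm_bindgen::memory() returns the module's WebAssembly.Memory as a JsValue,
// which JavaScript could in principle pass to the init function of another
// wasm module instance. The name wasm_memory is illustrative.
use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub fn wasm_memory() -> JsValue {
    wasm_bindgen::memory()
}
// Sharing that memory between workers additionally involves, amongst other things,
// compiling with RUSTFLAGS="-C target-feature=+atomics,+bulk-memory,+mutable-globals"
// and serving the page with the Cross-Origin-Opener-Policy: same-origin and
// Cross-Origin-Embedder-Policy: require-corp headers so that SharedArrayBuffer is available.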

Another project, Shared Channel for WebAssembly, made me approach the problem from a different direction that yielded good benefits.

Serialization

I studied the source code of the project Shared Channel for WebAssembly to understand how the SharedArrayBuffer object is shared between threads/workers. The code showed me a few things: (a) I still have a lot to learn, (b) it exposes a serde-bincode feature that sounded of interest, and (c) one can declare a Rust struct and instantiate it in JavaScript. I used this latter knowledge to refactor and simplify the part of the Rust code that exposes bindings to JavaScript.
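
As an illustration of point (c), a Rust struct annotated with #[wasm_bindgen] and given a constructor can be instantiated directly from JavaScript with new. The sketch below mirrors the Solver wrapper used in the extracts further down; the Simulation::new() call is assumed for illustration.

use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub struct Solver {
    simulation: Simulation,
}
#[wasm_bindgen]
impl Solver {
    // Available in JavaScript as `new Solver()`
    #[wasm_bindgen(constructor)]
    pub fn new() -> Solver {
        Solver { simulation: Simulation::new() }
    }
}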

My understanding is that the serde-bincode feature exposes a Shareable trait whose default implementation uses bincode to generate a binary serialization of any object implementing serde’s Serialize and Deserialize traits.
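
Sketched below is my reading of that idea: a trait with default methods that call into bincode. The method names are mine and the actual trait API may well differ.

use serde::{de::DeserializeOwned, Serialize};
// Sketch only: default methods serializing any Serialize/Deserialize type via bincode.
pub trait Shareable: Serialize + DeserializeOwned {
    fn to_bytes(&self) -> Vec<u8> {
        bincode::serialize(self).expect("bincode serialization failed")
    }
    fn from_bytes(bytes: &[u8]) -> Self {
        bincode::deserialize(bytes).expect("bincode deserialization failed")
    }
}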

I decided to investigate the benefits of using a binary serialisation to transfer the simulation information from the simulation thread to the rendering thread as follows:

  • Implement serde’s Serialize and Deserialize traits on the simulation struct — which was already done;
  • Use the bincode crate to serialise the simulation struct as a binary vector Vec<u8> in place of a JSON string;
  • Transfer the binary vector from the JavaScript simulation thread to the main JavaScript thread;
  • Deserialize the binary vector in the WebAssembly rendering module to allow the simulation to be rendered.

Two methods have been implemented using either an ArrayBuffer or a SharedArrayBuffer to transfer the serialized simulation struct from WebAssembly to JavaScript — in addition to the JSON string serialisation. The following code extract shows these methods alongside the method providing JSON serialisation:

// Methods of the Solver wrapper exposed to JavaScript through #[wasm_bindgen];
// ArrayBuffer, SharedArrayBuffer and Uint8Array come from the js-sys crate.
pub fn to_json(&self) -> JsValue {
    // JSON serialization through wasm-bindgen's serde support
    JsValue::from_serde(&self.simulation).unwrap()
}
pub fn to_array_buffer(&self) -> ArrayBuffer {
    // Binary serialization with bincode, returned as an ArrayBuffer
    let b = bincode::serialize(&self.simulation).unwrap();
    Uint8Array::from(&b[..]).buffer()
}
pub fn to_shared_array_buffer(&self) -> SharedArrayBuffer {
    // Binary serialization with bincode, copied element by element
    // into a newly allocated SharedArrayBuffer
    let b = bincode::serialize(&self.simulation).unwrap();
    let r = SharedArrayBuffer::new(b.len() as u32);
    let a = Uint8Array::new(&r);
    for i in 0..b.len() { a.set_index(i as u32, b[i]); }
    r
}

The binary serialisations are significantly faster than the JSON string serialisation, with the serialisation to SharedArrayBuffer being marginally slower due to the need to manually copy the values into the SharedArrayBuffer.
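
The element-by-element loop could probably be replaced by js-sys’s bulk copy helper. The sketch below assumes that Uint8Array::copy_from behaves the same on a view backed by a SharedArrayBuffer; I have not benchmarked this variant.

use js_sys::{SharedArrayBuffer, Uint8Array};
// Possible alternative to the loop in to_shared_array_buffer: allocate the buffer,
// create a Uint8Array view over it, and copy the whole Rust slice in one call.
pub fn to_shared_array_buffer_bulk(bytes: &[u8]) -> SharedArrayBuffer {
    let r = SharedArrayBuffer::new(bytes.len() as u32);
    let a = Uint8Array::new(&r);
    a.copy_from(bytes);
    r
}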

The deserialisation methods are very similar and are shown below:

pub fn from_json(content: JsValue) -> Result<Solver, JsValue> {
    // Deserialize from the JSON representation produced by to_json
    match JsValue::into_serde(&content) {
        Ok(sim) => Ok(Solver { simulation: sim }),
        Err(e) => Err(JsValue::from_str(format!("Unable to parse to simulation. Error {}", e).as_str())),
    }
}
pub fn from_array_buffer(content: ArrayBuffer) -> Result<Solver, JsValue> {
    // Copy the ArrayBuffer content into a Vec<u8> and deserialize it with bincode
    let a = Uint8Array::new(&content);
    match bincode::deserialize(&a.to_vec()[..]) {
        Ok(sim) => Ok(Solver { simulation: sim }),
        Err(e) => Err(JsValue::from_str(format!("Unable to retrieve simulation from ArrayBuffer. Error {}", e).as_str())),
    }
}
pub fn from_shared_array_buffer(content: SharedArrayBuffer) -> Result<Solver, JsValue> {
    // Same as from_array_buffer, reading through a view over the SharedArrayBuffer
    let a = Uint8Array::new(&content);
    match bincode::deserialize(&a.to_vec()[..]) {
        Ok(sim) => Ok(Solver { simulation: sim }),
        Err(e) => Err(JsValue::from_str(format!("Unable to retrieve simulation from SharedArrayBuffer. Error {}", e).as_str())),
    }
}

The transfer of the message from the simulation thread to the main thread for rendering is undertaken in a similar manner regardless of whether the information is stored as a JSON string, a SharedArrayBuffer or an ArrayBuffer:

// In the simulation WebWorker: post the serialized simulation to the main thread
switch (use_simulation_format) {
  case 0: // JSON string
    self.postMessage({ on_simulation: true, simulation: wasm_solver.to_json() });
    break;
  case 1: // ArrayBuffer
    self.postMessage({ on_simulation_array_buffer: true, simulation: wasm_solver.to_array_buffer() });
    break;
  case 2: // SharedArrayBuffer
    self.postMessage({ on_simulation_shared_array_buffer: true, simulation: wasm_solver.to_shared_array_buffer() });
    break;
  default: // default to ArrayBuffer
    self.postMessage({ on_simulation_array_buffer: true, simulation: wasm_solver.to_array_buffer() });
}

Results

The update is available on the project github repository and online at https://cfd-webassembly.com/vpm/index.html. Serializing the simulation object to a binary vector provides a significant speed benefit, with the rendering increasing from 7 frames per second [using JSON string in Chrome] to 25 frames per second [using ArrayBuffer in Chrome].

JSON String — https://cfd-webassembly.com/vpm/index.html?format=json#
ArrayBuffer — https://cfd-webassembly.com/vpm/index.html?format=array_buffer#
SharedArrayBuffer — https://cfd-webassembly.com/vpm/index.html?format=shared_array_buffer#

On another note, I compared the execution speed in Chrome, Edge and Firefox. Interestingly, Firefox reported slightly lower fps than Chrome — a different result from the one reported previously in https://julien-decharentenay.medium.com/rust-native-vs-webassembly-execution-speed-a-comparison-experiment-using-a-fluid-dynamics-vortex-4b08f535cd9.

Further investigation showed that, in my case, the execution speed in Chrome slows down noticeably when the developer console is open, whilst the Firefox execution speed does not depend on whether the developer console is open or not.

The benchmarking reported previously was undertaken with the developer console open and is not representative of normal use, where Chrome, Edge and Firefox performance is similar. An edit has been included in the previous article to rectify this point.


Julien de Charentenay

I write roughly one story a month on Rust, JS or CFD. Email masking side project @ https://1-ml.com & personal website @ https://www.charentenay.me/