Implementing a WebAssembly WebGL viewer using Rust
This story focuses on the implementation of a web viewer for a fluid solver using the vortex particle method. The fluid solver is written in Rust and compiles to WebAssembly. A Rust module, handling the WebGL viewer through WebAssembly using web-sys, is the natural next step to allow visualisation of the simulation results in the web browser. The live version is available at cfd-webassembly.com.
It will interest people trying to understand the structure of the fluid solver code available on GitHub, write a WebGL viewer, or handle large datasets created in WebAssembly.
The views/opinions expressed in this story are my own. This story relates my personal experience and choices, and is provided for information in the hope that it will be useful but without any warranty.
The implementation uses information from the Rust and WebAssembly small book and the `wasm-bindgen` guide. The WebGL aspects draw on WebGL Fundamentals and WebGL2 Fundamentals, WebGL: 2D and 3D graphics for the web, and nalgebra’s Computer-graphics recipes.
This story focuses on (a) handling transfer of results between calculation and rendering, (b) WebGL rendering from Rust and (c) basic camera handling.
Handling simulation data
The story VueJS + Rust + WebAssembly + WebWorker hosted on AWS S3, an example presents the fluid solver architecture when hosted on a website — such as the live version at www.cfd-webassembly.com.
The architecture uses a Web Worker to run the simulation, which is calculation heavy, in a separate thread so that it does not impact the responsiveness of the user interface. The trade-off of this decision is that the simulation is unable to modify the user interface, including the DOM, or handle rendering. The rendering needs to be done in the main thread, with the simulation results transferred between the Web Worker thread and the main thread.
Two options were considered: the simulation results could be rendered to WebGL from JavaScript or from WebAssembly. I chose the latter as it allows the re-use of the existing Rust data structures to calculate quantities derived from the simulation results, such as velocity, and should bring speed benefits — I hope…
The software architecture is modified as shown in the following diagram. The main thread spawns a Web Worker thread that calls WebAssembly functions written in Rust to carry out the simulation work. When results are available, these are passed back to JavaScript and transferred to WebAssembly functions, also written in Rust, that carry out the rendering work using WebGL.
As the two sets of WebAssembly functions are different, I started with two different WebAssembly modules — one for the simulation solver functions and one for the rendering functions. Unfortunately, `wasm-pack` and I had different opinions on the subject, and I ended up using only one WebAssembly module that contains both simulation and rendering functions. These functions are implemented in two distinct Rust files, `rust\wasm\src\solver.rs` and `rust\wasm\src\viewer.rs`, with the viewer functions prefixed with `viewer_`.
The simulation solver and rendering functions are called in different contexts: the former from within a Web Worker and the latter in the main thread. As a consequence, the data generated by the simulation solver functions — i.e. the simulation result/state stored in the global variable `SIMULATION` — cannot be shared directly with the rendering functions.
Currently the approach employed to share the simulation results is as follows:
- The Web Worker calls the WebAssembly solver `step` function to run one iteration;
- On completion of the iteration, the Web Worker posts a message to the main thread indicating completion of the iteration, together with the simulation iteration and time retrieved from the solver as payload;
- The Web Worker then posts a message to the main thread indicating availability of the simulation results, together with a payload containing the serialized simulation state;
- On reception of the message, the main thread triggers the viewer `viewer_draw` function with the serialized simulation state provided as argument;
- The `viewer_draw` function deserializes the simulation state and proceeds with its rendering — see the sketch below.
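As a minimal sketch of the receiving side, `viewer_draw` might look like the code below. This is an assumption rather than the actual code in `rust\wasm\src\viewer.rs`: the payload is assumed to be a JSON string, `Simulation` is assumed to implement serde’s `Deserialize`, and the viewer is assumed to live in a thread-local `Main` instance (the viewer struct described later in this story).

use std::cell::RefCell;
use wasm_bindgen::prelude::*;

// Assumption: the viewer is held in a thread-local on the main thread.
thread_local! {
    static VIEWER: RefCell<Option<Main>> = RefCell::new(None);
}

// Hypothetical entry point; the actual viewer_draw may use a different
// serialization format and storage strategy.
#[wasm_bindgen]
pub fn viewer_draw(serialized_state: &str) -> Result<(), JsValue> {
    // Rebuild the simulation state produced by the Web Worker.
    let simulation: Simulation = serde_json::from_str(serialized_state)
        .map_err(|e| JsValue::from_str(&e.to_string()))?;

    // Render it through the viewer held on the main thread.
    VIEWER.with(|cell| {
        let mut guard = cell.borrow_mut();
        let main = guard
            .as_mut()
            .ok_or_else(|| JsValue::from_str("viewer not initialised"))?;
        main.draw(&simulation)
            .map_err(|e| JsValue::from_str(&e.to_string()))
    })
}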
Whilst this approach gets the job done, it is a very slow process due to the need to pass data between the Web Worker and main threads, with the associated serialisation and deserialisation. It will need to be revisited in the future to improve speed.
WebGL Rendering
I have very limited experience with rendering and have therefore relied on the following:
- The `wasm-bindgen` Guide WebGL example: it provides a fully working example of WebGL rendering;
- VorteGrid: Interactive Fluid Simulation for Games and Movies: this GitHub repository contains the source code associated with a series of articles on fluid simulation prepared by Dr M J Gourlay and discussed here. In 2019, I ported a version of the source code released for article #3 to WebAssembly. This exercise required me to rewrite the visualisation part using OpenGL/OpenGLES. That rewrite is used as a guide for the WebGL rendering presented in this article.
The explanation presented here is based on my understanding. Do not hesitate to suggest corrections, clarifications or improvements.
The compilation of the WebGL shaders and program has been grouped into the file `rust\wasm\src\viewer\webgl.rs` to allow the code to be reused. The shader compilation function `webgl_compile_shader` handles the compilation of shaders; for fragment and vertex shaders, the specialised functions `webgl_compile_fragment_shader` and `webgl_compile_vertex_shader` are provided.
pub fn webgl_compile_shader(
    context: &WebGl2RenderingContext,
    shader_type: u32,
    source: &str,
) -> Result<WebGlShader, Box<dyn Error>> {
    let shader = match context.create_shader(shader_type) {
        Some(s) => s,
        None => return Err(Box::new(SimpleError::new("Unable to create shader object"))),
    };
    context.shader_source(&shader, source);
    context.compile_shader(&shader);

    if context
        .get_shader_parameter(&shader, WebGl2RenderingContext::COMPILE_STATUS)
        .as_bool()
        .unwrap_or(false)
    {
        Ok(shader)
    } else {
        Err(Box::new(SimpleError::new(
            context
                .get_shader_info_log(&shader)
                .unwrap_or_else(|| String::from("Unknown error creating shader")),
        )))
    }
}
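The specialised functions simply fix the shader type when calling `webgl_compile_shader`. A minimal sketch of what they might look like — the actual code in `rust\wasm\src\viewer\webgl.rs` may differ slightly:

// Sketch of the specialised wrappers mentioned above; the actual code may differ.
pub fn webgl_compile_vertex_shader(
    context: &WebGl2RenderingContext,
    source: &str,
) -> Result<WebGlShader, Box<dyn Error>> {
    webgl_compile_shader(context, WebGl2RenderingContext::VERTEX_SHADER, source)
}

pub fn webgl_compile_fragment_shader(
    context: &WebGl2RenderingContext,
    source: &str,
) -> Result<WebGlShader, Box<dyn Error>> {
    webgl_compile_shader(context, WebGl2RenderingContext::FRAGMENT_SHADER, source)
}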
The WebGL program compilation function `webgl_link_program` is taken straight from the `wasm-bindgen` guide example:
pub fn webgl_link_program(
    context: &WebGl2RenderingContext,
    vert_shader: &WebGlShader,
    frag_shader: &WebGlShader,
) -> Result<WebGlProgram, Box<dyn Error>> {
    let program = match context.create_program() {
        Some(p) => p,
        None => return Err(Box::new(SimpleError::new("Unable to create program object"))),
    };
    context.attach_shader(&program, vert_shader);
    context.attach_shader(&program, frag_shader);
    context.link_program(&program);

    if context
        .get_program_parameter(&program, WebGl2RenderingContext::LINK_STATUS)
        .as_bool()
        .unwrap_or(false)
    {
        Ok(program)
    } else {
        Err(Box::new(SimpleError::new(
            context
                .get_program_info_log(&program)
                .unwrap_or_else(|| String::from("Unknown error creating program object")),
        )))
    }
}
The WebGL rendering is spread across two files, `rust\wasm\src\viewer\main.rs` and `rust\wasm\src\viewer\program_vorton_render.rs`. The former defines a `Main` struct that creates a `WebGl2RenderingContext` from the relevant DOM canvas element when the struct is created:
pub fn new(element_id: &str) -> Result<Main, Box<dyn Error>> {
    let document = web_sys::window().unwrap().document().unwrap();
    let canvas = document.get_element_by_id(element_id).unwrap();
    let canvas: web_sys::HtmlCanvasElement = match canvas.dyn_into::<web_sys::HtmlCanvasElement>() {
        Ok(c) => c,
        Err(_) => return Err(Box::new(SimpleError::new(
            format!("Unable to cast HTML element {} into canvas element", element_id).as_str(),
        ))),
    };

    let context = match canvas.get_context("webgl2") {
        Ok(c) => c,
        Err(_) => return Err(Box::new(SimpleError::new(
            format!("Unable to retrieve webgl2 context from canvas {}", element_id).as_str(),
        ))),
    };
    let context = match context.unwrap().dyn_into::<WebGl2RenderingContext>() {
        Ok(c) => c,
        Err(_) => return Err(Box::new(SimpleError::new(
            format!("Unable to cast webgl2 context appropriately from canvas {}", element_id).as_str(),
        ))),
    };

    Ok(Main {
        context,
        viewer_elements: HashMap::new(),
        camera: None,
    })
}
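Creating the viewer then boils down to a single call; the element id below is hypothetical and stands in for the actual canvas id used by the page.

// Hypothetical usage: "viewer-canvas" is a placeholder for the actual canvas element id.
let main = Main::new("viewer-canvas")?;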
The `Main` struct’s `draw` method starts the rendering process. It handles the initialisation of the camera object, sets and clears the background, and calls the `draw` method of each entry in `viewer_elements` — instances of the trait `ViewerElement`:
pub fn draw(&mut self, simulation: &Simulation) -> Result<(), Box<dyn Error>> {
    // Initialise camera [if required]
    self.init_camera(simulation)?;

    // Initialise background
    self.context
        .clear_color(6f32 / 255f32, 78f32 / 255f32, 59f32 / 255f32, 1f32);
    self.context
        .clear(WebGl2RenderingContext::COLOR_BUFFER_BIT);

    for e in self.viewer_elements.values_mut() {
        e.draw(&self.context, self.camera.as_ref().unwrap(), &simulation)?;
    }

    Ok(())
}
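The `ViewerElement` trait itself is not reproduced here. Inferred from the calls above and from the `ProgramVortonRender` implementation below, it could look like the following sketch — the actual definition in the repository may differ.

// Hypothetical sketch of the ViewerElement trait, inferred from its usage;
// the actual definition may differ.
pub trait ViewerElement {
    // Upload the latest simulation results to WebGL and render them.
    fn draw(
        &mut self,
        context: &WebGl2RenderingContext,
        camera: &Camera,
        simulation: &Simulation,
    ) -> Result<(), Box<dyn Error>>;

    // Render the already uploaded data again, e.g. after a camera change.
    fn redraw(
        &mut self,
        context: &WebGl2RenderingContext,
        camera: &Camera,
    ) -> Result<(), Box<dyn Error>>;
}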
The struct `ProgramVortonRender`, defined in `rust\wasm\src\viewer\program_vorton_render.rs`, implements the trait `ViewerElement` to render the vortons. When the struct is instantiated, it compiles its associated vertex and fragment shaders and links the WebGL program as shown below:
pub fn new(context: &WebGl2RenderingContext) -> Result<ProgramVortonRender, Box<dyn Error>> {
    let vert_shader = webgl_compile_vertex_shader(
        context,
        r##"
        attribute vec4 vPosition;
        uniform mat4 uMatrix;
        void main()
        {
            gl_Position = uMatrix*vPosition;
            gl_PointSize = 2.5;
        }
        "##,
    )?;

    let frag_shader = webgl_compile_fragment_shader(
        context,
        r##"
        precision mediump float;
        void main()
        {
            gl_FragColor = vec4(0.9, 0.9, 0.9, 1);
        }
        "##,
    )?;

    let program = webgl_link_program(&context, &vert_shader, &frag_shader)?;

    Ok(ProgramVortonRender {
        program,
        vertices: Vec::new(),
        n_vertices: 0,
    })
}
The vertex shader takes a `mat4` matrix and a `vec4` position. The matrix represents the transformation of the vertex position from the 3D coordinate system to the 2D rendering canvas. This transformation matrix is a representation of the camera — its position, direction and characteristics — as described in the WebGL 3D Cameras article. The fragment shader applies a uniform colour to all fragments.
The rendering takes place in the `draw` and `redraw` functions. These two functions allow the rendering to take place when the results are updated — the `draw` function — and when the camera is updated — the `redraw` function. The `draw` function is based on the `wasm-bindgen` guide example and (a) extracts the vortons’ positions into an `f32` vector, and (b) passes this vector to the WebGL program using methods that I still need to get familiar with — including the unsafe method `Float32Array::view`. The comment shown in the code below is taken from the `wasm-bindgen` guide example:
fn draw(&mut self, context: &WebGl2RenderingContext, camera: &Camera, simulation: &Simulation)
    -> Result<(), Box<dyn Error>> {
    let buffer = match context.create_buffer() {
        Some(b) => b,
        None => return Err(Box::new(SimpleError::new("Failed to create buffer"))),
    };
    context.bind_buffer(WebGl2RenderingContext::ARRAY_BUFFER, Some(&buffer));

    self.n_vertices = simulation.vortons().len();
    self.vertices = simulation
        .vortons()
        .iter()
        .fold(
            Vec::new(),
            |mut r, v| {
                let p = v.position();
                r.push(p.x as f32);
                r.push(p.y as f32);
                r.push(p.z as f32);
                r
            }
        );

    // Note that `Float32Array::view` is somewhat dangerous (hence the
    // `unsafe`!). This is creating a raw view into our module's
    // `WebAssembly.Memory` buffer, but if we allocate more pages for ourself
    // (aka do a memory allocation in Rust) it'll cause the buffer to change,
    // causing the `Float32Array` to be invalid.
    //
    // As a result, after `Float32Array::view` we have to be very careful not to
    // do any memory allocations before it's dropped.
    unsafe {
        let positions_array_buf_view = js_sys::Float32Array::view(self.vertices.as_slice());
        context.buffer_data_with_array_buffer_view(
            WebGl2RenderingContext::ARRAY_BUFFER,
            &positions_array_buf_view,
            WebGl2RenderingContext::STATIC_DRAW,
        );
    }

    let vao = match context.create_vertex_array() {
        Some(a) => a,
        None => return Err(Box::new(SimpleError::new("Could not create vertex array object"))),
    };
    context.bind_vertex_array(Some(&vao));

    context.vertex_attrib_pointer_with_i32(0, 3, WebGl2RenderingContext::FLOAT, false, 0, 0);
    let v_position_location = context.get_attrib_location(&self.program, "vPosition");
    context.enable_vertex_attrib_array(v_position_location as u32);
    context.bind_vertex_array(Some(&vao));

    self.redraw(context, camera)
}
The `redraw` function is called at the end of the `draw` function to avoid code duplication. The `redraw` function assigns the camera matrix in the WebGL program before requesting the drawing of the vertex array:
fn redraw(&mut self, context: &WebGl2RenderingContext, camera: &Camera)
    -> Result<(), Box<dyn Error>> {
    context.use_program(Some(&self.program));

    let u_matrix_location = context.get_uniform_location(&self.program, "uMatrix");
    context.uniform_matrix4fv_with_f32_array(
        u_matrix_location.as_ref(),
        false,
        camera.as_view_projection()?.as_slice());

    context.draw_arrays(WebGl2RenderingContext::POINTS, 0, self.n_vertices as i32);
    Ok(())
}
At present only the vortons’ positions are rendered. The trait and struct organisation aims to allow other renderings to be added in the future.
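As an illustration, adding another rendering would amount to implementing `ViewerElement` on a new struct and registering an instance in `viewer_elements`. The sketch below is hypothetical: it assumes the map is keyed by name and holds boxed trait objects.

// Hypothetical registration; the key and value types of viewer_elements
// (assumed HashMap<String, Box<dyn ViewerElement>>) may differ.
// `main` is a mutable Main instance created as shown earlier.
let vorton_render = ProgramVortonRender::new(&main.context)?;
main.viewer_elements
    .insert(String::from("vortons"), Box::new(vorton_render));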
Camera Handling
The last building block to put in place relates to camera handling. It is managed through a `Camera` struct declared and implemented in `rust\wasm\src\viewer\camera.rs`. The camera object is instantiated, initialised and stored in the `Main` struct (file `rust\wasm\src\viewer\main.rs`) and used, if needed, as part of the `ViewerElement` rendering pipelines as a `mat4` matrix.
The `Camera` struct members include the width and height of the rendering canvas, the camera eye and target locations, the up vector, and the field of view in degrees. The `Camera` object is created with the nominated canvas element width and height and a 60° field of view. The camera eye and target locations and the up vector are assigned via setter functions. This approach gives the flexibility to manipulate the camera depending on user actions, and hence to implement pan, rotation and other camera actions — but this work has not been done yet.
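Based on this description and on the `as_view_projection` method shown below, the `Camera` struct could look like the following sketch; field names and types are assumptions rather than the actual code in `camera.rs`.

use nalgebra::{Point3, Vector3};

// Hypothetical sketch of the Camera struct; field names and types are assumptions
// consistent with the as_view_projection method shown below.
pub struct Camera {
    width: u32,                  // rendering canvas width in pixels
    height: u32,                 // rendering canvas height in pixels
    eye: Option<Point3<f32>>,    // camera eye location, assigned via a setter
    target: Option<Point3<f32>>, // camera target location, assigned via a setter
    up: Option<Vector3<f32>>,    // camera up vector, assigned via a setter
    field_of_view_deg: f32,      // field of view in degrees (60° at creation)
}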
The camera information needs to be combined with a projection matrix into a 4x4 matrix to be used by the WebGL vertex shader. WebGL Fundamentals describes the implementation details for the camera matrix in WebGL — 3D Cameras, for perspective projection in WebGL 3D Perspective, and for orthographic projection in WebGL — Orthographic 3D. Instead of implementing these directly, this project uses nalgebra to provide the heavy lifting associated with camera and projection, as described in nalgebra’s documentation on Computer-graphics recipes.
The camera is converted to a 4x4 matrix using a view matrix and a projection matrix in the `Camera` method `as_view_projection`. The projection matrix is set to a 3D perspective projection matrix with hard-coded near and far field limits. The view matrix is constructed from the camera eye and target locations and the up vector. The view and projection matrices are multiplied to calculate the view-projection matrix provided to the WebGL vertex shader.
pub fn as_view_projection(&self) -> Result<Matrix4<f32>, Box<dyn Error>>
{
let view = Isometry3::<f32>::look_at_rh(
self.eye.as_ref().unwrap(),
self.target.as_ref().unwrap(),
self.up.as_ref().unwrap()
);
let projection = Matrix4::<f32>::new_perspective(
self.width as f32 / self.height as f32,
self.field_of_view_deg * std::f32::consts::PI / 180f32,
0.1f32, 200f32);
let view_projection = projection * view.to_homogeneous();
Ok(view_projection)
}
The nalgebra library allows for a very straightforward implementation of the calculation of the view-projection matrix required by WebGL.
The camera position is currently hard-coded — the target set to [0, 0, 0], viewed from an eye position of [0, 5, 0], with up set to [0, 0, 1] — with no allowance for canvas resizing or handling of user actions such as pan and zoom; these will be looked at in the future.
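For reference, a hypothetical initialisation matching those hard-coded values might look as follows; the constructor and setter names are assumptions, not the actual `Camera` API.

// Hypothetical usage; constructor and setter names are assumptions.
let mut camera = Camera::new(canvas_width, canvas_height); // 60° field of view by default
camera.set_eye(Point3::new(0f32, 5f32, 0f32));             // eye position [0, 5, 0]
camera.set_target(Point3::new(0f32, 0f32, 0f32));          // looking at the origin
camera.set_up(Vector3::new(0f32, 0f32, 1f32));             // z axis up
let view_projection = camera.as_view_projection()?;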
Summary
This post shows the implementation of a WebGL renderer from Rust/WebAssembly using web-sys. Whilst the post shows that the implementation of the WebGL renderer and camera is reasonably straightforward, it highlights a challenge in the transfer of data generated in a Web Worker back to the main thread’s WebAssembly for rendering.