Joel Salzman

M.S. student at Columbia studying Computer Vision & Graphics

VHard

An XR rock climbing app for kinesthetic rehearsal. If you climb, chances are you've stood at the base of a hard route and mimed the moves. But what if you could actually see the holds in front of you without climbing up to them? We scanned a MoonBoard and put it in AR/VR with a custom shader that lights up where your fingers and palms touch. So, you can practice your exact hand and body positions on the ground with metric accuracy and live feedback.
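The shader itself is engine-specific, but its core falloff logic is simple. Here is a minimal NumPy sketch of that logic (not the production shader; the contact radius and smoothstep falloff are illustrative assumptions):

```python
import numpy as np

def touch_glow(vertices, contact_points, radius=0.03):
    """vertices: (N, 3) hold-mesh vertices, in meters.
    contact_points: (M, 3) tracked fingertip/palm positions.
    Returns per-vertex glow intensity in [0, 1]."""
    if len(contact_points) == 0:
        return np.zeros(len(vertices))
    # Distance from each vertex to its nearest contact point.
    d = np.linalg.norm(
        vertices[:, None, :] - contact_points[None, :, :], axis=-1
    ).min(axis=1)
    # Smoothstep falloff: full glow at the contact, fading to zero at `radius`.
    t = np.clip(1.0 - d / radius, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)
```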

We presented VHard as a demo at IEEE ISMAR 2024.

VISER

An XR glaciology app. We visualize radar data in an immersive 3D environment and develop custom UX tools for scientists.

Using a Meta Quest or Microsoft HoloLens headset, users can manipulate radargrams and other scientific data to study glaciological processes in Antarctica and Greenland. I created the pipeline that ingests radar plots and generates 3D meshes visualizing the actual locations from which the signals were gathered. We were the first to model the entire flight trajectory in 3D.
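The mesh-generation idea is easiest to see in a toy sketch. Assuming per-trace platform positions and a depth axis (the names, units, and up-axis convention below are mine, not the actual VISER code), each radar trace is extruded downward from where it was recorded, so the 2D radargram becomes a curtain mesh hanging along the 3D flight path:

```python
import numpy as np

def radargram_to_ribbon(trace_positions, depths):
    """trace_positions: (T, 3) positions of each radar trace along the flight.
    depths: (D,) depth of each sample below the platform, in meters.
    Returns (T*D, 3) vertices and (2*(T-1)*(D-1), 3) triangle indices."""
    T, D = len(trace_positions), len(depths)
    down = np.array([0.0, 0.0, -1.0])               # assumes +z is up
    verts = (trace_positions[:, None, :]
             + depths[None, :, None] * down).reshape(-1, 3)
    # Two triangles per quad between adjacent traces and depth samples.
    faces = []
    for t in range(T - 1):
        for d in range(D - 1):
            a, b = t * D + d, t * D + d + 1
            c, e = (t + 1) * D + d, (t + 1) * D + d + 1
            faces.append((a, b, c))
            faces.append((b, e, c))
    return verts, np.asarray(faces)
```

The radargram pixels would then be mapped onto this ribbon as a texture, one column per trace.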

Since joining the team, I have been named on two publications (see the project site for details).

Neurochoric Radiance Fields

A combination of Gaussian Splatting and Zip-NeRF.

This began as a project for Peter Belhumeur's Advanced Topics in Deep Learning class. I am attempting to improve on the state-of-the-art technique for novel view synthesis by using a neural network to learn a point-sampling probability field, sampling primitives from that field, and then splatting the primitives to render images.
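As a rough illustration of the sampling step, here is a minimal PyTorch sketch (the network size, near/far bounds, and sample counts are placeholder assumptions, not the project's actual architecture):

```python
import torch
import torch.nn as nn

class SamplingField(nn.Module):
    """A small MLP defining an unnormalized sampling density over space."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),            # unnormalized log-density
        )

    def sample_along_ray(self, origin, direction, n_candidates=128, n_keep=16):
        # Evaluate candidate points along the ray, reweight them by the
        # learned density, and draw a few to serve as splatting primitives.
        t = torch.linspace(0.1, 10.0, n_candidates)     # assumed near/far
        pts = origin + t[:, None] * direction           # (n_candidates, 3)
        logits = self.net(pts).squeeze(-1)
        probs = torch.softmax(logits, dim=0)
        idx = torch.multinomial(probs, n_keep, replacement=False)
        return pts[idx], probs[idx]   # primitive centers and their weights
```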

It kind of works. At minimum, I learned a ton about radiance fields by doing this. The project is fully compatible with Nerfstudio.

Refleconstruction

3D reconstruction of objects that are partially seen through reflections.

For Shree Nayar's Computational Imaging class, we wrote a pipeline for an Intel RealSense 455 camera that creates 3D models. What makes this interesting is that part of each object is seen directly by the camera and part is only visible through a mirror. So, the object can be reconstructed better if the points seen through the mirror are properly registered with the directly seen points. We wrote a self-supervised algorithm that segments and merges these point clouds.
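The key geometric step is easy to state: points observed via the mirror are virtual reflections, so reflecting them back across the estimated mirror plane places them in the same frame as the directly seen points before fine registration. A minimal NumPy sketch of that reflection (plane estimation and the final registration are omitted):

```python
import numpy as np

def unreflect(points, plane_point, plane_normal):
    """Reflect (N, 3) points across the plane through `plane_point`
    with normal `plane_normal` (a Householder reflection)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance to the plane
    return points - 2.0 * d[:, None] * n
```

The un-reflected cloud can then be merged with the direct-view cloud, e.g. via ICP, to yield a more complete reconstruction.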

LocalAIze

Camera pose estimation for an indoor video using a deep neural network. This was a group project for Peter Belhumeur's Deep Learning for Computer Vision class. Our goal was to figure out where in our classroom a random image was taken from, given simple conditions (same lighting, no movement, etc.). We took a supervised deep learning approach but used COLMAP to estimate the ground-truth poses.
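As a sketch of this kind of setup, here is a generic PoseNet-style regressor and loss in PyTorch, shown to illustrate the approach rather than our exact model (the `backbone.out_features` attribute and `beta` value are assumptions):

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """CNN backbone + linear head regressing translation and a unit quaternion."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone             # any CNN emitting (B, F) features
        self.head = nn.Linear(backbone.out_features, 7)  # 3 trans + 4 quat

    def forward(self, images):
        out = self.head(self.backbone(images))
        t, q = out[:, :3], out[:, 3:]
        q = q / q.norm(dim=1, keepdim=True)  # keep the quaternion unit-length
        return t, q

def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=250.0):
    # beta balances position error (meters) against orientation error.
    return (t_pred - t_gt).norm(dim=1).mean() + \
           beta * (q_pred - q_gt).norm(dim=1).mean()
```

The ground-truth `t_gt` and `q_gt` here are the camera poses recovered offline by COLMAP.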

GIS Developer

For over two years, my job was to figure out where to build utility-scale renewable energy projects (primarily wind and solar) for Apex Clean Energy. Most of my work consisted of data engineering and automated geospatial analysis. My deliverables remain proprietary, but I will happily explain what I did and what I learned if asked.

Voting Power

Where do votes matter most? A project for my last GIS class in college, taught by Krzysztof Janowicz, that ended up as an interactive web map. See the project page for (many) details.

Working on this project is a big part of why I decided to go back to school for Computer Science.

Aquaculture

Primary Ocean Producers is a startup aiming to cultivate Macrocystis pyrifera in the deep ocean, in partnership with Catalina Sea Ranch. We were funded by a grant from ARPA-E to grow giant kelp en masse in order to produce carbon-neutral biofuel.

My role was to site the pilot facilities off the coast of California. Along with two aquatic biologists, I developed a hierarchical suitability model for giant kelp cultivation. Among other factors, we looked at chemical availability (for nutrients), geophysical phenomena (so the kelp would be safe), and legal restrictions (so we could build the facility).
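Schematically, a hierarchical suitability model works like a weighted overlay with hard constraints: infeasible cells are masked out, and the remaining factor scores are combined with weights. A toy NumPy sketch (the factor names, tiers, and weights below are placeholders, not the actual model; each tier is reduced to a single factor for brevity):

```python
import numpy as np

def suitability(nutrients, wave_safety, depth_ok, legal_ok,
                tier_weights=(0.6, 0.4)):
    """All inputs are rasters (2D arrays) on a common grid.
    Factor rasters hold scores in [0, 1]; *_ok rasters are boolean."""
    chemical = nutrients                       # tier 1: chemical availability
    geophysical = wave_safety                  # tier 2: physical safety
    score = tier_weights[0] * chemical + tier_weights[1] * geophysical
    mask = depth_ok & legal_ok                 # hard constraints (e.g. legal)
    return np.where(mask, score, 0.0)          # zero out infeasible cells
```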