How does the TrueDepth camera work?

The TrueDepth camera uses structured light: an infrared projector casts a known dot pattern, and an infrared camera measures the distance to roughly 30,000 dots (interpolated to 205,000) at 30 fps. Here is a great explainer video.
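The depth math behind structured light is classic stereo triangulation: a projected dot that appears shifted by some disparity between the expected and observed positions lies at depth f·b/d. A minimal sketch, with purely illustrative numbers (the focal length and baseline below are not the TrueDepth camera's actual parameters):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth for one dot: a shift of `disparity_px` pixels
    between the projected pattern and the observed image corresponds
    to a depth of focal * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values only: f = 600 px, projector-camera baseline = 0.05 m.
# A dot shifted by 30 px triangulates to a point 1 metre away.
print(depth_from_disparity(600.0, 0.05, 30.0))  # 1.0
```

Note that depth falls off as 1/disparity, which is why such sensors are markedly more accurate up close, where disparities are large.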

What devices currently have the TrueDepth camera?

  • iPhone X
  • iPhone XS
  • iPhone XS Max
  • iPhone XR
  • iPad Pro (2018)

As of November 2018, estimates put the count of TrueDepth-enabled devices sold at 100 million.

How accurate is it?

There’s no single metric that fully describes the accuracy of a 3D reconstruction. That said, each depth sample is accurate to roughly 0.5 mm at distances of 0.1–1.0 meters. As you move the camera closer to the object being scanned, our algorithms refine the surface and fill in sharper detail. (You can try this yourself: start a scan far away from your face, then move closer; the details of your eyelids will be resolved.)

How easy is it to integrate?

Check out our iOS Quickstart Guide to see for yourself!

Can I scan things with the rear-facing camera?

Unfortunately not at this time, as the TrueDepth camera is front-facing only on iPhones. However, we are developing a small 3D-printable bracket that redirects the front-facing camera 90° to make scanning easier.

How big of an object can I scan?

Our algorithms do not require you to pre-set a bounding box for what you are scanning. The only limit on how large an object you can scan is the phone’s memory.

What is the range of the scanner?

You can utilize the full range of the TrueDepth camera, which works up to about three meters, though it’s much more accurate at distances under one meter.

What is not scannable?

Extremely glossy or reflective objects, and transparent ones (like glass). The TrueDepth sensor may also not work in bright sunlight.

Will this be available for Android?

Eventually, once there are enough devices on the market with depth cameras.

Can I use your SDK with other depth cameras?

Not yet, but please contact us if you are interested in this.

Can I pause a scan and then resume later?

Not yet, but it’s on our roadmap.

Can I take volumetric/3D photos with your SDK?

Yes. You can convert a single frame into a point cloud.

Can I take volumetric/3D video with your SDK?

Not out of the box. However, you can convert single frames into point clouds, then build a movie of them yourself.
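Converting a single depth frame into a point cloud amounts to back-projecting each depth sample through the pinhole camera model. A minimal sketch, independent of our SDK, using illustrative intrinsics (`fx`, `fy`, `cx`, `cy` are assumed values, not real TrueDepth calibration):

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a 2D depth map (rows x cols, metres) into 3D points
    with the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid/missing depth samples
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Tiny 2x2 depth map; one sample is missing (depth 0) and gets skipped.
depth = [[1.0, 1.0],
         [0.0, 2.0]]
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

Running this per frame and writing each resulting cloud to disk in sequence is one way to assemble a 3D "movie" yourself.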

What file formats can the SDK write to?

The SDK generates a point cloud with a position, RGB color, and normal for each point. You can export this to a few common 3D file formats, currently OBJ and PLY, with more on the way. The SDK can also export a mesh in the PLY format. Need another file format? Let us know.
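For reference, here is what a point cloud with positions, colors, and normals looks like in ASCII PLY. This is a generic sketch of the PLY format, not our SDK's exporter; the file name and sample point are made up:

```python
def write_ply(path, points):
    """Write an ASCII PLY file. Each point is a tuple
    (x, y, z, r, g, b, nx, ny, nz): position, 0-255 color, unit normal."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        for name in ("x", "y", "z"):
            f.write(f"property float {name}\n")
        for name in ("red", "green", "blue"):
            f.write(f"property uchar {name}\n")
        for name in ("nx", "ny", "nz"):
            f.write(f"property float {name}\n")
        f.write("end_header\n")
        for x, y, z, r, g, b, nx, ny, nz in points:
            f.write(f"{x} {y} {z} {r} {g} {b} {nx} {ny} {nz}\n")

# A single red point one metre ahead, with its normal facing the camera.
write_ply("cloud.ply", [(0.0, 0.0, 1.0, 255, 0, 0, 0.0, 0.0, -1.0)])
```

Files in this layout open directly in common viewers such as MeshLab.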

Can I do custom rendering during scanning?

Yes. ScanningViewController allows you to override rendering while scanning, and you can write your own point cloud preview in place of the default PointCloudPreviewViewController.

Can I scan moving or deforming (non-rigid) objects?

Not yet; for now, the object being scanned should be static and rigid.

Is your software open source?

Mostly no: the core scanning algorithms are provided as a compiled binary framework, and the analysis is performed on our servers. However, the UI and Swift network interface components are open source. On top of that, we maintain some awesome open source tools.

Didn’t answer your question? Email us at