The TrueDepth camera uses structured light: an infrared projector casts a known dot pattern, and an infrared camera measures the distance to roughly 30,000 dots (interpolated to 205,000) at 30 fps. Here is a great explainer video.
As of November 2018, estimates put the number of TrueDepth-enabled devices sold at 100 million.
There’s no single metric that fully describes the accuracy of a 3D reconstruction. That said, each depth vector is accurate to ~0.5 mm at distances of 0.1 to 1.0 meters. As you move the camera closer to the object being scanned, our algorithms refine the surface and fill in sharper detail. (You can try this for yourself: start a scan far away from your face, then come closer, and the details of your eyelids will be resolved.)
Check out our iOS Quickstart Guide to see for yourself!
Unfortunately not at this time, as the TrueDepth camera is front-facing only on iPhones. However, we are developing a small 3D-printable bracket that redirects the front-facing camera 90° to make scanning easier.
Our algorithms do not require you to preset a bounding box for what you are scanning. The only limit on how large an object you can scan is the phone’s memory (which is plenty).
You can use the full range of the TrueDepth camera, which works up to about three meters, though it is much more accurate at distances under one meter.
Extremely glossy or reflective objects, and transparent objects like glass. The TrueDepth sensor may also struggle in bright sunlight, which can wash out the projected infrared pattern.
Eventually, once there are enough devices on the market with depth cameras.
Not yet, but please contact us if you are interested in this.
Not yet, but it’s on our roadmap.
Yes.
Not out of the box. However, you can convert each frame into a point cloud, then assemble those frames into a movie yourself, as sketched below.
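As a rough illustration of the per-frame step, here is a minimal Swift sketch that unprojects a depth map into camera-space 3D points using the standard pinhole model. `DepthFrame` and its fields are hypothetical stand-ins for whatever your capture pipeline provides (for example, values pulled from AVDepthData and its camera calibration data); the SDK’s actual types will differ.

```swift
import simd

// Hypothetical per-frame container; adapt to your pipeline's frame type.
struct DepthFrame {
    let width: Int
    let height: Int
    let depths: [Float]       // row-major depth in meters, width * height values
    let fx: Float, fy: Float  // focal lengths, in pixels
    let cx: Float, cy: Float  // principal point, in pixels
}

// Unproject every valid depth sample into a camera-space 3D point.
func pointCloud(from frame: DepthFrame) -> [SIMD3<Float>] {
    var points: [SIMD3<Float>] = []
    points.reserveCapacity(frame.width * frame.height)
    for v in 0..<frame.height {
        for u in 0..<frame.width {
            let d = frame.depths[v * frame.width + u]
            guard d > 0, d.isFinite else { continue } // skip holes/invalid samples
            // Pinhole model: pixel (u, v) plus depth d -> (x, y, z) in meters.
            let x = (Float(u) - frame.cx) * d / frame.fx
            let y = (Float(v) - frame.cy) * d / frame.fy
            points.append(SIMD3<Float>(x, y, d))
        }
    }
    return points
}
```

From there, one approach is to render each frame’s cloud offscreen and feed the rendered images to AVAssetWriter to produce the movie.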
The SDK generates a point cloud with positions, RGB color values, and normals for each point. You can export this to a few common 3D file formats: OBJ and PLY. Need another file format? Let us know.
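If you need to post-process exports, ASCII PLY is simple enough to read and write by hand. Below is a minimal sketch of writing positions, colors, and normals to a PLY file; the `Point` struct is a hypothetical stand-in for the SDK’s actual point cloud representation.

```swift
import Foundation

// Hypothetical point type; map the SDK's point cloud data into it.
struct Point {
    var position: SIMD3<Float>
    var color: SIMD3<UInt8>   // 0-255 per channel
    var normal: SIMD3<Float>
}

func writePLY(_ points: [Point], to url: URL) throws {
    // Standard ASCII PLY header declaring position, color, and normal properties.
    var ply = """
    ply
    format ascii 1.0
    element vertex \(points.count)
    property float x
    property float y
    property float z
    property uchar red
    property uchar green
    property uchar blue
    property float nx
    property float ny
    property float nz
    end_header

    """
    for p in points {
        ply += "\(p.position.x) \(p.position.y) \(p.position.z) "
        ply += "\(p.color.x) \(p.color.y) \(p.color.z) "
        ply += "\(p.normal.x) \(p.normal.y) \(p.normal.z)\n"
    }
    try ply.write(to: url, atomically: true, encoding: .ascii)
}
```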
Yes. `ScanningViewController` allows you to override rendering while scanning, and you can write your own point cloud preview in place of the default `PointCloudPreviewViewController`.
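As one example of a custom preview, you can hand the point positions to SceneKit and render them as point primitives. This sketch assumes the positions have already been extracted into an array of `SIMD3<Float>`; hooking the node into your own preview view controller is up to you.

```swift
import SceneKit

// Build a SceneKit node that renders a point cloud as point primitives.
func pointCloudNode(positions: [SIMD3<Float>]) -> SCNNode {
    let data = positions.withUnsafeBufferPointer { Data(buffer: $0) }
    let source = SCNGeometrySource(
        data: data,
        semantic: .vertex,
        vectorCount: positions.count,
        usesFloatComponents: true,
        componentsPerVector: 3,
        bytesPerComponent: MemoryLayout<Float>.size,
        dataOffset: 0,
        dataStride: MemoryLayout<SIMD3<Float>>.stride
    )
    // Passing nil index data renders the vertices in order as points.
    let element = SCNGeometryElement(
        data: nil,
        primitiveType: .point,
        primitiveCount: positions.count,
        bytesPerIndex: MemoryLayout<Int32>.size
    )
    element.pointSize = 4                      // base size, in screen pixels
    element.minimumPointScreenSpaceRadius = 1  // clamp so distant points stay visible
    element.maximumPointScreenSpaceRadius = 8
    return SCNNode(geometry: SCNGeometry(sources: [source], elements: [element]))
}
```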
Not yet; for now, you should scan a static, rigid object.
Mostly no: the core scanning algorithms are provided as a compiled binary framework, and the analysis is performed on our servers. However, the UI and Swift network interface components are open source. On top of that, we have some awesome open source tools.