Until now, Qt Quick and its underlying scene graph have been powered by a 2.5D renderer. This works well for implementing traditional 2D user interfaces on desktop, mobile, and embedded devices. However, to show 3D content in your UI, it has been necessary to either write your own OpenGL renderer or use an additional rendering engine. This is sufficient when rendering a 3D scene inline, but becomes problematic when intermixing 2D and 3D elements. While Qt currently offers a few options for handling 3D content, there is no unified design tooling that supports the mixing and matching this requires. It is also not obvious how to use Qt Quick for more future-forward user interfaces for augmented and virtual reality, where the final target is not simply a 2D window but a spatial scene. As we move toward the next major release of Qt, we hope to address all of these issues in Qt Quick by providing a way to define spatial content in the Qt Quick scene graph. This talk covers what we have done so far and where we intend to go.