
SceneView for Android


3D is just Compose UI.

SceneView 3.0 brings the full power of Google Filament and ARCore into Jetpack Compose. Write a Scene { } the same way you write a Column { }. Nodes are composables. Lifecycle is automatic. State drives everything.



The idea

You already know how to build a screen:

Column {
    Text("Title")
    Image(painter = painterResource(R.drawable.cover), contentDescription = null)
    Button(onClick = { /* ... */ }) { Text("Open") }
}

This is a 3D scene — a photorealistic helmet, HDR lighting, orbit-camera gestures:

Scene(modifier = Modifier.fillMaxSize()) {
    rememberModelInstance(modelLoader, "models/helmet.glb")?.let { instance ->
        ModelNode(modelInstance = instance, scaleToUnits = 1.0f, autoAnimate = true)
    }
    LightNode(type = LightManager.Type.SUN) {
        intensity(100_000f)
        castShadows(true)
    }
}

Same pattern. Same Kotlin. Same mental model — now with depth.

No engine lifecycle callbacks. No addChildNode / removeChildNode. No onResume/onPause overrides. No manual cleanup. The Compose runtime handles all of it.
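Because nodes are composables, adding or removing one is just conditional emission. A minimal sketch of the idea (the `showCube` flag and `cubeMaterial` instance are illustrative, not part of the API):

```kotlin
var showCube by remember { mutableStateOf(true) }

Scene(modifier = Modifier.fillMaxSize()) {
    if (showCube) {
        // Enters the composition when true; detached and destroyed when false
        CubeNode(size = Size(0.2f), materialInstance = cubeMaterial)
    }
}
```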


AR in 15 lines

var anchor by remember { mutableStateOf<Anchor?>(null) }

ARScene(
    modifier = Modifier.fillMaxSize(),
    planeRenderer = true,
    onSessionUpdated = { _, frame ->
        if (anchor == null) {
            anchor = frame.getUpdatedPlanes()
                .firstOrNull { it.type == Plane.Type.HORIZONTAL_UPWARD_FACING }
                ?.let { frame.createAnchorOrNull(it.centerPose) }
        }
    }
) {
    anchor?.let { a ->
        AnchorNode(anchor = a) {
            ModelNode(modelInstance = helmet, scaleToUnits = 0.5f)
        }
    }
}

When the plane is detected, anchor becomes non-null. Compose recomposes. AnchorNode enters the composition. The model appears — anchored to the physical world. When anchor is cleared, the node is removed and destroyed automatically. Pure Compose semantics, in AR.
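Resetting works the same way in reverse. A sketch, reusing the `anchor` state from above (`Anchor.detach()` is standard ARCore):

```kotlin
Button(onClick = {
    // Detach the ARCore anchor and clear the state; the AnchorNode
    // and its children leave the composition and are destroyed
    anchor?.detach()
    anchor = null
}) { Text("Reset placement") }
```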


What's new in 3.0

SceneView 3.0 is a ground-up rewrite around a single idea: 3D is just more Compose UI.

| What changed | What it means for you |
| --- | --- |
| Scene { } / ARScene { } content block | Declare nodes as composables — no list, no add() |
| SceneScope / ARSceneScope DSL | Every node type (ModelNode, AnchorNode, LightNode, ...) is @Composable |
| NodeScope trailing lambda | Nest child nodes exactly like Column { } nests children |
| rememberModelInstance | Async loading — returns null while loading, recomposes when ready |
| SceneNodeManager | Internal bridge — Compose snapshot state drives the Filament scene graph |
| ViewNode | Embed any Compose UI as a 3D billboard inside the scene |
| SurfaceType enum | Choose SurfaceView (best performance) or TextureView (transparency) |
| All resources are remember | Engine, loaders, environment, camera — Compose owns the lifecycle |

See MIGRATION.md for a step-by-step upgrade guide from 2.x.
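To give a flavor of the change (a sketch only; `sceneView`, `modelNode`, `destroy()`, `showModel`, and `instance` are illustrative — see MIGRATION.md for the authoritative mapping):

```kotlin
// 2.x: imperative — you own the scene graph and the cleanup
sceneView.addChildNode(modelNode)
// ...later, manual teardown
sceneView.removeChildNode(modelNode)
modelNode.destroy()

// 3.0: declarative — emit the node for as long as state says so
Scene {
    if (showModel) {
        ModelNode(modelInstance = instance)
    }
}
```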



3D with Compose

Installation

dependencies {
    implementation("io.github.sceneview:sceneview:3.0.0")
}

Quick start

Scene is a @Composable that renders a Filament 3D viewport. Think of it as a Box that adds a third dimension — everything inside its trailing block is declared with the SceneScope DSL.

@Composable
fun ModelViewerScreen() {
    val engine = rememberEngine()
    val modelLoader = rememberModelLoader(engine)
    val environmentLoader = rememberEnvironmentLoader(engine)

    // Loaded asynchronously — null until ready, then recomposition places it in the scene
    val modelInstance = rememberModelInstance(modelLoader, "models/damaged_helmet.glb")
    val environment = rememberEnvironment(environmentLoader) {
        environmentLoader.createHDREnvironment("environments/sky_2k.hdr")!!
    }

    Scene(
        modifier = Modifier.fillMaxSize(),
        engine = engine,
        modelLoader = modelLoader,
        environment = environment,
        cameraManipulator = rememberCameraManipulator(),
        mainLightNode = rememberMainLightNode(engine) { intensity = 100_000.0f },
        onGestureListener = rememberOnGestureListener(
            onDoubleTap = { _, node -> node?.apply { scale *= 2.0f } }
        )
    ) {
        // ── Everything below is 3D Compose ─────────────────────────────────

        modelInstance?.let { instance ->
            ModelNode(modelInstance = instance, scaleToUnits = 1.0f, autoAnimate = true)
        }

        // Nodes nest exactly like Compose UI
        Node(position = Position(y = 1.5f)) {
            CubeNode(size = Size(0.2f), materialInstance = redMaterial)
            SphereNode(radius = 0.1f)
        }
    }
}

That's it. No engine lifecycle callbacks, no onResume/onPause overrides, no manual scene graph bookkeeping. The Compose runtime handles all of it.

SceneScope DSL reference

All composables available inside Scene { }:

| Composable | Description |
| --- | --- |
| ModelNode(modelInstance, scaleToUnits?) | Renders a glTF/GLB model. Set isEditable = true to enable pinch-to-scale and drag-to-rotate. |
| LightNode(type) | Directional, point, spot, or sun light |
| CameraNode() | Named camera (e.g. imported from a glTF) |
| CubeNode(size, materialInstance?) | Box geometry |
| SphereNode(radius, materialInstance?) | Sphere geometry |
| CylinderNode(radius, height, materialInstance?) | Cylinder geometry |
| PlaneNode(size, normal, materialInstance?) | Flat quad geometry |
| ImageNode(bitmap / fileLocation / resId) | Image rendered on a plane |
| ViewNode(windowManager) { ComposeUI } | Compose UI rendered as a 3D surface |
| MeshNode(primitiveType, vertexBuffer, indexBuffer) | Custom GPU mesh |
| Node() | Pivot / group node |
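For instance, a scene built entirely from geometry nodes, with no model files. This is a sketch: the `rememberMaterialLoader` / `createColorInstance` names are assumptions here; any MaterialInstance works.

```kotlin
val engine = rememberEngine()
val materialLoader = rememberMaterialLoader(engine)   // assumed helper
val red = remember { materialLoader.createColorInstance(Color.Red) }

Scene(modifier = Modifier.fillMaxSize(), engine = engine) {
    // Group node: children inherit its transform
    Node(position = Position(y = 0.5f)) {
        CubeNode(size = Size(0.2f), materialInstance = red)
        SphereNode(radius = 0.1f)
    }
}
```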

Gesture sensitivity — Node exposes scaleGestureSensitivity: Float (default 0.5). Lower values make pinch-to-scale feel more progressive. Tune it per-node in the apply block:

ModelNode(modelInstance = instance, isEditable = true, apply = {
    scaleGestureSensitivity = 0.3f   // 1.0 = raw, lower = more damped
    editableScaleRange = 0.2f..1.0f
})

Every node accepts an optional content trailing lambda — a NodeScope where child composables are automatically parented to the enclosing node:

Scene {
    Node(position = Position(y = 0.5f)) {    // NodeScope
        ModelNode(modelInstance = helmet)     // child of Node
        CubeNode(size = Size(0.05f))          // sibling, still a child of Node
    }
}

Async model loading — rememberModelInstance returns null while the file loads on Dispatchers.IO, then triggers recomposition. The node appears automatically when ready:

Scene {
    rememberModelInstance(modelLoader, "models/helmet.glb")?.let { instance ->
        ModelNode(modelInstance = instance, scaleToUnits = 0.5f)
    }
}

Compose UI inside 3D space — ViewNode renders any composable onto a plane in the scene:

val windowManager = rememberViewNodeManager()

Scene {
    ViewNode(windowManager = windowManager) {
        Card {
            Text("Hello from 3D!")
            Button(onClick = { /* ... */ }) { Text("Click me") }
        }
    }
}

Reactive state — pass any State directly into node parameters. The scene updates on every state change with no manual synchronisation:

var rotationY by remember { mutableFloatStateOf(0f) }
LaunchedEffect(Unit) { while (true) { withFrameNanos { rotationY += 0.5f } } }

Scene {
    ModelNode(
        modelInstance = helmet,
        rotation = Rotation(y = rotationY)   // recomposes on every frame change
    )
}

Tap interaction — isEditable = true enables pinch-to-scale, drag-to-move, and two-finger-rotate gestures on any node with zero extra code, and onGestureListener reports taps:

Scene(
    onGestureListener = rememberOnGestureListener(
        onSingleTapConfirmed = { event, node -> println("Tapped: ${node?.name}") }
    )
) {
    ModelNode(modelInstance = helmet, isEditable = true)
}

Surface type — choose the backing Android surface:

// SurfaceView — renders behind Compose layers, best GPU performance (default)
Scene(surfaceType = SurfaceType.Surface)

// TextureView — renders inline with Compose, supports transparency / alpha blending
Scene(surfaceType = SurfaceType.TextureSurface, isOpaque = false)

Samples

| Sample | What it shows |
| --- | --- |
| Model Viewer | Animated camera orbit around a glTF model, HDR environment, double-tap to scale |
| glTF Camera | Use a camera node imported directly from a glTF file |
| Camera Manipulator | Orbit / pan / zoom camera interaction |
| Autopilot Demo | Full animated scene built entirely with geometry nodes — no model files needed |

AR with Compose

Installation

dependencies {
    // Includes sceneview — no need to add both
    implementation("io.github.sceneview:arsceneview:3.0.0")
}

Add to AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera.ar" android:required="true" />

<application>
    <meta-data android:name="com.google.ar.core" android:value="required" />
</application>

Quick start

ARScene is Scene with ARCore wired in. The camera is driven by ARCore tracking. Everything else — anchors, models, lights, UI — is declared in the ARSceneScope content block. Normal Compose state decides what is in the scene.

var anchor by remember { mutableStateOf<Anchor?>(null) }

val engine = rememberEngine()
val modelLoader = rememberModelLoader(engine)
val modelInstance = rememberModelInstance(modelLoader, "models/helmet.glb")

ARScene(
    modifier = Modifier.fillMaxSize(),
    engine = engine,
    modelLoader = modelLoader,
    cameraNode = rememberARCameraNode(engine),
    planeRenderer = true,
    sessionConfiguration = { session, config ->
        config.depthMode =
            if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC))
                Config.DepthMode.AUTOMATIC
            else Config.DepthMode.DISABLED
        config.instantPlacementMode = Config.InstantPlacementMode.LOCAL_Y_UP
        config.lightEstimationMode = Config.LightEstimationMode.ENVIRONMENTAL_HDR
    },
    onSessionUpdated = { _, frame ->
        if (anchor == null) {
            anchor = frame.getUpdatedPlanes()
                .firstOrNull { it.type == Plane.Type.HORIZONTAL_UPWARD_FACING }
                ?.let { frame.createAnchorOrNull(it.centerPose) }
        }
    }
) {
    // ── AR Compose content ───────────────────────────────────────────────────

    anchor?.let {
        AnchorNode(anchor = it) {
            // All SceneScope nodes are available inside AR nodes too
            modelInstance?.let { instance ->
                ModelNode(modelInstance = instance, scaleToUnits = 0.5f)
            }
        }
    }
}

The anchor state drives the scene. When anchor changes, Compose recomposes and AnchorNode appears. When the anchor is cleared, the node is removed and destroyed automatically. AR state is just Kotlin state.
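Because it is ordinary Kotlin state, collections work too. A sketch that anchors a model to each newly detected upward-facing plane, capped at five (the cap and the modelInstance come from the example above; this is not a prescribed pattern):

```kotlin
val anchors = remember { mutableStateListOf<Anchor>() }

ARScene(
    onSessionUpdated = { _, frame ->
        frame.getUpdatedPlanes()
            .filter { it.type == Plane.Type.HORIZONTAL_UPWARD_FACING }
            .take((5 - anchors.size).coerceAtLeast(0))
            .forEach { plane ->
                frame.createAnchorOrNull(plane.centerPose)?.let { anchors += it }
            }
    }
) {
    // One AnchorNode per anchor — the list drives the scene graph
    anchors.forEach { a ->
        AnchorNode(anchor = a) {
            modelInstance?.let { ModelNode(modelInstance = it, scaleToUnits = 0.3f) }
        }
    }
}
```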

ARSceneScope DSL reference

ARScene { } provides everything from SceneScope plus:

| Composable | Description |
| --- | --- |
| AnchorNode(anchor) | Follows a real-world ARCore anchor |
| PoseNode(pose) | Follows a world-space pose (non-persistent) |
| HitResultNode(xPx, yPx) | Auto hit-tests at a screen coordinate each frame |
| HitResultNode { frame -> hitResult } | Custom hit-test lambda |
| AugmentedImageNode(augmentedImage) | Tracks a detected real-world image |
| AugmentedFaceNode(augmentedFace) | Renders a mesh aligned to a detected face |
| CloudAnchorNode(anchor) | Persistent cross-device anchor via Google Cloud |
| TrackableNode(trackable) | Follows any ARCore trackable |
| StreetscapeGeometryNode(streetscapeGeometry) | Renders a Geospatial streetscape mesh |
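A typical use of HitResultNode is a screen-center placement reticle that follows detected surfaces each frame. A sketch — the pixel coordinates (`viewWidthPx` / `viewHeightPx`) and the `reticleInstance` model are assumptions:

```kotlin
ARScene(modifier = Modifier.fillMaxSize()) {
    // Hit-tests at the screen center every frame; the node tracks the result
    HitResultNode(xPx = viewWidthPx / 2f, yPx = viewHeightPx / 2f) {
        reticleInstance?.let { ModelNode(modelInstance = it, scaleToUnits = 0.2f) }
    }
}
```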

Augmented Images

val detectedImages = remember { mutableStateListOf<AugmentedImage>() }

ARScene(
    sessionConfiguration = { session, config ->
        config.augmentedImageDatabase = AugmentedImageDatabase(session).also { db ->
            db.addImage("cover", coverBitmap)
        }
    },
    onSessionUpdated = { _, frame ->
        frame.getUpdatedTrackables(AugmentedImage::class.java)
            .filter { it.trackingState == TrackingState.TRACKING }
            // Avoid re-adding the same image on every frame update
            .forEach { if (it !in detectedImages) detectedImages += it }
    }
) {
    detectedImages.forEach { image ->
        AugmentedImageNode(augmentedImage = image) {
            rememberModelInstance(modelLoader, "drone.glb")?.let { instance ->
                ModelNode(modelInstance = instance)
            }
        }
    }
}

Augmented Faces

var detectedFaces by remember { mutableStateOf(emptyList<AugmentedFace>()) }

ARScene(
    sessionFeatures = setOf(Session.Feature.FRONT_CAMERA),
    sessionConfiguration = { _, config ->
        config.augmentedFaceMode = Config.AugmentedFaceMode.MESH3D
    },
    onSessionUpdated = { session, _ ->
        detectedFaces = session.getAllTrackables(AugmentedFace::class.java)
            .filter { it.trackingState == TrackingState.TRACKING }
    }
) {
    detectedFaces.forEach { face ->
        AugmentedFaceNode(augmentedFace = face, meshMaterialInstance = faceMaterial)
    }
}

Geospatial Streetscape

var geometries by remember { mutableStateOf(emptyList<StreetscapeGeometry>()) }

ARScene(
    sessionConfiguration = { _, config ->
        config.geospatialMode = Config.GeospatialMode.ENABLED
        config.streetscapeGeometryMode = Config.StreetscapeGeometryMode.ENABLED
    },
    onSessionUpdated = { _, frame ->
        geometries = frame.getUpdatedTrackables(StreetscapeGeometry::class.java).toList()
    }
) {
    geometries.forEach { geo ->
        StreetscapeGeometryNode(streetscapeGeometry = geo, meshMaterialInstance = buildingMat)
    }
}

Samples

| Sample | What it shows |
| --- | --- |
| AR Model Viewer | Tap-to-place on detected planes, model picker, animated reticle, pinch-to-scale, drag-to-rotate |
| AR Augmented Image | Overlay content on detected real-world images |
| AR Cloud Anchors | Host and resolve persistent cross-device anchors |
| AR Point Cloud | Visualise ARCore feature points |
| Autopilot Demo | Autonomous AR scene driven entirely by Compose state |

Resources

Documentation

Community

Related Projects

Support the project

SceneView is open-source and community-funded.
