{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n# Using nuScenes with vision3d\n\nThis example demonstrates using the nuScenes dataset (mini split) with\n:class:`vision3d.datasets.NuScenes3D`. It covers inspecting the\n(:class:`~vision3d.datasets.FusionInputs`,\n:class:`~vision3d.datasets.SampleTargets`) tuple returned by the dataset,\nbatching with :func:`vision3d.datasets.collate_fn` for training, and\nvisualizing frames with :func:`vision3d.viz.log_sample`.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Construct the dataset\n:class:`~vision3d.datasets.NuScenes3D` yields sample frames describing\nthe 3D scene. Each sample carries lidar points, all six camera images,\ntheir intrinsics and extrinsics, and 3D bounding-box annotations of the\nobjects in the scene.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from pathlib import Path\n\nfrom vision3d.datasets import NuScenes3D\n\nNUSCENES_ROOT = Path(\"~/.cache/vision3d/nuscenes-mini\").expanduser()\n\ndataset = NuScenes3D(NUSCENES_ROOT, version=\"v1.0-mini\", split=\"train\", download=True)\nprint(f\"len(dataset) = {len(dataset)}\")\nprint(f\"classes ({len(dataset.classes)}): {dataset.classes}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Inspect a sample\nIndexing the dataset returns an ``(inputs, targets)`` tuple, where ``inputs``\nis a :class:`~vision3d.datasets.FusionInputs` dict and ``targets`` is a\n:class:`~vision3d.datasets.SampleTargets` dict. Most values are tensor\nsubclasses from :mod:`vision3d.tensors`. Each subclass tags its tensor with a\nsemantic type (:class:`~vision3d.tensors.PointCloud3D`,\n:class:`~vision3d.tensors.CameraImages`,\n:class:`~vision3d.tensors.BoundingBoxes3D`, ...) so\n:mod:`vision3d.transforms` can dispatch the right operation to each\ninput.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "inputs, targets = dataset[0]\n\nprint(\"inputs:\")\nprint(\n    f\"  points: type={type(inputs['points']).__name__} \"\n    f\"shape={tuple(inputs['points'].shape)} dtype={inputs['points'].dtype}\"\n)\nprint(\n    f\"  images: type={type(inputs['images']).__name__} \"\n    f\"shape={tuple(inputs['images'].shape)} dtype={inputs['images'].dtype}\"\n)\nprint(\n    f\"  intrinsics: type={type(inputs['intrinsics']).__name__} \"\n    f\"shape={tuple(inputs['intrinsics'].shape)} dtype={inputs['intrinsics'].dtype}\"\n)\nprint(\n    f\"  extrinsics: type={type(inputs['extrinsics']).__name__} \"\n    f\"shape={tuple(inputs['extrinsics'].shape)} dtype={inputs['extrinsics'].dtype}\"\n)\n\nprint(\"targets:\")\nprint(\n    f\"  boxes: type={type(targets['boxes']).__name__} \"\n    f\"shape={tuple(targets['boxes'].shape)} dtype={targets['boxes'].dtype} \"\n    f\"format={targets['boxes'].format.name}\"\n)\nprint(\n    f\"  labels: type={type(targets['labels']).__name__} \"\n    f\"shape={tuple(targets['labels'].shape)} dtype={targets['labels'].dtype}\"\n)"
      ]
    },
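    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The per-type dispatch described above can be sketched without vision3d. The\nstand-in below uses plain Python classes and :func:`functools.singledispatch`;\nthe class names mirror :mod:`vision3d.tensors`, but the implementation is\nillustrative only, not the library's actual mechanism.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from dataclasses import dataclass\nfrom functools import singledispatch\n\n\n# Minimal stand-ins for type-tagged tensors: each wrapper carries a\n# semantic type, so one transform can pick the right operation per input.\n@dataclass\nclass PointCloud3D:  # stand-in for vision3d.tensors.PointCloud3D\n    points: list  # (N, 3) coordinates\n\n\n@dataclass\nclass BoundingBoxes3D:  # stand-in for vision3d.tensors.BoundingBoxes3D\n    centers: list  # (M, 3) box centers\n\n\n@singledispatch\ndef translate(value, offset):\n    return value  # unknown types pass through unchanged\n\n\n@translate.register\ndef _(value: PointCloud3D, offset):\n    return PointCloud3D([[c + o for c, o in zip(p, offset)] for p in value.points])\n\n\n@translate.register\ndef _(value: BoundingBoxes3D, offset):\n    return BoundingBoxes3D([[c + o for c, o in zip(b, offset)] for b in value.centers])\n\n\nprint(translate(PointCloud3D([[0.0, 0.0, 0.0]]), (1.0, 2.0, 3.0)).points)\nprint(translate(BoundingBoxes3D([[5.0, 5.0, 5.0]]), (1.0, 2.0, 3.0)).centers)"
      ]
    },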
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Batch with :func:`vision3d.datasets.collate_fn`\nVariable-size tensors (point clouds, per-frame box counts) cannot be stacked\nalong a batch dimension, so :func:`vision3d.datasets.collate_fn` leaves each\nsample's dicts intact and returns the batch as a tuple of per-sample dicts on\neach side. Pass it as the ``collate_fn`` argument to\n:class:`~torch.utils.data.DataLoader` whenever you train or evaluate on a\nvision3d dataset.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from torch.utils.data import DataLoader\n\nfrom vision3d.datasets import collate_fn\n\nloader = DataLoader(dataset, batch_size=2, collate_fn=collate_fn)\nbatch_inputs, batch_targets = next(iter(loader))\n\nprint(f\"batch size: {len(batch_inputs)}\")\nfor i, (inp, tgt) in enumerate(zip(batch_inputs, batch_targets)):\n    print(\n        f\"  sample {i}: \"\n        f\"points={tuple(inp['points'].shape)} \"\n        f\"boxes={tuple(tgt['boxes'].shape)}\"\n    )"
      ]
    },
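    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The collate behaviour can be approximated in a few lines. The sketch below is\nillustrative only, not vision3d's actual implementation: it keeps every\nsample's dicts intact and groups the batch as one tuple per side, so ragged\npoint clouds and box lists never need stacking.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "def sketch_collate(samples):\n    # Keep each sample's dicts intact; group the batch as one tuple per side.\n    inputs = tuple(inp for inp, _ in samples)\n    targets = tuple(tgt for _, tgt in samples)\n    return inputs, targets\n\n\n# Two fake samples with different point and box counts, mimicking the\n# ragged shapes a lidar dataset produces.\nfake_samples = [\n    ({'points': [[0, 0, 0]]}, {'boxes': [[0, 0, 0, 1, 1, 1]]}),\n    ({'points': [[1, 1, 1], [2, 2, 2]]}, {'boxes': []}),\n]\nsketch_inputs, sketch_targets = sketch_collate(fake_samples)\nprint(len(sketch_inputs))  # batch size: 2\nprint([len(tgt['boxes']) for tgt in sketch_targets])  # per-sample box counts: [1, 0]"
      ]
    },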
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Visualize the dataset\n:func:`vision3d.viz.log_sample` logs a\n:class:`~vision3d.datasets.FusionInputs` /\n:class:`~vision3d.datasets.SampleTargets` pair to [Rerun](https://rerun.io/)\nfor interactive visualization.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import rerun as rr\nimport rerun.blueprint as rrb\n\nfrom vision3d.viz import fusion_layout, log_sample\n\nrr.init(\"vision3d_nuscenes\", spawn=True)\nrr.send_blueprint(\n    rrb.Blueprint(\n        fusion_layout(NuScenes3D.camera_names, NuScenes3D.camera_grid),\n        rrb.TimePanel(state=\"collapsed\"),\n    )\n)\nrr.log(\"world\", rr.ViewCoordinates.RIGHT_HAND_Z_UP, static=True)\nrr.log(\n    \"world/boxes\",\n    rr.AnnotationContext([(i, name) for name, i in dataset.class_to_idx.items()]),\n    static=True,\n)\n\nfor frame_idx in range(10):\n    f_inputs, f_targets = dataset[frame_idx]\n    rr.set_time(\"frame\", sequence=frame_idx)\n    log_sample(f_inputs, f_targets, label_to_id=dataset.class_to_idx, jpeg_quality=75)"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.14.5"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}