Quantum‑Enhanced Neural Radiance Fields for Compact 3D Synthesis

Authors: Daniele Lizzio Bosco, Shuteng Wang, Giuseppe Serra, Vladislav Golyanik
Published: 2026-01-08 • Added: 2026-01-09

Key Insights

  • QNeRF replaces large MLPs in NeRF with parameterised quantum circuits, exploiting superposition and entanglement to encode spatial and view‑dependent features.
  • Two variants are proposed: **Full QNeRF** uses the entire quantum state for maximal expressivity, while **Dual‑Branch QNeRF** splits spatial and view encodings, dramatically lowering circuit depth and improving scalability to near‑term hardware.
  • On moderate‑resolution image datasets, QNeRF attains comparable or superior rendering quality to classical NeRF while using less than 50% of the trainable parameters, indicating higher parameter efficiency.
  • The quantum components act as a compact continuous function approximator, leading to faster convergence during training due to richer representational capacity per parameter.
  • The architecture remains hybrid; classical post‑processing (e.g., density‑color decoding) is retained, allowing seamless integration with existing NeRF pipelines and gradual migration to real quantum processors.
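The circuit idea in the bullets above can be made concrete with a toy simulation. Below is a minimal pure-Python statevector sketch of a 2-qubit parameterised circuit in the spirit of what QNeRF describes: two inputs (say, one spatial coordinate and one view angle) are angle-encoded as rotations, a trainable rotation layer plus a CNOT entangles the qubits, and Pauli-Z expectation values are read out as features. This is an illustrative sketch only; the gate choices, qubit count, and readout are assumptions, not the paper's actual circuit.

```python
import math

def ry(theta):
    """Single-qubit RY rotation as a 2x2 real matrix."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply_1q(state, gate, qubit):
    """Apply a 2x2 gate to one qubit of a statevector (qubit 0 = least significant bit)."""
    step = 1 << qubit
    new = [0j] * len(state)
    for i in range(len(state)):
        bit = (i >> qubit) & 1
        base = i & ~step  # index with the target bit cleared
        new[i] = gate[bit][0] * state[base] + gate[bit][1] * state[base | step]
    return new

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1."""
    return [state[i ^ (1 << target)] if (i >> control) & 1 else state[i]
            for i in range(len(state))]

def expval_z(state, qubit):
    """Expectation value of Pauli-Z on one qubit."""
    return sum((-1) ** ((i >> qubit) & 1) * abs(a) ** 2
               for i, a in enumerate(state))

def pqc(inputs, weights):
    """Angle-encode two inputs, apply one trainable layer, entangle, read out <Z>."""
    state = [1 + 0j, 0j, 0j, 0j]            # |00>
    for q, x in enumerate(inputs):          # data encoding
        state = apply_1q(state, ry(x), q)
    for q, w in enumerate(weights):         # trainable rotations
        state = apply_1q(state, ry(w), q)
    state = apply_cnot(state, 0, 1)         # entanglement
    return [expval_z(state, q) for q in (0, 1)]
```

With inputs and weights at zero the circuit leaves |00⟩ untouched and both expectations are +1; sweeping the trainable angles traces out a smooth family of functions of the inputs, which is the sense in which a parameterised circuit acts as a continuous function approximator.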

Abstract

Recently, Quantum Visual Fields (QVFs) have shown promising improvements in model compactness and convergence speed for learning the provided 2D or 3D signals. Meanwhile, novel-view synthesis has seen major advances with Neural Radiance Fields (NeRFs), where models learn a compact representation from 2D images to render 3D scenes, albeit at the cost of larger models and intensive training. In this work, we extend the approach of QVFs by introducing QNeRF, the first hybrid quantum-classical model designed for novel-view synthesis from 2D images. QNeRF leverages parameterised quantum circuits to encode spatial and view-dependent information via quantum superposition and entanglement, resulting in more compact models compared to the classical counterpart. We present two architectural variants. Full QNeRF maximally exploits all quantum amplitudes to enhance representational capabilities. In contrast, Dual-Branch QNeRF introduces a task-informed inductive bias by branching spatial and view-dependent quantum state preparations, drastically reducing the complexity of this operation and ensuring scalability and potential hardware compatibility. Our experiments demonstrate that -- when trained on images of moderate resolution -- QNeRF matches or outperforms classical NeRF baselines while using less than half the number of parameters. These results suggest that quantum machine learning can serve as a competitive alternative for continuous signal representation in mid-level tasks in computer vision, such as 3D representation learning from 2D observations.
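The abstract's "less than half the number of parameters" claim can be illustrated with back-of-the-envelope counts. The sketch below compares a classical NeRF-style MLP trunk against a layered parameterised circuit whose only trainable parameters are rotation angles. The widths, qubit count, and layer count here are illustrative assumptions, not the paper's reported configurations, and the MLP count omits NeRF's skip connection and view branch.

```python
def mlp_params(widths):
    """Weights + biases of a fully connected MLP with the given layer widths."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(widths, widths[1:]))

def pqc_params(n_qubits, n_layers, rotations_per_qubit=3):
    """Trainable angles in a layered parameterised quantum circuit."""
    return n_qubits * n_layers * rotations_per_qubit

# Assumed NeRF-like trunk: 60-d positional encoding in, 8 hidden layers
# of width 256, 4 outputs (RGB + density).
classical = mlp_params([60] + [256] * 8 + [4])
quantum = pqc_params(n_qubits=10, n_layers=20)
print(classical, quantum)  # 477188 vs 600
```

Even allowing for the classical pre- and post-processing a hybrid model retains, the angle budget of a circuit grows only linearly in qubits and layers, which is why parameter counts can fall so sharply relative to a dense MLP of comparable expressive reach.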

Source

[arXiv](https://arxiv.org/abs/2601.05250)

Topics: computer-vision, efficiency, ai-ml
Difficulty: advanced
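The Dual-Branch variant's advantage in state-preparation cost, described above as "drastically reducing the complexity of this operation", follows from a standard scaling argument: exactly preparing an arbitrary amplitude-encoded n-qubit state takes on the order of 2^n elementary gates, so splitting one joint register into independent spatial and view branches replaces a single exponential with the sum of two much smaller ones. The register sizes below are assumptions for illustration, not the paper's.

```python
def amplitude_prep_gates(n_qubits):
    """Order-of-magnitude gate count for exact preparation of an
    arbitrary n-qubit amplitude-encoded state: O(2**n)."""
    return 2 ** n_qubits

# Assumed split: 6 qubits for spatial coordinates, 4 for view direction.
full_qnerf = amplitude_prep_gates(6 + 4)                         # joint register
dual_branch = amplitude_prep_gates(6) + amplitude_prep_gates(4)  # two branches
print(full_qnerf, dual_branch)  # 1024 vs 80
```

The shallower preparation circuits are also what makes the dual-branch design more plausible on near-term hardware, where circuit depth is bounded by decoherence.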