From 33690ef3bc44f9dbf0bbb09b748ffc39b03abecf Mon Sep 17 00:00:00 2001
From: Alex Trevithick
Date: Sun, 11 Oct 2020 15:37:12 -0400
Subject: [PATCH] Update README.md

---
 README.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 35eb3bb..4273d44 100644
--- a/README.md
+++ b/README.md
@@ -12,8 +12,7 @@ This is the official repository for General Radiance Field (GRF) from:
 ![](https://github.com/alextrevithick/GRF/blob/main/qual_comp_real.png)
 
 ## Qualitative Results on Shapenet
-:-------------------------:|:-------------------------:
-![](https://github.com/alextrevithick/GRF/blob/main/car.gif) | ![](https://github.com/alextrevithick/GRF/blob/main/chair.gif)
+![](https://github.com/alextrevithick/GRF/blob/main/car.gif) ![](https://github.com/alextrevithick/GRF/blob/main/chair.gif)
 
 ## Method
 GRF is a powerful implicit neural function that can represent and render arbitrarily complex 3D scenes in a single network only from 2D observations. GRF takes a set of posed 2D images as input, constructs an internal representation for each 3D point of the scene, and renders the corresponding appearance and geometry of any 3D point viewing from an arbitrary angle. The key to our approach is to explicitly integrate the principle of multi-view geometry to obtain features representative of an entire ray from a given viewpoint. Thus, in a single forward pass to render a scene from a novel view, GRF takes some views of that scene as input, computes per-pixel pose-aware features for each ray from the given viewpoints through the image plane at that pixel, and then uses those features to predict the volumetric density and rgb values of points in 3D space. Volumetric rendering is then applied.
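
The "Volumetric rendering is then applied" step in the Method paragraph above is the standard alpha-compositing integral used by NeRF-style models. As a minimal sketch (not the repository's actual implementation — the function name, NumPy usage, and per-sample inputs are assumptions for illustration), given per-sample densities and RGB values along a ray, the pixel color is the transmittance-weighted sum:

```python
import numpy as np

def volumetric_render(densities, rgbs, deltas):
    """Composite N samples along one ray into a single RGB color.

    densities: (N,) predicted volume density (sigma) per sample
    rgbs:      (N, 3) predicted RGB per sample, in [0, 1]
    deltas:    (N,) distance between consecutive samples along the ray
    Returns (color, weights): the (3,) composited color and (N,) per-sample weights.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i without being absorbed
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Compositing weights, then the weighted sum of sample colors
    weights = alphas * trans
    color = (weights[:, None] * rgbs).sum(axis=0)
    return color, weights
```

A fully opaque sample returns its own color with weight 1, while zero density everywhere yields a black (empty) pixel; the weights always sum to at most 1, with the remainder corresponding to light that passes through the ray unabsorbed.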