A new dataset for better augmented and mixed reality


OpenRooms creates photorealistic synthetic scenes from input images or scans, with unprecedented control over shape, materials and lighting. Credit: University of California - San Diego

Computer scientists at the University of California San Diego have released OpenRooms, a new, open-source dataset with tools that will help users manipulate objects, materials, lighting, and other properties in indoor 3D scenes to advance augmented reality and robotics.


Manmohan Chandraker, a professor in the UC San Diego Department of Computer Science and Engineering, said: "This was a huge effort, involving 11 Ph.D. and master's students from my group and collaborators across UC San Diego and Adobe. It is an important development, with great potential to impact both academia and industry in computer vision, graphics, robotics, and machine learning."


The OpenRooms dataset and related updates are publicly available online, with technical details described in an associated paper presented at CVPR 2021 in May.


OpenRooms lets users realistically adjust scenes to their liking. If a family wants to visualize a kitchen remodel, they can change the countertop materials, lighting or pretty much anything in the room.


Chandraker said: "With OpenRooms, we can compute all the knowledge about the 3D shapes, material and lighting in the scene on a per-pixel basis. People can take a photograph of a room and insert and manipulate virtual objects. They could look at a leather chair, then change the material to a fabric chair and see which one looks better."


OpenRooms can even show how that chair might look in the daytime under natural light from a window, or under a lamp at night. It can also help solve robotics problems, such as finding the best route across floors with varying friction profiles. These capabilities are attracting considerable interest in the simulation community because, previously, such data was either proprietary or not available with comparable photorealism.


Chandraker said: "These tools are now available in a truly democratic fashion, providing accessible assets for photorealistic augmented reality and robotics applications."


Chandraker's team uses computational methods to make sense of the visual world. They are particularly focused on how shapes, materials, and lighting interact to form images.


He said: "We essentially want to understand how the world is created, and how we can act upon it. We can insert objects into existing scenes or advance self-driving, but to do these things, we need to understand various aspects of a scene and how they interact with each other."

This deep understanding is essential to achieving photorealism in mixed reality. Inserting an object into a scene requires reasoning about shading from various light sources, shadows cast by other objects, and inter-reflections from the surrounding scene. The framework must also handle similar long-range interactions among distant parts of the scene in order to change materials or lighting in complex indoor scenes. Hollywood solves these problems with measurement-based platforms, such as shooting actor Andy Serkis inside a gantry and converting those images into Gollum in The Lord of the Rings trilogy. The lab wants to achieve similar effects without expensive systems.
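To make the shading reasoning above concrete, here is a minimal, purely illustrative sketch of how a renderer might shade one surface point of an inserted virtual object under a single point light, using a simple Lambertian (diffuse) model. The function name, inputs, and values are all hypothetical and are not part of the OpenRooms codebase; a real system would also account for shadows, inter-reflections, and many light sources.

```python
import numpy as np

def shade_point(albedo, normal, point, light_pos, light_intensity):
    """Diffuse (Lambertian) shading of one surface point of an
    inserted object, lit by a single point light (illustrative only)."""
    to_light = light_pos - point
    dist2 = float(np.dot(to_light, to_light))       # squared-distance falloff
    l = to_light / np.sqrt(dist2)                   # unit direction to light
    cos_theta = max(0.0, float(np.dot(normal, l)))  # no light from behind
    return albedo * light_intensity * cos_theta / dist2

# A point facing straight up, directly below a light one unit away:
rgb = shade_point(albedo=np.array([0.8, 0.6, 0.4]),
                  normal=np.array([0.0, 0.0, 1.0]),
                  point=np.array([0.0, 0.0, 0.0]),
                  light_pos=np.array([0.0, 0.0, 1.0]),
                  light_intensity=1.0)
# Here cos_theta = 1 and distance = 1, so the shaded color equals the albedo.
```

The key point for the article's argument: even this toy model needs the surface's material (albedo), its geometry (normal, position), and the light's position and strength, which is exactly the per-pixel information OpenRooms provides as ground truth.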


To get there, the group needed to find creative ways to represent shapes, materials, and lighting. But acquiring this information can be time-consuming, data-hungry, and expensive, especially when dealing with complex indoor scenes featuring furniture and walls that have different shapes and materials and are illuminated by several light sources, such as windows, ceiling lights, or lamps.


Chandraker said: "One would have to measure the lighting and material properties at every point in the room. It's doable, but it simply does not scale."


OpenRooms uses synthetic data to render these images, which offers an accurate and inexpensive way to obtain ground-truth geometry, materials, and lighting. The data can be used to train powerful deep neural networks that estimate those properties in real images, allowing photorealistic object insertion and material editing. These possibilities were demonstrated in a CVPR 2020 oral presentation by Zhengqin Li, a fifth-year Ph.D. student advised by Chandraker and first author on the OpenRooms paper. The software provides automated tools that allow users to take real images and convert them into photorealistic, synthetic counterparts.
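The supervised recipe described above can be sketched in miniature: render synthetic images whose per-pixel properties are known exactly, fit a predictor on those (image, ground truth) pairs, then apply it to new pixels. The sketch below is an assumption-laden stand-in: a trivial linear model substitutes for the deep networks actually used, and the "dataset" is randomly generated rather than rendered.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "synthetic dataset" (illustrative only): 1000 pixels with rendered
# RGB inputs and exactly known per-pixel albedo labels, related here by a
# made-up linear map standing in for the true image-formation process.
true_map = np.array([[0.9, 0.1, 0.0],
                     [0.0, 0.8, 0.2],
                     [0.1, 0.0, 0.7]])
pixels = rng.uniform(0.0, 1.0, size=(1000, 3))  # rendered RGB values
albedo = pixels @ true_map.T                    # ground-truth labels

# "Training": least-squares fit of a per-pixel RGB -> albedo predictor,
# in place of gradient-descent training of a deep network.
W_hat, *_ = np.linalg.lstsq(pixels, albedo, rcond=None)

# "Inference" on a pixel from a new image.
new_pixel = np.array([0.5, 0.5, 0.5])
pred_albedo = new_pixel @ W_hat
```

Because the synthetic labels are exact, the fit recovers the underlying mapping; the article's point is that such perfect supervision is essentially free with rendered data but prohibitively expensive to measure in real rooms.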


Chandraker said: "We are creating a framework where users can use their cell phones or 3D scanners to develop datasets that enable their own augmented reality applications. They can simply use scans or sets of photographs."


Chandraker and the team were motivated, in part, by the need to create a public domain platform. Large tech companies have tremendous resources to create training data and other IP, making it difficult for small players to get a foothold. This was recently illustrated when a Lithuanian company, called Planner 5D, sued Facebook and Princeton, claiming they unlawfully utilized its proprietary data.


Chandraker said: "You can imagine such data is really useful for many applications. But progress in this space has been limited to a few big players who have the capacity to do these kinds of complex measurements or work with expensive assets created by artists."


Journal information: Zhengqin Li et al., "OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets," arXiv:2007.12868 [cs.CV], arxiv.org/abs/2007.12868
