- Development Setup and Project Structure
- Adding a Spherical Image to the Scene
- Create a UI Component to Hold Four Buttons
- Add Animations for Button State Transitions
- Building and Testing the Application
- Further Resources
- Frequently Asked Questions (FAQs) about Building a Full-Sphere 3D Image Gallery with React VR
React VR is a JavaScript library by Facebook that reduces the effort of creating a WebVR application. You may compare React VR with A-Frame by Mozilla, but instead of writing HTML, with React VR we’re using JavaScript to create a WebVR scene.
React VR is built on the WebGL library three.js and the React Native framework. This means that we’re able to use JSX tags, React Native components like <View> or <Text>, and React Native concepts like the flexbox layout. To simplify the process of creating a WebVR scene, React VR has built-in support for 3D meshes, lights, videos, 3D shapes, and spherical images.
In this article, we want to use React VR to build a viewer for spherical images. For this, we’ll use four equirectangular photos, which I shot at React Conf 2017 with my Theta S camera. The gallery will have four buttons to swap the images, which will work with the mouse and/or VR headset. You can download the equirectangular images as well as the button graphics here. Last but not least, we’ll take a look at how animations work with React VR by adding a simple button transition.
For development, we’re using a browser like Chrome on the desktop. To check that the stereoscopic rendering for VR devices works, we’re using a Samsung phone with Gear VR. In theory, any mobile browser capable of WebVR should be able to render our app stereoscopically for use with Gear VR, Google Cardboard, or even Google Daydream. But the library, as well as the API, are still under development, so the support may not be reliable. Here’s a good summary of browsers currently supporting WebVR features.
Development Setup and Project Structure
Let’s start by installing the React VR CLI tool. Then create a new React VR project with all its dependencies in a new folder called GDVR_REACTVR_SITEPOINT_GALLERY:
npm install -g react-vr-cli
react-vr init GDVR_REACTVR_SITEPOINT_GALLERY
cd GDVR_REACTVR_SITEPOINT_GALLERY
To start a local development server, we’ll run an npm script and browse to http://localhost:8081/vr/ in Chrome.
npm start
If you see a black and white room with stairs, pillars, and a “hello” text plane, everything’s correct.
The most important files and folders scaffolded by the React VR CLI are:
- index.vr.js. This is the entry point of the application. Currently, the file contains the whole source code of React VR’s default scene, which we just saw in the browser.
- static_assets. This folder should contain all assets used in the application. We’ll put the equirectangular images and the button graphics in this folder.
We want our project to have three components:
- a Canvas component, which holds the code for the full-sphere images
- a Button component, which creates a VR button to swap the images
- a UI component, which builds a UI out of four Button components.
The three components will each have their own file, so let’s create a components folder to contain these files. Then, before we start creating the Canvas component, let’s remove the scaffolded example code from the index.vr.js file so it looks like this:
/* index.vr.js */
import React from 'react';
import {
  AppRegistry,
  View,
} from 'react-vr';

export default class GDVR_REACTVR_SITEPOINT_GALLERY extends React.Component {
  render() {
    return (
      <View>
      </View>
    );
  }
};

AppRegistry.registerComponent('GDVR_REACTVR_SITEPOINT_GALLERY', () => GDVR_REACTVR_SITEPOINT_GALLERY);
Adding a Spherical Image to the Scene
To add a spherical image to the scene, we’ll create a new file Canvas.js in the components folder:
/* Canvas.js */
import React from 'react';
import {
  asset,
  Pano,
} from 'react-vr';

class Canvas extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      src: this.props.src,
    };
  }

  render() {
    return (
      <Pano source={asset(this.state.src)}/>
    );
  }
};

export default Canvas;
In the first six lines of code, we import the dependencies. Then we declare our Canvas component and define how it renders by using the JSX syntax.
If you want to learn more about JSX, I recommend you check out “Getting Started with React and JSX”.
A look at the JSX code reveals that the Canvas component returns only one component, React VR’s <Pano> component. It has a single prop, source, which uses the asset function to load the image from the static_assets folder. The argument refers to a state value, which we initialized in the constructor function.
In our case, we don’t want to define the path in the Canvas component itself, but use the index.vr.js file to define all image paths. This is why state.src refers to the component’s props.
Check out the ReactJS documentation for React.Component if you would like to know more about state and props.
Let’s continue by modifying the index.vr.js file to use the Canvas component and render it to the scene:
/* index.vr.js */
import React from 'react';
import {
  AppRegistry,
  View,
} from 'react-vr';
import Canvas from './components/Canvas';

export default class GDVR_REACTVR_SITEPOINT_GALLERY extends React.Component {
  constructor() {
    super();
    this.state = {
      src: 'reactconf_00.jpg',
    };
  }

  render() {
    return (
      <View>
        <Canvas
          src={this.state.src}
        />
      </View>
    );
  }
};

AppRegistry.registerComponent('GDVR_REACTVR_SITEPOINT_GALLERY', () => GDVR_REACTVR_SITEPOINT_GALLERY);
Besides the React VR dependencies we’re already using, we need to import our custom Canvas component, which happens right after the other import statements:
/* index.vr.js */
import Canvas from './components/Canvas';
Then, we add the <Canvas> component as a child component of the <View> component. We’re using src as the component’s prop because we’re referring to it in the Canvas component. A look in the browser should now show the panoramic image, and we should already be able to interact with it.
Create a UI Component to Hold Four Buttons
What we want to do now is to create four buttons that a user can trigger to swap the images. So we’ll add two new components: a UI component, and its child component, a Button component. Let’s start with the Button component:
/* Button.js */
import React from 'react';
import {
  asset,
  Image,
  View,
  VrButton,
} from 'react-vr';

class Button extends React.Component {
  onButtonClick = () => {
    this.props.onClick();
  }

  render() {
    return (
      <View
        style={{
          alignItems: 'center',
          flexDirection: 'row',
          margin: 0.0125,
          width: 0.7,
        }}
      >
        <VrButton
          onClick={this.onButtonClick}
        >
          <Image
            style={{
              width: 0.7,
              height: 0.7,
            }}
            source={asset(this.props.src)}
          >
          </Image>
        </VrButton>
      </View>
    );
  }
};

export default Button;
To build the button, we’re using React VR’s <VrButton> component, which we import at the top of the file. We’re also using the <Image> component to add our asset images to each button, since the <VrButton> component itself has no appearance. Like before, we’re using a prop to define the image source. Another feature we’re using twice in this component is the style prop, to add layout values to the button and its image. The <VrButton> also makes use of an event listener, onClick.
To add four Button components to our scene, we’ll use the UI parent component, which we’ll add as a child in index.vr.js afterward. Before writing the UI component, let’s create a config object defining the relation between the equirectangular images, the button images, and the buttons themselves. To do this, we declare a constant right after the import statements in the index.vr.js file:
/* index.vr.js */
const Config = [
  {
    key: 0,
    imageSrc: 'reactconf_00.jpg',
    buttonImageSrc: 'button-00.png',
  },
  {
    key: 1,
    imageSrc: 'reactconf_01.jpg',
    buttonImageSrc: 'button-01.png',
  },
  {
    key: 2,
    imageSrc: 'reactconf_02.jpg',
    buttonImageSrc: 'button-02.png',
  },
  {
    key: 3,
    imageSrc: 'reactconf_03.jpg',
    buttonImageSrc: 'button-03.png',
  },
];
The UI component will use the values defined in the config to handle the gaze and click events:
/* UI.js */
import React from 'react';
import {
  View,
} from 'react-vr';
import Button from './Button';

class UI extends React.Component {
  constructor(props) {
    super(props);
    this.buttons = this.props.buttonConfig;
  }

  render() {
    const buttons = this.buttons.map((button) =>
      <Button
        key={button.key}
        onClick={()=>{
          this.props.onClick(button.key);
        }}
        src={button.buttonImageSrc}
      />
    );

    return (
      <View
        style={{
          flexDirection: 'row',
          flexWrap: 'wrap',
          transform: [
            {rotateX: -12},
            {translate: [-1.5, 0, -3]},
          ],
          width: 3,
        }}
      >
        {buttons}
      </View>
    );
  }
};

export default UI;
To set the source of an image, we’re using the config values we already added to the index.vr.js file. We’re also using the prop onClick to handle the click event, which we’ll add to the index.vr.js file in a few moments. Then we create as many buttons as defined in the button config object, to add them later in the JSX code that will be rendered to the scene:
/* UI.js */
const buttons = this.buttons.map((button) =>
  <Button
    key={button.key}
    onClick={()=>{
      this.props.onClick(button.key);
    }}
    src={button.buttonImageSrc}
  />
);
Now, all we have to do is add the UI component to the scene defined in the index.vr.js file. So we import the UI component right after importing the Canvas component:
/* index.vr.js */
import UI from './components/UI';
Next, we add the <UI> component to the scene:
/* index.vr.js */
<View>
  <Canvas
    src={this.state.src}
  />
  <UI
    buttonConfig={Config}
    onClick={(key)=>{
      this.setState({src: Config[key].imageSrc});
    }}
  />
</View>
When checking this code in the browser, you’ll notice that the click doesn’t trigger an image source swap at the moment. To listen for updated props, we’ll have to add another function to the Canvas component right after the constructor function.
If you’re interested in the lifecycle of a React component, you might want to read about React.Component in the React docs.
/* Canvas.js */
componentWillReceiveProps(nextProps) {
  this.setState({src: nextProps.src});
}
A test in the browser should now be successful, and a click on a button image should change the spherical image.
Add Animations for Button State Transitions
To make the buttons more responsive to user interactions, we want to add a hover state and transitions between the default idle state and the hover state. To do this, we’ll use the Animated library and Easing functions, and then write two functions, one for each transition: animateIn and animateOut:
/* Button.js */
import React from 'react';
import {
  Animated,
  asset,
  Image,
  View,
  VrButton,
} from 'react-vr';

const Easing = require('Easing');

class Button extends React.Component {
  constructor(props) {
    super();
    this.state = {
      animatedTranslation: new Animated.Value(0),
    };
  }

  animateIn = () => {
    Animated.timing(
      this.state.animatedTranslation,
      {
        toValue: 0.125,
        duration: 100,
        easing: Easing.in,
      }
    ).start();
  }

  animateOut = () => {
    Animated.timing(
      this.state.animatedTranslation,
      {
        toValue: 0,
        duration: 100,
        easing: Easing.in,
      }
    ).start();
  }

  onButtonClick = () => {
    this.props.onClick();
  }

  render() {
    return (
      <Animated.View
        style={{
          alignItems: 'center',
          flexDirection: 'row',
          margin: 0.0125,
          transform: [
            {translateZ: this.state.animatedTranslation},
          ],
          width: 0.7,
        }}
      >
        <VrButton
          onClick={this.onButtonClick}
          onEnter={this.animateIn}
          onExit={this.animateOut}
        >
          <Image
            style={{
              width: 0.7,
              height: 0.7,
            }}
            source={asset(this.props.src)}
          >
          </Image>
        </VrButton>
      </Animated.View>
    );
  }
};

export default Button;
After adding the dependencies, we define a new state to hold the translation value we want to animate:
/* Button.js */
constructor(props) {
  super();
  this.state = {
    animatedTranslation: new Animated.Value(0),
  };
}
Next, we define two animations, each in a separate function, that describe the animation playing when the cursor enters the button, and when the cursor exits the button:
/* Button.js */
animateIn = () => {
  Animated.timing(
    this.state.animatedTranslation,
    {
      toValue: 0.125,
      duration: 100,
      easing: Easing.in,
    }
  ).start();
}

animateOut = () => {
  Animated.timing(
    this.state.animatedTranslation,
    {
      toValue: 0,
      duration: 100,
      easing: Easing.in,
    }
  ).start();
}
To use the state.animatedTranslation value in the JSX code, we have to make the <View> component animatable by replacing it with <Animated.View>:
/* Button.js */
<Animated.View
  style={{
    alignItems: 'center',
    flexDirection: 'row',
    margin: 0.0125,
    transform: [
      {translateZ: this.state.animatedTranslation},
    ],
    width: 0.7,
  }}
>
We’ll call these functions when the onEnter and onExit events are triggered:
/* Button.js */
<VrButton
  onClick={this.onButtonClick}
  onEnter={this.animateIn}
  onExit={this.animateOut}
>
A test of our code in the browser should now show a transition of each button’s position along the z-axis when the cursor enters and leaves it.
Building and Testing the Application
Open your app in a browser that supports WebVR and navigate to your development server, using your machine’s IP address instead of localhost: for example, http://192.168.1.100:8081/vr/index.html rather than http://localhost:8081/vr/index.html. Then, tap on the View in VR button, which will open a full-screen view and start the stereoscopic rendering.
To upload your app to a server, you can run the npm script npm run bundle, which will create a new build folder within the vr directory containing the compiled files. On your web server you should have the following directory structure:
Web Server
├─ static_assets/
│
├─ index.html
├─ index.bundle.js
└─ client.bundle.js
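If you want to test the bundled output locally before uploading it, any static file server will do. Here’s a minimal sketch using Express (an assumption, since the project itself doesn’t depend on it) that serves the directory layout shown above:

/* serve.js: a hypothetical helper, not part of the React VR project */
const express = require('express');
const path = require('path');

const app = express();

// Serve index.html, the two bundle files, and static_assets/ from this folder
app.use(express.static(path.join(__dirname)));

app.listen(3000, () => {
  console.log('Gallery available at http://localhost:3000/index.html');
});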
Further Resources
This is all we had to do to create a small WebVR application with React VR. You can find the entire project code on GitHub.
React VR has a few more components we didn’t discuss in this tutorial:
- There’s a Text component for rendering text.
- Four different light components can be used to add light to a scene: AmbientLight, DirectionalLight, PointLight, and SpotLight.
- A Sound component adds spatial sound to a location in the 3D scene.
- To add videos, the Video component or the VideoPano component can be used. A special VideoControl component adds controls for video playback and volume.
- With the Model component we can add 3D models in the obj format to the application.
- A CylindricalPanel component can be used to align child elements to the inner surface of a cylinder, for example to align user interface elements.
- There are three components to create 3D primitives: a Sphere component, a Plane component, and a Box component.
Also, React VR is still under development, which is the reason it currently runs only in the Carmel Developer Preview browser. If you’re interested in learning more about React VR, here are a few interesting resources:
- React VR Docs
- React VR on GitHub
- Awesome React VR, a collection of React VR resources.
And if you’d like to dig deeper in WebVR in general, these articles might be right for you:
- “A-Frame: The Easiest Way to Bring VR to the Web Today”
- “Embedding Virtual Reality Across the Web with VR Views”
Have you worked with React VR yet? Have you made any cool projects with it? I’d love to hear about your opinions and experiences in the comments!
If you enjoyed this article and want to learn about React from the ground up, check out our React The ES6 Way course.
This article was peer reviewed by Moritz Kröger and Tim Severien. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!
Frequently Asked Questions (FAQs) about Building a Full-Sphere 3D Image Gallery with React VR
How can I make my React VR image gallery responsive?
Making your React VR image gallery responsive involves ensuring that it adjusts to different screen sizes and orientations. This can be achieved by using the ‘resize’ event listener in the window object. This event listener triggers every time the window size changes. You can then use the ‘setState’ method to adjust the dimensions of your image gallery based on the new window size.
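As a rough sketch of that pattern (the component name and markup below are hypothetical and not part of the gallery built above), it could look like this:

/* ResponsiveWrapper.js: hypothetical sketch of the resize pattern described above */
import React from 'react';

class ResponsiveWrapper extends React.Component {
  state = {
    width: window.innerWidth,
    height: window.innerHeight,
  };

  handleResize = () => {
    // Store the new dimensions whenever the window size changes
    this.setState({ width: window.innerWidth, height: window.innerHeight });
  };

  componentDidMount() {
    window.addEventListener('resize', this.handleResize);
  }

  componentWillUnmount() {
    window.removeEventListener('resize', this.handleResize);
  }

  render() {
    const { width, height } = this.state;
    // Size the wrapper to the current window and let the gallery fill it
    return <div style={{ width, height }}>{this.props.children}</div>;
  }
}

export default ResponsiveWrapper;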
Can I integrate other libraries with React VR for enhanced functionality?
Yes, you can integrate other libraries with React VR to enhance its functionality. For instance, you can use the ‘react-image-gallery’ library to create a more interactive and feature-rich image gallery. This library provides numerous customization options and supports swipe, thumbnail navigation, fullscreen, and autoplay.
How can I optimize the loading time of images in my React VR gallery?
Optimizing the loading time of images in your React VR gallery can be achieved by using techniques such as lazy loading and image compression. Lazy loading involves loading images only when they are needed, i.e., when they come into the viewport. Image compression reduces the file size of your images without significantly affecting their quality.
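For the lazy-loading part, one browser-side approach is the IntersectionObserver API. The sketch below (a hypothetical LazyImage helper, not part of the gallery above) only renders the real image once its placeholder enters the viewport:

/* LazyImage.js: hypothetical lazy-loading sketch using IntersectionObserver */
import React from 'react';

class LazyImage extends React.Component {
  state = { visible: false };

  componentDidMount() {
    // Swap in the real image once the placeholder scrolls into view
    this.observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) {
        this.setState({ visible: true });
        this.observer.disconnect();
      }
    });
    this.observer.observe(this.node);
  }

  componentWillUnmount() {
    if (this.observer) {
      this.observer.disconnect();
    }
  }

  render() {
    return (
      <div ref={(node) => { this.node = node; }}>
        {this.state.visible ? <img src={this.props.src} alt={this.props.alt} /> : null}
      </div>
    );
  }
}

export default LazyImage;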
How can I add navigation controls to my React VR image gallery?
Adding navigation controls to your React VR image gallery can be done by using the ‘react-image-gallery’ library. This library provides built-in navigation controls such as next/previous buttons and thumbnail navigation. You can customize these controls to suit your needs.
Can I build a React VR image gallery without prior experience in VR?
Yes, you can build a React VR image gallery even if you don’t have prior experience in VR. React VR is designed to be easy to use and learn, especially for developers who are already familiar with React. The key is to understand the basic concepts of VR and how they apply to React VR.
How can I add a fullscreen feature to my React VR image gallery?
Adding a fullscreen feature to your React VR image gallery can be done by using the ‘react-image-gallery’ library. This library provides a built-in fullscreen feature that you can enable by setting the ‘showFullscreenButton’ prop to true.
How can I add autoplay to my React VR image gallery?
Adding autoplay to your React VR image gallery can be done by using the ‘react-image-gallery’ library. This library provides a built-in autoplay feature that you can enable by setting the ‘autoPlay’ prop to true.
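Putting the last few answers together, a basic react-image-gallery setup with thumbnails, fullscreen, and autoplay could look roughly like this (the prop names follow the library’s documented API, but double-check them against its README; the image paths are just examples):

/* Gallery.js: rough react-image-gallery sketch */
import React from 'react';
import ImageGallery from 'react-image-gallery';

const images = [
  { original: 'reactconf_00.jpg', thumbnail: 'reactconf_00.jpg' },
  { original: 'reactconf_01.jpg', thumbnail: 'reactconf_01.jpg' },
];

const Gallery = () => (
  <ImageGallery
    items={images}
    showFullscreenButton={true} // fullscreen toggle
    autoPlay={true}             // cycle through the images automatically
  />
);

export default Gallery;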
Can I use React VR to build other types of VR experiences?
Yes, you can use React VR to build other types of VR experiences. Besides image galleries, you can use React VR to build interactive VR experiences such as games, simulations, and educational apps.
How can I handle errors in my React VR image gallery?
Handling errors in your React VR image gallery can be done by using the ‘componentDidCatch’ lifecycle method. This method catches errors that occur during rendering and allows you to display a fallback UI.
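A minimal error boundary sketch (the fallback markup below is just an example) could look like this:

/* ErrorBoundary.js: minimal error boundary sketch using componentDidCatch */
import React from 'react';

class ErrorBoundary extends React.Component {
  state = { hasError: false };

  componentDidCatch(error, info) {
    // Log the error and switch to the fallback UI
    console.error(error, info);
    this.setState({ hasError: true });
  }

  render() {
    if (this.state.hasError) {
      return <div>Something went wrong while rendering the gallery.</div>;
    }
    return this.props.children;
  }
}

export default ErrorBoundary;

Wrap the gallery in this component so a rendering error shows the fallback instead of a blank page.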
Can I use React VR to build a 3D image gallery for mobile devices?
Yes, you can use React VR to build a 3D image gallery for mobile devices. React VR is designed to be platform-agnostic, meaning it can run on various platforms including mobile devices. However, keep in mind that the performance and user experience may vary depending on the device’s capabilities.
Michaela is a front-end developer and UX designer from Berlin. She has co-founded the development studio GeilDanke. In her free time she enjoys making games, practicing yoga, surfing and knitting.