By Michaela Lehr

Building a Full-Sphere 3D Image Gallery with React VR


React VR is a JavaScript library by Facebook that reduces the effort of creating a WebVR application. You may compare React VR to A-Frame by Mozilla, but instead of writing HTML, with React VR we use JavaScript to create a WebVR scene.

React VR is built on the WebGL library three.js and the React Native framework. This means we are able to use JSX tags, React Native components like <View> or <Text>, and React Native concepts like the flexbox layout. To simplify the process of creating a WebVR scene, React VR has built-in support for 3D meshes, lights, videos, 3D shapes, and spherical images.


Today we want to use React VR to build a viewer for spherical images. For this we will use four equirectangular photos I shot at React Conf 2017 with my Theta S camera. The gallery will have four buttons to swap the images, which will work with a mouse as well as with a VR headset. You can download the equirectangular images as well as the button graphics here. Last but not least, we will take a look at how animations work in React VR by adding a simple button transition.

For development, we are using a browser like Chrome on the desktop. To check that the stereoscopic rendering for VR devices works, we are using a Samsung phone with Gear VR. In theory, any mobile browser capable of WebVR should be able to render our app stereoscopically for use with Gear VR, Google Cardboard, or even Google Daydream. But the library, as well as the API, is still under development, so support may not be reliable. Here is a good summary of browsers currently supporting WebVR features.
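Before relying on a particular device, the app can feature-detect WebVR itself. Below is a minimal sketch assuming the (now legacy) WebVR 1.1 API, in which navigator.getVRDisplays is the entry point; the supportsWebVR name is our own, and the nav parameter stands in for the browser's navigator object so the check can also run outside a browser:

```javascript
// Feature-detect the WebVR 1.1 API (a sketch; `supportsWebVR` is our own name).
// `nav` stands in for the browser's `navigator` object.
function supportsWebVR(nav) {
  return Boolean(nav && typeof nav.getVRDisplays === 'function');
}

// In the browser, you would call supportsWebVR(navigator).
console.log(supportsWebVR({ getVRDisplays: function () {} })); // true
console.log(supportsWebVR({}));                                // false
```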

Development Setup and Project Structure

Let us start by installing the React VR CLI tool. Then we create a new React VR project, with all its dependencies, in a new folder called GDVR_REACTVR_SITEPOINT_GALLERY:

npm install -g react-vr-cli
react-vr init GDVR_REACTVR_SITEPOINT_GALLERY
cd GDVR_REACTVR_SITEPOINT_GALLERY

To start a local development server, we will run an npm script and browse to http://localhost:8081/vr/ in Chrome.

npm start

If you see a black and white room with stairs, pillars, and a “hello” text plane, everything is correct.


The most important files and folders scaffolded by the React VR CLI are:

  • index.vr.js: This is the entry point of the application. At the moment, the file contains the whole source code of React VR’s default scene, which we already saw in the browser.
  • static_assets: This folder should contain all assets used in the application. We will put the equirectangular images and the button graphics in this folder.

We want our project to have three components:

  • a Canvas component, which holds the code for the full-sphere images,
  • a Button component, which creates a VR button to swap the images,
  • and a UI component, which builds a UI out of four Button components.

The three components will each have their own file, so let us create a components folder to contain these files. Then, before we start creating the Canvas component, let us remove the scaffolded example code from the index.vr.js file so it looks like this:

/* index.vr.js */
import React from 'react';
import {
  AppRegistry,
  View,
} from 'react-vr';

export default class GDVR_REACTVR_SITEPOINT_GALLERY extends React.Component {
  render() {
    return (
      <View>
      </View>
    );
  }
};

AppRegistry.registerComponent('GDVR_REACTVR_SITEPOINT_GALLERY', () => GDVR_REACTVR_SITEPOINT_GALLERY);

Adding a Spherical Image to the Scene

To add a spherical image to the scene, we will create a new file Canvas.js in the components folder:

/* Canvas.js */
import React from 'react';
import {
  asset,
  Pano,
} from 'react-vr';

class Canvas extends React.Component {

  constructor(props) {
    super(props);

    this.state = {
      src: this.props.src,
    }
  }

  render() {
    return (
      <Pano source={asset(this.state.src)}/>
    );
  }
};

export default Canvas;

At the top of the file, we import the dependencies. Then we declare our Canvas component and define how it renders using JSX syntax.

If you want to learn more about JSX, you might want to read “Getting Started with React and JSX”.

A look at the JSX code reveals that the Canvas component returns only one component: React VR’s <Pano> component. It receives a single prop, source, which uses React VR’s asset function to load the image from the static_assets folder. The argument refers to the state we initialized in the constructor.

In our case, we do not want to define the path in the Canvas component itself, but to define all image paths in the index.vr.js file. This is why state.src is initialized from the component’s props.

Check out the ReactJS documentation for React.Component if you would like to know more about state and props.
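In short: props are handed to a component by its parent and treated as read-only, while state is owned by the component and replaced via setState. The plain-JavaScript sketch below illustrates only this idea; MiniComponent is a made-up stand-in, not React’s implementation:

```javascript
// Illustrative mini "component" (not React): props come from the parent
// and stay untouched; state is seeded from props and owned locally.
class MiniComponent {
  constructor(props) {
    this.props = props;               // set by the parent, read-only by convention
    this.state = { src: props.src };  // initial state derived from props
  }
  setState(partial) {
    // Merge the partial update into the current state, as React does.
    this.state = Object.assign({}, this.state, partial);
  }
}

const c = new MiniComponent({ src: 'reactconf_00.jpg' });
c.setState({ src: 'reactconf_01.jpg' });
console.log(c.state.src); // "reactconf_01.jpg" (state changed)
console.log(c.props.src); // "reactconf_00.jpg" (props untouched)
```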

Let us continue by modifying the index.vr.js file to use the Canvas component and render it to the scene:

/* index.vr.js */
import React from 'react';
import {
  AppRegistry,
  View,
} from 'react-vr';
import Canvas from './components/Canvas';

export default class GDVR_REACTVR_SITEPOINT_GALLERY extends React.Component {

  constructor() {
    super();

    this.state = {
      src: 'reactconf_00.jpg',
    };
  }

  render() {
    return (
      <View>
        <Canvas
          src={this.state.src}
        />
      </View>
    );
  }
};

AppRegistry.registerComponent('GDVR_REACTVR_SITEPOINT_GALLERY', () => GDVR_REACTVR_SITEPOINT_GALLERY);

Besides the React VR dependencies we already use, we need to import our custom Canvas component:

/* index.vr.js */
import Canvas from './components/Canvas';

Then, we add the <Canvas> component as a child of the <View> component. We use src as the component’s prop because we refer to it in the Canvas component. A look in the browser should now show the panoramic image, and we should already be able to interact with it.


Create a UI Component to Hold Four Buttons

What we want to do now is to create four buttons that a user can trigger to swap the images. Thus we will add two new components: a UI component, and its child component, a Button component. Let’s start with the Button component:

/* Button.js */
import React from 'react';
import {
  asset,
  Image,
  View,
  VrButton,
} from 'react-vr';

class Button extends React.Component {

  onButtonClick = () => {
    this.props.onClick();
  }

  render () {
    return (
      <View
        style={{
          alignItems: 'center',
          flexDirection: 'row',
          margin: 0.0125,
          width: 0.7,
        }}
      >
        <VrButton
          onClick={this.onButtonClick}
        >
          <Image
            style={{
              width: 0.7,
              height: 0.7,
            }}
            source={asset(this.props.src)}
          >
          </Image>
        </VrButton>
      </View>
    );
  }
};

export default Button;

To build the button, we use React VR’s <VrButton> component, which we import from react-vr. Since the <VrButton> component itself has no appearance, we also use an <Image> component to give each button its asset image. As before, we use a prop to define the image source. Another feature we use twice in this component is the style prop, to add layout values to each button and its image. The <VrButton> also makes use of an event listener, onClick.

To add four Button components to our scene, we will use the UI parent component, which we will add as a child in index.vr.js afterward. Before writing the UI component, let’s create a config object defining the relation between the equirectangular images, the button images, and the buttons themselves. To do this, we declare a constant right after the import statements in the index.vr.js file:

/* index.vr.js */
const Config = [
  {
    key: 0,
    imageSrc: 'reactconf_00.jpg',
    buttonImageSrc: 'button-00.png',
  },
  {
    key: 1,
    imageSrc: 'reactconf_01.jpg',
    buttonImageSrc: 'button-01.png',
  },
  {
    key: 2,
    imageSrc: 'reactconf_02.jpg',
    buttonImageSrc: 'button-02.png',
  },
  {
    key: 3,
    imageSrc: 'reactconf_03.jpg',
    buttonImageSrc: 'button-03.png',
  }
];

The UI component will use the values defined in the config to handle the gaze and click events:

/* UI.js */
import React from 'react';
import {
  View,
} from 'react-vr';
import Button from './Button';

class UI extends React.Component {

  constructor(props) {
    super(props);

    this.buttons = this.props.buttonConfig;
  }

  render () {
    const buttons = this.buttons.map((button) =>
      <Button
        key={button.key}
        onClick={()=>{
          this.props.onClick(button.key);
        }}
        src={button.buttonImageSrc}
      />
      );

    return (
      <View
        style={{
          flexDirection: 'row',
          flexWrap: 'wrap',
          transform: [
            {rotateX: -12},
            {translate: [-1.5, 0, -3]},
          ],
          width: 3,
        }}
      >
        {buttons}
      </View>
    );
  }
};

export default UI;

To set the source of an image, we use the config values we already added to the index.vr.js file. We also use the onClick prop to handle the click event, which we will add to the index.vr.js file in a few moments, too. Then, we create as many buttons as the button config object defines, to add them later to the JSX code that will be rendered to the scene:

/* UI.js */
const buttons = this.buttons.map((button) =>
  <Button
    key={button.key}
    onClick={()=>{
      this.props.onClick(button.key);
    }}
    src={button.buttonImageSrc}
  />
);

Now, all we have to do is add the UI component to the scene defined in the index.vr.js file. To do this, we import the UI component right after importing the Canvas component:

/* index.vr.js */
import UI from './components/UI';

Next, we add the <UI> component to the scene:

/* index.vr.js */
<View>
  <Canvas
    src={this.state.src}
  />
  <UI
    buttonConfig={Config}
    onClick={(key)=>{
      this.setState({src: Config[key].imageSrc});
    }}
  />
</View>

When checking this code in the browser, you’ll notice that the click does not trigger an image source swap at the moment. To listen for updated props, we will have to add another function to the Canvas component right after the constructor function.

If you are interested in the lifecycle of a React component, you might want to read about React.Component in the React docs.

/* Canvas.js */
componentWillReceiveProps(nextProps) {
  this.setState({src: nextProps.src});
}

A test in the browser should now be successful and a click on a button image should change the spherical image.
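Stripped of JSX, the gallery’s whole data flow is small enough to model in plain JavaScript: a click delivers a key, the key selects a config entry, and that entry’s imageSrc becomes the new src state that <Pano> renders. The nextState helper below is an illustrative stand-in, not part of React VR:

```javascript
// Two entries of the config from index.vr.js, shortened for the example.
const Config = [
  { key: 0, imageSrc: 'reactconf_00.jpg', buttonImageSrc: 'button-00.png' },
  { key: 1, imageSrc: 'reactconf_01.jpg', buttonImageSrc: 'button-01.png' },
];

// Mirrors the onClick handler: the clicked button's key selects the
// next image source, which becomes the new state.
function nextState(config, key) {
  return { src: config[key].imageSrc };
}

let state = { src: Config[0].imageSrc };
state = nextState(Config, 1); // the user clicked button 1
console.log(state.src);       // "reactconf_01.jpg"
```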



Add Animations for Button State Transitions

To make the buttons more responsive to user interactions, we want to add a hover state and transitions between the default idle state and the hover state. To do this, we will use the Animated library and Easing functions, and then write two functions, one for each transition: animateIn and animateOut:

/* Button.js */
import React from 'react';
import {
  Animated,
  asset,
  Image,
  View,
  VrButton,
} from 'react-vr';

const Easing = require('Easing');

class Button extends React.Component {

  constructor(props) {
    super(props);

    this.state = {
      animatedTranslation: new Animated.Value(0),
    };
  }

  animateIn = () => {
    Animated.timing(
      this.state.animatedTranslation,
      {
        toValue: 0.125,
        duration: 100,
        easing: Easing.in,
      }
    ).start();
  }

  animateOut = () => {
    Animated.timing(
      this.state.animatedTranslation,
      {
        toValue: 0,
        duration: 100,
        easing: Easing.in,
      }
    ).start();
  }

  onButtonClick = () => {
    this.props.onClick();
  }

  render () {
    return (
      <Animated.View
        style={{
          alignItems: 'center',
          flexDirection: 'row',
          margin: 0.0125,
          transform: [
            {translateZ: this.state.animatedTranslation},
          ],
          width: 0.7,
        }}
      >
        <VrButton
          onClick={this.onButtonClick}
          onEnter={this.animateIn}
          onExit={this.animateOut}
        >
          <Image
            style={{
              width: 0.7,
              height: 0.7,
            }}
            source={asset(this.props.src)}
          >
          </Image>
        </VrButton>
      </Animated.View>
    );
  }
};

export default Button;

After adding the dependencies, we define a new state to hold the translation value we want to animate:

/* Button.js */
constructor(props) {
  super(props);

  this.state = {
    animatedTranslation: new Animated.Value(0),
  };
}

Next, we define two animations, each in a separate function, that describe the animation playing when the cursor enters the button, and when the cursor exits the button:

/* Button.js */
animateIn = () => {
  Animated.timing(
    this.state.animatedTranslation,
    {
      toValue: 0.125,
      duration: 100,
      easing: Easing.in,
    }
  ).start();
}

animateOut = () => {
  Animated.timing(
    this.state.animatedTranslation,
    {
      toValue: 0,
      duration: 100,
      easing: Easing.in,
    }
  ).start();
}

To use the state.animatedTranslation value in the JSX code, we have to make the <View> component animatable by replacing it with <Animated.View>:

/* Button.js */
<Animated.View
  style={{
    alignItems: 'center',
    flexDirection: 'row',
    margin: 0.0125,
    transform: [
      {translateZ: this.state.animatedTranslation},
    ],
    width: 0.7,
  }}
>

We call these functions when the <VrButton>’s onEnter and onExit event listeners are triggered:

/* Button.js */
<VrButton
  onClick={this.onButtonClick}
  onEnter={this.animateIn}
  onExit={this.animateOut}
>

A test of our code in the browser should now show each button translating along the z-axis when the cursor enters and exits it:
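Conceptually, Animated.timing drives a value from its current position to toValue over duration milliseconds, shaped by an easing function. The sketch below is a rough plain-JavaScript model of that idea, not React VR’s actual implementation, and the quadratic easeIn is just one common easing curve, not necessarily the one Easing.in uses:

```javascript
// A rough model of what Animated.timing does (not React VR's implementation):
// it drives a value from `from` to `toValue` over `duration` milliseconds,
// shaped by an easing function.
function easeIn(t) {
  return t * t; // quadratic ease-in: one common curve, chosen for illustration
}

function valueAt(from, toValue, duration, elapsed, easing) {
  const t = Math.min(Math.max(elapsed / duration, 0), 1); // normalized progress
  return from + (toValue - from) * easing(t);
}

// The animateIn transition from the Button component: 0 -> 0.125 over 100 ms.
console.log(valueAt(0, 0.125, 100, 0, easeIn));   // 0 (start)
console.log(valueAt(0, 0.125, 100, 100, easeIn)); // 0.125 (end)
```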


Building and Testing the Application

Open your app in a browser that supports WebVR and navigate to your development server, using not http://localhost:8081/vr/index.html but your machine’s IP address, for example http://192.168.1.100:8081/vr/index.html. Then, tap on the View in VR button, which will open a full-screen view and start the stereoscopic rendering.


To upload your app to a server, you can run the npm script npm run bundle, which will create a new folder build within the vr directory with the compiled files. On your web server you should have the following directory structure:

Web Server
├─ static_assets/
│
├─ index.html
├─ index.bundle.js
└─ client.bundle.js

Further Resources

This is all we had to do to create a small WebVR application with React VR. You can find the entire project code on GitHub.

React VR has a few more components we did not discuss in this tutorial:

  • There is a Text component, to render text.
  • Four different light components can be used to add light to a scene: AmbientLight, DirectionalLight, PointLight, and Spotlight.
  • A Sound component adds spatial sound to a location in the 3D scene.
  • To add videos, the Video component or the VideoPano component can be used. A special VideoControl component controls the video playback and its volume.
  • With the Model component we can add 3D models in the obj format to the application.
  • A CylindricalPanel component can be used to align child elements to the inner surface of a cylinder, for example, to align user interface elements.
  • There are three components to create 3D primitives: a sphere component, a plane component and a box component.

Also, React VR is still under development, which is also the reason why it currently runs only in the Carmel Developer Preview browser. If you are interested in learning more about React VR, the official documentation and the project’s GitHub repository are good places to start.

And if you would like to dig deeper into WebVR in general, SitePoint’s other WebVR articles are a good next step.

Have you worked with React VR yet? Have you made any cool projects with it? I’d love to hear about your opinions and experiences in the comments!

If you enjoyed this article and want to learn about React from the ground up, check out our course: React The ES6 Way

This article was peer reviewed by Moritz Kröger and Tim Severien. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!
