6 JavaScript Projects
Notice of Rights
Notice of Liability
Trademark Notice
About SitePoint
Preface
Who Should Read This Book?
Conventions Used
Development Setup and Project Structure
Adding a Spherical Image to the Scene
Create a UI Component to Hold Four Buttons
Add Animations for Button State Transitions
Building and Testing the Application
Further Resources
What is SimpleWebRTC
Building the WebRTC Video Chat App
Dependencies
Project Setup
Markup
Templates
Main App Script
Chat Room Script
Remote Video Camera
Deployment
Conclusion
Prerequisites
Building the Project
Project Directories and Dependencies
Application Base
Front-end Skeleton Templates
Client-side Routing
Latest Currency Rates
Exchange Conversion
Historical Currency Rates
Summary
So, What Is HyperApp?
Getting Started
Hello Hyperapp!
Components
State
Actions
Hyperlist
Initial State and View
Adding a Task
Mark a Task as Completed
Delete a Task
Delete All Completed Tasks
That’s All, Folks!
Basic Setup
Folder Structure
Some Basic HTML
ES6 Modules
Add Some Style
Babel
Parcel
npm Scripts
Deploying to GitHub Pages
Workflow
That’s All, Folks!
The Project
Sketching
Borrowing Code
Walking Through Our Code
Adapting Code
Making it Dynamic
Making it Interactive

Build a Full-Sphere 3D Image Gallery with React VR

React VR is a JavaScript library by Facebook that reduces the effort of creating a WebVR application. It's comparable to Mozilla's A-Frame, but instead of writing HTML, with React VR we use JavaScript to create a WebVR scene.

React VR is built on the WebGL library three.js and the React Native framework. This means that we're able to use JSX tags, React Native components like <View> or <Text>, and React Native concepts like the flexbox layout. To simplify the process of creating a WebVR scene, React VR has built-in support for 3D meshes, lights, videos, 3D shapes, and spherical images.

The finished app

In this chapter, we want to use React VR to build a viewer for spherical images. For this, we’ll use four equirectangular photos, which I shot at React Conf 2017 with my Theta S camera. The gallery will have four buttons to swap the images, which will work with the mouse and/or VR headset. You can download the equirectangular images as well as the button graphics here. Last but not least, we’ll take a look at how animations work with React VR by adding a simple button transition.

For development, we're using a browser like Chrome on the desktop. To check that the stereoscopic rendering for VR devices works, we're using a Samsung phone with Gear VR. In theory, any mobile browser capable of WebVR should be able to render our app stereoscopically for use with Gear VR, Google Cardboard, or even Google Daydream. But both the library and the API are still under development, so support may not be reliable. Here's a good summary of browsers currently supporting WebVR features.

Development Setup and Project Structure

Let’s start by installing the React VR CLI tool. Then create a new React VR project with all its dependencies in a new folder called GDVR_REACTVR_SITEPOINT_GALLERY:

Code snippet

npm install -g react-vr-cli
react-vr init GDVR_REACTVR_SITEPOINT_GALLERY
cd GDVR_REACTVR_SITEPOINT_GALLERY

To start a local development server, we’ll run an npm script and browse to http://localhost:8081/vr/ in Chrome.

Code snippet

npm start

If you see a black and white room with stairs, pillars, and a “hello” text plane, everything’s correct.

a black and white room with stairs, pillars, and a 'hello' text plane

The most important files and folders scaffolded by the React VR CLI are:

  • index.vr.js. This is the entry point of the application. Currently, the file contains the whole source code of React VR’s default scene, as we already saw in the browser.
  • static_assets. This folder should contain all assets used in the application. We’ll put the equirectangular images and the button graphics in this folder, as shown below.
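If you're working with the downloadable assets, the static_assets folder would end up looking roughly like this (the file names match the ones used in the Config object later in this chapter; treat them as an assumption if you're using your own images):

static_assets/
├─ button-00.png
├─ button-01.png
├─ button-02.png
├─ button-03.png
├─ reactconf_00.jpg
├─ reactconf_01.jpg
├─ reactconf_02.jpg
└─ reactconf_03.jpg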

We want our project to have three components:

  • a Canvas component, which holds the code for the full-sphere images
  • a Button component, which creates a VR button to swap the images
  • a UI component, which builds a UI out of four Button components.

The three components will each have their own file, so let’s create a components folder to contain these files. Then, before we start creating the Canvas component, let’s remove the scaffolded example code from the index.vr.js file so it looks like this:

Code snippet

/* index.vr.js */
import React from 'react';
import {
  AppRegistry,
  View,
} from 'react-vr';

export default class GDVR_REACTVR_SITEPOINT_GALLERY extends React.Component {
  render() {
    return (
      <View>
      </View>
    );
  }
};

AppRegistry.registerComponent('GDVR_REACTVR_SITEPOINT_GALLERY', () => GDVR_REACTVR_SITEPOINT_GALLERY);

Adding a Spherical Image to the Scene

To add a spherical image to the scene, we’ll create a new file Canvas.js in the components folder:

Code snippet

/* Canvas.js */
import React from 'react';
import {
  asset,
  Pano,
} from 'react-vr';

class Canvas extends React.Component {

  constructor(props) {
    super(props);

    this.state = {
      src: this.props.src,
    }
  }

  render() {
    return (
      <Pano source={asset(this.state.src)}/>
    );
  }
};

export default Canvas;

In the first six lines of code, we import the dependencies. Then we declare our Canvas component and define how it renders by using the JSX syntax.

More on JSX

If you want to learn more about JSX, I recommend you check out "Getting Started with React and JSX".

A look at the JSX code reveals that the Canvas component returns only one component, the React VR <Pano> component. It has a single prop, source, which uses the asset function to load the image from the static_assets folder. The argument refers to a state value, which we initialized in the constructor.

In our case, we don’t want to define the path in the Canvas component itself, but use the index.vr.js file to define all image paths. This is why the state.src object refers to the component’s props object.

More on State and Props

Check out the ReactJS documentation for React.Component if you would like to know more about state and props.

Let’s continue by modifying the index.vr.js file to use the Canvas component and render it to the scene:

Code snippet

/* index.vr.js */
import React from 'react';
import {
  AppRegistry,
  View,
} from 'react-vr';
import Canvas from './components/Canvas';

export default class GDVR_REACTVR_SITEPOINT_GALLERY extends React.Component {

  constructor() {
    super();

    this.state = {
      src: 'reactconf_00.jpg',
    };
  }

  render() {
    return (
      <View>
        <Canvas
          src={this.state.src}
        />
      </View>
    );
  }
};

AppRegistry.registerComponent('GDVR_REACTVR_SITEPOINT_GALLERY', () => GDVR_REACTVR_SITEPOINT_GALLERY);

Besides the React VR dependencies we've already used, we need to import our custom Canvas component; the application class itself is declared right after the imports:

Code snippet

/* index.vr.js */
import Canvas from './components/Canvas';

Then, we add the <Canvas> component as a child component of the <View> component. We’re using src as the component’s prop because we’re referring to it in the Canvas component. A look in the browser should now show the panoramic image, and we should already be able to interact with it.

A Panoramic image

Create a UI Component to Hold Four Buttons

What we want to do now is to create four buttons that a user can trigger to swap the images. So we’ll add two new components: a UI component, and its child component, a Button component. Let’s start with the Button component:

Code snippet

/* Button.js */
import React from 'react';
import {
  asset,
  Image,
  View,
  VrButton,
} from 'react-vr';

class Button extends React.Component {

  onButtonClick = () => {
    this.props.onClick();
  }

  render () {
    return (
      <View
        style={{
          alignItems: 'center',
          flexDirection: 'row',
          margin: 0.0125,
          width: 0.7,
        }}
      >
        <VrButton
          onClick={this.onButtonClick}
        >
          <Image
            style={{
              width: 0.7,
              height: 0.7,
            }}
            source={asset(this.props.src)}
          >
          </Image>
        </VrButton>
      </View>
    );
  }
};

export default Button;

To build the button, we’re using React VR’s <VrButton> component, which we import from react-vr at the top of the file. We’re also using an <Image> component to add our asset images to each button, since the <VrButton> component itself has no appearance. Like before, we’re using a prop to define the image source. Another feature we’re using twice in this component is the style prop, to add layout values to each button and its image. The <VrButton> also makes use of an event listener, onClick.
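To make the Button component's interface concrete, here's a hypothetical standalone usage; in our app the UI component (coming up next) renders the buttons from a config, but it passes exactly these two props:

/* hypothetical standalone usage of Button; src and onClick are its only props */
<Button
  src="button-00.png"
  onClick={() => console.log('button 0 was clicked')}
/>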

To add four Button components to our scene, we’ll use the UI parent component, which we’ll add as a child in index.vr.js afterward. Before writing the UI component, let’s create a config object defining the relation between the equirectangular images, the button images, and the buttons themselves. To do this, we declare a constant right after the import statements in the index.vr.js file:

Code snippet

/* index.vr.js */
const Config = [
  {
    key: 0,
    imageSrc: 'reactconf_00.jpg',
    buttonImageSrc: 'button-00.png',
  },
  {
    key: 1,
    imageSrc: 'reactconf_01.jpg',
    buttonImageSrc: 'button-01.png',
  },
  {
    key: 2,
    imageSrc: 'reactconf_02.jpg',
    buttonImageSrc: 'button-02.png',
  },
  {
    key: 3,
    imageSrc: 'reactconf_03.jpg',
    buttonImageSrc: 'button-03.png',
  }
];

The UI component will use the values defined in the config to handle the gaze and click events:

Code snippet

/* UI.js */
import React from 'react';
import {
  View,
} from 'react-vr';
import Button from './Button';

class UI extends React.Component {

  constructor(props) {
    super(props);

    this.buttons = this.props.buttonConfig;
  }

  render () {
    const buttons = this.buttons.map((button) =>
      <Button
        key={button.key}
        onClick={()=>{
          this.props.onClick(button.key);
        }}
        src={button.buttonImageSrc}
      />
    );

    return (
      <View
        style={{
          flexDirection: 'row',
          flexWrap: 'wrap',
          transform: [
            {rotateX: -12},
            {translate: [-1.5, 0, -3]},
          ],
          width: 3,
        }}
      >
        {buttons}
      </View>
    );
  }
};

export default UI;

To set the source of an image, we’re using the config values we already added to the index.vr.js file. We’re also using the onClick prop to handle the click event; we’ll wire up its handler in the index.vr.js file in a moment. Then we create as many buttons as are defined in the button config object, so we can add them to the JSX code that’s rendered to the scene:

Code snippet

/* UI.js */
const buttons = this.buttons.map((button) =>
  <Button
    key={button.key}
    onClick={()=>{
      this.props.onClick(button.key);
    }}
    src={button.buttonImageSrc}
  />
);

Now, all we have to do is add the UI component to the scene defined in the index.vr.js file. So we import the UI component right after importing the Canvas component:

Code snippet

/* index.vr.js */
import UI from './components/UI';

Next, we add the <UI> component to the scene, alongside the existing <Canvas> component:

Code snippet

/* index.vr.js */
<View>
  <Canvas
    src={this.state.src}
  />
  <UI
    buttonConfig={Config}
    onClick={(key)=>{
      this.setState({src: Config[key].imageSrc});
    }}
  />
</View>

When checking this code in the browser, you’ll notice that the click doesn’t trigger an image source swap at the moment. To listen for updated props, we’ll have to add another function to the Canvas component right after the constructor function.

Component Lifecycle

If you’re interested in the lifecycle of a React component, you might want to read about React.Component in the React docs.

Code snippet

/* Canvas.js */
componentWillReceiveProps(nextProps) {
  this.setState({src: nextProps.src});
}

A test in the browser should now be successful, and a click on a button image should change the spherical image.

Clicking the button changes the image
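One caveat: componentWillReceiveProps is deprecated as of React 16.3. The React version bundled with React VR at the time of writing still supports it, but if yours doesn't, a roughly equivalent sketch (not part of the original tutorial) using the static getDerivedStateFromProps lifecycle would be:

/* Canvas.js — alternative for React 16.3+, where componentWillReceiveProps
   is deprecated; keep whichever lifecycle your React version supports */
static getDerivedStateFromProps(nextProps, prevState) {
  // only update state when the parent passes down a new image path
  if (nextProps.src !== prevState.src) {
    return { src: nextProps.src };
  }
  return null; // no state change needed
}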

Add Animations for Button State Transitions

To make the buttons more responsive to user interactions, we want to add some hover states and transitions between the default idle state and the hover state. To do this, we’ll use the Animated library and Easing functions, and then write two functions, one for each transition: animateIn and animateOut:

Code snippet

/* Button.js */
import React from 'react';
import {
  Animated,
  asset,
  Image,
  View,
  VrButton,
} from 'react-vr';

const Easing = require('Easing');

class Button extends React.Component {

  constructor(props) {
    super();

    this.state = {
      animatedTranslation: new Animated.Value(0),
    };
  }

  animateIn = () => {
    Animated.timing(
      this.state.animatedTranslation,
      {
        toValue: 0.125,
        duration: 100,
        easing: Easing.in,
      }
    ).start();
  }

  animateOut = () => {
    Animated.timing(
      this.state.animatedTranslation,
      {
        toValue: 0,
        duration: 100,
        easing: Easing.in,
      }
    ).start();
  }

  onButtonClick = () => {
    this.props.onClick();
  }

  render () {
    return (
      <Animated.View
        style={{
          alignItems: 'center',
          flexDirection: 'row',
          margin: 0.0125,
          transform: [
            {translateZ: this.state.animatedTranslation},
          ],
          width: 0.7,
        }}
      >
        <VrButton
          onClick={this.onButtonClick}
          onEnter={this.animateIn}
          onExit={this.animateOut}
        >
          <Image
            style={{
              width: 0.7,
              height: 0.7,
            }}
            source={asset(this.props.src)}
          >
          </Image>
        </VrButton>
      </Animated.View>
    );
  }
};

export default Button;

After adding the dependencies, we define a new state to hold the translation value we want to animate:

Code snippet

/* Button.js */
constructor(props) {
  super();

  this.state = {
    animatedTranslation: new Animated.Value(0),
  };
}

Next, we define two animations, each in its own function, describing the animation that plays when the cursor enters the button and when it exits the button:

Code snippet

/* Button.js */
animateIn = () => {
  Animated.timing(
    this.state.animatedTranslation,
    {
      toValue: 0.125,
      duration: 100,
      easing: Easing.in,
    }
  ).start();
}

animateOut = () => {
  Animated.timing(
    this.state.animatedTranslation,
    {
      toValue: 0,
      duration: 100,
      easing: Easing.in,
    }
  ).start();
}

To use the state.animatedTranslation value in the JSX code, we have to make the <View> component animatable by replacing it with <Animated.View>:

Code snippet

/* Button.js */
<Animated.View
  style={{
    alignItems: 'center',
    flexDirection: 'row',
    margin: 0.0125,
    transform: [
      {translateZ: this.state.animatedTranslation},
    ],
    width: 0.7,
  }}
>

We’ll call these functions when the <VrButton>’s onEnter and onExit events are triggered:

Code snippet

/* Button.js */
<VrButton
  onClick={this.onButtonClick}
  onEnter={this.animateIn}
  onExit={this.animateOut}
>

A test of our code in the browser should now show each button translating along the z-axis as the cursor enters and leaves it:

Transitions on the z-axis
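The Animated library also offers spring-based animations. If you'd rather have the buttons overshoot slightly than ease in, a sketch of animateIn using Animated.spring (an assumption based on the React Native Animated API that React VR builds on) could look like this:

/* Button.js — optional variation: a spring instead of timed easing.
   Assumes React Native's Animated.spring API, on which React VR's
   Animated library is based. */
animateIn = () => {
  Animated.spring(
    this.state.animatedTranslation,
    {
      toValue: 0.125, // same target translation as before
      friction: 4,    // lower friction means a bit more bounce
    }
  ).start();
}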

Building and Testing the Application

Open your app in a browser that supports WebVR and navigate to your development server using your machine’s IP address rather than localhost: for example, http://192.168.1.100:8081/vr/index.html instead of http://localhost:8081/vr/index.html. Then tap the View in VR button, which opens a full-screen view and starts the stereoscopic rendering.

Stereoscopic rendering

To upload your app to a server, you can run the npm script npm run bundle, which will create a new build folder within the vr directory with the compiled files. On your web server you should have the following directory structure:

Code snippet

Web Server
├─ static_assets/
├─ index.html
├─ index.bundle.js
└─ client.bundle.js
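The index.html on your server loads client.bundle.js and then boots the application by pointing it at the compiled app bundle. The initialization call looks roughly like this (adapted from the vr/client.js file that react-vr init generates; the exact bundle path and mount element are assumptions you should adjust to your setup):

// mount the compiled React VR bundle into the page; compare with the
// vr/client.js and vr/index.html files generated by `react-vr init`
ReactVR.init(
  './index.bundle.js', // the bundle produced by `npm run bundle`
  document.body        // the DOM element the app renders into
);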

Further Resources

Full Project Code

This is all we had to do to create a small WebVR application with React VR. You can find the entire project code on GitHub.

React VR has a few more components that we didn’t discuss in this tutorial:

  • There’s a Text component for rendering text.
  • Four different light components can be used to add light to a scene: AmbientLight, DirectionalLight, PointLight, and Spotlight.
  • A Sound component adds spatial sound to a location in the 3D scene.
  • To add videos, the Video component or the VideoPano component can be used. A special VideoControl component adds controls for video playback and volume.
  • With the Model component we can add 3D models in the obj format to the application.
  • A CylindricalPanel component can be used to align child elements to the inner surface of a cylinder — for example, to align user interface elements.
  • There are three components for creating 3D primitives: a Sphere component, a Plane component, and a Box component.

Also, React VR is still under development, which is why it currently runs only in the Carmel Developer Preview browser. If you’re interested in learning more about React VR, here are a few interesting resources:

  • React VR Docs
  • React VR on GitHub
  • Awesome React VR, a collection of React VR resources.

And if you’d like to dig deeper in WebVR in general, these articles might be right for you:

  • “A-Frame: The Easiest Way to Bring VR to the Web Today”
  • “Embedding Virtual Reality Across the Web with VR Views”