
Getting Started with React, GraphQL and Relay (Part 1 of 2)

By Eric Greene

This article is part of a web development series from Microsoft. Thank you for supporting the partners who make SitePoint possible.


Unlike frameworks such as AngularJS and Ember, React is a library that provides a limited number of functions, and is largely agnostic with respect to the libraries that are used for other aspects of the application. Fundamentally, React provides UI component functionality: a mechanism to create components, manage data within components, render components, and compose components to build larger components. React can be used regardless of where data comes from, how it’s retrieved, or how it’s managed as part of a larger application. To address these concerns, other libraries and patterns are used alongside it. One common pattern used with React applications is Flux.

Flux was developed as an alternative to the MVC (model-view-controller) pattern for managing the flow of data in response to actions. Unlike MVC’s bi-directional flow of data, Flux relies upon a unidirectional flow of data between the various parts of the system. Flux was created by Facebook because their developers found it difficult to reason about the movement of data within massive applications which employed MVC.

Instead of MVC’s multiple circuits of data flow, Flux uses a single circuit of data flow. The flow runs in a continuous cycle of Action -> Dispatcher -> Store (or multiple stores) -> Component (aka View) -> Action.

The action represents some kind of event which enters the system. It could be user-generated, such as a button click requesting a refresh of data, or it could be a message received over a web socket. The action is passed to the dispatcher, which sends it to all of the stores. The dispatcher is really nothing more than a forwarding mechanism. It understands nothing about the action, the data being passed by the action, or the responsibilities or interests of each store concerning the action. It simply dispatches the action to all stores, and each store chooses whether or not it should process the action. The stores are responsible for maintaining a local copy of the data, enforcing business rules, and notifying components of new data so they can be refreshed. The stores can be thought of as maintaining the application state, and the overall Flux process is essentially a state machine. React components themselves follow a state machine pattern; in a sense, Flux applies that same pattern to the architecture of the entire application.
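The cycle can be illustrated with a minimal sketch in plain JavaScript (an illustration only; the dispatcher, store, and action names here are hypothetical, not Facebook's actual implementation):

```javascript
// A minimal Flux-style dispatcher: it forwards every action to all
// registered stores without inspecting the action at all.
const dispatcher = {
  callbacks: [],
  register(callback) { this.callbacks.push(callback); },
  dispatch(action) { this.callbacks.forEach(cb => cb(action)); }
};

// A store keeps a local copy of the data and decides for itself
// which actions it cares about.
const widgetStore = {
  widgets: [],
  listeners: [],
  subscribe(listener) { this.listeners.push(listener); },
  handleAction(action) {
    if (action.type === 'ADD_WIDGET') {
      this.widgets = this.widgets.concat(action.widget);
      this.listeners.forEach(listener => listener()); // notify components
    }
  }
};

dispatcher.register(action => widgetStore.handleAction(action));

// A component (view) would re-read the store when notified, and user
// interaction would dispatch a new action, restarting the cycle.
dispatcher.dispatch({ type: 'ADD_WIDGET', widget: { name: 'Gear' } });
```

Even this toy version shows how much of the machinery (registration, forwarding, notification) the developer must write by hand.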

While Flux is a pattern for solving the problem of the data flow, it does not itself provide an implementation of that pattern. To use the Flux pattern, developers are forced to create all of the components of the system, except for the dispatcher which is provided by Facebook. Creating a Flux system is relatively easy, but it requires a lot of boilerplate code. In this regard, it suffers from the same problem as Backbone.js. It’s easy to get up and running, but lots of coding is ultimately required.

Evolution of Flux

As developers worked with Flux, they began to find ways to refactor boilerplate code into reusable libraries. In addition, they identified subpatterns of Flux which made reasoning about an application’s data flow easier, and reduced the complexity of the application without sacrificing the general benefits of Flux. These subpatterns included reducing the application from many stores to one, combining the dispatcher and store into the same component (which makes sense when there is only one store), and wrapping components in a container which handles action creation, dispatching, and store management in a black box. The Flux derivatives were not purely Flux, but they retained its essential elements without the usual disadvantage of Flux, namely lots of boilerplate code.

While there are many derivatives of Flux, Redux is one of the more popular ones. Redux is built purely around the concept of a state machine and immutable data, where actions are handled by a single dispatcher-store which uses reducer functions (themselves composable) to transition from one state to another. It greatly simplifies Flux, while introducing many aspects of functional programming which, once mastered, make coding React applications much easier.
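A reducer in this style is simply a pure function that takes the previous state and an action, and returns the next state without mutating anything. Here is a minimal sketch (the action types and widget shape are made up for illustration):

```javascript
// A Redux-style reducer: pure, immutable, and composable.
// It never mutates the previous state; it returns a new one.
function widgetsReducer(state = [], action) {
  switch (action.type) {
    case 'ADD_WIDGET':
      return [...state, action.widget]; // new array; old state untouched
    case 'REMOVE_WIDGET':
      return state.filter(widget => widget.id !== action.id);
    default:
      return state; // unknown actions leave the state unchanged
  }
}

// Each dispatch produces an explicit transition to a new state.
const s0 = widgetsReducer(undefined, { type: 'INIT' });
const s1 = widgetsReducer(s0, { type: 'ADD_WIDGET', widget: { id: 1, name: 'Gear' } });
const s2 = widgetsReducer(s1, { type: 'REMOVE_WIDGET', id: 1 });
```

Because each state is a new immutable value, time-travel debugging and change detection become straightforward.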

Relay is another Flux derivative from Facebook which is growing in popularity. For information about how Facebook is using Relay, and their thoughts on its relation to Flux, see the official Relay documentation.

Relay, the All-Inclusive Solution

While Redux simplified the management of the application state, it is agnostic regarding the location of the actual data. It can work with any data storage system, once again leading to more boilerplate code (albeit less than with Flux). Now comes Relay (another Facebook creation; they have been busy in the JavaScript space with React, Relay, Immutable.js, GraphQL, Jest, Flow, etc.), which seeks to refactor away the boilerplate code for data access as well, and introduces a new kind of data service: GraphQL. GraphQL differs from traditional REST services in that it views data as a graph, and seeks to represent that graph in a hierarchical form. This allows the consumer of the data to specify exactly the data it needs, whereas a traditional REST service serves up a fixed set of data irrespective of the consumer’s needs.
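For example, a GraphQL consumer might request only the fields it needs from a hypothetical widget service (the type and field names here are illustrative):

```graphql
# The client names exactly the fields it wants; a REST endpoint
# would return its fixed representation instead.
{
  viewer {
    widgets {
      name
      color
    }
  }
}
```

The server responds with a JSON object mirroring the shape of the query, containing only the requested fields.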

So what does Relay do? Relay is a framework that connects React Components to GraphQL services through a container which implements Actions, a Dispatcher, and a Store. The developer need not code the Actions, Dispatcher or Store, and instead may trigger the actions and access the results through the Relay API. To configure the container, the developer must provide GraphQL query and mutation fragments to describe the data’s graph structure to the container, but otherwise Relay takes care of all of the details of managing the data.
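As a sketch of what this configuration looks like with the classic Relay API, a container might wrap a component like this (WidgetList and the fragment fields are hypothetical):

```javascript
import Relay from 'react-relay';
import WidgetList from './widget-list';

// Relay fetches the data described by the fragment and passes it to
// WidgetList as props; no store or dispatcher code is written by hand.
export default Relay.createContainer(WidgetList, {
  fragments: {
    viewer: () => Relay.QL`
      fragment on Viewer {
        widgets(first: 10) {
          edges { node { id, name } }
        }
      }
    `
  }
});
```

The fragment describes only the shape of the data; fetching, caching, and updating are handled by Relay.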

Relay is really a framework (such as Angular), not a library. It is not implementation agnostic–it requires the UI components to be implemented with React, and the data services to be provided by GraphQL. Once the configuration of both the GraphQL server and the React components is in place, Relay takes over and performs all of the needed operations. Therefore, the key to using Relay is to master the configuration process.

Additionally, unlike frameworks such as Angular–which makes specific requirements of the client only–Relay also requires the GraphQL server interface, which provides the data query and mutation operations for the Relay containers. Relay doesn’t care how the data is stored as long as the data is provided through a specific GraphQL interface.

Relay therefore requires both the back-end and front-end development teams to understand how it works, and how each of their parts need to be coded and configured.

Relay and React

The goal of this post is to examine Relay from the viewpoint of React. GraphQL servers can be coded and configured in any number of languages and deployed on many kinds of platforms. For GraphQL implementations in Node.js, a package named graphql-relay can be used to simplify the coding and configuration requirements for the GraphQL server. On the React side, another package named react-relay will be used to configure the Relay containers and routes, as well as to fire off the actions that mutate the data.

Getting Started

Getting started with Relay is difficult. Because the technology is so new, and there are lots of competitors, there are limited resources on how to use Relay. Where there are resources, the examples are limited, and the developer is ultimately forced to read through blog posts, GitHub issues, and formal specifications in order to create a simple CRUD application. Add to that a fairly complex development environment, and the need to have a properly built GraphQL server, and the task can be quite daunting especially for new JavaScript/front-end developers.

To get started, clone the accompanying GitHub repository to your computer and open the folder for blog-post-5+6. This folder contains a complete GraphQL/React/Relay application. To get the application up and running, open a terminal, change to the blog-post-5+6 folder, and run the following commands.

$ npm i  
$ npm i -g gulp eslint eslint-config-airbnb eslint-plugin-react@^4.3.0 webpack babel-cli babel-eslint eslint-plugin-jsx-a11y@^0.6.2 
$ gulp 
$ npm run update-schema 
$ gulp 
$ gulp server 

Open Microsoft Edge, then navigate to the following URL: http://localhost:3000.

A list of widgets, styled with Bootstrap 4, should appear and look similar to this:

The basic development structure of the project is the typical folder organization where the source code files are edited in the src folder, and then copied to the distribution folder, dist, from which the application is executed. The copying process is accomplished via Gulp through a combination of simply copying files, a task for processing Sass files, and webpack processing for JavaScript. The webpack processing mechanism uses the Babel transpiler to convert RelayQL, JSX and ES2015 code into ES5.1-compliant JavaScript for execution in any browser. ES2015 and JSX transpilation were covered in earlier posts, but the transpilation of RelayQL is a new topic.

RelayQL and the Babel-Relay Plugin

GraphQL servers have the ability to produce a schema through introspection. The schema is a JSON description of all of the types used by that particular GraphQL server, including both custom and built-in types. The Babel-Relay plugin uses this schema to validate the GraphQL fragments coded with RelayQL. The fragments are coded using ES2015 template strings, and are converted to JavaScript once they have been validated against the schema. This validation helps catch GraphQL errors at build time, before they occur at runtime.

The easiest way to configure the Babel-Relay plugin, as well as generate the schema, is to use the examples from the Relay website or one of the Relay Starter Kit projects. These are the files the GitHub repository for this blog post uses, following the pattern on the Relay website.

From the Relay Starter Kit there are two files which are needed: build/babelRelayPlugin.js and scripts/updateSchema.js. The updateSchema.js file is used to produce the schema, while babelRelayPlugin.js uses the schema file to validate the GraphQL fragments as well as transform the RelayQL code.

Configuring GraphQL to Work with Relay

Typically, a standard GraphQL server implementation needs to be modified to work with Relay. A package named graphql-relay is used to help configure a Node.js-based GraphQL server to be Relay-compliant. There are three main aspects of a GraphQL server which need Relay-specific configuration: Object Identification, Type Connections, and Mutations.

Object Identification allows Relay to query the GraphQL server for any type which implements the node interface, using a globally unique ID. The global ID is a base64-encoded value comprising the type name and local ID value concatenated together with a colon. The graphql-relay library provides functions for converting to and from the global ID, named toGlobalId and fromGlobalId, respectively. The type name comes from the GraphQL custom type name specified in the type configuration. Typically, the local ID value comes from the data storage mechanism, e.g., a relational database identity.
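The encoding itself can be sketched in plain Node.js (an illustrative re-implementation; in a real server, the helpers come from graphql-relay):

```javascript
// Global IDs are base64("TypeName:localId"), mirroring the behavior
// of graphql-relay's toGlobalId/fromGlobalId helpers.
function toGlobalId(type, id) {
  return Buffer.from(`${type}:${id}`, 'utf8').toString('base64');
}

function fromGlobalId(globalId) {
  const decoded = Buffer.from(globalId, 'base64').toString('utf8');
  const separator = decoded.indexOf(':');
  return {
    type: decoded.slice(0, separator),
    id: decoded.slice(separator + 1)
  };
}

const gid = toGlobalId('Widget', '42');  // 'V2lkZ2V0OjQy'
const parts = fromGlobalId(gid);         // { type: 'Widget', id: '42' }
```

Because the ID embeds the type name, a single node(id: ...) query can resolve any object in the system without knowing its type in advance.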

import { GraphQLObjectType } from 'graphql';
import { globalIdField } from 'graphql-relay';
import { nodeInterface } from './../node-definitions';

export const widgetType = new GraphQLObjectType({
  name: 'Widget',
  description: 'A widget object',
  fields: () => ({
    id: globalIdField('Widget'),
    // more fields
  }),
  interfaces: () => [nodeInterface]
});

The file node-definitions.js (and its accompanying file type-registry) are used to provide the configuration and type registry for making objects available through the node-interface.

The second Relay-specific configuration, Type Connections, is the connection between parent types and their child types with which they have a many-to-one relationship. These are managed using a special connection type structure which supports the notion of graph edges and cursors for limiting result sets, and generating pages of results. Connection and edge types can be configured to support additional properties allowing metadata about the nature of the connection or the edge, such as weighted edges.

import { widgetType } from './types/widget-type'; 
import { connectionDefinitions } from 'graphql-relay'; 
export const { connectionType: widgetConnection, edgeType: WidgetEdge } = 
  connectionDefinitions({name: 'Widget', nodeType: widgetType}); 

The connectionDefinitions function is used to create the connection types in the structure that Relay expects.
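The structure Relay expects can be illustrated with a simplified, in-memory version of a connection (a sketch only; graphql-relay's real connectionFromArray also honors the after, before, and last arguments):

```javascript
// Wraps an array in the Relay connection shape: each item becomes an
// edge with an opaque cursor, plus pageInfo for paging.
function connectionFromArray(data, { first = data.length } = {}) {
  const edges = data.slice(0, first).map((node, index) => ({
    node,
    cursor: Buffer.from(`cursor:${index}`, 'utf8').toString('base64')
  }));
  return {
    edges,
    pageInfo: {
      hasNextPage: first < data.length,
      hasPreviousPage: false,
      startCursor: edges.length ? edges[0].cursor : null,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null
    }
  };
}

const conn = connectionFromArray(
  [{ name: 'Gear' }, { name: 'Sprocket' }, { name: 'Cog' }],
  { first: 2 }
);
// conn.edges contains 2 edges; conn.pageInfo.hasNextPage is true
```

The opaque cursors let a client resume paging from any edge without the server exposing raw array indexes or database keys.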

import { connectionArgs, connectionFromPromisedArray } from 'graphql-relay';
import { widgetConnection } from '../connections/widget-connection';

// inside of the fields function of the viewer type declaration
widgets: {
  type: widgetConnection,
  description: 'A list of widgets',
  args: connectionArgs,
  // getWidgets() returns a promise for the array of widget records
  resolve: (_, args) => connectionFromPromisedArray(getWidgets(), args)
},

The widgetConnection type is imported from the widget-connection.js file, and is used to configure the widgets field of the viewer type. The package graphql-relay also provides an object named connectionArgs which contains the standard arguments passed in by Relay for working with connections. These arguments contain the values needed for the cursor operations.

The third and final Relay-specific configuration is the configuration of mutations. The graphql-relay package provides a special helper function named mutationWithClientMutationId for easily configuring mutations. Four configuration fields are required: the mutation name, the input fields, the output fields, and the mutateAndGetPayload function. In GraphQL, all mutations are accompanied by a query to fetch whatever data may have been changed. Relay adds to this capability by intelligently deciding what data needs to be refreshed after mutations are made.

The mutation name is the name which the React/Relay application uses to invoke the mutation when it accesses the GraphQL server. The input fields correspond to the args of the GraphQL mutation. The output fields represent the fields of the type to be returned from the mutation. The mutateAndGetPayload function performs the actual database operations, and can return a promise which delays the response from GraphQL to the application until the promise is resolved.
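Putting the four fields together, a Relay-compliant mutation configuration might look like this (a sketch; AddWidget, insertWidget, and the field names are hypothetical):

```javascript
import { GraphQLNonNull, GraphQLString } from 'graphql';
import { mutationWithClientMutationId } from 'graphql-relay';
import { widgetType } from './types/widget-type';
import { insertWidget } from './database'; // hypothetical data-access helper

export const addWidgetMutation = mutationWithClientMutationId({
  name: 'AddWidget',                       // invoked by name from the client
  inputFields: {
    name: { type: new GraphQLNonNull(GraphQLString) }
  },
  outputFields: {
    widget: { type: widgetType, resolve: (payload) => payload }
  },
  // Returning a promise delays the GraphQL response until it resolves.
  mutateAndGetPayload: ({ name }) => insertWidget({ name })
});
```

The helper also threads a clientMutationId through the request and response, which Relay uses to correlate mutations with their results.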


React and GraphQL, combined with Relay, provide a promising framework for building web applications. While a fair amount of setup is required, once the setup is complete, development moves along smoothly, eliminating boilerplate code and intelligently handling the management of data. The Relay framework could prove to be a game changer for building next-generation web applications. In the next article, we will explore the process of consuming GraphQL resources with React using Relay.

This article is part of the web development series from Microsoft tech evangelists and DevelopIntelligence on practical JavaScript learning, open source projects, and interoperability best practices including Microsoft Edge browser and the new EdgeHTML rendering engine. DevelopIntelligence offers JavaScript Training and React Training Courses through appendTo, their front-end focused blog and course site.

We encourage you to test across browsers and devices including Microsoft Edge – the default browser for Windows 10 – with free tools, including virtual machines to test Microsoft Edge and versions of IE6 through IE11. Also, visit the Edge blog to stay updated and informed from Microsoft developers and experts.
