Streaming a Raspberry Pi Camera Into VR With JavaScript

By Patrick Catanzariti

I spent the week tinkering with a Raspberry Pi Camera and exploring ways to get it to stream images to a web browser. In this article, we’ll explore the simplest and most effective way I found to stream images into client-side JavaScript. In the end, we’ll stream those images into the Virtual Reality viewer built in my earlier article on Filtering Reality with JavaScript and Google Cardboard.

What You’ll Need

For this demo, you’ll currently need a Raspberry Pi (I used the Raspberry Pi 2 Model B) with Raspbian installed (NOOBS has you covered here), an Internet connection for it (I recommend getting a Wi-Fi adaptor so your Pi can be relatively portable) and a Camera module.

If your Pi is brand new and not currently set up, follow the instructions on the Raspberry Pi NOOBS setup page to get your Pi ready to go.

If you’ve got a bunch of stuff on your Pi already, please make sure you back everything up as the installation process replaces various files. Hopefully everything should play nicely but it’s always important to be on the safe side!

The Code

Our demo code that uses the camera data is accessible on GitHub for those eager to download and have a go.

Attaching Your Pi Camera

If you are new to the Raspberry Pi and attaching a camera, I’ll cover it quickly here. There is a plastic connector (called the flex cable connector) around the opening which you’ll want to gently open. To do so, pull the tabs on the top of the connector upwards and towards the Ethernet port. Once you’ve loosened it, you’ll be able to slot in your camera’s flex cable. The cable has a blue strip on one side; connect it so that side is facing the Ethernet port. Be careful to keep the cable straight (don’t place it into the slot at an angle, it should fit straight in). Here’s a photo of my camera flex cable connected correctly to show what we’re looking for here:

Raspberry Pi Camera Connected

RPi Cam Web Interface

The easiest way I’ve found to stream images from the Pi camera was to use the RPi Cam Web Interface. You run a few basic terminal commands to install it and then it sets up your camera on an Apache server ready to use.

If you’ve installed Raspbian from scratch already, you may have also already enabled the camera in the config screen that appeared afterwards. If not, you can get to it by typing in the following command:

sudo raspi-config

On that screen, you’ll be able to select “Enable Camera”. Click that option and choose “Enable” from the screen that appears.

Raspberry Pi Camera Enable Screen

Next up, make sure your Raspberry Pi is up to date (before doing this, I want to reiterate – back things up to be safe). We start by downloading the latest repository package lists:

sudo apt-get update

We then upgrade any installed packages on our Pi that updates were found for:

sudo apt-get dist-upgrade

Finally, we update our Raspberry Pi software itself too:

sudo rpi-update

Then, we install the RPi Cam Web Interface itself from its GitHub repo. Go to the location on your Pi that you’d like to clone the repo to and run the git clone command:

git clone https://github.com/silvanmelchior/RPi_Cam_Web_Interface.git

This will create a RPi_Cam_Web_Interface folder ready with a bash installer. Firstly, go to that directory:

cd RPi_Cam_Web_Interface

Update the permissions on the bash file so you can run it:

chmod u+x RPi_Cam_Web_Interface_Installer.sh

Then run the bash install program:

./RPi_Cam_Web_Interface_Installer.sh install

The install program has slightly more of a visual interface. I personally installed it via the Apache server option (the first option), so the following will all focus on that method. If you prefer to use an Nginx server, you can. I’d imagine much of the process is relatively similar though.

Raspberry Pi Cam Install Screen

You’ll then specify where you’d like to place the RPi Cam Web Interface on your server’s /var/www directory. If you don’t list anything, it will install in the root /var/www folder. I installed it in a folder called picam to keep it separate.

Cam Installation Location Screen

On the next screen, I selected “yes” to whether I wanted the camera to auto start on boot time.

Camera boot at start prompt

The installation program will then ask what port you’d like it to run on. I kept it at the default of port 80.

Camera select port screen

You will then be prompted for whether you’d like web server security. This will create a htaccess username and password for your server. I said no for testing purposes and because I’ve got it in a subfolder. In this demo, we’ll be creating other functionality in other subfolders, so I’d recommend putting security on your whole server at the root level if you’re worried about people spying on your Pi’s server!

Enabling web security on Apache server

The program will ask if you want to reboot the system, type in y and let your Pi set itself back up. When it turns back on, the light on your camera should come on to show that it is now watching its surroundings.

To see what your camera is seeing, you can visit the pre-built camera interface that the RPi Cam Web Interface provides. To do this, you’ll first need to know your Pi’s IP address. Not sure how? You can type in:

ifconfig

Your Pi will be one of the few actual IP addresses in that listing. Depending on the settings of your local network, it should be something relatively simple like 192.168.0.x. Mine was further along the range, as my network has a bunch of other devices on it.

Open up a web browser on a computer that is on the same local network and type in your Pi’s IP address, followed by the folder name you installed the Pi camera web interface into (e.g. http://192.168.0.x/picam). It should open up a web view of your camera! Here’s a view showing the unbelievably dull sight of my keyboard:

Raspberry Pi Camera Running

If you’d like to remove the text with the date and time at the top, open up “Camera Settings” and remove the text within “Annotation”:

Removing annotations from the camera


Accessing Camera Images Via JavaScript

Whilst this interface alone can do a lot of very neat things including remote image capture, video recording, motion detection and so on, as a developer who likes to tinker and build my own things – I wanted to plug these images into my own creations. In particular, I wanted to try pulling it into the Google Cardboard VR/AR set up I created in my earlier article on Filtering Reality with JavaScript and Google Cardboard. This way, we can put on our Google Cardboard headset and watch our camera from a distance. Attach your Raspberry Pi to your household pet, a remote control car, keep it next to a fish tank or hamster enclosure, then enjoy a realtime VR experience sitting back and watching things from a new perspective!

To access images from the camera remotely from JavaScript, we’ll need this URL structure (substituting the IP address and folder for those you’ve got in your environment):

"" + new Date().getTime()

We ensure we’re getting the latest image by appending the current timestamp via new Date().getTime().
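As a quick sketch of that cache-busting pattern, the URL construction can be wrapped in a small helper. The function name and base URL below are mine, purely for illustration (cam_pic.php being, as far as I know, the still-image script the RPi Cam Web Interface exposes):

```javascript
// Build a cache-busted URL for the latest still from the Pi camera.
// The base URL is a placeholder; substitute your own Pi's IP and folder.
function latestImageUrl(base) {
  return base + '?time=' + new Date().getTime();
}

var url = latestImageUrl('http://192.168.0.x/picam/cam_pic.php');
// Every call yields a fresh query string, so the browser
// can't hand back a stale cached frame.
```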

In order to access these images in JavaScript and the HTML5 canvas without encountering Cross-Origin Resource Sharing errors, we’ll be running this JavaScript on our Pi as well. It keeps things nice and simple. If you are looking to access the images from a different server, read up on Cross-Origin Resource Sharing and the same-origin policy.
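A quick way to internalize the same-origin rule: two URLs count as the same origin only when scheme, host and port all match. This little check (purely illustrative, not part of the demo code) shows why serving the viewer from the Pi itself keeps the canvas usable:

```javascript
// Two URLs share an origin only when protocol, hostname and port all match.
function sameOrigin(a, b) {
  var ua = new URL(a);
  var ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}

// Viewer and camera image both served from the Pi: same origin, no CORS needed.
sameOrigin('http://192.168.0.x/piviewer/', 'http://192.168.0.x/picam/cam_pic.php'); // true

// Viewer hosted elsewhere: cross-origin, so drawing the image would taint
// the canvas unless the Pi's server sends the right CORS headers.
sameOrigin('http://my-laptop.local/viewer/', 'http://192.168.0.x/picam/cam_pic.php'); // false
```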

We won’t cover all of the VR and Three.js theory in this article, so have a read of my previous articles on Filtering Reality with JavaScript and Google Cardboard and Bringing VR to the Web with Google Cardboard and Three.js for more info if you’re new to these.

The bits that have changed from my Filtering Reality with JavaScript and Google Cardboard article are that all the bits involved in the actual filtering process have been removed. You could very well keep them in there and filter your Pi camera images too! However, to keep our example simple and the code relatively clean, I’ve removed those.

In our init() function I’ve adjusted the canvas width and height to match the default incoming size that the RPi Cam software provides:

canvas.width = 512;
canvas.height = 288;

However, when it does run the nextPowerOf2() function to ensure the canvas works best as a Three.js texture (WebGL prefers power-of-two texture sizes), it will end up as a canvas of 512×512 (just with black bars on the top and bottom, from my experience).
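For reference, nextPowerOf2() does something along these lines (a sketch of the idea rather than the demo’s exact implementation):

```javascript
// Round a dimension up to the nearest power of two;
// WebGL handles power-of-two texture sizes most reliably.
function nextPowerOf2(n) {
  var result = 1;
  while (result < n) {
    result *= 2;
  }
  return result;
}

nextPowerOf2(288); // 512 -- our 512x288 canvas becomes 512x512
nextPowerOf2(512); // 512 -- already a power of two, left alone
```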

I resize our PlaneGeometry to be 512×512 too:

var cameraPlane = new THREE.PlaneGeometry(512, 512);

I also move the camera a bit closer to our plane to ensure it covers the view:

cameraMesh.position.z = -200;
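To get a feel for why -200 is close enough, here’s a rough back-of-the-envelope check: a plane of a given height exactly fills a perspective camera’s vertical view at a distance of (height / 2) / tan(fov / 2). The 90 degree field of view below is an assumption for illustration; plug in whatever fov your scene’s camera actually uses:

```javascript
// Distance at which a plane of planeHeight units exactly fills the
// camera's vertical field of view: d = (h / 2) / tan(fov / 2).
function fillDistance(planeHeight, fovDegrees) {
  var halfFovRadians = (fovDegrees / 2) * Math.PI / 180;
  return (planeHeight / 2) / Math.tan(halfFovRadians);
}

// Roughly 256 for a 512-unit plane at a 90 degree fov, so sitting
// the plane at z = -200 means it more than covers the view.
fillDistance(512, 90);
```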

The animate() function is quite different, as we no longer are looking at the device’s camera but instead are pulling in the image from a HTTP request to our Pi camera on each animation frame. The function looks like so:

function animate() {
  if (context) {
    var piImage = new Image();

    piImage.onload = function() {
      console.log('Drawing image');
      context.drawImage(piImage, 0, 0, canvas.width, canvas.height);

      texture.needsUpdate = true;
    };

    // Substitute your own Pi's IP address and folder here
    piImage.src = "http://your.pi.ip.address/picam/cam_pic.php?time=" + new Date().getTime();
  }

  requestAnimationFrame(animate);

  update();
  render();
}

We store our Pi’s camera image within a variable called piImage. We set its src to the URL we mentioned earlier. When our browser has loaded the image, it fires the piImage.onload() function which draws that image onto our web page’s canvas element and then tells our Three.js texture that it needs to be updated. Our Three.js PlaneGeometry texture will then update to the image from our Pi camera.

Adding To Our Server

There are a variety of ways to get this onto our Pi’s server. By default if you’ve just set up your Pi and its Apache server, the /var/www folder won’t allow you to copy files into it as you don’t own the folder. To be able to make changes to the folder, you’ll need to either use the sudo command or change the owner of the folder and files using:

sudo chown -R pi /var/www

You could then FTP into your Pi as the default “pi” user and copy the files into the directory or add your project into a remote Git repo and clone it into the folder (I did the second option and thus could do it just via sudo git clone https://mygitrepo without needing to change the owner of the folder or files).

I added them into a folder called piviewer within the /var/www folder.

In Action

If we add this code onto our server and then go to our server from a mobile Chrome browser, using our Pi’s IP address and the folder name of our custom code (e.g. mine was at http://192.168.0.x/piviewer), you should see a VR set up that you can view within Google Cardboard!

Our Raspberry Pi Cam in Action!


We now have a virtual reality view of our Raspberry Pi camera, ready for attaching that Pi absolutely anywhere we desire! While Virtual Reality is a fun option for the camera data, you could pull it into any number of JavaScript or web applications too. So many possibilities, so little time! I’ve got my own plans for some additions to this set up that will be covered in a future article if they work out.

If you try out this code and make something interesting with it, leave a note in the comments or get in touch with me on Twitter (@thatpatrickguy), I’d love to have a look!

  • Woah!!! That’s super awesome. I have a RPi and Arduino with me right now. This would be cool!!! :)

    • Patrick Catanzariti

      Time to get tinkering away and make something cool! Or come along to NodeBots Day :)

  • Craig Buckler

    Amazing! Next question: can you attach two cams to the Pi so you transmit a stereoscopic 3D image?

    • Patrick Catanzariti

      Oh how I would love to do that! :D

      From what I’ve read, you can only connect one Pi camera to the board itself, but you could theoretically connect up a USB webcam… wouldn’t be the same though and might drain a lot of power! You could have two raspberry pis with a camera each and find a complicated way to make them work together…

      Could be an area for someone to make a stereoscopic camera Pi module to make the process easier… at the moment it’d be complicated!

      • Craig Buckler

        I guess it may be possible to attach a single 3D camera? Two Pis sounds like it should work and may not be that complicated if your googles can show two streams?

        • Patrick Catanzariti

          A single 3D camera should be possible :) Two Pis should work, but keeping the streaming images in time with each other might be tough as they’d each run their own server to host it all.

          I’m probably over thinking it though and there’s an easier way to pair the two Pis…

          • Shaun Towers

            Might be possible to set up the two Pis to work as a cluster, I know there are projects out there that demonstrate how to cluster the Pi computers to build inexpensive super computers. If you did that with two of them you’d have the plug ins for 2 cameras, and resources to run both with equal speed, If one Pi can run this setup with one camera, it should easily be possible to use 2 clustered Pis, would still only need one set of software, but would just need to configure the folders and settings for 2 cameras.

          • Patrick Catanzariti

            That would be absolutely brilliant! I’ve never attempted to do anything like that but if it worked, it would be so very very cool.

      • Two cameras, on two separate Pis would let you actually try what was dreamed up in

        • Patrick Catanzariti

          YES. I can see myself doing that in a future article now ;)

  • Shaun Towers

    So this looks like it could be similar to what I was looking for as a piece of a project I’m planning on attempting. One question though, as I don’t currently have the means to set this up (I’m not at home, and I’m still waiting for my camera module to arrive in the mail), what is the refresh rate of the picture through the java stream? I’m just looking for a rough estimate as far as fps. Like the camera module is listed as supporting 30 fps or higher (depending on the resolution settings), will java update the streaming picture at that rate (assuming there is no bandwidth issues)?

    • Patrick Catanzariti

I’m afraid I don’t have it running at the moment myself to take a look, but looking at the documentation, it mentions 25fps, so I’d go with about that. The stream was pretty close to real time when I was testing it, not too much noticeable lag. Because the cam interface is running on the Pi itself, you don’t have too much of the lag that comes from network delays in that regard (only when accessing it from another computer). Hope that helps :)

  • Paul

For anyone else who runs into trouble, I could only get the settings in the web browser to work when running this in debug mode. I’ve reported it on GitHub.

    • Patrick Catanzariti

      Thanks for sharing! I’ll have to look into that later and see if I have the same issue on mine.

    • Patrick Catanzariti

      I notice that Dani had to add a line to avoid a cross-domain error, maybe you’re having the same issue? Check here:

  • Paul

    Hi, great article thanks! I’ve extensively used RPi-Cam and think it’s a great piece of software.

    However you lost me after “Accessing Camera Images Via JavaScript” – where do I find the all the files I need to put in the piviewer folder? You state the links to previous articles are for useful information, but do I need to go through those to find the code I need to then modify with what you have above?


    • Patrick Catanzariti

      The Filtering Reality article I linked to has all the functionality in terms of setting up the Three.js scene and displaying an image within it. You’d just switch out the webcam image with the Pi image we discuss in this article.

  • Perfect Prime

    There should be a way to attach 2 camera to the Pi – physically and with signal processing for either stitching the 2 video together or putting them side by side.

    Using 2 separate Pi seems not the way to go

    • Patrick Catanzariti

      I’d love to see someone find the best solution for this one :D

  • David Roll

    I think UV4L is much better as a streaming solution. It has basically no dependencies, no configuration required, can stream in MJPEG/JPEG(stills),H264 and standard WebRTC – with all its benefits like encryption,NAT traversal,adaptive streaming – to any browser or smartphone, both live audio and video on poor networks and almost no delay. It’s also possible to change the image settings on the fly. Recent versions of UV4L allow to broadcast live audio /video 1->N to Jitsi Meet Web Conferences (public or private). All in few clicks from the UV4L Web interface.

    • Patrick Catanzariti

      That sounds promising :D

  • Heinrich du Plooy

    Thanks, my wife will eventually see more of me now, I have been battling to stream over wifi to my laptop from my pi. Your program works 100.

    • Patrick Catanzariti

      Fantastic to hear :D

  • Patrick Catanzariti

Yay! That’s exactly where I was hoping someone would take this idea :D Fantastic work! I’ll be sharing it on Twitter soon and shall mention it in next week’s Dev Diner newsletter. Seriously well done!

  • Stuart Clarke

    This is fantastic. Do you have any guidance on enabling remote access to the web server? I guess SSL and a fixed IP?

    • Patrick Catanzariti

I’d say those are the best options. You’d want to enable a way to access your Pi’s server via your internet connection, and some ISPs don’t allow this. I personally haven’t found a simple way yet to open my own server out to the web apart from ngrok and similar options. Those are a bit slower though, so there’s a big delay.

      • Stuart Clarke

        OK, thanks a lot.


  • Zuhair Mehtab

    Can I use a simple webcam instead of Raspberry Pi camera module for this project?

    • Patrick Catanzariti

      I’m actually not sure if connecting a webcam to a Raspberry Pi works with the Picam software I use in the guide. I don’t think it supports USB webcams.

      • Zuhair Mehtab

        Can you please suggest an efficient way to connect usb webcam for the same purpose? (I tried motion but the page needs to be refreshed and the video kind of lags)

        • Patrick Catanzariti

          From what I’ve seen, motion should be able to achieve similar results to the set up above if you pull in “snapshots” from motion using JavaScript similar to what I do in the demo above. Neither option is going to provide a smooth real time video feed (the demo above doesn’t achieve that either, so there’s a bit of lag).

          To do a similar thing with real time video would take a bit more processing and gets a bit more complex to do via HTML & JS. I’m considering building a demo and writing a guide to showcase that once I build something stable enough!

          • Zuhair Mehtab

            I’ve finally managed to get the videostream via mjpg streamer but the resolution sucks. Your article was extremely helpful for me. Thank you and i’ll be waiting for u to come up with sth as innovative as this. God bless you.

          • Patrick Catanzariti

            That’s great news! I’m glad you got something working at least :D

          • Patrick Catanzariti

Also, the comment below by Doug has his own example which does look like it uses a webcam. Might help!

  • Patrick Catanzariti

    That’s absolutely brilliant Doug! Great prototype :D Sorry I hadn’t responded earlier, for some reason I wasn’t alerted to your comment!

I’ll be sharing your link on Twitter soon and shall mention it in my Dev Diner newsletter next week.

    Thanks for sharing :D

  • Shubhojyoti Ganguly

I tried this out, but when I enter the address in my browser, I get a black screen which is continuously loading. The RPiCam_Web_interface works perfectly but your code doesn’t.

  • Shubhojyoti Ganguly

    Can we use the MediaStreamTrack from the Media Stream API (Like in your tutorial of Filtering reality) to stream from the PiCamera instead of the RPi Cam Web Interface??

  • Shubhojyoti Ganguly

    Is there a way to get the feed from the PiCamera directly into your code without using the RpiCamWeb Interface

    • Patrick Catanzariti

      I’d say it’s definitely possible, you’d just need to do a lot of what the RpiCam software does manually. I preferred using their existing software to handle a lot of the complexity.

  • joao

    Hi, i tried to run your code with my usb camera. Turns out you’re missing a bracket in the end of the script tag (init function).

    • Shubhojyoti Ganguly

      Fixed it.. Must have been a bug from the last commit.

  • Apurva Jain

    it asked me to reboot without asking that question to put the camera on right after the boot. What to do?

    • Patrick Catanzariti

The RPi-Cam site says this:

      “To change the default startup-settings, edit the config-file /etc/raspimjpeg. If you want to disable autostart completely, navigate back to the directory, where you cloned the git-repo in Step 4 and run one of the following command: ./ autostart_no –> the interface doesn’t start at startup, you need to run a command to use it (commands below) ./ autostart_yes –> the interface starts at startup and takes control over the camera (standard)”

  • Martín Pérez

    How to set the default directory that save the images and videos?

    • Patrick Catanzariti

      I’m not sure if you can change it, I haven’t seen that in the documentation at least and haven’t had to do so myself, did you have any luck with this?

      • Patrick Catanzariti

Actually, if you look at the section “Move saved images and videos to another folder” on the RPi-Cam wiki, it might have something that’ll help?

  • Will

    This is an excellent guide. Thanks for sharing Patrick! Now I have a pi zero w/camera that I can move around, and watch from a pi3 w/touchscreen via wifi!

    • Patrick Catanzariti

      Yay! Great to hear!!! :D

  • William Mendx

I have one problem: when I try to run your code in the Chrome browser it only shows me a dark screen and I don’t know what is wrong. When I try the default interface it runs fine, but when I try to run the VR demo it does not work.

    • Patrick Catanzariti

      Are you receiving any errors in your console when running the VR demo?

      • Yuri Klebanov

        Hey Im having the same issue, and getting the console error:
        Uncaught TypeError: MediaStreamTrack.getSources is not a function
        at init ((index):69)
        at (index):43

        Trying to run this on mt laptop chrome browser as i cant see the console in my phone.
        but it doesn’t seem to work on the phone as well


      • Yuri Klebanov

        It works now after downloading the Source code from your git. could you place a link in the article so its easier to access?
        it has several more changes that needs to be done in the ” Filtering Reality with JavaScript and Google Cardboard.” code than described in this tutorial, a direct link to the repo would be really helpful.


  • Patrick Catanzariti

    I’m not sure if you can force fullscreen via JavaScript to be honest, I think the user needs to request that functionality via click.

  • Patrick Catanzariti

    Glad it helped :)

    • 조영호

      I have a problem today.
      Streaming scene is delayed while using this PiViewer, it is about 0.6s.
      I needed a more real-time working so I found GSstreamer. Could you give me an advise how apply this VR viewer to GSstreamer?

      Thank you very much.

  • Mark Bruckert

    Hello… great tutorial! I just have one question, once we have the Web Viewer (without VR), where do we upload the code you show us?

    • Mark Bruckert

      Nevermind… I think I found it. Thank You!!!
