Face Recognition using React Native Camera and Firebase

Written by Carlos Angarita, Software Engineer

In a previous post, Using React Native Camera in your app, we covered how to use the camera in React Native. In this post, we will explore using the camera for face recognition by creating two AR effects.


Face Recognition in iOS


To enable face recognition in the React Native Camera package on iOS, add the FaceDetectorMLKit subspec to the Podfile:


    
# ios/Podfile
...
pod 'react-native-camera', path: '../node_modules/react-native-camera', subspecs: [
  'FaceDetectorMLKit'
]
    
  

To use ML Kit, we need to log in to Firebase (creating an account if necessary) and create a new project. In the Firebase console, click on the iOS icon; it will ask for the iOS bundle id. To find it, open AwesomeCamera.xcodeproj in Xcode, select the project name in the sidebar, and click on the General tab to show the bundle identifier.


Find project id on iOS

Back in Firebase, we paste the bundle id – if you followed my previous post it will be something like org.reactjs.native.example.AwesomeCamera. When we click Next, we can download the GoogleService-Info.plist file; place it in the ios folder. In Xcode, right-click on the AwesomeCamera folder, click on Add Files to AwesomeCamera, and select the GoogleService-Info.plist file.


Then in the Podfile and AppDelegate.m we add:


    
# ios/Podfile
...
pod 'Firebase/Core'
    
  
    
// ios/AwesomeCamera/AppDelegate.m
#import <Firebase.h> // <--- add this
...

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
  [FIRApp configure]; // <--- add this
  ...
}
    
  

Then run cd ios && pod install && cd .. to install the new pods.


Troubleshooting iOS


If the following error message appears when you try to run the project:


    
error: Cycle inside AwesomeCameraTests; building could produce unreliable results.
Cycle details:
→ Target 'AwesomeCameraTests' has target dependency on Target 'AwesomeCamera'
○ That command depends on command in Target 'AwesomeCamera': script phase "[CP] Copy Pods Resources"


** BUILD FAILED **
    
  

Run:


    
npx react-native unlink react-native-vector-icons
    
  

Then, add the following to the Podfile:


    
# ios/Podfile
...
pod 'RNVectorIcons', :path => '../node_modules/react-native-vector-icons'
    
  

Then, from the ios folder, run:

    
pod update
    
  

Finally, add these fonts to the Info.plist:


    
<!-- ios/AwesomeCamera/Info.plist -->
...
<key>UIAppFonts</key>
<array>
	<string>AntDesign.ttf</string>
	<string>Entypo.ttf</string>
	<string>EvilIcons.ttf</string>
	<string>Feather.ttf</string>
	<string>FontAwesome.ttf</string>
	<string>FontAwesome5_Brands.ttf</string>
	<string>FontAwesome5_Regular.ttf</string>
	<string>FontAwesome5_Solid.ttf</string>
	<string>Foundation.ttf</string>
	<string>Ionicons.ttf</string>
	<string>MaterialIcons.ttf</string>
	<string>MaterialCommunityIcons.ttf</string>
	<string>SimpleLineIcons.ttf</string>
	<string>Octicons.ttf</string>
	<string>Zocial.ttf</string>
</array>
    
  

Face Recognition in Android


To add face recognition on Android, we need to edit android/app/build.gradle:


    
//  android/app/build.gradle
...
   defaultConfig {
        applicationId "com.awesomecamera"
...
+      missingDimensionStrategy 'react-native-camera', 'mlkit' 
    }
...
    
  

We also need to log in to the Firebase console and create a new project, or reuse the project created in the Face Recognition in iOS section. Click on the Android icon and, when it asks for the application id, paste the one from android/app/build.gradle – in our case it is com.awesomecamera. Then, download the google-services.json file and place it in the android/app folder. In android/build.gradle we add:

    
// android/build.gradle
buildscript {
  ...
  dependencies {
  ...
  // Add this line
  classpath 'com.google.gms:google-services:4.3.3' // <--- you might want to use a different version
  }
}
    
  

Then, back in android/app/build.gradle at the bottom of the file we add:


    
// android/app/build.gradle
...
apply plugin: 'com.google.gms.google-services'
    
  

And in the android/app/src/main/AndroidManifest.xml we add:


    
<!-- android/app/src/main/AndroidManifest.xml -->
...
<application>
...
<meta-data
      android:name="com.google.firebase.ml.vision.DEPENDENCIES"
      android:value="ocr, face" /> 

</application>
...
    
  

Troubleshooting Android


If the camera icon does not show up, add the following at the bottom of android/app/build.gradle:

    
// android/app/build.gradle
...
apply from: "../../node_modules/react-native-vector-icons/fonts.gradle"
    
  

Tracking Faces


For the next sections we’ll be using the Camera.js component that we built in the previous post.


    
// components/Camera.js
import React, {PureComponent} from 'react';
import {RNCamera} from 'react-native-camera';

import Icon from 'react-native-vector-icons/dist/FontAwesome';
import {TouchableOpacity, Alert, StyleSheet} from 'react-native';

export default class Camera extends PureComponent {
  constructor(props) {
    super(props);
    this.state = {
      takingPic: false,
      box: null,
    };
  }

  takePicture = async () => {
    if (this.camera && !this.state.takingPic) {
      let options = {
        quality: 0.85,
        fixOrientation: true,
        forceUpOrientation: true,
      };

      this.setState({takingPic: true});

      try {
        const data = await this.camera.takePictureAsync(options);
        this.setState({takingPic: false}, () => {
          this.props.onPicture(data);
        });
      } catch (err) {
        this.setState({takingPic: false});
        Alert.alert('Error', 'Failed to take picture: ' + (err.message || err));
        return;
      }
    }
  };

  onFaceDetected = ({faces}) => {
    if (faces[0]) {
      this.setState({
        box: {
          width: faces[0].bounds.size.width,
          height: faces[0].bounds.size.height,
          x: faces[0].bounds.origin.x,
          y: faces[0].bounds.origin.y,
          yawAngle: faces[0].yawAngle,
          rollAngle: faces[0].rollAngle,
        },
      });
    } else {
      this.setState({
        box: null,
      });
    }
  };

  render() {
    return (
      <RNCamera
        ref={ref => {
          this.camera = ref;
        }}
        captureAudio={false}
        style={{flex: 1}}
        type={RNCamera.Constants.Type.front}
        onFacesDetected={this.onFaceDetected}
        androidCameraPermissionOptions={{
          title: 'Permission to use camera',
          message: 'We need your permission to use your camera',
          buttonPositive: 'Ok',
          buttonNegative: 'Cancel',
        }}>
        <TouchableOpacity
          activeOpacity={0.5}
          style={styles.btnAlignment}
          onPress={this.takePicture}>
          <Icon name="camera" size={50} color="#fff" />
        </TouchableOpacity>
      </RNCamera>
    );
  }
}

const styles = StyleSheet.create({
  btnAlignment: {
    flex: 1,
    flexDirection: 'column',
    justifyContent: 'flex-end',
    alignItems: 'center',
    marginBottom: 20,
  },
});
    
  

We have a few new things in this component. First, we’ll be using the front camera to detect faces; we also added a box property to the state; and finally, we added the onFacesDetected prop to RNCamera, which points to our onFaceDetected handler:

    
onFacesDetected={this.onFaceDetected}
    
  
    
// components/Camera.js
...
onFaceDetected = ({faces}) => {
    if (faces[0]) {
      this.setState({
        box: {
          width: faces[0].bounds.size.width,
          height: faces[0].bounds.size.height,
          x: faces[0].bounds.origin.x,
          y: faces[0].bounds.origin.y,
          yawAngle: faces[0].yawAngle,
          rollAngle: faces[0].rollAngle,
        },
      });
    } else {
      this.setState({
        box: null,
      });
    }
  };
...
    
  

The onFaceDetected function checks whether a face was found and saves the position and size of the bounding box, as well as the angles of the face, in the box state object. Now we are going to use these values to create a simple effect that tracks the face and shows a random image on top of the head.


    
// components/FSLTechFilters.js
import React, {useState, useEffect, useRef} from 'react';
import {Image, View, StyleSheet} from 'react-native';

const images = [
  require('./img/logo-angular.png'),
  require('./img/logo-ember.png'),
  require('./img/logo-node.png'),
  require('./img/logo-python.png'),
  require('./img/logo-react-native.png'),
  require('./img/logo-react.png'),
  require('./img/logo-ruby-on-rails.png'),
  require('./img/logo-vue.png'),
];

function randomInteger(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

const FSLTechFilter = props => {
  const [currentImg, setCurrentImg] = useState(0);

  // Track whether the component is still mounted so the timeouts below
  // don't try to update state after unmount.
  const alive = useRef(true);
  useEffect(() => {
    // Swap to a random logo roughly every 100ms, 50 times in total.
    for (let index = 0; index < 50; index++) {
      setTimeout(() => {
        alive.current && setCurrentImg(randomInteger(0, images.length - 1));
      }, 100 * index);
    }
    return () => {
      alive.current = false;
    };
  }, []);
  return (
    <View style={styles.filter(props)}>
      <Image source={images[currentImg]} />
    </View>
  );
};

export default FSLTechFilter;

const styles = StyleSheet.create({
  filter: function({width, height, x, y, yawAngle, rollAngle}) {
    return {
      position: 'absolute',
      top: y - height, // place the filter over the head
      left: x,
      width,
      height,
      transform: [{rotateX: `${yawAngle}deg`}, {rotateY: `${-rollAngle}deg`}],
    };
  },
});
    
  

First, we load an array of images; in this case, I’m using the logos of some of the tech we use here at FullStack Labs. Then, when the component mounts, we show a new random logo every 0.1 seconds, 50 times in a row, or until the component is unmounted. We use the props to position the image on top of the head and the angles to transform it, so when a person turns their head, the image follows.
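If you want to tweak the filter without pointing the camera at a face, you can render it with a hard-coded box. The FilterPreview component and the values below are made up purely for illustration:


// components/FilterPreview.js (hypothetical helper, not part of the final app)
// Renders FSLTechFilter with made-up box values so you can preview the effect
// without any face detection running.
import React from 'react';
import {View} from 'react-native';

import FSLTechFilter from './FSLTechFilter';

const FilterPreview = () => (
  <View style={{flex: 1, backgroundColor: '#000'}}>
    <FSLTechFilter
      width={200}
      height={200}
      x={80}
      y={250}
      yawAngle={0}
      rollAngle={0}
    />
  </View>
);

export default FilterPreview;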


In our Camera component, we pass the box object as props to FSLTechFilter. If no face is detected, we render nothing.


    
// components/Camera.js
...
+import FSLTechFilter from './FSLTechFilter';
...
  }}>
+       {this.state.box && <FSLTechFilter {...this.state.box} />}
        <TouchableOpacity
          activeOpacity={0.5}
          style={styles.btnAlignment}
          onPress={this.takePicture}>
          <Icon name="camera" size={50} color="#fff" />
        </TouchableOpacity>
      </RNCamera>
...    
    
  

Detecting Face Landmarks


ML Kit also gives us landmarks for features like the right and left eyes, ears, cheeks, and mouth. We will use these landmarks to make a filter that puts glasses on a face. First, add the faceDetectionLandmarks prop to RNCamera to activate landmark detection.


    
// components/Camera.js
...
<RNCamera
...
faceDetectionLandmarks={RNCamera.Constants.FaceDetection.Landmarks.all}
...
    
  

Now, when a face is detected, we also get the landmark positions. For this example, we are going to save the leftEyePosition and the rightEyePosition to state.

    
// components/Camera.js
...
 constructor(props) {
    super(props);
    this.state = {
      takingPic: false,
      box: null,
+     leftEyePosition: null,
+     rightEyePosition: null,
    };
  }
...
onFaceDetected = ({faces}) => {
    if (faces[0]) {
      this.setState({
        box: {
          width: faces[0].bounds.size.width,
          height: faces[0].bounds.size.height,
          x: faces[0].bounds.origin.x,
          y: faces[0].bounds.origin.y,
          yawAngle: faces[0].yawAngle,
          rollAngle: faces[0].rollAngle,
        },
+       rightEyePosition: faces[0].rightEyePosition,
+       leftEyePosition: faces[0].leftEyePosition,
      });
    } else {
      this.setState({
        box: null,
+       rightEyePosition: null,
+       leftEyePosition: null,
      });
    }
  };
...
    
  

We take the left and right eye positions from the Face object (you can check the complete API here).
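For reference, when landmarks are enabled, each entry in the faces array looks roughly like this (a sketch with made-up numbers based on the react-native-camera documentation; the exact fields and values can vary by platform):


// The rough shape of one face entry when landmarks are enabled. All numbers
// here are made up for illustration; check the react-native-camera docs for
// the full list of fields.
const exampleFace = {
  bounds: {
    origin: {x: 90, y: 180},         // top-left corner of the face box
    size: {width: 230, height: 230}, // size of the face box
  },
  rollAngle: 3.5,                    // head tilt, in degrees
  yawAngle: -8.1,                    // head turn (left/right), in degrees
  rightEyePosition: {x: 150, y: 255},
  leftEyePosition: {x: 250, y: 260},
  // ...plus other landmarks such as the ears, cheeks, and mouth.
};


Now, we need to create a new component to use these positions.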

    
// components/GlassesFilter.js
import React from 'react';
import {StyleSheet, View, Image} from 'react-native';

const GlassesFilter = ({
  rightEyePosition,
  leftEyePosition,
  yawAngle,
  rollAngle,
}) => {
  return (
    <View>
      <Image
        source={require('./img/glasses.png')}
        style={styles.glasses({
          rightEyePosition,
          leftEyePosition,
          yawAngle,
          rollAngle,
        })}
      />
    </View>
  );
};

export default GlassesFilter;

const styles = StyleSheet.create({
  glasses: ({rightEyePosition, leftEyePosition, yawAngle, rollAngle}) => {
    const width = Math.abs(leftEyePosition.x - rightEyePosition.x) + 150;
    return {
      position: 'absolute',
      top: rightEyePosition.y - 100,
      left: rightEyePosition.x - 100,
      resizeMode: 'contain',
      width,
      transform: [{rotateX: `${yawAngle}deg`}, {rotateY: `${-rollAngle}deg`}],
    };
  },
});
    
  

Most of the “magic” happens in the styles. Since the glasses should get wider as the face gets closer to the screen, we use the distance between the eyes on the X axis as a reference for the size. We then position the glasses slightly above and to the side of the right eye so the image fully covers both eyes. We make the height fit automatically using resizeMode, and finally we add some rotation based on the face angles.
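To make the numbers concrete, here is the same math with hypothetical eye positions (the values are made up; real coordinates depend on the device and on how close the face is to the camera):


// A quick sanity check of the sizing math in GlassesFilter, using made-up
// landmark values instead of real face-detection output.
const rightEyePosition = {x: 120, y: 300};
const leftEyePosition = {x: 280, y: 305};

const width = Math.abs(leftEyePosition.x - rightEyePosition.x) + 150; // 160 + 150 = 310
const top = rightEyePosition.y - 100; // 300 - 100 = 200
const left = rightEyePosition.x - 100; // 120 - 100 = 20

console.log({width, top, left}); // {width: 310, top: 200, left: 20}


The fixed offsets (150 and 100) are sized for this particular glasses image, so a different asset would likely need different values.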


Now we just need to add the filter to the Camera component:

    
// components/Camera.js
...
import GlassesFilter from './GlassesFilter';
...
- {this.state.box && <FSLTechFilter {...this.state.box} />}
+ {this.state.box && (
+          <>
+            <FSLTechFilter {...this.state.box} />
+            <GlassesFilter
+              rightEyePosition={this.state.rightEyePosition}
+              leftEyePosition={this.state.leftEyePosition}
+              rollAngle={this.state.box.rollAngle}
+              yawAngle={this.state.box.yawAngle}
+            />
+          </>
+        )}
...
    
  

And that’s it! You can check out the final result in the gif below, and the source code is available here.


Resulting filter

Thank you for reading this tutorial. I hope the face recognition API offered by the React Native Camera library helps you build better experiences for your users. You can now locate landmarks such as the eyes, mouth, ears, and cheeks for every face in the camera frame, which allows you to create AR experiences, small AR games, or new accessibility features.


---
At FullStack Labs, we are consistently asked for ways to speed up time-to-market and improve project maintainability. We pride ourselves on our ability to push the capabilities of these cutting-edge libraries. Interested in learning more about speeding up development time on your next project, or improving an existing codebase? Contact us.

