The Faces Attributes Detection model is useful to determine whether an image contains faces and to get descriptive attributes for each face: gender, age group, face landmarks, presence of sunglasses, etc.
Face Detection has been designed to be efficient and robust, with high precision and recall. It supports different face sizes, arbitrary face orientations, and partially occluded faces, including faces with sunglasses, hats, or sanitary masks.
To help you locate each individual face, the API returns the corners of the bounding box containing that face (x1, y1, x2, y2 values).
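In the example response below, the x1, y1, x2, y2 values are between 0 and 1. Assuming they are fractions of the image width and height, a minimal sketch of converting them to pixel coordinates could look like this (the `box_to_pixels` helper and the image dimensions are hypothetical, for illustration only):

```python
def box_to_pixels(face, width, height):
    """Convert a face's normalized corner coordinates to pixel coordinates."""
    return (
        round(face["x1"] * width),
        round(face["y1"] * height),
        round(face["x2"] * width),
        round(face["y2"] * height),
    )

# Example using the bounding box from the sample response, on a 1000x800 image
face = {"x1": 0.5121, "y1": 0.1879, "x2": 0.6926, "y2": 0.6265}
print(box_to_pixels(face, 1000, 800))  # (512, 150, 693, 501)
```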
For each detected face, the positions of the main face features are returned. Face features include:
- left eye
- right eye
- nose tip
- left mouth corner
- right mouth corner
For each detected face, the Face Attribute Model will return a "gender" field that will help you determine whether a face is male or female, based solely on the characteristics of the face.
Gender is determined solely from the face, so other signals such as clothes or context will not influence the result. Males dressed as females, or females dressed as males, should therefore still be correctly classified.
The API returns a "female" value and a "male" value. Each is between 0 and 1, and the two sum to 1. The larger of the two corresponds to the predicted gender of the face; the closer it is to 1, the higher the API's confidence.
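Since the two values sum to 1, the predicted gender and its confidence can be read off together. A minimal sketch (the `predicted_gender` helper is hypothetical, not part of the API):

```python
def predicted_gender(attributes):
    """Return the predicted gender and the API's confidence for it."""
    female, male = attributes["female"], attributes["male"]
    if female >= male:
        return "female", female
    return "male", male

# Example using the attributes from the sample response
print(predicted_gender({"female": 0.96, "male": 0.04}))  # ('female', 0.96)
```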
For each detected face, the Face Attribute Model will return a "minor" field that will help you determine whether a given face belongs to someone younger or older than 18.
The 18-year threshold corresponds to the legal age of majority (adulthood) in many countries.
The Age Group information is determined solely using the face. Other signs such as clothes or context will not influence the result.
The returned value is between 0 and 1. Faces with a minor value closer to 1 indicate that the person is a minor, while faces with a minor value closer to 0 indicate that the person is an adult.
Determining whether someone is 17 or 19 from a single face can be tricky. The API has therefore been designed to signal low confidence whenever it encounters faces that look visually close to 18: in this case the "minor" value will be close to 0.5.
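One way to handle this in practice is to treat values near 0.5 as low-confidence and route them to manual review. An illustrative sketch (the function name and the 0.15 band are arbitrary choices for this example, not part of the API):

```python
def classify_age_group(minor, uncertain_band=0.15):
    """Classify a face's age group, flagging low-confidence values near 0.5."""
    if abs(minor - 0.5) < uncertain_band:
        return "uncertain"  # visually close to 18: consider manual review
    return "minor" if minor > 0.5 else "adult"

print(classify_age_group(0.01))  # adult
print(classify_age_group(0.55))  # uncertain
print(classify_age_group(0.92))  # minor
```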
For each detected face, the Face Attribute Model will return a "sunglasses" field that will help you determine whether a face is covered with sunglasses.
The returned value is between 0 and 1. Faces with a sunglasses value closer to 1 indicate that the person is wearing sunglasses, while faces with a value closer to 0 indicate that the person is not.
If you haven't already, create an account to get your own API keys.
Let's say you want to moderate the following image:
You can either share a public URL to the image, or upload the raw binary image. Here's how to proceed if you choose to share the image's public URL:
curl -X GET -G 'https://api.sightengine.com/1.0/check.json' \
-d 'models=face-attributes' \
-d 'api_user={api_user}&api_secret={api_secret}' \
--data-urlencode 'url=https://sightengine.com/assets/img/examples/example7.jpg'
# this example uses requests
import requests
import json
params = {
  'url': 'https://sightengine.com/assets/img/examples/example7.jpg',
  'models': 'face-attributes',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
r = requests.get('https://api.sightengine.com/1.0/check.json', params=params)
output = json.loads(r.text)
// this example uses cURL
$params = array(
  'url' => 'https://sightengine.com/assets/img/examples/example7.jpg',
  'models' => 'face-attributes',
  'api_user' => '{api_user}',
  'api_secret' => '{api_secret}',
);
$ch = curl_init('https://api.sightengine.com/1.0/check.json?'.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios
const axios = require('axios');
axios.get('https://api.sightengine.com/1.0/check.json', {
params: {
'url': 'https://sightengine.com/assets/img/examples/example7.jpg',
'models': 'face-attributes',
'api_user': '{api_user}',
'api_secret': '{api_secret}',
}
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
The API will then return a JSON response:
{
"status": "success",
"request": {
"id": "req_0MsK5ptZx713xt5aRmckl",
"timestamp": 1494406445.3718,
"operations": 1
},
"faces": [
{
"x1": 0.5121,
"y1": 0.1879,
"x2": 0.6926,
"y2": 0.6265,
"features": {
"left_eye": {
"x": 0.6438,
"y": 0.3634
},
"right_eye": {
"x": 0.5578,
"y": 0.3714
},
"nose_tip": {
"x": 0.6047,
"y": 0.4801
},
"left_mouth_corner": {
"x": 0.6469,
"y": 0.5305
},
"right_mouth_corner": {
"x": 0.5719,
"y": 0.5332
}
},
"attributes": {
"female": 0.96,
"male": 0.04,
"minor": 0.01,
"sunglasses": 0.01
}
}
],
"media": {
"id": "med_0MsK3A6i2vNxQgHkc11j9",
"uri": "https://sightengine.com/assets/img/examples/example7.jpg"
}
}
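To work with this response, you can iterate over the `faces` array and read each face's attributes. A minimal sketch, assuming `output` is the parsed JSON (trimmed here to the fields used; the 0.5 thresholds are illustrative choices, not part of the API):

```python
# Trimmed copy of the sample response for illustration
output = {
    "status": "success",
    "faces": [
        {
            "x1": 0.5121, "y1": 0.1879, "x2": 0.6926, "y2": 0.6265,
            "attributes": {"female": 0.96, "male": 0.04,
                           "minor": 0.01, "sunglasses": 0.01},
        }
    ],
}

summaries = []
if output["status"] == "success":
    for face in output["faces"]:
        attrs = face["attributes"]
        gender = "female" if attrs["female"] >= attrs["male"] else "male"
        summaries.append({
            "gender": gender,
            "is_minor": attrs["minor"] > 0.5,
            "wears_sunglasses": attrs["sunglasses"] > 0.5,
        })
print(summaries)
```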
See our full list of Image/Video models for details on other filters and checks you can run on your images and videos. You might also want to check our Text models to moderate text-based content: messages, reviews, comments, usernames...