The PixLab Computer Vision team is pleased to announce that a milestone has been reached for the Not Safe For Work API endpoint.
Over the course of the last 12 months, the /nsfw API endpoint has analyzed millions of our users' media files with high accuracy.
For those not familiar with this endpoint, /nsfw lets you detect not-safe-for-work (i.e. nudity & adult) content in a given image or video frame. NSFW is of particular interest when mixed with media processing endpoints such as /blur, /encrypt, or /mogrify to censor images on the fly according to their NSFW score.
A typical blurred image with a high NSFW score should look like the following:
To obtain such a result, two endpoints are actually used:
/nsfw is the analysis endpoint that must be called first. It performs nudity & adult content detection and returns a score between 0 and 1. The closer this value gets to 1, the more likely the picture/frame is NSFW.
/blur is called afterwards only if the NSFW score returned earlier is greater than a certain threshold. In our case, the threshold is set to 0.5.
The Python code below was used to generate the blurred picture programmatically without any human intervention. This can help you automate things such as verifying users' uploads:
import requests
import json

# Target image: change to any link (possibly adult) you want, or switch to POST if you want to upload your image directly; refer to the sample set for more info.
img = 'https://i.redd.it/oetdn9wc13by.jpg'
# Your PixLab key
key = 'Pixlab_Key'

# Censor an image according to its NSFW score
req = requests.get('https://api.pixlab.io/nsfw', params={'img': img, 'key': key})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
elif reply['score'] < 0.5:
    print("No adult content was detected in this picture")
else:
    # Highly NSFW picture
    print("Censoring NSFW picture...")
    # Call blur with the highest possible radius and sigma
    req = requests.get('https://api.pixlab.io/blur', params={'img': img, 'key': key, 'rad': 50, 'sig': 30})
    reply = req.json()
    if reply['status'] != 200:
        print(reply['error'])
    else:
        print("Censored image: " + reply['link'])
Finally, the official endpoint documentation is available at https://pixlab.io/cmd?id=nsfw, and a set of working samples in various programming languages is available on the PixLab samples pages.
The SOD development team has just published a new computer vision article on how to detect vehicle registration plates without heavy machine learning techniques, using only standard image processing routines already implemented in the SOD library.
The article is available here.
Besides outputting the rectangle coordinates of each detected human face, the /facemotion endpoint now lets you accurately extract each face's gender, age, and emotion pattern via its facial shape in a matter of milliseconds, thanks to our newly deployed machine learning models hosted simultaneously on OVH and AWS instances for worldwide availability.
Below is a Python sample that shows how easy it is to predict the age and gender of any human face.
import requests
import json

# Detect all human faces present in a given image and try to guess their age, gender and emotion state via their facial shapes

# Target image: feel free to change to any image holding as many human faces as you want
img = 'http://www.scienceforums.com/uploads/1282315190/gallery_1625_35_9165.jpg'

req = requests.get('http://api.pixlab.io/facemotion', params={
    'img': img,
    'key': 'PixLab_API_Key',
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
    exit()

total = len(reply['faces'])  # Total detected faces
print(str(total) + " faces were detected")

# Extract each face now
for face in reply['faces']:
    cord = face['rectangle']
    print('Face coordinates: width: ' + str(cord['width']) + ' height: ' + str(cord['height']) + ' x: ' + str(cord['left']) + ' y: ' + str(cord['top']))
    # Guess the emotion
    for emotion in face['emotion']:
        if emotion['score'] > 0.5:
            print("Emotion - " + emotion['state'] + ': ' + str(emotion['score']))
    # Grab the age and gender
    print("Age ~: " + str(face['age']))
    print("Gender: " + str(face['gender']))
You can visit the PixLab Github repository for additional code samples in various programming languages including PHP and Java.
The PixLab team is pleased to announce a major enhancement to the /tagimg endpoint.
The image labeling endpoint lets you programmatically generate a description of an image in human-readable language with complete sentences. The description is based on the visual content as reported by our state-of-the-art image labeling algorithm. More than one description can be generated for each image. Descriptions are ordered by their confidence score, and all of them are in English.
The /tagimg endpoint documentation is available here, and below is a working Python code sample:
import requests
import json

# Tag an image based on detected visual content, which means running a CNN on top of it.

# Target image
img = 'https://s-media-cache-ak0.pinimg.com/originals/35/d0/f6/35d0f6ee0e40306c41cfd714c625f78e.jpg'
# Your PixLab key
key = 'My_PixLab_Key'

req = requests.get('https://api.pixlab.io/tagimg', params={'img': img, 'key': key})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
else:
    total = len(reply['tags'])  # Total tags
    print("Total tags: " + str(total))
    for tag in reply['tags']:
        print("Tag: " + tag['name'] + " - Confidence: " + str(tag['confidence']))
You can visit the PixLab Github repository for additional code samples in various programming languages.
Symisc Systems is pleased to release the first major version of the SOD library! SOD is an embedded, modern cross-platform computer vision and machine learning software library that exposes a set of APIs for deep learning and advanced media analysis & processing, including real-time, multi-class object detection and model training on embedded systems with limited computational resources and on IoT devices.
SOD was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in open source as well as commercial products.
Notable SOD features
Built for real world and real-time applications.
State-of-the-art, CPU optimized deep-neural networks including the brand new, exclusive RealNets architecture.
The documentation works both as an API reference and a programming tutorial. It describes the internal structure of the library and guides one in creating applications with a few lines of code. Note that SOD is straightforward to learn, even for a new programmer.
As requested by our users, our /ocr endpoint now supports more languages, including Arabic, Modern Hebrew, Russian & Simplified Chinese.
Bounding box coordinates are now enabled by default. For each request, besides the full text output, you get a bbox array where each entry holds a recognized word and its bounding box (i.e. rectangle) coordinates. Each entry in this array is an instance of the following JSON object:
{
    word: Extracted word,
    x: X coordinate of the top left corner,
    y: Y coordinate of the top left corner,
    w: Width of the rectangle that encloses this word,
    h: Height of the rectangle that encloses this word
}
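As an illustration, here is a minimal, hedged Python sketch that calls /ocr and walks the bbox array described above. The image URL is hypothetical and the name of the full-text field ('output') is an assumption, so check the endpoint documentation for the exact response layout.

import requests

# Hedged sketch: extract text plus per-word bounding boxes via /ocr.
# The image URL is hypothetical; 'output' as the full-text field is an assumption.
req = requests.get('https://api.pixlab.io/ocr', params={
    'img': 'https://example.com/scanned_page.png',
    'key': 'My_Pix_Key'
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
else:
    print("Full text: " + str(reply.get('output')))
    for box in reply['bbox']:
        # Each entry follows the JSON object documented above
        print(box['word'], box['x'], box['y'], box['w'], box['h'])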
As requested by our users, here is the list of endpoints provided by PixLab for all your face detection, recognition, generation & landmark extraction tasks. These include:
/facecompare, which is used for face-to-face comparison; in other words, face recognition (see the sketch after this list).
/facelookup: find a person’s face in a set or group of people.
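To give a feel for the first endpoint, here is a minimal, hedged Python sketch of a /facecompare call. The src and target parameter names and the image URLs are assumptions made for illustration; consult the official endpoint documentation for the exact names.

import requests

# Hedged sketch: compare two faces via /facecompare.
# 'src' and 'target' are assumed parameter names; the URLs are hypothetical.
req = requests.get('https://api.pixlab.io/facecompare', params={
    'src': 'https://example.com/face_one.jpg',
    'target': 'https://example.com/face_two.jpg',
    'key': 'My_Pix_Key'
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
else:
    print(reply)  # inspect the returned comparison/confidence fields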
Beyond that, here are two complete working Python samples to illustrate face detection in practice:
1. Detect all human faces present in a given image or video frame via facedetect and extract each one of them via crop:
import requests
import json

# Target image: feel free to change to any image holding as many human faces as you want
img = 'http://cf.broadsheet.ie/wp-content/uploads/2015/03/jeremy-clarkson_3090507b.jpg'

req = requests.get('https://api.pixlab.io/facedetect', params={
    'img': img,
    'key': 'My_Pix_Key',
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
    exit()

total = len(reply['faces'])  # Total detected faces
print(str(total) + " faces were detected")

# Extract each face via crop now
for face in reply['faces']:
    req = requests.get('https://api.pixlab.io/crop', params={
        'img': img,
        'key': 'My_Pix_Key',
        'width': face['width'],
        'height': face['height'],
        'x': face['left'],
        'y': face['top']
    })
    reply = req.json()
    if reply['status'] != 200:
        print(reply['error'])
    else:
        print("Face #" + str(face['face_id']) + " location: " + reply['link'])
2. Detect all human faces in a given image via facedetect and apply a blur filter to each one of them via mogrify:
import requests
import json

img = 'http://anewscafe.com/wp-content/uploads/2012/05/Brave-Faces-Group-shot.jpg'

# Detect all human faces in a given image via facedetect and blur all of them via mogrify.
req = requests.get('https://api.pixlab.io/facedetect', params={
    'img': img,
    'key': 'Pix_Key',
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
    exit()

total = len(reply['faces'])  # Total detected faces
print(str(total) + " faces were detected")
if total < 1:
    # No faces were detected, exit immediately
    exit()

# Pass the detected face coordinates untouched to mogrify
coordinates = reply['faces']
# Call mogrify & blur the faces
req = requests.post('https://api.pixlab.io/mogrify', headers={'Content-Type': 'application/json'}, data=json.dumps({
    'img': img,
    'key': 'PIXLAB_API_KEY',
    'cord': coordinates  # The field of interest
}))
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
else:
    print("Blurred faces URL: " + reply['link'])
Further code samples are available on the PixLab Github repository, or refer to the PixLab endpoints list for the official documentation.
The PixLab engineering team is pleased to announce the immediate availability of the Real-Time ASCII Art C/C++ Rendering Library.
ASCII Art is a single-file C/C++ library that lets you transform an input image or video frame into printable ASCII characters in real time using a single decision tree. Real-time performance is achieved via pixel intensity comparisons performed inside the internal nodes of the tree.
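For intuition only, here is a minimal Python sketch of the naive intensity-ramp approach to ASCII rendering. This is not the library's decision-tree algorithm, just the basic idea of mapping pixel intensity to printable characters (assumes Pillow is installed).

import sys
from PIL import Image

# Naive intensity-ramp ASCII rendering, for intuition only.
# This is NOT the library's decision-tree algorithm.
RAMP = "@%#*+=-:. "  # dark -> light

def to_ascii(path, width=80):
    img = Image.open(path).convert('L')  # grayscale
    # Halve the height since characters are taller than they are wide
    height = max(1, int(img.height * width / img.width / 2))
    img = img.resize((width, height))
    rows = []
    for y in range(height):
        rows.append(''.join(RAMP[img.getpixel((x, y)) * (len(RAMP) - 1) // 255]
                            for x in range(width)))
    return '\n'.join(rows)

if __name__ == '__main__':
    print(to_ascii(sys.argv[1]))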
For a general overview on how the algorithm works, please visit the demonstration page at https://art.pixlab.io.
MEMEs are a de facto internet standard nowadays. Dozens, if not hundreds, of the daily top posts on Imgur or Reddit are probably MEMEs: that is, a pop culture image with sarcastic text displayed on the top, bottom, or center of the image. A lot of web tools let you create memes graphically, but few actually propose an API for generating memes from your favorite programming language.
In this blog post, we'll try to generate a few MEMEs programmatically using Python, PHP, or whatever language supports HTTP requests, with the help of the PixLab API. But before that, let's dive a little into the tools needed to build a MEME generator.
Crafting a MEME API
Building a RESTful API capable of generating memes on request is not that difficult. The most important part is finding a good image processing library that supports the annotate operation (i.e. text drawing). The most capable open source libraries are the ImageMagick suite and its popular fork GraphicsMagick. Both provide advanced annotate & draw capabilities such as selecting the target font, its size, the text position, the stroke width & height, and beyond. Both should be a good fit and up to the task, and there are some good tutorials around to follow if you want to build your own RESTful API.
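As a taste of what the annotate operation looks like, here is a hedged sketch that drives the ImageMagick convert tool from Python to caption an image meme-style. The input/output file names are hypothetical, and convert must be installed and on your PATH.

import subprocess

# Hedged sketch: meme-style captioning via the ImageMagick 'convert' CLI.
# File names are hypothetical; requires ImageMagick to be installed.
subprocess.run([
    'convert', 'input.jpg',
    '-gravity', 'North',              # anchor at the top
    '-pointsize', '48',
    '-fill', 'white', '-stroke', 'black', '-strokewidth', '2',
    '-annotate', '+0+20', 'TOP TEXT',
    '-gravity', 'South',              # anchor at the bottom
    '-annotate', '+0+20', 'BOTTOM TEXT',
    'meme.jpg'
], check=True)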
In our case, we'll stick with the PixLab API, because it ships with robust API endpoints such as image compositing, facial landmark extraction, and dynamic image creation, which prove of great help when working with complex stuff such as cloning Snapchat filters or playing with GIFs.
So, without further ado, let's start programming some memes.
Draw some funny text on the top & bottom of the target image to obtain something like this:
Using the following code:
import requests
import json

# Draw some funny text on the top & bottom of the famous Cool Cat, a public domain image.
# https://pixlab.io/cmd?id=drawtext is the target command
req = requests.get('https://api.pixlab.io/drawtext', params={
    'img': 'https://pixlab.io/images/jdr.jpg',
    'top': 'someone bumps the table',
    'bottom': 'right before you win',
    'cap': True,  # Capitalize text
    'strokecolor': 'black',
    'key': 'Pix_Key',
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
else:
    print("Meme: " + reply['link'])
/*
 * PixLab PHP Client, which is just a single class PHP file without any dependency that you can get from Github:
 * https://github.com/symisc/pixlab-php
 */
require_once "pixlab.php";

# Draw some funny text on the top & bottom of the famous Cool Cat, a public domain photo.
# https://pixlab.io/cmd?id=drawtext is the target command

/* Target image */
$img = 'https://pixlab.io/images/jdr.jpg';
# Your PixLab key
$key = 'My_Pix_Key';
/* Process */
$pix = new Pixlab($key);
if( !$pix->get('drawtext',array(
    'img' => $img,
    'top' => 'someone bumps the table',
    'bottom' => 'right before you win',
    'cap' => true, # Capitalize text
    'strokecolor' => 'black'
)) ){
    echo $pix->get_error_message()."\n";
    die;
}
echo "Pic Link: ".$pix->json->link."\n";
If this is the first time you've seen the PixLab API in action, you are invited to take a look at the excellent introduction to the API in 5 minutes or less.
Only one command (API endpoint) is actually needed in order to generate such a meme:
drawtext is the API endpoint used for text annotation. It expects the text to be displayed on the top, center, or bottom of the target image, and supports a bunch of other options such as selecting the text font, its size & colors, whether to capitalize the text or not, the stroke width & opacity, and so on. You can find all the options the drawtext command takes here.
There is a more flexible command named drawtextat that lets you draw text on any desired region of the input image by specifying the (X, Y) coordinates of where the text should be displayed. Here is a usage example:
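(A minimal, hedged sketch follows; the text, x, and y parameter names are assumptions made for illustration, so refer to the official drawtextat documentation for the exact names.)

import requests

# Hedged sketch: draw text at explicit (X, Y) coordinates via drawtextat.
# The 'text', 'x' and 'y' parameter names are assumptions for illustration.
req = requests.get('https://api.pixlab.io/drawtextat', params={
    'img': 'https://pixlab.io/images/jdr.jpg',
    'key': 'Pix_Key',
    'text': 'hello from (x, y)',
    'x': 32,
    'y': 64
})
reply = req.json()
print(reply['link'] if reply['status'] == 200 else reply['error'])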
Dynamic MEME
This example is similar to the previous one, except that the image we'll draw on is generated dynamically. That is, we'll request that the PixLab API server create a new image for us with a specified height, width, background color, and output format, and finally we'll draw our text at the center of the generated image to obtain something like this:
Using this code:
import requests
import json

# Dynamically create a 300x300 PNG image with a yellow background and draw some text on the center of it later.
# Refer to https://pixlab.io/cmd?id=newimage & https://pixlab.io/cmd?id=drawtext for additional information.
req = requests.get('https://api.pixlab.io/newimage', params={
    'key': 'My_Pix_Key',
    'width': 300,
    'height': 300,
    'color': 'yellow'
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
    exit()

# Link to the new image
img = reply['link']

# Draw some text now on the new image
req = requests.get('https://api.pixlab.io/drawtext', params={
    'img': img,  # The newly created image
    'key': 'My_Pix_Key',
    'cap': True,  # Uppercase
    'color': 'black',  # Text color
    'font': 'wolf',
    'center': 'bonjour'
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
else:
    print("Pic location: " + reply['link'])
/*
 * PixLab PHP Client, which is just a single class PHP file without any dependency that you can get from Github:
 * https://github.com/symisc/pixlab-php
 */
require_once "pixlab.php";

# Dynamically create a 300x300 PNG image with a yellow background and draw some text on top of it later.
# Refer to https://pixlab.io/cmd?id=newimage & https://pixlab.io/cmd?id=drawtext for additional information.

# Your PixLab key
$key = 'My_Pix_Key';
/* Process */
$pix = new Pixlab($key);
echo "Creating new 300x300 PNG image...\n";
/* Create the image first */
if( !$pix->get('newimage',[
    "width" => 300,
    "height" => 300,
    "color" => "yellow"
]) ){
    echo $pix->get_error_message()."\n";
    die;
}
# Link to the new image
$img = $pix->json->link;
echo "Drawing some text now...\n";
if( !$pix->get('drawtext',[
    'img' => $img, # The newly created image
    "cap" => true, # Uppercase
    "color" => "black", # Text color
    "font" => "wolf",
    "center" => "bonjour"
]) ){
    echo $pix->get_error_message()."\n";
    die;
}
echo "New Pic Link: ".$pix->json->link."\n";
Here, we request a new image using the newimage API endpoint, which exports to PNG by default, though you can change the output format on request. We set the image width and height to 300x300 and the background color to yellow.
Note that if either the height or the width parameter is missing (but not both), the available length is applied to the missing side, and if you want a transparent image, set the color parameter to none.
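For example, here is a hedged Python sketch exercising both behaviors just described; it assumes the parameters behave exactly as stated above, so double-check against the newimage documentation.

import requests

# Hedged sketch: omit 'height' so the available length is applied to the
# missing side, and set color to 'none' for a transparent background.
req = requests.get('https://api.pixlab.io/newimage', params={
    'key': 'My_Pix_Key',
    'width': 300,     # 'height' deliberately omitted
    'color': 'none'   # transparent image
})
reply = req.json()
print(reply['link'] if reply['status'] == 200 else reply['error'])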
Finally, we draw our text at the center of the newly created image using the wolf font, black color, and a 35 px font size. Of course, one could also draw lines, a rectangle (to surround faces, for example), merge with other images, and so forth...
Mimic Snapchat Filters
This last example, although relatively unrelated to our subject here, shows how to mimic the famous Snapchat filters programmatically. So, given an input image:
and this eye mask:
output something like this:
Well, apart from the MEME we draw at the bottom of the image, achieving that effect involves lots of computer vision algorithms such as face detection, facial landmark extraction, pose estimation, and so on. You are invited to take a look at our previous blog post on how such a filter is produced and what techniques are involved: Mimic Snapchat Filters Programmatically.
Conclusion
Generating MEMEs is quite easy provided a good image manipulation library. We saw that ImageMagick and GraphicsMagick, with their PHP/Node.js bindings, can be used to create your own MEME RESTful API.
Our simple yet elegant solution is to rely on the PixLab API. Not only is generating MEMEs straightforward, but you'll also be able to perform advanced analysis & processing operations on your input media, such as face analysis and NSFW content detection. You are invited to take a look at the Github samples page for dozens of other interesting samples in action, such as censoring images based on their NSFW score, blurring human faces, making GIFs, etc. All of them are documented in the PixLab API endpoints reference and the 5-minute intro to the API.
Finally, if you have any suggestions or criticism, please leave a comment below.