OCR performance improved

As requested by our users, our /OCR endpoint now supports additional languages, including Arabic, Modern Hebrew, Russian, and Simplified Chinese.

Bounding box coordinates are now enabled by default. For each request, besides the full text output, you get a bbox array where each entry holds a single extracted word and its bounding box (i.e. rectangle) coordinates. Each entry of this array is a JSON object of the following form:

{
    word: Extracted word,
    x: X coordinate of the top left corner,
    y: Y coordinate of the top left corner,
    w: Width of the rectangle enclosing this word,
    h: Height of the rectangle enclosing this word
}
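
Below is a minimal sketch of consuming that array. It assumes the /OCR endpoint accepts the same img and key parameters used by the other samples in this post and reports errors via the same status/error fields; consult the documentation and Python sample linked below for the authoritative request and response format.

import requests

# Hypothetical target image containing the text to recognize
img = 'https://example.com/scanned_document.jpg'

req = requests.get('https://api.pixlab.io/ocr', params={
    'img': img,
    'key': 'PIXLAB_API_KEY'
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
    exit()

# Walk the documented bbox array: one entry per extracted word
for entry in reply['bbox']:
    print("Word '{0}' at x={1}, y={2}, w={3}, h={4}".format(
        entry['word'], entry['x'], entry['y'], entry['w'], entry['h']))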

The documentation has been updated and is available at https://pixlab.io/cmd?id=ocr, and a Python sample is available on GitHub at https://github.com/symisc/pixlab/blob/master/python/ocr.py.

With that in hand, you can further tune your analysis phase, for example by extracting each word via /crop and performing another pass if desired, as sketched below.
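
As a rough illustration of that second pass, and reusing the hypothetical reply from the sketch above, each word could be handed to /crop with its bbox coordinates (the width/height/x/y parameters and the returned link field mirror the crop usage in the face sample further down):

# Continuing from the previous sketch: crop each recognized word out of the source image
for entry in reply['bbox']:
    crop_reply = requests.get('https://api.pixlab.io/crop', params={
        'img': img,
        'key': 'PIXLAB_API_KEY',
        'x': entry['x'],
        'y': entry['y'],
        'width': entry['w'],
        'height': entry['h']
    }).json()
    if crop_reply['status'] != 200:
        print(crop_reply['error'])
    else:
        # The cropped word image can now be fed to another analysis pass
        print("Word '" + entry['word'] + "' cropped at: " + crop_reply['link'])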

List of face detection & recognition endpoints

As requested by our users, PixLab provides a dedicated set of endpoints for all your face detection, recognition, generation & landmarks extraction tasks. The facedetect, crop and mogrify endpoints used in the samples below are part of this set.

Here are two working Python samples to illustrate this:

1. Detect all human faces present in a given image or video frame via facedetect and extract each one of them via crop:

import requests
import json

# Target image: Feel free to change to whatever image holding as many human faces as you want
img = 'http://cf.broadsheet.ie/wp-content/uploads/2015/03/jeremy-clarkson_3090507b.jpg'

req = requests.get('https://api.pixlab.io/facedetect',params={
    'img': img,
    'key':'My_Pix_Key',
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
    exit()

total = len(reply['faces']) # Total detected faces
print(str(total)+" faces were detected")

# Extract each face via crop now
for face in reply['faces']:
    crop_req = requests.get('https://api.pixlab.io/crop',params={
        'img':img,
        'key':'My_Pix_Key',
        'width': face['width'],
        'height': face['height'],
        'x': face['left'],
        'y': face['top']
    })
    crop_reply = crop_req.json()
    if crop_reply['status'] != 200:
        print(crop_reply['error'])
    else:
        print("Face #"+str(face['face_id'])+" location: "+ crop_reply['link'])

2. Detect all human faces in a given image via facedetect and apply a blur filter to each one of them via mogrify:

import requests
import json

img = 'http://anewscafe.com/wp-content/uploads/2012/05/Brave-Faces-Group-shot.jpg' 

# Detect all human faces in a given image via facedetect and blur all of them via mogrify.
req = requests.get('https://api.pixlab.io/facedetect',params={
    'img': img,
    'key':'PIXLAB_API_KEY',
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
    exit()

total = len(reply['faces']) # Total detected faces
print(str(total)+" faces were detected")
if total < 1:
    # No faces were detected, exit immediately
    exit()
# Pass the detected faces coordinates untouched to mogrify 
coordinates = reply['faces']
# Call mogrify & blur the faces
req = requests.post('https://api.pixlab.io/mogrify',headers={'Content-Type':'application/json'},data=json.dumps({
    'img': img,
    'key':'PIXLAB_API_KEY',
    'cord': coordinates #The field of interest
}))
reply = req.json()
if reply['status'] != 200:
    print (reply['error'])
else:
    print ("Blurred faces URL: "+ reply['link'])

Further code samples are available in the PixLab GitHub repository; for the official documentation, refer to the PixLab Endpoints list.