SOD Embedded 1.1.7 Released

Symisc Systems is pleased to announce the first major release of the SOD library! SOD is an embedded, modern, cross-platform computer vision and machine learning library that exposes a set of APIs for deep learning and advanced media analysis & processing, including real-time, multi-class object detection and model training on IoT devices and embedded systems with limited computational resources.

SOD was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in open-source as well as commercial products.

Notable SOD features

  • Built for real world and real-time applications.
  • State-of-the-art, CPU optimized deep-neural networks including the brand new, exclusive RealNets architecture.
  • Patent-free, advanced computer vision algorithms.
  • Supports major image formats.
  • Simple, clean and easy-to-use API.
  • Brings deep learning to embedded systems and IoT devices with limited computational resources.
  • Easily interoperable with OpenCV or any other proprietary API.
  • Pre-trained models available for most architectures.
  • CPU capable, RealNets model training.
  • Production ready, cross-platform, high quality source code.
  • SOD is dependency-free, written in C, and compiles and runs unmodified on virtually any platform & architecture with a decent C compiler.
  • Amalgamated - All SOD source files are combined into a single C file (sod.c) for easy deployment.
  • Open-source, actively developed & maintained product.
  • Developer friendly support channels.

Programming Interfaces

The documentation works both as an API reference and as a programming tutorial. It describes the internal structure of the library and guides you through creating applications with just a few lines of code. Note that SOD is straightforward to learn, even for a new programmer.

SOD in 5 minutes or less

A quick introduction to programming with the SOD Embedded C/C++ API with real-world code samples implemented in C.


C/C++ API Reference Guide

This document describes each API function in detail. It is the reference document you should rely on.


SOD Github Repository

The official Github repository.


C/C++ Code Samples

Real world code samples on how to embed, load models and start experimenting with SOD.

OCR performance improved

As requested by our users, our /ocr endpoint now supports additional languages, including Arabic, Modern Hebrew, Russian & Simplified Chinese.

Bounding box coordinates are now enabled by default. For each request, besides the full text output, you get a bbox array where each entry holds a target word and its bounding box (i.e. rectangle) coordinates. Each entry in this array is an instance of the following JSON object:


{
    "word": "The extracted word",
    "x": "X coordinate of the top-left corner",
    "y": "Y coordinate of the top-left corner",
    "w": "Width of the rectangle enclosing this word",
    "h": "Height of the rectangle enclosing this word"
}

The documentation is updated and available to consult at https://pixlab.io/cmd?id=ocr and a Python sample is provided on Github at https://github.com/symisc/pixlab/blob/master/python/ocr.py.

With that in hand, you can further tune your analysis phase, for example by extracting each word via /crop and performing another pass if desired.
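As a rough sketch of that idea (the bbox field names follow the JSON object above, the /crop parameter names follow the face samples below, and the image URL and `My_Pix_Key` placeholder are assumptions, not real values):

```python
import requests

def bbox_to_crop_params(entry, img, key):
    """Map one bbox entry from the /ocr reply to the parameters
    expected by the /crop endpoint (same width/height/x/y names
    used by the facedetect samples below)."""
    return {
        'img': img,
        'key': key,
        'x': entry['x'],
        'y': entry['y'],
        'width': entry['w'],
        'height': entry['h'],
    }

def extract_words(img, key):
    """Run /ocr on the target image, then crop each detected word
    via /crop. Returns a list of (word, cropped-image link) pairs."""
    reply = requests.get('https://api.pixlab.io/ocr',
                         params={'img': img, 'key': key}).json()
    if reply['status'] != 200:
        raise RuntimeError(reply['error'])
    out = []
    for entry in reply['bbox']:
        crop = requests.get('https://api.pixlab.io/crop',
                            params=bbox_to_crop_params(entry, img, key)).json()
        if crop['status'] == 200:
            out.append((entry['word'], crop['link']))
    return out
```

Calling `extract_words('https://example.com/scanned-document.jpg', 'My_Pix_Key')` would then hand you one cropped image per recognized word, ready for a second analysis pass.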

List of face detection & recognition endpoints

As requested by our users, PixLab provides a set of endpoints covering all your facial detection, recognition, generation & landmark extraction tasks.

Here are two working Python code samples to illustrate this:

1. Detect all human faces present in a given image or video frame via facedetect and extract each one of them via crop:

import requests

# Target image: feel free to change to whatever image holding as many human faces as you want
img = 'http://cf.broadsheet.ie/wp-content/uploads/2015/03/jeremy-clarkson_3090507b.jpg'

req = requests.get('https://api.pixlab.io/facedetect', params={
    'img': img,
    'key': 'My_Pix_Key',
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
    exit()

total = len(reply['faces']) # Total detected faces
print(str(total) + " faces were detected")

# Extract each face via crop now
for face in reply['faces']:
    req = requests.get('https://api.pixlab.io/crop', params={
        'img': img,
        'key': 'My_Pix_Key',
        'width': face['width'],
        'height': face['height'],
        'x': face['left'],
        'y': face['top']
    })
    reply = req.json()
    if reply['status'] != 200:
        print(reply['error'])
    else:
        print("Face #" + str(face['face_id']) + " location: " + reply['link'])
2. Detect all human faces in a given image via facedetect and censor all of them via mogrify:

import requests
import json

img = 'http://anewscafe.com/wp-content/uploads/2012/05/Brave-Faces-Group-shot.jpg'

# Detect all human faces in the given image via facedetect, then censor them via mogrify.
req = requests.get('https://api.pixlab.io/facedetect', params={
    'img': img,
    'key': 'Pix_Key',
})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
    exit()

total = len(reply['faces']) # Total detected faces
print(str(total) + " faces were detected")
if total < 1:
    # No faces were detected, exit immediately
    exit()
# Pass the detected face coordinates untouched to mogrify
coordinates = reply['faces']
# Call mogrify & proceed to the censoring
req = requests.post('https://api.pixlab.io/mogrify', headers={'Content-Type': 'application/json'}, data=json.dumps({
    'img': img,
    'key': 'Pix_Key',
    'cord': coordinates # The field of interest
}))
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
else:
    print("Censored pic location: " + reply['link'])

You can take a look at the PixLab Github repository for additional samples on working with faces, or refer to the PixLab endpoints list for the official documentation.