SandLogic’s CORE now supports Intel’s OpenVINO

SandLogic’s CORE (Code_Once_Run_Everywhere) tool now includes support for Intel’s OpenVINO toolkit on Intel x86 processors. With CORE, users can convert their favourite neural nets from any supported framework (TF 1.x, TF 2.x, PyTorch, ONNX, TFLite) and optimize them for the x86 device of their choice.

Accelerate the porting and optimization of deep learning models on Edge devices

CORE is powered by SandLogic’s proprietary automated low-code/no-code conversion and optimization engine. This engine utilizes Intel’s OpenVINO toolkit to transform a given trained model, quantizing it and speeding up its inference while preserving the model’s baseline accuracy. Developers can quickly port their pre-trained models to various x86-based target hardware devices.
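CORE’s engine itself is proprietary, but the OpenVINO step it automates can be sketched in a few lines. The snippet below converts a trained ONNX model to OpenVINO IR and saves it for deployment; the model path is a placeholder, and whether CORE calls this exact API internally is an assumption.

```python
# Minimal sketch of the OpenVINO conversion step CORE automates.
# "model.onnx" is an illustrative placeholder path.
from openvino.tools.mo import convert_model  # Model Optimizer Python API
from openvino.runtime import serialize

# Convert the framework model (ONNX here) to OpenVINO's in-memory IR.
ov_model = convert_model("model.onnx")

# Persist the IR as .xml/.bin files for later deployment on x86 targets.
serialize(ov_model, "model.xml")
```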

Why CORE

The environment and setup differ for every AI development framework, and each depends on specific versions of packages and modules being compatible before it works.

Solution creators and integrators spend considerable time converting open-source or custom models to the target of their choice.

Every framework and AI chip manufacturer publishes a model zoo; however, its models are in a specific format and are meant for specific GPUs, NPUs, AI chips, or other specific hardware.

Conversions of pretrained neural nets often end in errors related to operators or layers that are unsupported by the targeted framework or format of choice.

Pretrained neural nets in any format

  • TensorFlow 1.x, 2.x
  • TFLite
  • PyTorch
  • ONNX
  • Many more

Single or batch conversion

Handles multiple formats in a batch
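CORE’s batch interface isn’t described here, so the following is only a conceptual sketch: a loop that feeds mixed-format models (file names hypothetical) through OpenVINO’s converter in one pass.

```python
# Hypothetical sketch of a batch pass over mixed-format models; the file
# list is illustrative. Recent OpenVINO releases accept ONNX, TensorFlow
# SavedModel, and TFLite inputs directly and infer the format themselves.
from pathlib import Path
from openvino.tools.mo import convert_model
from openvino.runtime import serialize

models = ["detector.onnx", "classifier_saved_model", "segmenter.tflite"]

for path in models:
    ov_model = convert_model(path)          # source format inferred per model
    out_xml = Path(path).stem + ".xml"
    serialize(ov_model, out_xml)            # writes the .xml/.bin IR pair
    print(f"converted {path} -> {out_xml}")
```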

Pretrained neural nets in the requested output format

  • Intel OpenVINO IR
  • ONNX
  • TensorFlow 1.x, 2.x
  • TFLite
  • PyTorch

Hardware specific

  • Intel x86 devices
  • NVIDIA TensorRT
  • Xilinx Vitis AI
  • NVIDIA GPUs

BENEFITS OF COLLABORATION

1. Code only once, and run your neural network on multiple target types

2. Forget about frameworks, versions, compatibility issues, and environments

3. Log messages detail the conversion, any issues, and workarounds applied by the tool

4. The tool can also recommend alternatives in case the requested conversion is not supported

5. Supports single or batch conversions

6. Get neural net support on multiple hardware targets, along with inference stubs

7. Get pre-trained models from the model zoo of any AI framework (TensorFlow, PyTorch, Caffe, etc.) and have them converted to the format of your choice, for the device of your choice

8. Along with conversion, get the pre-trained model optimized and quantized (see the sketch below)
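The announcement doesn’t say which quantization path CORE uses; one common route in the OpenVINO ecosystem is post-training INT8 quantization with NNCF, sketched below under that assumption. The calibration data here is random and purely illustrative; real use needs representative samples.

```python
# Hedged sketch of post-training INT8 quantization via NNCF, a common
# OpenVINO route; whether CORE uses NNCF internally is an assumption.
import numpy as np
import nncf
from openvino.runtime import Core, serialize

core = Core()
model = core.read_model("model.xml")  # FP32 IR from the conversion step

# Placeholder calibration data: real use needs a few hundred
# representative inputs, not random tensors.
calibration_items = [
    np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)
]
calibration_dataset = nncf.Dataset(calibration_items, lambda item: item)

# NNCF inserts INT8 operations while trying to keep accuracy close to
# the FP32 baseline.
quantized_model = nncf.quantize(model, calibration_dataset)
serialize(quantized_model, "model_int8.xml")
```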

The objective of the collaboration is to give AI solution creators, integrators, and Edge AI developers the CORE tool for converting and optimizing neural nets, and for offloading AI inference pipelines to Intel x86 processor-based edge devices. Users can deploy Intel OpenVINO optimized models easily without worrying about the “how” and “what” of installation.
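As a rough illustration of that deployment step, loading an optimized IR and running it on an x86 CPU with the OpenVINO runtime takes only a few lines (the model path and input shape are placeholders):

```python
# Sketch of running an optimized IR on an x86 edge device; the model
# path and input shape are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model("model_int8.xml", device_name="CPU")

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # e.g. a preprocessed image
output = compiled([frame])[compiled.output(0)]
print(output.shape)
```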

Check out our demo video of the CORE tool with Intel’s OpenVINO –