Develop, train, and deploy your AI solutions quickly with performance- and productivity-optimized tools from Intel.
Labs for Intel® Certified Developer Program — MLOps Professional
- Android Neural Networks HAL with OpenVINO
- OpenCL and L0 Compute Drivers, `ocloc` compiler
- Intel® Graphics Compiler
- Intel® oneAPI DPC++/C++ Compiler
- Intel® Graphics Drivers and environment setup for OpenVINO
- Intel® Edge AI Performance Evaluation Toolkit
- For PyTorch
- For TensorFlow
- For Triton
- For Chainer
- For DeepSpeed
- For OpenXLA
- For Horovod
- For MLIR
- For scikit-learn
Based on OpenVINO
- For Blender
- For Audacity
- For GIMP
- For OBS Project
- For Rust
- XPU Operators for PyTorch
- Intel® Xe Super Sampling library
- Optimization library for LLMs
- Intel® Neural Compressor (library for model compression)
- Neural Speed (library for efficient LLM inference)
- Robot Operating System (ROS) framework for OpenVINO-based inference
- Data Flow Facilitator for Machine Learning
- Intel® End-to-End AI Optimization Kit
- AutoRound - Advanced Weight-Only Quantization Algorithm for LLMs
- NN-Based Cost Model for NPU Devices
- $\rho$-Diffusion (a diffusion-based density estimation model for computational physics)
- Text-Visual Prompting Model
- Intel® solution for RecSys challenge 2023
- For AWS
- For Microsoft Azure
- For Google Cloud Platform (GCP)
- For Terraform (https://www.terraform.io/)
- For Databricks on AWS
- For AWS SageMaker
- For Virtual Machines on AWS
- For Azure Data Explorer
- For Databricks on Microsoft Azure
- For Databricks cluster
- For Virtual Machines on Google Cloud Platform (GCP)
For BigDL
- BigDL Distributed Training and Inference Workflow: https://github.com/intel/BigDL-Distributed-Training-and-Inference-Workflow
- BigDL Privacy Preserving Machine Learning Toolkit: https://github.com/intel/BigDL-Privacy-Preserving-Machine-Learning-Toolkit
- BigDL Recommender System Toolkit: https://github.com/intel/BigDL-Recommender-System-Toolkit
- BigDL Time Series Toolkit: https://github.com/intel/BigDL-Time-Series-Toolkit
- Cloud Client AI Service Framework: https://github.com/intel/cloud-client-ai-service-framework
- Cloud Native AI Pipeline: https://github.com/intel/cloud-native-ai-pipeline
- Credit Card Fraud Detection: https://github.com/intel/credit-card-fraud-detection