
Overview

You can deploy your TensorFlow and PyTorch models using Onepanel's Inference APIs. These APIs can scale to and from zero on both CPUs and GPUs, and can be created through the Web UI, a Workflow Task, or the Python SDK.

important

Onepanel's Inference APIs are based on KFServing and are fully compatible with it.
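
Because the APIs are KFServing compatible, a deployed model can be queried with any HTTP client using KFServing's v1 prediction protocol. The sketch below is a minimal example; the endpoint URL and model name are hypothetical placeholders, so substitute the values shown for your own deployed Inference API (covered in the pages that follow).

```python
import requests

# Hypothetical values for illustration; use the endpoint and model name
# shown for your deployed Inference API.
MODEL_NAME = "my-model"
ENDPOINT = f"https://inference.example.onepanel.io/v1/models/{MODEL_NAME}:predict"

# KFServing v1 prediction protocol: a JSON body with an "instances" list,
# one entry per input example.
payload = {"instances": [[6.8, 2.8, 4.8, 1.4]]}

response = requests.post(ENDPOINT, json=payload, timeout=30)
response.raise_for_status()

# The response is a JSON object of the form {"predictions": [...]}.
print(response.json()["predictions"])
```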
