MicroK8s on remote machines
important
We provide this guide as a reference only. Due to the complexity of different installations, we can only provide open source support for clusters running Ubuntu 20.04 or higher as the host OS on a local cluster.
Please visit our website and contact us for other bare metal installation options.
You can set up Onepanel on a remote computer that you can reach via SSH and then access it from your local machine.
This can be a VM in the cloud, or a Multipass VM running locally. In either case, it has to be running Ubuntu 20.04 or higher.
MicroK8s installation and setup
Set up your VM
- Cloud VM
- Multipass
Set up your VM according to your cloud provider's instructions.
important
Onepanel requires at least 40GB of hard disk space. If your VM doesn't have that much, you'll have to mount an external disk. In that case, make sure to complete the "(Optional) Mount external disks" step below.
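To check how much disk space your VM has available, you can run, for example:
df -h /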
Install MicroK8s
note
All further instructions are run on your remote computer/VM unless otherwise indicated.
sudo snap install microk8s --channel=1.19/stable --classic
Update your permissions
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
Log out and back in for the changes to take effect
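If you'd rather not log out, re-entering your session also picks up the new group membership; one way to do this:
# Start a new login shell so the microk8s group takes effect
su - $USER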
Wait for microk8s to be ready
microk8s status --wait-ready
Update API server config
sudo nano /var/snap/microk8s/current/args/kube-apiserver
Add to the top:
--service-account-signing-key-file=${SNAP_DATA}/certs/serviceaccount.key
--service-account-key-file=${SNAP_DATA}/certs/serviceaccount.key
--service-account-issuer=api
--service-account-api-audiences=api,nats
Restart microk8s for the changes to take effect
microk8s stop && microk8s start && microk8s status --wait-ready
Enable microk8s addons
sudo microk8s enable storage dns rbac
microk8s status --wait-ready
Configure DNS
- Cloud VM
- Multipass
i. Edit the kubelet arguments to point to resolv.conf
sudo nano /var/snap/microk8s/current/args/kubelet
Add to the top:
--resolv-conf=/run/systemd/resolve/resolv.conf
ii. Edit the coredns configmap so we point to the resolv.conf file
microk8s kubectl edit cm coredns -n kube-system
Set the forward section to:
forward . /etc/resolv.conf 8.8.8.8 8.8.4.4
iii. Restart microk8s
microk8s stop && microk8s start && microk8s status --wait-ready
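As an optional sanity check that cluster DNS is working, you can run a one-off lookup from a pod:
# Should resolve the kubernetes service through CoreDNS
microk8s kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default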
(Optional) Mount external disks
If you are using a VM in the cloud, you need at least 40GB of hard disk space.
Mount your disk if you haven't already. We'll assume your disk is mounted at /data
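If the disk isn't mounted yet, here is a minimal sketch, assuming the extra disk shows up as /dev/sdb (check lsblk for the actual device name):
# List block devices to find the unmounted disk
lsblk
# WARNING: mkfs erases the disk; only run it on the empty data disk
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /data
sudo mount /dev/sdb /data
# Optionally persist the mount across reboots
echo '/dev/sdb /data ext4 defaults 0 0' | sudo tee -a /etc/fstab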
We need to tell microk8s to use the mounted disk so we have more storage space.
Create directories for containerd on the mounted disk, edit the containerd arguments to point to them, and then restart microk8s. One way to do this is sketched below.
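A minimal sketch of these steps, assuming the default MicroK8s containerd arguments file and example directories under /data; the exact directories and flags in your setup may differ:
# Create example directories for containerd data on the mounted disk
sudo mkdir -p /data/containerd/root /data/containerd/state
# Edit the containerd arguments
sudo nano /var/snap/microk8s/current/args/containerd
# In that file, point containerd's root and state at the new directories, e.g.:
#   --root /data/containerd/root
#   --state /data/containerd/state
# Restart microk8s for the changes to take effect
microk8s stop && microk8s start && microk8s status --wait-ready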
Install Onepanel
Install
curl -sLO https://github.com/onepanelio/onepanel/releases/latest/download/opctl-linux-amd64
chmod +x opctl-linux-amd64
sudo mv ./opctl-linux-amd64 /usr/local/bin/opctl
Initialize Onepanel
opctl init --provider microk8s \
  --enable-metallb \
  --artifact-repository-provider abs
note
I used Azure Blob Storage (abs) as the artifact-repository-provider above, but you can use s3, abs, or gcs
Populate params.yaml by following the instructions in the template, and referring to the configuration file sections for more detailed information. Here's a mostly filled out params.yaml for a quickstart. You'll need to fill out artifactRepository:
application:
  defaultNamespace: example
  domain: onepanel.test
  fqdn: app.onepanel.test
  insecure: true
  nodePool:
    label: node.kubernetes.io/instance-type
    options:
      - name: 'Local machine'
        value: 'local'
  provider: microk8s
# You need to fill this part out according to your artifact repository provider
artifactRepository:
  # FILL ME OUT
certManager:
metalLb:
  addresses:
    - 192.168.99.0/32
Deploy onepanel
microk8s config > kubeconfig
KUBECONFIG=./kubeconfig opctl apply
Label your nodes
To allow Workspaces to run on your machine(s), you need to label them.
First, get the names of your nodes by running:
microk8s kubectl get nodes
You will get results similar to below:
NAME     STATUS   ROLES    AGE   VERSION
sample   Ready    <none>   11m   v1.19.8-34+811e9feeade1d3
Then, for each node name, add the label from application.nodePool.label in your params.yaml. Above we used:
nodePool:
  label: node.kubernetes.io/instance-type
  options:
    - name: 'Local machine'
      value: local
and the node above is called sample, so you can label it as follows:
microk8s kubectl label node sample node.kubernetes.io/instance-type=local
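To confirm the label was applied, you can list the nodes with their labels:
microk8s kubectl get nodes --show-labels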
Expose Onepanel using Nginx
Since Onepanel is running inside a VM, we need to expose it so we can access it from our local computers. To do so, we use nginx.
First, install nginx. Then, configure nginx to proxy traffic for your Onepanel FQDN to the cluster, and restart nginx for the changes to take effect. One way to do this is sketched below.
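A minimal sketch, assuming the MetalLB address from params.yaml (192.168.99.0) and the onepanel.test domain; the exact nginx configuration may differ in your setup:
# Install nginx
sudo apt-get update && sudo apt-get install -y nginx
# Create a reverse proxy site for Onepanel (example config)
sudo tee /etc/nginx/sites-available/onepanel > /dev/null <<'EOF'
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
server {
    listen 80;
    server_name app.onepanel.test *.onepanel.test;
    location / {
        # Forward traffic to the MetalLB address used in params.yaml
        proxy_pass http://192.168.99.0;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
EOF
# Enable the site and restart nginx
sudo ln -sf /etc/nginx/sites-available/onepanel /etc/nginx/sites-enabled/onepanel
sudo nginx -t && sudo systemctl restart nginx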
Configure Local DNS
On the client machine, we need to point DNS so your browser can find Onepanel using the FQDN you configured.
Below we edit the hosts file, but you can use dnsmasq for a more robust setup.
Get the IP address of your VM. For VMs in the cloud, this is given to you.
In Multipass, you can see it with multipass list
For this example, we will assume the IP is: 15.92.2.237
Then, edit your hosts file using a text editor. The location depends on your operating system.
- Linux
- Mac
- Windows
/etc/hosts
and add to the top
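Using the example IP and FQDN from this guide, the entry would look like:
15.92.2.237 app.onepanel.test
If you access other hostnames under onepanel.test (for example, workspace subdomains), each of those may need its own entry as well, which is why dnsmasq can be the more robust option.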
Use Onepanel
In your VM, get your authentication login with:
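Assuming the standard opctl CLI and the kubeconfig generated earlier, this is typically:
KUBECONFIG=./kubeconfig opctl auth token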
Open up app.onepanel.test in your browser, paste in the credentials, and you are good to go!
GPU Setup
note
For instances running with GPUs we recommend having a disk size of at least 100GB.
All further instructions are run on your remote computer/VM unless otherwise indicated.
Verify you have a CUDA-capable GPU.
From the command line, enter
lspci | grep -i nvidia
sample output:
0001:00:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)
Verify the system has the correct kernel headers and development packages installed
The CUDA Driver requires that the kernel headers and development packages for the running version of the kernel be installed at the time of the driver installation, as well as whenever the driver is rebuilt.
The version of the kernel your system is running can be found by running the following command:
uname -r
The kernel headers and development packages for the currently running kernel can be installed with:
sudo apt-get install linux-headers-$(uname -r)
Install NVIDIA CUDA
You can download it at https://developer.nvidia.com/cuda-downloads
Or run the commands below
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda-repo-ubuntu2004-11-3-local_11.3.1-465.19.01-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-3-local_11.3.1-465.19.01-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu2004-11-3-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
You'll need to restart your machine after installation.
Verify installation with:
nvidia-smi
sample output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01    Driver Version: 465.19.01    CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA Tesla K80     On  | 00000001:00:00.0 Off |                    0 |
| N/A   41C    P8    25W / 149W |      0MiB / 11441MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Enable NVIDIA GPU support for MicroK8s.
microk8s enable gpu
In your params.yaml, make sure to update the nodePool: section. It should look something like this:
nodePool:
  # Cloud providers will automatically set label key as "node.kubernetes.io/instance-type" on all nodes
  # For Kubernetes 1.16.x, use "beta.kubernetes.io/instance-type"
  label: node.kubernetes.io/instance-type
  # These are the machine type options that will be available in Onepanel
  # `name` can be any user friendly name
  # `value` should be the instance type in your cloud provider
  # `resources.limits` should only be set if the node pool has GPUs
  # The first option will be used as default.
  options:
    - name: 'Local GPU'
      value: gpu
      resources:
        limits:
          nvidia.com/gpu: 1
provider: microk8s
Then, overwrite the label for your GPU nodes:
microk8s kubectl label node sample node.kubernetes.io/instance-type=gpu --overwrite
Make sure to apply changes to use GPU nodes in your deployment:
microk8s config > kubeconfig
KUBECONFIG=./kubeconfig opctl apply
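As an optional check that the GPU is advertised to Kubernetes, you can inspect the node's resources (using the sample node name from above):
# The node should report nvidia.com/gpu under Capacity and Allocatable
microk8s kubectl describe node sample | grep -i 'nvidia.com/gpu'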