Nano-vLLM is a stripped-down, no-fluff engine designed purely for blazing-fast offline inference with large language models. It’s lightweight (just ~1,200 lines of code) but packs a serious punch — featuring smart optimizations like prefix caching, tensor parallelism, CUDA graphs, and more.
Whether you’re testing models locally or building a custom inference stack, Nano-vLLM gives you raw speed, full transparency, and zero dependency bloat. It mirrors the vLLM API for easy migration, while staying small enough to dive into and hack on.
If you’re running models like Qwen3-0.6B on your own GPU or a cloud VM — this is your toolkit.
GPU Configuration Table (For Smooth Experience)
| GPU Model | vCPUs | RAM (GB) | VRAM (GB) | Precision | Recommended Use Case |
|---|---|---|---|---|---|
| RTX A6000 | 48 | 45 | 48 | FP16 / BF16 | Full-speed inference, no quantization needed |
| A100 40GB | 96 | 90+ | 40 | FP16 / BF16 | Multi-instance inference, high throughput |
| T4 | 16 | 16 | 16 | INT4 / Quantized | Use quantized models only, basic inference |
Recommended: 1× RTX A6000 or higher for smooth performance with Qwen3-0.6B and above.
Prerequisites Before You Run Nano-vLLM
Before jumping into Nano-vLLM, make sure your environment is ready:
System Setup
- Python 3.10 or 3.11
- Conda
- A Linux-based system (Ubuntu 22.04 preferred)
- NVIDIA GPU with at least 16 GB VRAM
- CUDA 12.0+ with proper driver and toolkit
- Git, wget, and huggingface-cli installed
Resources
Link: https://github.com/GeeeekExplorer/nano-vllm
Step-by-Step Process to Install Nano-vLLM Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button in the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1× RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
In our previous blogs, we used pre-built images from the Templates tab when creating a Virtual Machine. However, for running Nano-vLLM, we need a more customized environment with full CUDA development capabilities. That’s why, in this case, we switched to the Custom Image tab and selected a specific Docker image that meets all runtime and compatibility requirements.
We chose the following image:
nvidia/cuda:12.1.1-devel-ubuntu22.04
This image is essential because it includes:
- Full CUDA toolkit (including nvcc)
- Proper support for building and running GPU-based applications like Nano-vLLM
- Compatibility with CUDA 12.1.1 required by certain model operations
Launch Mode
We selected:
Interactive shell server
This gives us SSH access and full control over terminal operations — perfect for installing dependencies, running benchmarks, and launching Python-based apps like Nano-vLLM.
Docker Repository Authentication
We left all fields empty here.
Since the Docker image is publicly available on Docker Hub, no login credentials are required.
Identification
nvidia/cuda:12.1.1-devel-ubuntu22.04
CUDA and cuDNN images from gitlab.com/nvidia/cuda. Devel version contains full cuda toolkit with nvcc.
This setup ensures that the Nano-vLLM engine runs in a GPU-enabled environment with proper CUDA access and high compute performance — making it ideal for both inference and benchmarking.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
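To also confirm that the CUDA toolkit from the devel image is available (the nvcc compiler mentioned earlier), you can optionally run the command below; on this image it should report CUDA 12.1:
nvcc --version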
Step 8: Install Miniconda & Packages
After completing the steps above, install Miniconda.
Miniconda is a free minimal installer for conda. It allows the management and installation of Python packages.
Anaconda has over 1,500 pre-installed packages, making it a comprehensive solution for data science projects. On the other hand, Miniconda allows you to install only the packages you need, reducing unnecessary clutter in your environment.
We highly recommend installing Python using Miniconda. Miniconda comes with Python and a small number of essential packages. Additional packages can be installed using the package management systems Mamba or Conda.
For Linux/macOS:
Download the Miniconda installer script:
sudo apt update && sudo apt install wget -y
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
For Windows:
- Download the Windows Miniconda installer from the official website.
- Run the installer and follow the installation prompts.
Run the installer script:
bash Miniconda3-latest-Linux-x86_64.sh
After Installing Miniconda, you will see the following message:
Thank you for installing Miniconda3! This confirms that Miniconda has been installed in your home directory.
Check the screenshot below for proof:
Step 9: Activate Conda and Create an Environment
After the installation process, activate Conda using the following command:
conda init
source ~/.bashrc
Create a Conda Environment using the following command:
conda create -n nano python=3.11 -y
- conda create: the command to create a new environment.
- -n nano: the -n flag specifies the name of the environment you want to create. Here the environment is named nano, but you can name it anything you like.
- python=3.11: specifies the version of Python to install in the new environment, in this case Python 3.11.
- -y: automatically answers “yes” to all prompts during the creation process, so the environment is created without asking for further confirmation.
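After creating the environment, activate it so that the packages installed in the next steps go into it (the environment name nano comes from the command above):
conda activate nano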
Step 10: Install Dependencies
Run the following command to install the dependencies:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install git+https://github.com/huggingface/transformers
pip install git+https://github.com/huggingface/accelerate
pip install huggingface_hub
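As an optional sanity check (not part of the original steps), confirm that PyTorch was installed with CUDA support before proceeding:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
If this prints True, the GPU is visible to PyTorch.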
Step 11: Install Nano-vLLM
Run the following command to install Nano-vLLM:
pip install git+https://github.com/GeeeekExplorer/nano-vllm.git
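Optionally, verify that the package imports cleanly; this assumes the nanovllm module name used later in this tutorial:
python -c "from nanovllm import LLM, SamplingParams; print('nanovllm imported successfully')"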
Step 12: Login Using Your Hugging Face API Token
Use the huggingface_hub CLI to log in directly in the terminal.
Run the following command to log in with huggingface-cli:
huggingface-cli login
Then, paste your token and press the Enter key. Note that the token will not be visible as you type or paste it, so make sure to press Enter after entering it.
After entering the token, you will see the following output:
Login Successful.
The current active token is (your_token_name).
Check the screenshot below for reference.
Step 13: Clone the Repository
Run the following command to clone the nano-vllm repository:
git clone https://github.com/GeeeekExplorer/nano-vllm.git
cd nano-vllm
Step 14: Download the Model
Run the following command to download the model:
huggingface-cli download --resume-download Qwen/Qwen3-0.6B --local-dir checkpoints --local-dir-use-symlinks False
Nano-vLLM doesn’t automatically download models from Hugging Face when given a model name; it expects a local model directory. So, to use a model like Qwen3-0.6B, we need to manually download its full weights and configuration files from Hugging Face into a local folder.
This command ensures that:
- You have all model files locally.
- Nano-vLLM can load the model without internet access.
- You avoid any runtime errors related to missing files.
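To confirm the download completed, you can list the checkpoints folder; the exact files depend on the model, but you should see a config.json, tokenizer files, and safetensors weights:
ls checkpoints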
Step 15: Connect to your GPU VM using Remote SSH
- Open VS Code on your Mac.
- Press Cmd + Shift + P, then choose Remote-SSH: Connect to Host.
- Select your configured host.
- Once connected, you’ll see SSH: 38.29.145.28 (your VM IP) in the bottom-left status bar (like in the image).
Step 16: Open the Project Folder on the VM, Create a File, and Paste the Code
- Click on “Open Folder”.
- Choose the directory where your script is located: /root/nano-vllm
- VS Code will reload the window inside the remote environment.
- In the /root/nano-vllm folder, right-click → New File.
- Name it: app.py
Paste This Full Code into app.py:
from nanovllm import LLM, SamplingParams

llm = LLM("/YOUR/MODEL/PATH", enforce_eager=True, tensor_parallel_size=1)  # e.g. /root/nano-vllm/checkpoints from Step 14
sampling_params = SamplingParams(temperature=0.6, max_tokens=256)
prompts = ["Hello, Nano-vLLM."]
outputs = llm.generate(prompts, sampling_params)
print(outputs[0]["text"])
Step 17: Run the File
python3 app.py
See example.py for usage. The API mirrors vLLM’s interface, with minor differences in the LLM.generate method.
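As a rough illustration of that difference, here is a minimal sketch; the Nano-vLLM side follows this tutorial’s app.py, while the vLLM lines in the comments are an assumption about vLLM’s usual output objects:
outputs = llm.generate(["Hello, Nano-vLLM."], sampling_params)
# Nano-vLLM returns plain dicts, so the generated text is accessed directly:
print(outputs[0]["text"])
# vLLM's generate() typically returns RequestOutput objects instead, e.g.:
#   print(outputs[0].outputs[0].text)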
See bench.py for the benchmark.
Test Configuration:
- Hardware: RTX 4070 Laptop (8GB)
- Model: Qwen3-0.6B
- Total Requests: 256 sequences
- Input Length: Randomly sampled between 100–1024 tokens
- Output Length: Randomly sampled between 100–1024 tokens
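To reproduce a similar benchmark on your own VM (assuming you are inside the cloned nano-vllm directory from Step 13), run the included script:
python3 bench.py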
Step-by-Step Process to Run Nano-vLLM on Gradio
Step 1: Install Gradio
Run the following command to install Gradio:
pip install gradio
Step 2: Create Gradio File
Create a nano_ui.py file with this content:
import gradio as gr
from nanovllm import LLM, SamplingParams

# Load model from checkpoints
model_path = "/root/nano-vllm/checkpoints"
llm = LLM(model_path, enforce_eager=True, max_model_len=4096)
sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

def generate_response(prompt):
    outputs = llm.generate([prompt], sampling_params)
    return outputs[0]["text"]

# Create the Gradio UI
gr.Interface(
    fn=generate_response,
    inputs=gr.Textbox(lines=2, placeholder="Ask something..."),
    outputs="text",
    title="Nano-vLLM Chat",
    description="Minimal UI for Nano-vLLM running on your GPU",
).launch(server_name="0.0.0.0", server_port=7860)
Step 3: Run the UI
Execute the following command to run the UI:
python3 nano_ui.py
You’ll see:
Running on local URL: http://0.0.0.0:7860
Step 4: Run SSH Port Forwarding Command to access the Gradio Web App
Run the following command to access the Gradio web app (or any other port from your VM) on your local machine:
ssh -N -L 7860:localhost:7860 -p 40758 <YOUR VM IP>
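Here, -L 7860:localhost:7860 forwards your local port 7860 to port 7860 on the VM, and -N keeps the tunnel open without running a remote command. If your VM uses the root user (common on cloud images; adjust the username and SSH port to match your own deployment), the full command typically looks like this:
ssh -N -L 7860:localhost:7860 -p 40758 root@<YOUR_VM_IP>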
Step 5: Access the Gradio Web App
Access the Gradio Web App at:
http://localhost:7860
Conclusion
Nano-vLLM proves that you don’t need a heavyweight engine to run high-performance inference. With just ~1,200 lines of code, it offers GPU-level speed, minimal setup, and full transparency — making it the perfect tool for developers who value both control and performance. Whether you’re experimenting with Qwen3-0.6B or integrating into a larger inference stack, Nano-vLLM gives you everything you need to go fast and stay flexible — all without the bloat. Powered by a NodeShift GPU VM, you’re up and running in minutes with offline capabilities and a clean Gradio UI.