[Nano-vLLM Source Code Analysis (1)] Environment Setup and Overall Flow Overview

Environment Setup

  • The overall environment is quite clean; following the README should get you set up quickly.

  • In my case, I downloaded the source code and ran it inside the container nvcr.io/nvidia/pytorch:25.04-py3 with the source folder mounted. Inside the container you additionally need pip3 install transformers xxhash, and then the basic Python environment is ready.
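
  • For reference, a docker command along these lines should work (the mount path and working directory are my assumptions; adjust them to your own layout):

docker run --gpus all -it --rm \
    -v "$PWD":/workspace/nano-vllm \
    -w /workspace/nano-vllm \
    nvcr.io/nvidia/pytorch:25.04-py3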

  • You also need to download a Qwen3 model, since Nano-vLLM is currently adapted specifically for it. The README uses the older huggingface-cli command for downloading; with the latest version of the tool, the following command downloads the model directly. Note that the download location here is ./huggingface/Qwen3-0.6B.

hf download Qwen/Qwen3-0.6B --force-download --local-dir ./huggingface/Qwen3-0.6B
  • Then change the model path in example.py to ./huggingface/Qwen3-0.6B and run python3 example.py. The output looks like this:
root@830fe60dca79:/workspace/nano-vllm# python3 example.py 
`torch_dtype` is deprecated! Use `dtype` instead!
['<|im_start|>user\nintroduce yourself<|im_end|>\n<|im_start|>assistant\n', '<|im_start|>user\nlist all prime numbers within 100<|im_end|>\n<|im_start|>assistant\n']
Generating: 100%|█████████████████████████| 2/2 [00:12<00:00, 6.24s/it, Prefill=24tok/s, Decode=30tok/s]


Prompt: '<|im_start|>user\nintroduce yourself<|im_end|>\n<|im_start|>assistant\n'
Completion: "<think>\nOkay, the user asked me to introduce myself. I need to make sure I respond in a friendly and helpful manner. Let me start by acknowledging their question. I should mention that I'm an AI assistant designed to help with various tasks. It's important to add a bit about my capabilities, like answering questions, providing information, etc. I should also offer assistance in a conversational tone. Let me check if I'm using the right structure and if there's anything else they might need. Alright, I think that covers it.\n</think>\n\nHello! I'm an AI assistant designed to help you with a wide range of questions and tasks. I can answer questions, provide information, offer recommendations, and even assist with writing or other creative tasks. How can I help you today?<|im_end|>"


Prompt: '<|im_start|>user\nlist all prime numbers within 100<|im_end|>\n<|im_start|>assistant\n'
Completion: "<think>\nOkay, so I need to list all the prime numbers between 100. Let me think. First, I remember that a prime number is a number greater than 1 that has no divisors other than 1 and itself. So, starting from 100, I need to check each number to see if it's prime. \n\nLet me start by recalling some prime numbers. The primes less than 100 are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, and 83. Wait, but the question is about numbers between 100. So, starting from 100 upwards. \n\nLet me check each number. Starting with 100. Is 100 a prime? Well, 100 is even, so it's divisible by 2. So 100 is not prime. Next is 1"
  • Note that the run uses GPU 0 and therefore allocates a large amount of GPU 0's memory; the main consumers are the model weights and the KV cache.
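
  • As a rough illustration (not code from the repo), the per-token KV cache footprint can be estimated from the model configuration; the values below are placeholders that you would read from the model's config.json:

# Illustrative KV cache size estimate; num_layers / num_kv_heads / head_dim
# are placeholders to be taken from the model's config.json.
num_layers = 28
num_kv_heads = 8
head_dim = 128
dtype_size = 2  # bytes per element for fp16 / bf16

bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * dtype_size  # K and V
print(f"KV cache per token: {bytes_per_token / 1024:.1f} KiB")
print(f"KV cache for 4096 tokens: {bytes_per_token * 4096 / 1024**2:.1f} MiB")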

Flow Overview

Project Files

First, here is an overview of the project's file structure, with brief annotations added:

.
├── assets                       # Project display assets (e.g. the logo), not part of the core logic
│   └── logo.png
├── bench.py                     # Simple inference benchmark script (throughput / latency)
├── example.py                   # Minimal runnable example showing basic Nano-vLLM usage
├── huggingface                  # Local HuggingFace model directory
│   └── Qwen3-0.6B               # Qwen3-0.6B model weights and tokenizer files
│       ├── config.json          # Model architecture and hyperparameter configuration
│       ├── generation_config.json  # Default generation parameters
│       ├── LICENSE
│       ├── merges.txt           # BPE merge rules
│       ├── model.safetensors    # Model weight file
│       ├── README.md
│       ├── tokenizer_config.json  # Tokenizer configuration
│       ├── tokenizer.json       # Tokenizer definition
│       └── vocab.json           # Vocabulary
├── LICENSE
├── nanovllm                     # Nano-vLLM core code
│   ├── config.py                # Global configuration (device, dtype, model path, etc.)
│   ├── engine                   # Core inference engine (vLLM style)
│   │   ├── block_manager.py     # KV cache block management and allocation logic
│   │   ├── llm_engine.py        # Main inference loop, driving scheduling and model execution
│   │   ├── model_runner.py      # Wraps the execution logic of the model forward pass
│   │   ├── scheduler.py         # Sequence-level scheduler deciding which requests run each step
│   │   └── sequence.py          # State representation and management of generated sequences
│   ├── __init__.py
│   ├── layers                   # Basic operators needed for Transformer inference
│   │   ├── activation.py        # Activation functions
│   │   ├── attention.py         # Self-attention computation and KV cache access
│   │   ├── embed_head.py        # Embedding and LM head
│   │   ├── layernorm.py         # LayerNorm implementation
│   │   ├── linear.py            # Linear layer implementations
│   │   ├── rotary_embedding.py  # RoPE positional encoding
│   │   └── sampler.py           # Samples the next token from the logits
│   ├── llm.py                   # Public-facing LLM interface (inherits from LLMEngine; currently just pass)
│   ├── models                   # Model architecture definitions
│   │   └── qwen3.py             # Minimal implementation of the Qwen3 model
│   ├── sampling_params.py       # Sampling parameter definitions for text generation
│   └── utils                    # Utility modules
│       ├── context.py           # Inference context and auxiliary state management
│       └── loader.py            # HuggingFace weight loading and mapping
├── pyproject.toml               # Python project and dependency configuration
└── README.md                    # Overall project description

Overall Flow

The code of example.py is shown below:

import os
from nanovllm import LLM, SamplingParams
from transformers import AutoTokenizer

def main():
    path = os.path.expanduser("./huggingface/Qwen3-0.6B/")
    tokenizer = AutoTokenizer.from_pretrained(path)
    llm = LLM(path, enforce_eager=True, tensor_parallel_size=1)

    sampling_params = SamplingParams(temperature=0.6, max_tokens=256)
    prompts = [
        "introduce yourself",
        "list all prime numbers within 100",
    ]
    prompts = [
        tokenizer.apply_chat_template(
            [{"role": "user", "content": prompt}],
            tokenize=False,
            add_generation_prompt=True,
        )
        for prompt in prompts
    ]
    outputs = llm.generate(prompts, sampling_params)

    for prompt, output in zip(prompts, outputs):
        print("\n")
        print(f"Prompt: {prompt!r}")
        print(f"Completion: {output['text']!r}")

if __name__ == "__main__":
    main()

Now let's follow the execution of example.py to walk through the overall flow:

  1. Initialize the tokenizer and the LLM model from the contents of the Qwen3-0.6B folder

    1. LLM is a thin wrapper around LLMEngine, mainly to align with vLLM's interface. Its code is shown below:
    from nanovllm.engine.llm_engine import LLMEngine

    class LLM(LLMEngine):
        pass

    • The initialization code of LLMEngine is shown below. It reads the configuration, and it supports tensor parallelism: it spawns tensor_parallel_size - 1 worker processes (ranks 1 to TP-1), each running its own ModelRunner, while rank 0's ModelRunner runs in the main process. It also sets up the tokenizer and the scheduler.
    class LLMEngine:

        def __init__(self, model, **kwargs):
            config_fields = {field.name for field in fields(Config)}
            config_kwargs = {k: v for k, v in kwargs.items() if k in config_fields}
            config = Config(model, **config_kwargs)
            self.ps = []
            self.events = []
            ctx = mp.get_context("spawn")
            for i in range(1, config.tensor_parallel_size):
                event = ctx.Event()
                process = ctx.Process(target=ModelRunner, args=(config, i, event))
                process.start()
                self.ps.append(process)
                self.events.append(event)
            self.model_runner = ModelRunner(config, 0, self.events)
            self.tokenizer = AutoTokenizer.from_pretrained(config.model, use_fast=True)
            config.eos = self.tokenizer.eos_token_id
            self.scheduler = Scheduler(config)
            atexit.register(self.exit)
  2. Configure sampling_params

    1. Its code is shown below:
    @dataclass
    class SamplingParams:
        temperature: float = 1.0
        max_tokens: int = 64
        ignore_eos: bool = False

        def __post_init__(self):
            assert self.temperature > 1e-10, "greedy sampling is not permitted"

    • It mainly supports configuring:

      1. temperature: $$p_i = \frac{\exp(z_i / T)}{\sum_j \exp(z_j / T)}$$, where T controls the scaling: T below 1.0 makes the distribution sharper, and T above 1.0 makes it flatter. The assert self.temperature > 1e-10 rules out fully deterministic (greedy) output (see the small sketch after this list)

      2. max_tokens: the maximum number of tokens to generate

      3. ignore_eos: if true, generation continues even after an EOS token is produced, until max_tokens is reached
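
    • As a small illustration of the temperature formula above (my own sketch, not code from the repo):

    import numpy as np

    def temperature_softmax(logits, T):
        # p_i = exp(z_i / T) / sum_j exp(z_j / T)
        z = np.asarray(logits, dtype=np.float64) / T
        z -= z.max()  # subtract the max for numerical stability
        p = np.exp(z)
        return p / p.sum()

    logits = [2.0, 1.0, 0.1]
    print(temperature_softmax(logits, T=0.5))  # sharper: mass concentrates on the top logit
    print(temperature_softmax(logits, T=1.5))  # flatter: closer to uniform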

  3. Convert the prompts with the tokenizer's chat template

    1. After applying the chat template, the resulting prompts are:
    ['<|im_start|>user\nintroduce yourself<|im_end|>\n<|im_start|>assistant\n', '<|im_start|>user\nlist all prime numbers within 100<|im_end|>\n<|im_start|>assistant\n']
  4. Call llm.generate(prompts, sampling_params) to obtain the generated results

    1. The relevant code of generate is shown below:
    def generate(
        self,
        prompts: list[str] | list[list[int]],
        sampling_params: SamplingParams | list[SamplingParams],
        use_tqdm: bool = True,
    ) -> list[str]:
        if use_tqdm:
            pbar = tqdm(total=len(prompts), desc="Generating", dynamic_ncols=True)
        if not isinstance(sampling_params, list):
            sampling_params = [sampling_params] * len(prompts)
        for prompt, sp in zip(prompts, sampling_params):
            self.add_request(prompt, sp)
        outputs = {}
        prefill_throughput = decode_throughput = 0.
        while not self.is_finished():
            t = perf_counter()
            output, num_tokens = self.step()
            if use_tqdm:
                if num_tokens > 0:
                    prefill_throughput = num_tokens / (perf_counter() - t)
                else:
                    decode_throughput = -num_tokens / (perf_counter() - t)
                pbar.set_postfix({
                    "Prefill": f"{int(prefill_throughput)}tok/s",
                    "Decode": f"{int(decode_throughput)}tok/s",
                })
            for seq_id, token_ids in output:
                outputs[seq_id] = token_ids
                if use_tqdm:
                    pbar.update(1)
        outputs = [outputs[seq_id] for seq_id in sorted(outputs.keys())]
        outputs = [{"text": self.tokenizer.decode(token_ids), "token_ids": token_ids} for token_ids in outputs]
        if use_tqdm:
            pbar.close()
        return outputs

    • First, each prompt is combined with its sampling_param into a Sequence and added to the scheduler's waiting queue

      1. The code of add_request is shown below
      def add_request(self, prompt: str | list[int], sampling_params: SamplingParams):
          if isinstance(prompt, str):
              prompt = self.tokenizer.encode(prompt)
          seq = Sequence(prompt, sampling_params)
          self.scheduler.add(seq)
      • It mainly converts the prompt into a list of token ids; for example, '<|im_start|>user\nintroduce yourself<|im_end|>\n<|im_start|>assistant\n' is converted into [151644, 872, 198, 396, 47845, 6133, 151645, 198, 151644, 77091, 198]

      • Then scheduler.add, shown below, simply appends the sequence to the waiting queue

      def add(self, seq: Sequence):
          self.waiting.append(seq)
    • Then it enters the while loop, whose exit condition is self.is_finished()

      1. The is_finished() of LLMEngine is shown below:
      def is_finished(self):
          return self.scheduler.is_finished()
      • The Scheduler's is_finished is shown below; the criterion for being finished is simply that neither queue still holds any request
      def is_finished(self):
          return not self.waiting and not self.running
    • Inside the while loop, the main work is to keep calling self.step(), update the progress bar, compute throughput from the elapsed time measured with perf_counter() and the number of generated tokens num_tokens, and store the results in outputs

      1. The code of self.step() is shown below
      def step(self):
          seqs, is_prefill = self.scheduler.schedule()
          token_ids = self.model_runner.call("run", seqs, is_prefill)
          self.scheduler.postprocess(seqs, token_ids)
          outputs = [(seq.seq_id, seq.completion_token_ids) for seq in seqs if seq.is_finished]
          num_tokens = sum(len(seq) for seq in seqs) if is_prefill else -len(seqs)
          return outputs, num_tokens
      • It first obtains the seqs to run from the scheduler

        1. The code of scheduler.schedule() is shown below
        def schedule(self) -> tuple[list[Sequence], bool]:
            # prefill
            scheduled_seqs = []
            num_seqs = 0
            num_batched_tokens = 0
            while self.waiting and num_seqs < self.max_num_seqs:
                seq = self.waiting[0]
                if num_batched_tokens + len(seq) > self.max_num_batched_tokens or not self.block_manager.can_allocate(seq):
                    break
                num_seqs += 1
                self.block_manager.allocate(seq)
                num_batched_tokens += len(seq) - seq.num_cached_tokens
                seq.status = SequenceStatus.RUNNING
                self.waiting.popleft()
                self.running.append(seq)
                scheduled_seqs.append(seq)
            if scheduled_seqs:
                return scheduled_seqs, True

            # decode
            while self.running and num_seqs < self.max_num_seqs:
                seq = self.running.popleft()
                while not self.block_manager.can_append(seq):
                    if self.running:
                        self.preempt(self.running.pop())
                    else:
                        self.preempt(seq)
                        break
                else:
                    num_seqs += 1
                    self.block_manager.may_append(seq)
                    scheduled_seqs.append(seq)
            assert scheduled_seqs
            self.running.extendleft(reversed(scheduled_seqs))
            return scheduled_seqs, False

        • The processing is split into two phases. First, prefill is handled for multiple requests:

          1. To enter the prefill while loop, the waiting queue must contain requests and the number of seqs collected so far must be below the max_num_seqs limit. In addition, the new seq must not push num_batched_tokens beyond self.max_num_batched_tokens, and the block_manager must still have enough memory to allocate for it

          2. The main flow inside the loop is: the block_manager allocates cache for the new seq, the seq's status is set to RUNNING, num_batched_tokens is increased by len(seq) - seq.num_cached_tokens, and the seq is popped from the left of the waiting queue, appended to the running queue, and appended to the scheduled_seqs result

          3. If scheduled_seqs did collect any seqs, they are returned immediately together with True, indicating that this scheduling round handled only prefill

        • If there are no prefill requests left, decode is handled instead

          1. To enter the decode while loop, the running queue must contain requests and the number of seqs collected so far must be below the max_num_seqs limit

          2. Inside the loop, a seq is popped from the left of the running queue, and the block_manager is asked whether there is still enough memory for it. If not, other seqs have to be preempted, starting with the most recently added seq in the running queue; once everything else has been preempted, the seq itself is preempted, and in that case the inner while loop exits directly because there really is no memory left. If there is enough memory, or preemption freed enough, block_manager.may_append(seq) reserves the memory and the seq is appended to the scheduled_seqs result queue (a sketch of what preempt likely does follows this list)

          3. Finally the result queue scheduled_seqs is returned together with False, indicating a decode step
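
        • The preempt helper is not listed in this post; based on how it is used above, a minimal sketch of its likely behavior (my reconstruction, see scheduler.py for the real code) would be:

        def preempt(self, seq: Sequence):
            # Likely behavior: give up the KV cache blocks and put the sequence back
            # at the head of the waiting queue so it is re-prefilled once memory frees up.
            seq.status = SequenceStatus.WAITING
            self.block_manager.deallocate(seq)
            self.waiting.appendleft(seq)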

      • Then model_runner is called to run model inference

        1. This calls the run function of model_runner, shown below
        def run(self, seqs: list[Sequence], is_prefill: bool) -> list[int]:
            input_ids, positions = self.prepare_prefill(seqs) if is_prefill else self.prepare_decode(seqs)
            temperatures = self.prepare_sample(seqs) if self.rank == 0 else None
            logits = self.run_model(input_ids, positions, is_prefill)
            token_ids = self.sampler(logits, temperatures).tolist() if self.rank == 0 else None
            reset_context()
            return token_ids
        • Its main flow is to do the KV cache preparation work, which differs depending on whether this is a prefill step

        • Then it runs the model to obtain the logits

        • Finally it samples token_ids from the logits and returns them (a small sampling sketch follows below)
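
        • The Sampler itself is not covered here; as a rough sketch (my own illustration, not the repo's layers/sampler.py), temperature sampling from a batch of logits could look like this:

        import torch

        def sample(logits: torch.Tensor, temperatures: torch.Tensor) -> torch.Tensor:
            # logits: [batch, vocab_size], temperatures: [batch]
            probs = torch.softmax(logits / temperatures.unsqueeze(1), dim=-1)
            return torch.multinomial(probs, num_samples=1).squeeze(-1)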

      • Then some scheduler postprocessing is performed

        1. The relevant code of postprocess is shown below:
        def postprocess(self, seqs: list[Sequence], token_ids: list[int]) -> list[bool]:
            for seq, token_id in zip(seqs, token_ids):
                seq.append_token(token_id)
                if (not seq.ignore_eos and token_id == self.eos) or seq.num_completion_tokens == seq.max_tokens:
                    seq.status = SequenceStatus.FINISHED
                    self.block_manager.deallocate(seq)
                    self.running.remove(seq)
        • It iterates over token_ids, appending each sampled token_id to its seq. If eos is not ignored and the current token is eos, or the maximum length has been reached, the seq's status is set to FINISHED, the block_manager releases the memory held by that seq, and the seq is removed from the running queue
      • Any seq that has finished inference (FINISHED status) is recorded into outputs. step also computes how many tokens were processed: num_tokens is the total number of tokens in the batch for a prefill step and minus the number of seqs for a decode step, which is how generate tells the two apart when computing throughput. Both are then returned

    • Finally, outputs is decoded into text with tokenizer.decode, and both the token_ids and the text are returned

  5. Print the generated results

