YOLOX Object Detection Environment Deployment Tutorial for Windows

1. Clone the YOLOX source code

git clone https://github.com/Megvii-BaseDetection/YOLOX

If GitHub is slow, you can clone from a Gitee mirror instead:

git clone https://gitee.com/monkeycc/YOLOX.git

2. Install the dependencies in requirements.txt

Comment out the torch>=1.7 line in requirements.txt with a # (PyTorch will be installed manually below).

Then install the remaining dependencies:

pip install -U pip && pip3 install -r requirements.txt

Choose the CUDA and cuDNN versions that match your GPU model and Python version,

then manually install the GPU (or CPU) build of PyTorch from a whl file.
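
For example, for a CUDA 11.3 build of PyTorch 1.10 (a hypothetical combination; pick the versions matching your driver and Python with the selector on pytorch.org):

pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

Afterwards you can verify that the GPU build is active:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"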

3. Install pycocotools (the COCO dataset evaluation library)

pip install cython
pip install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

Alternatively, mirror the cocoapi repository to Gitee and run the same command against the mirror:

pip install 'git+https://gitee.com/lishan666/cocoapi.git#subdirectory=PythonAPI'

4. Install Visual Studio 2019

Selecting the C++ desktop development workload is sufficient.

5. Enter the YOLOX directory and install the package

cd YOLOX
python setup.py develop

6. Install apex (NVIDIA's mixed-precision training library)

git clone https://github.com/NVIDIA/apex
cd apex
python setup.py install
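
If the build succeeds, a quick import check (a minimal sanity test; apex.amp is the classic mixed-precision API) should run without errors:

python -c "from apex import amp; print('apex OK')"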

7. Download the YOLOX pretrained weights

Create a weights folder under YOLOX and put the downloaded weight files in YOLOX/weights.

(1) yolox_nano download link:

https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_nano.pth

(2) yolox_tiny download link:

https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_tiny.pth

(3) yolox_s download link:

https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.pth

(4) yolox_m download link:

https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_m.pth

(5) yolox_l download link:

https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l.pth

(6) yolox_x download link:

https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_x.pth

(7) yolox_darknet53 download link:

https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_darknet.pth
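
For example, to fetch the nano weights from the command line (assuming curl is available; you can equally download them in a browser):

cd YOLOX
mkdir weights
curl -L -o weights/yolox_nano.pth https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_nano.pth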

8. Run the demo

Test the stock YOLOX weights on the official COCO classes:
Open yolox/exp/yolox_base.py and make sure self.num_classes = 80 in the __init__ function.
Open yolox/data/datasets/__init__.py and make sure the line from .coco_classes import COCO_CLASSES is present.

Taking the YOLOX-Nano model as the example, the test results are written to the YOLOX_outputs/yolox_nano/vis_res folder.


Test on an image:

python tools/demo.py image -n yolox-nano -c weights/yolox_nano.pth --path assets/dog.jpg --conf 0.3 --nms 0.65 --tsize 640 --save_result --device gpu

Test on a video (provide your own video file):

python tools/demo.py video -n yolox-nano -c weights/yolox_nano.pth --path assets/intersection.mp4 --conf 0.3 --nms 0.65 --tsize 640 --save_result --device gpu

[Note] If a No module named 'yolox' error occurs, add the following lines before from yolox.data.data_augment import ValTransform in tools/demo.py:

import sys
sys.path.append(r'xxx/YOLOX')

where r'xxx/YOLOX' is the absolute path to your own YOLOX project.

9. Train YOLOX


    1. Prepare a VOC-format dataset and place it under the datasets folder


        The VOC dataset directory layout is as follows:
            -VOCdevkit
            |   -VOC2007
            |   |   -Annotations
            |   |   |   -xxx.xml
            |   |   |   -......
            |   |   -ImageSets
            |   |   |   -Main
            |   |   -JPEGImages
            |   |   |   -xxx.jpg
            |   |   |   -......

If the dataset has no XML files yet, use the labelImg annotation tool

to generate the xxx.xml annotation files under datasets/VOCdevkit/VOC2007/Annotations.

Run datasets/voc_annotation.py to split the dataset by the given ratios.

It generates trainval.txt, train.txt, val.txt, and test.txt under datasets/VOCdevkit/VOC2007/ImageSets/Main, as follows:

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import os
import random

VOCdevkit_path = 'VOCdevkit/VOC2007/'

train_percent = 0.6  # training set ratio
test_percent = 0.2   # test set ratio
val_percent = 1 - train_percent - test_percent  # validation set ratio
trainval_percent = train_percent + val_percent  # train+val ratio

xmlfilepath = VOCdevkit_path + 'Annotations'
txtsavepath = VOCdevkit_path + 'ImageSets/Main'
os.makedirs(txtsavepath, exist_ok=True)
# Keep only .xml files so stray files do not break the name slicing below
total_xml = [x for x in os.listdir(xmlfilepath) if x.lower().endswith('.xml')]

num = len(total_xml)
list_num = range(num)
tv = int(num * trainval_percent)  # size of train + val
tr = int(num * train_percent)     # size of train
trainval = random.sample(list_num, tv)
train = random.sample(trainval, tr)

# ------------------------------------------------------------------------#
#   trainval.txt  :  train + validation samples
#   train.txt     :  training samples
#   val.txt       :  validation samples
#   test.txt      :  test samples
# ------------------------------------------------------------------------#

ftrainval = open(VOCdevkit_path + 'ImageSets/Main/trainval.txt', 'w')
ftest = open(VOCdevkit_path + 'ImageSets/Main/test.txt', 'w')
ftrain = open(VOCdevkit_path + 'ImageSets/Main/train.txt', 'w')
fval = open(VOCdevkit_path + 'ImageSets/Main/val.txt', 'w')

for i in list_num:
    name = total_xml[i][:-4]  # strip the .xml extension
    if i in trainval:
        ftrainval.write(name + '\n')
        if i in train:
            print("train: " + name + '.xml')
            ftrain.write(name + '\n')
        else:
            print("val  : " + name + '.xml')
            fval.write(name + '\n')
    else:
        print("test : " + name + '.xml')
        ftest.write(name + '\n')

ftrainval.close()
ftrain.close()
fval.close()
ftest.close()
print("train: %d, val: %d, test: %d" % (tr, tv - tr, num - tv))
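
Note that the script uses the relative path 'VOCdevkit/VOC2007/', so run it from inside the datasets folder; the same applies to voc2yolo.py and get_classes.py below:

cd datasets
python voc_annotation.py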

Run datasets/voc2yolo.py to convert the VOC-format labels to YOLO-format labels.
It generates a labels folder under datasets/VOCdevkit/VOC2007,
and train.txt, val.txt, and test.txt under datasets/VOCdevkit/VOC2007/data, as follows
(note that it reads the classes.txt produced by datasets/get_classes.py in the next step, so run that script first):

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import os
import xml.etree.ElementTree as et
from os import getcwd


# -----------------------------------------------------------------------#
#   Convert VOC-format labels to YOLO-format labels
#   Generates the VOC2007/labels folder
#   Generates train.txt, val.txt, test.txt under VOC2007/data
# -----------------------------------------------------------------------#

def get_classes(path):
    with open(path, encoding='utf-8') as f:
        class_names = f.readlines()
    class_names = [c.strip() for c in class_names]
    return class_names, len(class_names)


def convert(size, box):
    # box is (xmin, xmax, ymin, ymax); returns normalized (cx, cy, w, h)
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    return x * dw, y * dh, w * dw, h * dh


def convert_annotation(img_set, path, img_id):
    in_file = open(path + 'Annotations/%s.xml' % img_id, encoding='utf-8')
    out_file = open(path + 'labels/%s.txt' % img_id, 'w')
    print(img_set, path + 'Annotations/%s.xml' % img_id, path + 'labels/%s.txt' % img_id)
    tree = et.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        # Some annotations lack a <difficult> tag; treat them as not difficult
        difficult = obj.find('difficult').text if obj.find('difficult') is not None else '0'
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
             float(xmlbox.find('ymax').text))
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')
    in_file.close()
    out_file.close()


sets = ['train', 'val', 'test']

VOCdevkit_path = 'VOCdevkit/VOC2007/'
data_path = VOCdevkit_path + "data/"
classes_names = "classes.txt"
os.makedirs(data_path, exist_ok=True)
classes_path = data_path + classes_names

classes, _ = get_classes(classes_path)
print(classes)

wd = getcwd()
print(wd)
for image_set in sets:
    os.makedirs(VOCdevkit_path + 'labels/', exist_ok=True)
    image_ids = open(VOCdevkit_path + 'ImageSets/Main/%s.txt' % image_set).read().strip().split()
    list_file = open(data_path + '%s.txt' % image_set, 'w')
    for image_id in image_ids:
        # Images live under JPEGImages in the VOC layout shown above
        list_file.write(VOCdevkit_path + 'JPEGImages/%s.jpg\n' % image_id)
        convert_annotation(image_set, VOCdevkit_path, image_id)
    list_file.close()
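
As a worked example of the conversion: in a 640x480 image, a VOC box with xmin=100, xmax=300, ymin=50, ymax=250 becomes the normalized YOLO label cx=200/640=0.3125, cy=150/480=0.3125, w=200/640=0.3125, h=200/480≈0.4167.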

Run datasets/get_classes.py to collect the class names in the dataset and count the objects per class.

It generates a classes.txt file under datasets/VOCdevkit/VOC2007/data, as follows:

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import os
from tqdm import tqdm
import xml.etree.ElementTree as et

# -----------------------------------------------------------------------#
#   Collect the class names and the number of objects per class
#   Generates VOC2007/data/classes.txt
# -----------------------------------------------------------------------#

VOCdevkit_path = 'VOCdevkit/VOC2007/'
xmlfilepath = os.path.join(VOCdevkit_path, 'Annotations')
xml_names = os.listdir(xmlfilepath)

data_path = VOCdevkit_path + "data/"
classes_names = "classes.txt"
os.makedirs(data_path, exist_ok=True)
classes_path = data_path + classes_names

classes_name = {}  # class name -> object count
for xml_name in tqdm(xml_names):
    if xml_name.lower().endswith('.xml'):
        xml_path = os.path.join(xmlfilepath, xml_name)
        with open(xml_path, encoding='utf-8') as in_file:
            tree = et.parse(in_file)
        root = tree.getroot()
        for obj in root.iter('object'):
            cls = obj.find('name').text
            if cls not in classes_name:
                classes_name[cls] = 1
            else:
                classes_name[cls] += 1
print(classes_name)

if os.path.exists(classes_path):
    print("[warning] the file already exists: '%s'" % classes_path)
else:
    with open(classes_path, 'w') as f:
        for key in classes_name:
            f.write(key + "\n")
        print("classes txt save path: %s" % classes_path)
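
The resulting classes.txt holds one class name per line; for the single-class vehicle dataset used as the example below, it would contain just:

vehicle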

    2. Modify the class labels

In yolox/data/datasets, copy voc_classes.py and rename the copy to my_voc_classes.py.
Following datasets/VOCdevkit/VOC2007/data/classes.txt,
open yolox/data/datasets/my_voc_classes.py and change the class names to those of your own dataset, e.g.:

VOC_CLASSES = ("vehicle",)

In yolox/data/datasets, copy coco_classes.py and rename the copy to my_coco_classes.py.
Open yolox/data/datasets/my_coco_classes.py and change the class names likewise, e.g.: COCO_CLASSES = ("vehicle",)

Open yolox/data/datasets/__init__.py and replace from .coco_classes import COCO_CLASSES with:
from .my_coco_classes import COCO_CLASSES

Open yolox/data/datasets/voc.py and replace from .voc_classes import VOC_CLASSES with:
from .my_voc_classes import VOC_CLASSES

    3. Customize the YOLOX model

Go to the exps/example/yolox_voc directory.

Copy yolox_voc_s.py and rename the copy to yolox_voc_nano.py.

Open exps/example/yolox_voc/yolox_voc_nano.py
and set self.num_classes in the __init__ function to your own number of classes.

Open yolox/exp/yolox_base.py
and set self.num_classes in the __init__ function to your own number of classes as well.


    4. Modify the network size

Open exps/example/yolox_voc/yolox_voc_nano.py and modify self.depth and self.width, referring to the model files under YOLOX/exps/default; see the sketch below.
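
A minimal sketch of how the edited __init__ of exps/example/yolox_voc/yolox_voc_nano.py could look, assuming a single-class dataset and the depth/width of the stock nano model (values taken from YOLOX/exps/default/yolox_nano.py); the rest of the file keeps the methods copied from yolox_voc_s.py:

import os
from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.num_classes = 1  # one "vehicle" class in this example
        self.depth = 0.33     # depth multiplier of the nano model
        self.width = 0.25     # width multiplier of the nano model
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]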


    5. Modify the training set split

Open exps/example/yolox_voc/yolox_voc_nano.py and, in get_data_loader, set image_sets=[('2007', 'train')] (see the fragment after step 6).

This change is debatable: some tutorials use trainval here, but this author believes train is correct.


    6. Modify the validation set split

Open exps/example/yolox_voc/yolox_voc_nano.py and, in get_eval_loader, set image_sets=[('2007', 'val')].

This change is also debatable: some tutorials leave it as test, but this author believes it should be val during training and switched to test for the final evaluation after training.
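
Assuming your yolox_voc_nano.py keeps the structure of the stock yolox_voc_s.py, the two edits above are the image_sets arguments of the VOCDetection datasets built inside the two loaders (a fragment for orientation, not the full functions):

# in get_data_loader -- the training split:
image_sets=[('2007', 'train')],

# in get_eval_loader -- the evaluation split:
image_sets=[('2007', 'val')],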

    7. Modify the number of data-loading workers

Open yolox/exp/yolox_base.py and set self.data_num_workers in the __init__ function to a suitable value.


    8. Modify the number of training epochs

Open yolox/exp/yolox_base.py and modify self.max_epoch in the __init__ function.

    9. Start training

python tools/train.py -f exps/example/yolox_voc/yolox_voc_nano.py -d 1 -b 16 --fp16 -c weights/yolox_nano.pth

      [Note] If training fails with "Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.",
      add the following two lines at the top of tools/train.py:

import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

    10. Visualize training

tensorboard --logdir ./YOLOX_outputs/yolox_voc_nano

    11. Test the training results

    Evaluate the mAP and FPS metrics (in exps/example/yolox_voc/yolox_voc_nano.py, change the get_eval_loader split to image_sets=[('2007', 'test')]).

    Evaluate an early checkpoint (e.g. after epoch 5):

python tools/eval.py -f exps/example/yolox_voc/yolox_voc_nano.py -c YOLOX_outputs/yolox_voc_nano/epoch_5_ckpt.pth -b 1 -d 1 --conf 0.001 --fp16 --fuse

    Evaluate the best model:

python tools/eval.py -f exps/example/yolox_voc/yolox_voc_nano.py -c YOLOX_outputs/yolox_voc_nano/best_ckpt.pth -b 1 -d 1 --conf 0.001 --fp16 --fuse

    Run video detection and save the results:

python tools/demo.py video -f exps/example/yolox_voc/yolox_voc_nano.py -c YOLOX_outputs/yolox_voc_nano/best_ckpt.pth --path assets/intersection.mp4 --conf 0.3 --nms 0.65 --tsize 640 --save_result --device gpu

    Run video detection without saving the results:

python tools/demo.py video -f exps/example/yolox_voc/yolox_voc_nano.py -c YOLOX_outputs/yolox_voc_nano/best_ckpt.pth --path assets/intersection.mp4 --conf 0.3 --nms 0.65 --tsize 640 --device gpu

[Note] After re-splitting the dataset, be sure to delete the annotations_cache and results folders under datasets/VOCdevkit before training again.