# Helmet detection demo
An object detection demo that distinguishes heads wearing helmets from heads without them, running MobileNet-YOLO on K210-based edge devices.
## Training
### Environment preparation
The model is generated by aXeleRate and converted to a kmodel by nncase.
```shell
# master branch for MobileNetV1-YOLOv2, unstable branch to test MobileNetV1(V2)-YOLOv2(V3)
git clone https://git.trustie.net/yangtuo250/aXeleRate.git (-b unstable)
cd aXeleRate
pip install -r requirements.txt && pip install -e .
```
### training config setting
Some hyper-parameters in the example config:

- `architecture`: the backbone. `MobileNet7_5` by default; `MobileNet1_0` (α = 1.0) and above cannot run on K210 with the master branch because the feature maps cause OOM. On the unstable branch, `MobileNetV2_1_0` is OK.
- `input_size`: fixed model input size; a single integer when height equals width, otherwise a list (`[height, width]`).
- `anchors`: YOLOv2 anchors (for master) or anchors scaled to 1.0 (for unstable); they can be generated by darknet, as sketched after this list.
- `labels`: labels of all classes.
- `train(valid)_image(annot)_folder`: paths of the images and annotations used for training and validation.
- `saved_folder`: path where training results (models, checkpoints, logs, ...) are stored.
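As a hedged example, anchors can be computed with the `calc_anchors` helper in AlexeyAB's darknet fork; the `obj.data` path is a hypothetical placeholder for your own dataset description file, and the cluster count matches the five anchors used in the config below. For the unstable branch, divide the resulting values by the network width/height to scale them to 1.0.

```shell
# Sketch, assuming AlexeyAB's darknet fork and a dataset already
# converted to darknet format; obj.data is a hypothetical path.
./darknet detector calc_anchors obj.data -num_of_clusters 5 -width 320 -height 224
```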
My config for the unstable branch:
```json
{
    "model": {
        "type": "Detector",
        "architecture": "MobileNetV2_1_0",
        "input_size": [224, 320],
        "anchors": [
            [
                [0.1043, 0.1560],
                [0.0839, 0.3036],
                [0.1109, 0.3923],
                [0.1378, 0.5244],
                [0.2049, 0.6673]
            ]
        ],
        "labels": ["human"],
        "obj_thresh": 0.5,
        "iou_thresh": 0.45,
        "coord_scale": 1.0,
        "class_scale": 0.0,
        "object_scale": 5.0,
        "no_object_scale": 3.0
    },
    "weights": {
        "full": "",
        "backend": ""
    },
    "train": {
        "actual_epoch": 2000,
        "train_image_folder": "mydata/human/Images/train",
        "train_annot_folder": "mydata/human/Annotations/train",
        "train_times": 2,
        "valid_image_folder": "mydata/human/Images/val",
        "valid_annot_folder": "mydata/human/Annotations/val",
        "valid_times": 1,
        "valid_metric": "precision",
        "batch_size": 32,
        "learning_rate": 2e-5,
        "saved_folder": "mydata/human/results",
        "first_trainable_layer": "",
        "augmentation": true,
        "is_only_detect": false,
        "validation_freq": 5,
        "quantize": false,
        "class_weights": [1.0]
    },
    "converter": {
        "type": ["k210"]
    }
}
```
(For more detailed config usage, please refer to the original aXeleRate repo.)
### data preparation
Please prepare data in VOC format, with paths as set in the config above; a layout sketch follows.
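A minimal layout sketch matching the paths used in the example config (each image needs a same-named VOC XML annotation file):

```
mydata/human/
├── Images/
│   ├── train/        # *.jpg training images
│   └── val/          # *.jpg validation images
└── Annotations/
    ├── train/        # *.xml VOC annotations, one per image
    └── val/
```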
### train it!
```shell
python -m aXeleRate.train -c PATH_TO_YOUR_CONFIG
```
### model convert
Please refer to the nncase repo; a hedged example follows.
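A minimal sketch, assuming nncase v0.2.x and a tflite export of the trained model; the file names and the calibration image folder are placeholders:

```shell
# Compile the trained tflite model to a K210 kmodel;
# images/ is a hypothetical folder of calibration samples used for quantization.
ncc compile helmet.tflite helmet.kmodel -i tflite -t k210 --dataset images
```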
## Deployment
### compile and burn
Run `scons --menuconfig` (or `menuconfig`) in the bsp folder (`Ubiquitous/RT_Thread/bsp/k210`) and enable:
- More Drivers --> ov2640 driver
- Board Drivers Config --> Enable LCD on SPI0
- Board Drivers Config --> Enable SDCARD (spi1(ss0))
- Board Drivers Config --> Enable DVP(camera)
- RT-Thread Components --> POSIX layer and C standard library --> Enable pthreads APIs
- APP_Framework --> Framework --> support knowing framework --> kpu model postprocessing --> yolov2 region layer
- APP_Framework --> Applications --> knowing app --> enable apps/helmet detect
Then run
```shell
scons -j<n>
```
to compile, and burn the image with kflash (a hedged example follows).
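A burn sketch, assuming kflash.py, a Sipeed board on `/dev/ttyUSB0`, and `rtthread.bin` as the build output; the port, board type, and binary name are assumptions to adapt to your setup:

```shell
# -p serial port, -b baud rate, -B board type (e.g. dan for a Sipeed Dan dock)
kflash -p /dev/ttyUSB0 -b 1500000 -B dan rtthread.bin
```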
### json config and kmodel
Copy the json config used for deployment to `/kmodel` on the SD card. An example config file is `helmet.json` in this directory. Fields to be modified (an illustrative sample follows this section):
- `net_input_size`: same as `input_size` in the training config file, but always as an array.
- `net_output_shape`: final feature-map shape, which can be found in the nncase output.
- `sensor_output_size`: image height and width from the camera.
- `kmodel_size`: kmodel size as shown in the file system.
- `anchors`: same as `anchors` in the training config file (multi-dimensional anchors flattened to one dimension).
- `labels`: same as `labels` in the training config file.
- `obj_thresh`: array, the object threshold for each label.
- `nms_thresh`: NMS threshold for boxes.
Copy the final kmodel to `/kmodel` on the SD card as well.
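An illustrative sketch only, not the shipped `helmet.json`: the values below are placeholders derived from the training config above (224×320 input, five anchors, one class, so at stride 32 a 10×7 final map with 5 × (4 + 1 + 1) = 30 channels). Replace them with the numbers from your own nncase output, kmodel file, and camera mode, keeping the shape ordering reported by nncase.

```json
{
    "net_input_size": [224, 320],
    "net_output_shape": [10, 7, 30],
    "sensor_output_size": [240, 320],
    "kmodel_size": 1900000,
    "anchors": [0.1043, 0.1560, 0.0839, 0.3036, 0.1109, 0.3923, 0.1378, 0.5244, 0.2049, 0.6673],
    "labels": ["human"],
    "obj_thresh": [0.5],
    "nms_thresh": 0.3
}
```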
## Run
In the serial terminal, run `helmet_detect` to start a detection thread and `helmet_detect_delete` to stop it. Detection results are printed to the serial output.
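A typical session sketch, assuming the RT-Thread msh shell prompt (the exact log output varies):

```shell
msh />helmet_detect           # spawns the detection thread
msh />helmet_detect_delete    # stops the detection thread
```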
## TODO
- Fix LCD real-time result display.
- Test more object detection backbones and algorithms (like YOLOX).