[YOLOv7](https://github.com/WongKinYiu/yolov7) was a success last year, and now we have [YOLOv8](https://github.com/ultralytics/ultralytics), developed by Ultralytics. In this article, I will show how you can run inference with a pretrained weight and retrain YOLOv8 on a custom dataset.

Step 1: Install the Ultralytics package with `pip install ultralytics`.

Step 2: Clone the ultralytics repo with `git clone https://github.com/ultralytics/ultralytics.git`.

![](https://community.telemus.ai/pictrs/image/9cf1a0b7-4ea5-403b-9bd2-9d9bebdcd36d.png)

Step 3: Install all dependencies using the requirements.txt from the cloned repository.

![](https://community.telemus.ai/pictrs/image/ebb27bf3-def1-429c-a3ad-14a62eee6af9.png)

Step 4: Since our task is detection, we set task=detect, and since we only want predictions, we set mode=predict. We have not trained any model yet, so we use a pretrained weight, yolov8n.pt. You don't have to download the weights separately; they are downloaded automatically. (An equivalent Python-API sketch of this step is shown at the end of the post.)

`yolo task=detect mode=predict model=yolov8n.pt source=peoples.jpg`

![](https://community.telemus.ai/pictrs/image/befa880e-3109-4bb6-a4b3-b5adee879c19.png)

![](https://community.telemus.ai/pictrs/image/09ba2540-3914-491d-a217-1e8b1842448e.png)

Step 5: Training on a custom dataset. I kept the dataset in the following format:

![](https://community.telemus.ai/pictrs/image/b519d89f-621c-4a88-9fee-7d2965baf423.png)

Create a custom_data.yaml file for the dataset directory configuration. In this task I will be detecting the safety helmets of construction workers, and there are three classes: helmet, no-helmet, and hi-vis. (A sketch of such a file is given at the end of the post.)

![](https://community.telemus.ai/pictrs/image/3556b75c-7730-4595-938b-88928fa78493.png)

Move custom_data.yaml into YOLO's dataset folder: yolo -> data -> datasets.

![](https://community.telemus.ai/pictrs/image/fa12d140-4a84-4493-a7cd-0f5413f89703.png)

Since we now want to train the model, we keep task=detect and set mode=train. We train for 100 epochs and pass the path of custom_data.yaml as the dataset configuration.

`yolo task=detect mode=train data=yolo/data/datasets/custom_data.yaml epochs=100 model=yolov8n.pt`

![](https://community.telemus.ai/pictrs/image/32705645-1f35-4bd0-96ac-c20ffeeab005.png)

Step 6: After training, we test on a new image. The output will be in a predict folder under "runs\detect\". (Here we pass the path of our newly trained weights for detection.)

![](https://community.telemus.ai/pictrs/image/6edfa6b6-0b38-479a-8736-3e3dbe030ce7.png)

![](https://community.telemus.ai/pictrs/image/22d31562-f977-4fc8-bd92-c9389401ead6.png)

We can use the same steps for detecting objects in videos by passing a video path as the source:

`yolo task=detect mode=predict model=../runs/detect/train8/weights/last.pt source=construction.mp4`

I hope this article on using YOLOv8 was useful. You can reach out to me through [LinkedIn](https://www.linkedin.com/in/melroy-pereira/).

Cheers

[Melroy Pereira](https://www.linkedin.com/in/melroy-pereira/)
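
For readers who prefer the Python API over the yolo CLI, here is a minimal sketch of the pretrained-weight inference from Step 4. It assumes the ultralytics package installed in Step 1 and reuses the same example image, peoples.jpg.

```python
from ultralytics import YOLO

# Load the pretrained YOLOv8 nano weights; they are downloaded
# automatically on first use, just like with the CLI.
model = YOLO("yolov8n.pt")

# Run detection on the example image from Step 4.
# With save=True, the annotated image is written under runs/detect/predict.
results = model.predict(source="peoples.jpg", save=True)

# Inspect the detections programmatically.
for r in results:
    print(r.boxes.xyxy)  # bounding boxes in pixel coordinates
    print(r.boxes.cls)   # predicted class indices
```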
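
The custom_data.yaml from Step 5 only appears as a screenshot above, so here is a sketch of what such a file typically looks like for the three helmet classes. The directory paths are hypothetical placeholders; replace them with the actual locations of your images and labels.

```yaml
# custom_data.yaml – the paths below are hypothetical placeholders
path: ../datasets/helmet   # dataset root directory
train: images/train        # training images, relative to 'path'
val: images/val            # validation images, relative to 'path'

# class names (index: name)
names:
  0: helmet
  1: no-helmet
  2: hi-vis
```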
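
And here is a sketch of the training and testing from Steps 5 and 6 with the same Python API. The data path and the train8 folder name are taken from the CLI commands above; the actual run folder will vary from run to run.

```python
from ultralytics import YOLO

# Start from the pretrained nano weights, as in the CLI training command.
model = YOLO("yolov8n.pt")

# Train for 100 epochs on the custom helmet dataset.
# Weights are saved under runs/detect/train*/weights/ (best.pt and last.pt).
model.train(data="yolo/data/datasets/custom_data.yaml", epochs=100)

# Reload the newly trained weights and run detection on a video,
# mirroring the last CLI command above.
trained = YOLO("runs/detect/train8/weights/last.pt")
trained.predict(source="construction.mp4", save=True)
```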