Improving the model used to recognize tomato ripeness

A recently published study addresses the challenges of tomato maturity recognition in natural environments, such as occlusion caused by branches and leaves, and the difficulty in detecting stacked fruits.

To overcome these issues, the researchers propose YOLOv8n-CA, a method for tomato maturity recognition that defines four maturity stages: unripe, turning color, turning ripe, and fully ripe. The model is based on the YOLOv8n architecture and incorporates the coordinate attention (CA) mechanism into the backbone network to enhance its ability to capture and express tomato fruit features.
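
For readers who want a concrete picture of the CA mechanism, the sketch below follows the standard coordinate attention design: average pooling along the height and width axes separately, a shared 1x1 convolution over the concatenated descriptors, and direction-wise sigmoid attention maps that re-weight the input. This is a minimal PyTorch illustration, not the study's code; the class name, reduction ratio, and activation choice are assumptions.

```python
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Minimal coordinate attention block (after Hou et al., 2021).

    Pools the feature map separately along height and width, encodes the
    two direction-aware descriptors with a shared 1x1 convolution, then
    re-weights the input with per-direction attention maps.
    """

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        hidden = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # keep H, squeeze W
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # keep W, squeeze H
        self.conv1 = nn.Conv2d(channels, hidden, kernel_size=1)
        self.bn = nn.BatchNorm2d(hidden)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(hidden, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        x_h = self.pool_h(x)                      # (n, c, h, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * a_h * a_w
```

Because the attention weights are computed per row and per column, the block retains positional information that global average pooling would discard, which is what helps the detector localize partially occluded fruits.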

Additionally, the C2f-FN structure was used in both the backbone and neck networks to strengthen the model's capacity to extract maturity-related features. The CARAFE up-sampling operator was integrated to expand the receptive field for improved feature fusion. Finally, the SIoU loss function replaced the original CIoU loss to address its shortcomings in bounding-box regression.
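
The SIoU formulation adds angle-, distance-, and shape-aware penalties on top of the plain IoU term, giving the regression loss a stronger signal when predicted and ground-truth boxes are misaligned. The sketch below is a hedged PyTorch approximation of that general form for (x1, y1, x2, y2) boxes, not the authors' exact implementation; the function name and the shape-cost exponent of 4 are assumptions.

```python
import torch


def siou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """SIoU-style box loss for (x1, y1, x2, y2) boxes (after Gevorgyan, 2022)."""
    # Widths, heights, and centres of predicted and target boxes
    w1, h1 = pred[..., 2] - pred[..., 0], pred[..., 3] - pred[..., 1]
    w2, h2 = target[..., 2] - target[..., 0], target[..., 3] - target[..., 1]
    cx1, cy1 = (pred[..., 0] + pred[..., 2]) / 2, (pred[..., 1] + pred[..., 3]) / 2
    cx2, cy2 = (target[..., 0] + target[..., 2]) / 2, (target[..., 1] + target[..., 3]) / 2

    # Plain IoU
    inter_w = (torch.min(pred[..., 2], target[..., 2]) - torch.max(pred[..., 0], target[..., 0])).clamp(min=0)
    inter_h = (torch.min(pred[..., 3], target[..., 3]) - torch.max(pred[..., 1], target[..., 1])).clamp(min=0)
    inter = inter_w * inter_h
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Smallest enclosing box (used to normalise the centre offsets)
    cw = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0]) + eps
    ch = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1]) + eps

    # Angle cost: largest when the centre offset sits at 45 degrees to the axes
    dx, dy = cx2 - cx1, cy2 - cy1
    sigma = torch.sqrt(dx ** 2 + dy ** 2) + eps
    sin_alpha = (torch.abs(dy) / sigma).clamp(max=1.0)
    angle_cost = torch.sin(2 * torch.arcsin(sin_alpha))

    # Distance cost, modulated by the angle cost
    gamma = 2 - angle_cost
    rho_x, rho_y = (dx / cw) ** 2, (dy / ch) ** 2
    dist_cost = (1 - torch.exp(-gamma * rho_x)) + (1 - torch.exp(-gamma * rho_y))

    # Shape cost: penalises width/height mismatch between the two boxes
    omega_w = torch.abs(w1 - w2) / torch.max(w1, w2).clamp(min=eps)
    omega_h = torch.abs(h1 - h2) / torch.max(h1, h2).clamp(min=eps)
    shape_cost = (1 - torch.exp(-omega_w)) ** 4 + (1 - torch.exp(-omega_h)) ** 4

    return 1 - iou + (dist_cost + shape_cost) / 2
```

Relative to CIoU, the extra angle term steers the predicted centre toward the nearest axis of the ground-truth centre before shrinking the remaining offset, which tends to help on cluttered scenes such as stacked fruits.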

Experimental results showed that the YOLOv8n-CA model had a parameter count of only 2.45 × 10⁶, a computational complexity of 6.9 GFLOPs, and a weight file size of just 4.90 MB. The model achieved a mean average precision (mAP) of 97.3%. Compared to the YOLOv8n model, it reduced the model size slightly while improving accuracy by 1.3 percentage points.

When compared with other mainstream detection models, including Faster R-CNN, YOLOv3s, YOLOv5s, YOLOv5m, YOLOv7, YOLOv8n, YOLOv10s, and YOLOv11n, the YOLOv8n-CA model was the smallest in size and demonstrated superior detection performance.

Gao, X.; Ding, J.; Zhang, R.; Xi, X. YOLOv8n-CA: Improved YOLOv8n Model for Tomato Fruit Recognition at Different Stages of Ripeness. Agronomy 2025, 15, 188. https://doi.org/10.3390/agronomy15010188

Source: MDPI
