NILE TILAPIA HUNGER DETECTION USING DEEP LEARNING METHOD ON IMAGE-BASED FISH SMART FEEDER

Bibliographic Details
Main Author: Owen, Michael
Format: Final Project
Language: Indonesian
Online Access:https://digilib.itb.ac.id/gdl/view/80913
Institution: Institut Teknologi Bandung
Description
Summary: Indonesia faces many challenges in competing with other nations in aquaculture, one of which is automation. Effective automation can address both underfeeding and overfeeding, and an automated feeding system based on machine learning emerges as a solution to replace conventional systems. Surface wave sensors and top-view cameras have limitations that can be overcome by using underwater cameras, so this research project uses an underwater camera to gather fish data. The objective is to determine the best architecture among CNN-RNN, CNN-LSTM, and CNN-GRU, along with its configuration. The dataset was obtained by recording fish activity in an aquarium and comprises 1,775 five-second videos divided into two labels, hungry and not hungry, with a ratio of 51% to 49%. This dataset was then used to train CNN-RNN, CNN-LSTM, and CNN-GRU architectures. Experimental results show that CNN-RNN achieves a test accuracy of 0.9551 and CNN-GRU a test accuracy of 0.9494, while the architecture yielding the highest accuracy is CNN-LSTM with a test accuracy of 0.9607. The CNN-LSTM architecture consists of 4 CNN layers, 2 LSTM layers, and a fully connected neural network at the end. Its configuration uses RGB images, a frame size of 64 x 64, 10 frames, a batch size of 32, and the Adam optimizer with a learning rate of 0.001.
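
The summary does not report layer-level details (filter counts, kernel sizes, LSTM units), so the following is only a minimal sketch of such a CNN-LSTM video classifier in Keras. Only the 10-frame 64 x 64 RGB input, the 4 convolutional layers, the 2 LSTM layers, the fully connected head, the batch size of 32, and the Adam optimizer with learning rate 0.001 come from the summary; all other values are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(num_frames=10, height=64, width=64, channels=3):
    # Input: a clip of 10 RGB frames, each 64 x 64 (from the summary)
    inputs = layers.Input(shape=(num_frames, height, width, channels))

    # 4 CNN layers applied to every frame via TimeDistributed
    # (filter counts and kernel sizes are assumptions)
    x = inputs
    for filters in (16, 32, 64, 128):
        x = layers.TimeDistributed(
            layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))(x)
        x = layers.TimeDistributed(layers.MaxPooling2D((2, 2)))(x)

    # Flatten each frame's feature map into a vector, giving a sequence
    x = layers.TimeDistributed(layers.Flatten())(x)

    # 2 LSTM layers model the temporal pattern of fish activity
    # (unit counts are assumptions)
    x = layers.LSTM(64, return_sequences=True)(x)
    x = layers.LSTM(64)(x)

    # Fully connected head; one sigmoid unit for hungry vs. not hungry
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)

    model = models.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Hypothetical usage with preprocessed clip tensors and binary labels:
# model = build_cnn_lstm()
# model.fit(train_clips, train_labels, batch_size=32, epochs=20,
#           validation_data=(val_clips, val_labels))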