The SmartBin | AI Trash Identification and Sorting

by sirbucezar in Circuits > Raspberry Pi


Hey! This is an integration project I built as a first-year Creative Tech & AI student at Howest (Kortrijk, Belgium). The brief was to come up with an AI solution, build it in a period of 3 weeks, and have it documented and tested.

GitHub repository


Project outcome:


This bin is designed to identify the type of waste placed onto it and then sort it into designated compartments for PMD (plastics, metals, and drink cartons), residual waste, paper, and glass.

This project is just a starting point, but I'm really hoping it kicks off a whole series of innovations that make recycling an easy job, no matter where you are. From home use, to businesses that generate significant amounts of waste, to public spaces in busy European cities where ecological standards are high, this technology aims to reduce the problems caused by incorrect waste sorting and to contribute to a cleaner, more environmentally friendly living environment.

To achieve this, the project integrates computer vision. A first pass locates the piece of waste in the picture and a second pass classifies the material, and this initial prediction is recorded for the sorting step. This two-stage approach keeps the material identification accurate.

After the scan button is pressed, the AI models identify the type of waste and the system selects the compartment designated for the predicted material. At the same time, an LCD screen displays the identified material and the confidence of the prediction as a percentage.

My aim with this project was to find the ideal balance between the speed and accuracy of the detection and sorting process, paving the way for more efficient and environmentally sustainable waste management solutions.

Supplies

Electronics (see the circuit diagram further below):

  • Raspberry Pi + Raspberry Pi camera
  • Stepper motor (drives the rotating part through the gears)
  • LCD display (status and prediction feedback)
  • RGB LED (status feedback)
  • 2 push buttons (power and scan)
  • Wiring for the connections

Buildables:

  • 3 sheets of 8 mm multiplex plywood (600 x 450 mm) - suitable for laser cutting | top and bottom layers of the bin, and the top part of the rotating section.
  • 2 sheets of 5 mm multiplex plywood (600 x 450 mm) - suitable for laser cutting | 2nd top layer (which the electronics sit on) and the lower part of the rotating section.
  • 1 sheet of 10 mm clear extruded acrylic (400 x 400 mm) - suitable for laser cutting
  • 4 sheets of 4 mm clear acrylic (380 x 180 mm) - for the compartment walls of the bin.
  • 4 sheets of 5 mm smoked extruded acrylic (404 x 100 mm) - the covers for the top part of the bin.
  • 4 threaded rods (⌀ 10 mm, L 1000 mm) - the 4 support rods of the whole bin.
  • 1 threaded rod (⌀ 12 mm, L 500 mm) - the center axis of the rotating bin.
  • 2 aluminium round tubes (⌀ 15 mm, L 1000 mm) - to cover the support rods for a better user experience (essentially handles).
  • 16 M10 nuts - for fixing the support rods to the structure.
  • 4 M12 nuts - for fixing the axis of the rotating part.
  • 20 M10 washers and 8 M10 fender washers - for fastening the support rods.
  • 4 M12 washers and 4 M12 fender washers - for fastening the axis of the rotating part.
  • 2 rhombic flanged ball bearings (12 mm inner diameter) - fixed to the bin's center axis, so the rotating part can actually rotate.
  • Soldering iron, solder wire, epoxy glue, heat-shrink tubing and screws.

AI Model Creation | Computer Vision


Data collection:

  • For the first step, I looked for a dataset that depicts the materials as closely as possible to the way the input images will look when the trained model classifies them. This is important: training can go very well and the model can still perform poorly on unseen data because of different backgrounds, different lighting conditions, or unusual angles.
  • I found a dataset that fit my requirements, not only because of the image angles and the clearly distinguishable background, but also because the waste looks realistic: deformed, dirty, and in different shapes and conditions. This gave the model the variety it needed to become robust overall.

Data annotation:

  • First I had to decide what type of model to train for the whole process to work at its best. So the first thing to do was annotating the data I would later use to train an object detection model.
  • As dull and time-consuming as it might seem at first, this is a very important step. Manual annotation is what gives the model the labels and details it needs to learn from.
  • I did this on Roboflow. You import the folders, create the classes, and then annotate the pictures either with the bounding box tool (you manually draw boxes that tightly fit the object) or with the smart polygon tool (you click on the object and a shape is created outlining its exact perimeter). I used the smart polygon tool.

Creating dataset and prepping for training:

  • An important thing to check before training is the balance of the dataset: each class should have approximately the same number of pictures. Balance is essential so the model doesn't develop a bias towards a majority class. Otherwise you will have a love-hate relationship with the model when it gives you over 90% accuracy for paper and misclassifies half of the other images as paper (been there, done that, do not recommend). A minimal sketch for checking the balance locally follows the download snippet below.
  • Once the dataset is roughly evenly distributed across the classes, we move on to applying preprocessing and augmentation steps (check the pics for the available options and my choices). This expands the dataset by generating edited copies of every picture based on the steps you choose, so the model becomes more robust and doesn't overfit (overfitting is when the model performs very well on the training set but poorly on the validation and test sets).
  • Once that is done, all you need to do is export the dataset. I chose the Jupyter notebook export format, which gives you the code that downloads and unzips the dataset into your project directory.
  • You will have to install a few libraries such as roboflow, ultralytics and numpy (if you get 'No module named ...' errors, just paste them into the good old ChatGPT; it will give you the installation commands and help you troubleshoot).
from roboflow import Roboflow

# Download the annotated dataset in YOLOv8 format
rf = Roboflow(api_key="your_api_key")
project = rf.workspace("your_workspace_name").project("your_project_name")
version = project.version(3)
dataset = version.download("yolov8")
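To double-check the class balance mentioned above directly on the downloaded data, you can count the annotated instances per class. This is just a minimal sketch, assuming the default YOLOv8 export layout (label .txt files under train/labels and class names in data.yaml):

import os
from collections import Counter

import yaml

# Class names come from the data.yaml of the exported dataset
with open(os.path.join(dataset.location, "data.yaml")) as f:
    names = yaml.safe_load(f)["names"]

# Each label file holds one annotation per line, starting with the class id
counts = Counter()
labels_dir = os.path.join(dataset.location, "train", "labels")
for label_file in os.listdir(labels_dir):
    with open(os.path.join(labels_dir, label_file)) as f:
        for line in f:
            if line.strip():
                counts[names[int(line.split()[0])]] += 1

print(counts)  # roughly equal counts per class is what we want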
  • Once the dataset has been successfully imported, all you have to do is load the model (in this case, I used a YOLOv8 model) and start training with the chosen parameters.
from ultralytics import YOLO

# Load a pretrained YOLOv8 model (e.g. the nano variant)
model = YOLO('yolov8n.pt')

# Train the model on the exported dataset
model.train(data='/Users/cezar/Desktop/Project One/2023-2024-projectone-ctai-sirbucezar/AI/ProjectOne-obj-detection-2/data.yaml', epochs=10)

# The best weights are saved automatically; this is where to find them
model_path = 'runs/detect/train/weights/best.pt'
  • This will start training the model. It takes time, be patient :) After it is done, you should get a result similar to this:
Model summary (fused): 168 layers, 3006428 parameters, 0 gradients, 8.1 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 13/13 [01:11<00:00, 5.48s/it]
                   all        416        395      0.701      0.745      0.773      0.669
                 Glass        416         83      0.805      0.855      0.931      0.771
                   PMD        416        144      0.778      0.799      0.841      0.746
                 Paper        416        131      0.744      0.947      0.924      0.808
                  Rest        416         37      0.477      0.378      0.396       0.35
  • Keep in mind that this is only the first stage of training. It gives the model its general outline and a first set of weights. Later on, we can apply fine-tuning, which is basically training the already-trained model again with a new set of hyperparameters and preprocessing/augmentation choices. Usually, this step is what brings out the real performance. The training process differs every time, because the model adapts to the data differently depending on the learning rate and other parameters such as the batch size and the number of epochs (see the sketch after this list for what a fine-tuning run can look like).
  • For now, what we have achieved is an object detection model that, given an input picture, can draw a bounding box around the piece of waste. However, this is just the first of the two steps needed to detect the material in the picture.
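A rough sketch of what such a fine-tuning run can look like (the hyperparameter values here are placeholders, not the ones I ended up using): reload the best weights from the first run and train again with adjusted settings.

from ultralytics import YOLO

# Continue training from the best weights of the first run
model = YOLO('runs/detect/train/weights/best.pt')

# Placeholder hyperparameters - tune learning rate, batch size and epochs for your data
model.train(
    data='ProjectOne-obj-detection-2/data.yaml',
    epochs=30,
    batch=16,
    lr0=0.001,
)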

Training a classification model:

  • Because what we are actually interested in is the material type, we need to train a second model: a classifier.
  • After a few tries and a lot of thinking, I figured that the best way to do this is to run the whole dataset I used for object detection through the already-trained detection model and save the results in a new directory. Every picture gets a bounding box, which is then cropped out and saved. This gave me the same dataset, but with as little background area as possible (a minimal sketch of this cropping step follows this list).
  • Training a single-label classification model doesn't require annotating the data with bounding boxes, as it doesn't look for an object in the picture, but rather analyzes its features to classify it into one of the possible classes.
  • I went through the same steps as before with this model as well, except that annotation here just means that the whole picture belongs to a specific class. This is done by assigning the images to the created classes.
  • The next step is applying preprocessing and augmentation steps to this newly created and annotated dataset (last picture).
  • For training, I went with the easier option this time and trained the model online, on Roboflow. It is an easy process and you just have to wait for the notification that it is done. Keep in mind, though, that this means that when actually using the model, the code has to make API calls to Roboflow, and there is a limit of 10k calls per month.
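The cropping step mentioned above can be done with the trained detection model itself. This is a minimal sketch, assuming the detection weights from the earlier training run and an images folder of the original dataset (paths and folder names are placeholders):

import os
from PIL import Image
from ultralytics import YOLO

# Trained object detection model from the previous step
model = YOLO('runs/detect/train/weights/best.pt')

src_dir = 'dataset/images'    # original pictures
dst_dir = 'dataset_cropped'   # cropped pictures for the classifier
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    results = model(os.path.join(src_dir, name))
    boxes = results[0].boxes
    if len(boxes) == 0:
        continue  # nothing detected, skip this picture
    # Crop the first detected bounding box and save it under the same file name
    x1, y1, x2, y2 = map(int, boxes.xyxy[0].tolist())
    Image.open(os.path.join(src_dir, name)).crop((x1, y1, x2, y2)).save(os.path.join(dst_dir, name))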

Creating the Electronic Circuit


Now that I had my models up and running, the next step was creating the circuit diagram.

I used Fritzing for this. The workflow of the electronics is:

Power BTN -> Scan Button -> Stepper Motor.

All steps are accompanied by feedback on the LCD display and the RGB LED.
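As a small illustration of how this workflow maps to code on the RasPi (the GPIO pin numbers below are assumptions, not my actual wiring), the scan button and the RGB LED can be handled with gpiozero:

from gpiozero import Button, RGBLED
from signal import pause

# Pin numbers are placeholders - adjust them to the actual wiring
scan_button = Button(17)
led = RGBLED(red=9, green=10, blue=11)

def on_scan():
    led.color = (1, 1, 0)   # yellow while the picture is being processed
    # ... take the picture, run the models, rotate the bin ...
    led.color = (0, 1, 0)   # back to static green: ready for the next scan

led.color = (0, 1, 0)       # static green: ready to scan
scan_button.when_pressed = on_scan
pause()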


For the actual assembly, there was some soldering involved.

Creating the 3D Prototype


Before getting to the actual prototype build, I created the 3D design in Tinkercad. I modelled everything at real dimensions so all the fittings would be right.


Developing the Gears | Finding the Right Gear Ratio


I used a gear generator website to calculate the right gearing ratio and sizing.
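To give an idea of how the ratio translates into stepper movement, here is a tiny calculation sketch; the tooth counts and motor specs below are illustrative assumptions, not my exact values:

# Illustrative numbers only - replace with your own gear and motor specs
motor_gear_teeth = 12
bin_gear_teeth = 60
gear_ratio = bin_gear_teeth / motor_gear_teeth        # 5:1 reduction

steps_per_motor_rev = 200 * 8                         # 200 full steps/rev, 1/8 microstepping
steps_per_bin_rev = steps_per_motor_rev * gear_ratio
steps_per_compartment = steps_per_bin_rev / 4         # 4 compartments, 90 degrees apart

print(steps_per_compartment)  # motor steps needed to move to the next compartment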

Preparation for Laser Cutting and the Full Build


For this step, I had to design Illustrator files for the laser cutter.

The settings of the file should be:

  • Document mode set to RGB
  • All line weights should be 0.025 mm
  • RGB(255,0,0) for cutting
  • RGB(0,0,255) for engraving
  • RGB(0,0,0) for burning

Build

  • For the build, I first connected the 4 support rods to the lower layer, ensuring a stable platform. I used one washer, one fender washer and one nut on each side.
  • I assembled the rotating part, gluing the acrylic bin walls to the circular parts of the bin. I milled 4 small channels for the acrylic sheets to slide into.
  • I screwed the bearing flanges into the wooden panels, making sure they were centered to avoid friction caused by misalignment.
  • Then I inserted the middle axis, securing it with the bolts in the bearing flanges.
  • I accurately measured the height needed between the 2nd top layer and the rotating part, so the driver gear on the stepper motor has the right spacing to spin the external gear that is fixed to the top of the rotating part.
  • I then installed all the electronics on that layer, making sure the RasPi ports stay accessible.
  • The top part has holes that only go 4 mm into the full 8 mm thickness, so it can be opened later whenever fast access to the electronics is needed.
  • Finally, I screwed in the side doors that hide the open space between the top layers. One of the doors is mounted on 3 plastic piano hinges so it opens up for even faster access to the electronics.

The Illustrator files for the laser cutter are designed so that all the components fit exactly.

Code

The first step is connecting to the RasPi over SSH in VS Code, to easily create the Python files that operate all the electronics.

Workflow:

  • PWR button pressed - the system powers on.
  • Initialization - the RasPi checks the internet connection, resets all the GPIO pins, clears the LCD display and checks system health. The RGB LED flashes purple.
  • The LCD displays that the system is ready to scan and the scan button can be pressed to take a picture. The RGB LED is static green.
  • The RasPi camera takes a picture.
  • The picture is fed into the object detection model.
  • The picture is cropped to the bounding box.
  • The cropped picture goes through the classification model. The LCD displays that the picture is being processed and the RGB LED flashes yellow.
  • The result is saved in a CSV file. The LCD displays the identified material and the prediction confidence.
  • The stepper motor code reads the last detection from the CSV file to know the current position and performs the steps needed to rotate to the designated compartment. The LCD displays that the trash can be thrown in and that the system is ready for the next scan. (A simplified sketch of this loop follows below.)
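A simplified sketch of that scan-to-sort loop is shown below. The helper functions capture_image, classify_crop and move_to_bin are placeholders for the actual modules in the repository, and best.pt stands for the trained detection weights:

import csv
from datetime import datetime

from PIL import Image
from ultralytics import YOLO

detector = YOLO('best.pt')  # trained object detection weights

def handle_scan():
    # 1. Take a picture (capture_image is a placeholder for the camera module)
    image_path = capture_image()

    # 2. Detect the piece of waste and crop the picture to its bounding box
    box = detector(image_path)[0].boxes.xyxy[0].tolist()
    crop = Image.open(image_path).crop(tuple(map(int, box)))

    # 3. Classify the cropped picture (e.g. via the hosted Roboflow model)
    material, confidence = classify_crop(crop)

    # 4. Log the result so the stepper code knows the latest target compartment
    with open('detections.csv', 'a', newline='') as f:
        csv.writer(f).writerow([datetime.now().isoformat(), material, confidence])

    # 5. Rotate the designated compartment under the opening
    move_to_bin(material)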

After having a functional workflow, I separated the code into different files (as you can see in the RPi folder on GitHub). Some files are not uploaded to GitHub for privacy reasons (the .env file that contains the Roboflow credentials).
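If you reproduce this, the Roboflow credentials can be loaded from such a .env file with python-dotenv; a minimal sketch, where the variable name is a placeholder and not necessarily the one used in the repository:

# .env (kept out of Git):
# ROBOFLOW_API_KEY=your_api_key

import os
from dotenv import load_dotenv
from roboflow import Roboflow

load_dotenv()  # reads the .env file from the working directory
rf = Roboflow(api_key=os.getenv("ROBOFLOW_API_KEY"))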

Next, I set up a service file on the RasPi so the code runs at boot, without the need to start it manually. The code that runs as a service constantly checks the state of the power button: as long as the power button is pressed, the program runs. If at any moment the system is powered off, everything shuts down and then restarts when power comes back. (A sketch of such a service file is shown below.)
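On Raspberry Pi OS this is typically done with a systemd unit. A minimal sketch of such a service file, assuming the entry script lives at /home/pi/smartbin/main.py (path, file names and service name are placeholders):

# /etc/systemd/system/smartbin.service
[Unit]
Description=SmartBin main program
After=network-online.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/smartbin/main.py
WorkingDirectory=/home/pi/smartbin
Restart=always
User=pi

[Install]
WantedBy=multi-user.target

# Enable it so it starts at boot:
#   sudo systemctl enable --now smartbin.service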

Media

Photos of the finished SmartBin build.