Automatic Plastic Bottle and Aluminum Can Detector Using AI
by Timothy Y in Circuits > Raspberry Pi
In this project, you will use machine vision running on a Raspberry Pi to detect aluminum cans and plastic bottles, then send the predictions to an Arduino Uno, which displays the results on an LCD.
Supplies
- Arduino UNO
- Jumper wire
- Keyboard
- Raspberry Pi Camera Module (any)
- Push Button
- Wooden Box Frame
- Monitor with HDMI input
- Raspberry Pi (Any Model)
- Optical mouse
- Basic Red 5mm LED
- Arduino LCD Screen
- LED strip (white)
- Breadboard
Apps and Platforms
- Arduino IDE
- Raspbian
- Edge Impulse Studio
Project Description
Recently, Artificial Intelligence has emerged as a solution to global problems such as the demographic drought and recycling contamination. I created this project to spark interest in the many possibilities of Artificial Intelligence and to show just how easy it is to create a model of your own, in the hope of encouraging more people to go down this path and solve even more of the world's problems. In this project, you will train a model using Edge Impulse and run it on a Raspberry Pi, then modify the runner code to send the model's prediction to an Arduino Uno over serial communication. Once the Arduino Uno receives the result, it prints it to the LCD screen. Finally, pressing a button triggers one classification cycle, after which the system waits until the button is pressed again.
Setup
Before you begin, make sure to set up your Raspberry Pi. Once your Raspberry Pi is ready, find a good image dataset to train your model on, or create your own image dataset by following my last project! Once you have your image dataset, create an account on Edge Impulse and start a new project.
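One check that is not in the original steps: on older Raspbian releases the camera interface may need to be enabled by hand before the Camera Module shows up:

sudo raspi-config
# Interface Options > Camera > Enable, then reboot (menu names vary by OS version)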
Edge Impulse
Now it’s time to create your impulse! The first step is uploading your data. Before you upload, leave "Upload into category" set to "Automatically split between training and testing" and "Label" set to "Infer from file name." Then click the grey "Choose files" button, select your images, and click the green "Begin upload" button. Repeat these steps until you have uploaded all your images.
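If you have a large dataset, the Edge Impulse CLI also offers a command-line uploader that does the same job; a minimal sketch, assuming the CLI is installed and your labels can be inferred from the file names (flags can change between versions, so check edge-impulse-uploader --help):

edge-impulse-uploader --category split path/to/your/dataset/*.jpg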
Edge Impulse
Follow the numbered instructions in the image above.
Edge Impulse
Follow the numbered instructions in the image above.
Edge Impulse
Once all your data is uploaded, we have to create the impulse. Navigate to "Impulse design" in the left column, select the "Create impulse" bullet point, configure the four blocks (each described in the link), and click the green "Save Impulse" button.
Edge Impulse
After you have saved your impulse, select the "Image" bullet point under "Create impulse." Now select "Parameters" and edit the parameters (I recommend RGB for better accuracy), which are described in the link. Once you have set the parameters, click the blue "Save parameters" button.
Edge Impulse
Navigate to the "Generate features" tab to the right of "Parameters," click the green "Generate features" button, and look at the feature explorer (the farther apart the colors are from each other, the better!).
Edge Impulse
After the features have been generated, select the "Transfer learning" bullet point underneath "Image" and set the neural network settings. Then select the neural network architecture, click the green "Start training" button, and watch as the computer starts learning! Once the computer is done learning, look at the training output to see how well it performed.
Deployment
Once you have trained your model, navigate to the "Deployment" tab > Build firmware > Linux boards, then click the green "Build" button. A pop-up will appear describing how Edge Impulse deploys to Linux boards; click the green "Get Started Now" button.
Deployment
Follow the numbered instructions in the image above.
Deployment
The button will redirect you to https://docs.edgeimpulse.com/docs/edge-impulse-for-linux. On this webpage, click on "Raspberry Pi 4" under "Development boards," follow the instructions, and watch it predict live on your Raspberry Pi!
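For reference, on a Raspberry Pi those instructions come down to installing the Edge Impulse Linux CLI and launching the runner. Treat this as a summary, not a substitute; prerequisites such as Node.js are covered on the linked page:

npm install edge-impulse-linux -g --unsafe-perm
edge-impulse-linux-runner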
Coding: Edge-Impulse-Linux-Runner
Once you’ve had fun watching your Raspberry Pi 4 predict what it thinks it is seeing, it is time to start coding! Log in to your Raspberry Pi.
Our first step is locating the edge-impulse-linux-runner file which, as you learned in the deployment instructions, is the file used to run the model on the Raspberry Pi; it is also the file we will edit to send the predictions to the Arduino. Navigate to Folders > /bin > edge-impulse-linux-runner. Once located, we need to change the file's permissions using chmod so we can edit it, so click on the terminal icon and type the command shown below.
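The exact command is not shown in the original article, so here is a typical one; the path is an assumption based on the folder navigation above and may differ on your system:

sudo chmod 777 /bin/edge-impulse-linux-runner

(777 grants everyone read, write, and execute, which is convenient for this tutorial; any mode that lets your user edit the file also works.)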
Edge-Impulse-Linux-Runner
Once the permissions have been changed, right-click the file and open it.
Edge-Impulse-Linux-Runner
Once opened, we will be in the Geany IDE looking at a bunch of Node.js. Near the top of the file, before the existing variable declarations, create the variables below:
//COMMUNICATION---------------------------------------------------------
const { pipeline } = require("serialport");
var SerialPort = require("serialport");
const parsers = SerialPort.parsers;
const parser = new parsers.Readline({
delimiter: "\r\n"
});
var port = new SerialPort('/dev/ttyACM0',{
baudRate: 115200,
dataBits: 8,
parity: 'none',
stopBits: 1,
flowControl: false
});
port.pipe(parser);
//COMMUNICATION---------------------------------------------------------
//BUTTON
var stillContinue = false;
//BUTTON
//SMOOTHINGDATA---------------------------------------------------------
var ALAverage = 0;
var PLAAverage = 0;
var counter = 0;
//SMOOTHINGDATA---------------------------------------------------------
//motorLimit
let oldValue = true;
let newValue = false;
//motorLimit
//BUTTON
const button = require("/home/raspberry/buttonAndLED/buttonAndLED.js");
//BUTTON
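If you want to confirm the serial connection is alive before going further, you can also add two event handlers right after these declarations. This is a small addition of my own, not part of the original code, using the standard serialport events:

//Optional sanity check (my addition): log when the port opens or errors,
//so a wrong '/dev/ttyACM0' path fails loudly instead of silently.
port.on('open', function () { console.log('Serial port to Arduino open'); });
port.on('error', function (err) { console.error('Serial port error: ' + err.message); });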
Now that all the variables are defined, we need to add the code that will actually send the prediction to the Arduino. The code below averages each class's confidence over 20 frames and only sends a prediction when the model is 70% confident or more. Write the following code in the else-if statement, right after "imageClassifier" is set to a value:
//Button----------------------------------------------------
console.log("Button state ", button.getter());
await imageClassifier.start(); //IMPORTANT
//Button----------------------------------------------------
let webserverPort = await startWebServer(model, camera, imageClassifier);
console.log('');
console.log('Want to see a feed of the camera and live classification in your browser? ' + 'Go to http://' + (get_ips_1.ips.length > 0 ? get_ips_1.ips[0].address : 'localhost') + ':' + webserverPort);
console.log('');
imageClassifier.on('result', async (ev, timeMs, imgAsJpg) => {
console.log("Starting again-----------------");
if (ev.result.classification) {
if (button.getter() == 1)
{
console.log("Starting again-----------------");
if (stillContinue == true)
{
await imageClassifier.start(); //IMPORTANT
stillContinue = false;
port.write("101");
}
// print the raw predicted values for this frame
// (turn into string here so the content does not jump around)
// tslint:disable-next-line: no-unsafe-any
let c = ev.result.classification;
for (let k of Object.keys(c))
{
c[k] = c[k].toFixed(4);
}
console.log('classifyResLine271', timeMs + 'ms.', c, 'Button State ', button.getter());
//SMOOTHINGDATA---------------------------------------------------------
//Replace "AL" and "PL" with your own class names (make sure to write them in between quotes)
ALAverage = ALAverage + Number(c["AL"]);
PLAAverage = PLAAverage + Number(c["PL"]);
//If you have more than two classes, add your other classes after this line and follow the same format, replacing "yourClassName" with the class name:
//yourClassNameAverage = yourClassNameAverage + Number(c["your class name"]);
counter ++;
//console.log("Dividing" + Number(ALAverage), "and" + typeof counter);
//console.log("Dividing" + Number(PLAAverage), "and" + typeof counter);
if (counter >= 20){
ALAverage = Number (ALAverage)/20;
PLAAverage = Number (PLAAverage)/20;
//console.log("Dividing" + typeof ALAverage, "and " + typeof counter);
//console.log("Dividing" + typeof PLAAverage, "and" + typeof counter);
//console.log("Dividing" + ALAverage, "and" + counter);
//console.log("Dividing" + PLAAverage, "and" + counter);
//console.log(ALAverage, PLAAverage);
//COMMUNICATION---------------------------------------------------------
if(ALAverage >=.70)
{
var whatToSend = Math.floor(ALAverage.toString() * 100) +"_"+ Math.floor(PLAAverage.toString() * 100);
port.write(whatToSend);
console.log("Sending " + whatToSend);
//oldValue = newValue;
button.resetIt();
await imageClassifier.stop();
} else if (PLAAverage >= .70)
{
var whatToSend = Math.floor(ALAverage.toString() * 100) +"_"+ Math.floor(PLAAverage.toString() * 100);
port.write(whatToSend);
console.log("Sending " + whatToSend);
//oldValue = newValue;
button.resetIt()
await imageClassifier.stop();
} //If you have more than two classes, copy and past the else if, replacing the average variables and adding it to the whatToSend variable
} else {
button.resetIt();
port.write("0");
await imageClassifier.stop();
// oldValue = true;
// newValue = false;
}
//COMMUNICATION---------------------------------------------------------
ALAverage = 0;
PLAAverage = 0;
counter = 0;
}
//SMOOTHINGDATA---------------------------------------------------------
}
else if (button.getter() == 0)
{
console.log ("Not running");
await imageClassifier.stop();
stillContinue = true;
}
ONE LAST STEP!
The last step before we can move on is editing image-classifier.js so that our code can restart the image classifier; otherwise, stopping it would halt everything and we could not continue.
Navigate to /usr/lib/node_modules/edge-impulse-linux/build/library/classifier/image-classifier.js, open the file, and make the changes shown below. In short: start() now resets _stopped to false, the early return when _stopped is true is commented out, and stop() no longer shuts down the camera and runner (the edited spots are marked with //IMPORTANT and explanatory comments):
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.ImageClassifier = void 0;
const tsee_1 = require("tsee");
const sharp_1 = __importDefault(require("sharp"));
class ImageClassifier extends tsee_1.EventEmitter {
/**
* Classifies realtime image data from a camera
* @param runner An initialized impulse runner instance
* @param camera An initialized ICamera instance
*/
constructor(runner, camera) {
super();
this._stopped = true;
this._runningInference = false;
this._runner = runner;
this._camera = camera;
}
/**
* Start the image classifier
*/
async start() {
console.log("image classifier started!!!!!!!!!!!!!!!!!!!!!!!");
let model = this._runner.getModel();
if (model.modelParameters.sensorType !== 'camera') {
throw new Error('Sensor for this model was not camera, but ' +
model.modelParameters.sensor);
}
this._stopped = false; //IMPORTANT
let frameQueue = [];
this._camera.on('snapshot', async (data) => {
// are we looking at video? Then we always add to the frameQueue
if (model.modelParameters.image_input_frames > 1) {
let resized = await this.resizeImage(model, data);
frameQueue.push(resized);
}
// still running inferencing?
if (this._runningInference) {
return;
}
// too little frames? then wait for next one
if (model.modelParameters.image_input_frames > 1 &&
frameQueue.length < model.modelParameters.image_input_frames) {
return;
}
this._runningInference = true;
try {
// if we have single frame then resize now
if (model.modelParameters.image_input_frames > 1) {
frameQueue = frameQueue.slice(frameQueue.length - model.modelParameters.image_input_frames);
}
else {
let resized = await this.resizeImage(model, data);
frameQueue = [resized];
}
let img = frameQueue[frameQueue.length - 1].img;
// slice the frame queue
frameQueue = frameQueue.slice(frameQueue.length - model.modelParameters.image_input_frames);
// concat the frames
let values = [];
for (let ix = 0; ix < model.modelParameters.image_input_frames; ix++) {
values = values.concat(frameQueue[ix].features);
}
let now = Date.now();
if (this._stopped) {
//return; commented out to not stop the program
console.log("this._stopped is true");
}
let classifyRes = await this._runner.classify(values);
let timeSpent = Date.now() - now;
this.emit('result', classifyRes, classifyRes.timing.dsp + classifyRes.timing.classification + classifyRes.timing.anomaly, await img.jpeg({ quality: 90 }).toBuffer());
}
finally {
this._runningInference = false;
}
});
}
/**
* Stop the classifier
*/
async stop() {
console.log("image classifier stopped");
this._stopped = true;
//await Promise.all([
//this._camera ? this._camera.stop() : Promise.resolve(),
//this._runner.stop() //Commented to stop program from exiting
//]);
}
async resizeImage(model, data) {
// resize image and add to frameQueue
let img;
let features = [];
if (model.modelParameters.image_channel_count === 3) {
img = sharp_1.default(data).resize({
height: model.modelParameters.image_input_height,
width: model.modelParameters.image_input_width,
});
let buffer = await img.raw().toBuffer();
for (let ix = 0; ix < buffer.length; ix += 3) {
let r = buffer[ix + 0];
let g = buffer[ix + 1];
let b = buffer[ix + 2];
// tslint:disable-next-line: no-bitwise
features.push((r << 16) + (g << 8) + b);
}
}
else {
img = sharp_1.default(data).resize({
height: model.modelParameters.image_input_height,
width: model.modelParameters.image_input_width
}).toColourspace('b-w');
let buffer = await img.raw().toBuffer();
for (let p of buffer) {
// tslint:disable-next-line: no-bitwise
features.push((p << 16) + (p << 8) + p);
}
}
return {
img: img,
features: features
};
}
}
exports.ImageClassifier = ImageClassifier;
//# sourceMappingURL=image-classifier.js.map
ButtonAndLED
Now we have to write the code that will make the button trigger the image classification.
Create a new folder called buttonAndLED at /home/raspberry.
Now create a .js file called buttonAndLED.js inside that folder, at /home/raspberry/buttonAndLED/buttonAndLED.js.
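You can also create both from the terminal, and while you are there install the onoff package the script requires (the original does not mention installing it, so this step is my assumption):

mkdir -p /home/raspberry/buttonAndLED
touch /home/raspberry/buttonAndLED/buttonAndLED.js
cd /home/raspberry/buttonAndLED && npm install onoff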
Inside buttonAndLED.js, write the following code:
var Gpio = require('onoff').Gpio; //include onoff to interact with the GPIO
var LED = new Gpio(4, 'out'); //use GPIO pin 4 as output
var pushButton = new Gpio(17, 'in', 'both'); //use GPIO pin 17 as input, and 'both' button presses, and releases should be handled
var ready = true;
var go = 0;
function sleep(ms) {
return new Promise((resolve) => {
setTimeout(resolve, ms);
});
}
function hi()
{
console.log("hi");
}
function getter(){
console.log("starting getter");
//console.log("go:" + go);
return go;
}
function resetIt(){
console.log("starting reset");
ready = true;
go = 0;
LED.writeSync(0);
}
console.log("Button and LED intidated");
pushButton.watch(function (err, value) { //Watch for hardware interrupts on pushButton GPIO, specify callback function
if (err) { //if an error
console.error('There was an error', err); //output error message to console
return;
}
if (value == 1 && ready == true){
console.log("button clicked");
//LED.writeSync(value); //turn LED on or off depending on the button state (0 or 1)
if (ready == true){
console.log("ready is tru");
ready = false;
LED.writeSync(1);
go = 1;
console.log("go equals");
console.log(go);
//setTimeout(reset, 5000);
}
}
});
function unexportOnClose() { //function to run when exiting program
LED.writeSync(0); // Turn LED off
LED.unexport(); // Unexport LED GPIO to free resources
pushButton.unexport(); // Unexport Button GPIO to free resources
};
//returnState();
process.on('SIGINT', unexportOnClose); //function to run when user closes using ctrl+c
module.exports = {getter,resetIt, hi};
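Before wiring this into the runner, you can sanity-check the button and LED on their own. Here is a quick test script of my own (not part of the original project); save it anywhere on the Pi and run it with node:

//quickTest.js (my addition): press the button and "go" should flip from 0 to 1
const button = require("/home/raspberry/buttonAndLED/buttonAndLED.js");
setInterval(function () { console.log("go =", button.getter()); }, 1000);

Press Ctrl+C to exit; the SIGINT handler in buttonAndLED.js will release the GPIO pins.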
Communication!
Finally, we can write the program that lets the Arduino Uno receive the prediction. For context, the Raspberry Pi sends one of three messages over serial: the two averaged confidences as whole percentages joined by an underscore (for example, 85_10 for 85% aluminum and 10% plastic), "101" when a classification run starts, or "0" when nothing passes the 70% threshold. The Arduino splits the message on the underscore and displays whichever class is larger.
On your Raspberry Pi or another device, open the Arduino IDE.
Create a new sketch called "communicate.ino", plug in your Arduino UNO, write the following code, and upload it to the Arduino. That's it!
/*
 * serial_usb_simple_arduino - For communicating over USB serial. Send it a '1' (character one)
 * and it will make the builtin LED start blinking every one second. Send it a '0'
 * (character zero) and it will make it stop blinking.
 *
 * Each time it receives one of the commands, it sends back an 'A' for acknowledge.
 * But send it a command it doesn't recognize and it sends back an 'E' for error.
 */
//bool blinking = false;
//bool led_on = false;
//int target_time;
#include <Wire.h>
#include <LiquidCrystal_I2C.h>
// Include the Servo library
#include <Servo.h>
// Declare the Servo pin
int servoPin = 3;
// Create a servo object
Servo Servo1;
const unsigned long eventInterval = 100;
unsigned long previousTime = 0;
boolean servoCheck = false;
// Set the LCD address to 0x27 for a 16 chars and 2 line display
LiquidCrystal_I2C lcd(0x27, 16, 2);
void setup() {
Servo1.attach(servoPin);
Servo1.write(90);
lcd.begin();
lcd.backlight();
lcd.clear();
Serial.begin(115200);
while (!Serial) {
; // wait for serial port to connect. Needed for native USB
}
pinMode(LED_BUILTIN, OUTPUT);
pinMode(12, OUTPUT);
pinMode(13, OUTPUT);
}
void loop() {
String cc;
String al;
String pl;
if (Serial.available() > 0) {
unsigned long currentTime = millis();
cc = Serial.readString();
int x = cc.indexOf("_");
al = cc.substring(0,x);
pl = cc.substring(x+1);
Serial.println(cc);
if (cc == "r"){
lcd.clear();
}
if (al.toInt() > pl.toInt())
{
lcd.clear();
lcd.setCursor(0,0);
lcd.print("Aluminum");
lcd.setCursor(0,1);
lcd.print("AL: " + al + " " + "PL: " + pl);
Servo1.write(0);
servoCheck = true;
}
else if (al.toInt() < pl.toInt())
{
lcd.clear();
lcd.setCursor(0,0);
lcd.print("Plastic");
lcd.setCursor(0,1);
lcd.print("AL: " + al + " " + "PL: " + pl);
Servo1.write(180);
servoCheck = true;
}
else
{
if (cc.toInt() > 100){
lcd.clear();
lcd.setCursor(0,0);
lcd.print("Classifying...");
} else {
lcd.clear();
lcd.setCursor(0,0);
lcd.print("Try Again,");
lcd.setCursor(0,1);
lcd.print("Unrecognized");
}
}
if (servoCheck == true){
delay(900);
Servo1.write(90);
servoCheck = false;
}
}
}
/*
if(c=='n'){
Servo1.write(90);
lcd.clear();
//Serial.write("A", 1);
}else if (c=="a"){
Servo1.write(0);
lcd.setCursor(0,0);
lcd.clear();
lcd.print("Aluminum!");
// Serial.write("A", 1);
delay(900);
Servo1.write(90);
} else if (c=='p'){
Servo1.write(180);
lcd.setCursor(0,0);
lcd.clear();
lcd.print("Plastic!");
//Serial.write("A", 1);
delay(900);
Servo1.write(90);
} else {
lcd.clear();
lcd.setCursor(0,1);
lcd.print(c);
//Serial.write("E", 1);
Serial.print(c);
}
*/
Video Showcase
What's Next?
This project will serve as a foundation and frame that I can modify, strip down, and build on to efficiently create completely different projects. It took a combined time of many weeks, not counting school, other activities, or earlier failed attempts.
Edge-impulse-linux-runner.js
Code used to run the model on the Raspberry Pi.
#!/usr/bin/env node
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
//COMMUNICATION---------------------------------------------------------
const { pipeline } = require("serialport");
var SerialPort = require("serialport");
const parsers = SerialPort.parsers;
const parser = new parsers.Readline({
delimiter: "\r\n"
});
var port = new SerialPort('/dev/ttyACM0',{
baudRate: 115200,
dataBits: 8,
parity: 'none',
stopBits: 1,
flowControl: false
});
port.pipe(parser);
//COMMUNICATION---------------------------------------------------------
//BUTTON
var stillContinue = false;
//BUTTON
//SMOOTHINGDATA---------------------------------------------------------
var ALAverage = 0;
var PLAAverage = 0;
var counter = 0;
//SMOOTHINGDATA---------------------------------------------------------
//motorLimit
let oldValue = true;
let newValue = false;
//motorLimit
//BUTTON
const button = require("/home/raspberry/buttonAndLED/buttonAndLED.js");
//BUTTON
const path_1 = __importDefault(require("path"));
const linux_impulse_runner_1 = require("../../library/classifier/linux-impulse-runner");
const audio_classifier_1 = require("../../library/classifier/audio-classifier");
const image_classifier_1 = require("../../library/classifier/image-classifier"); //IMPORTANT
const imagesnap_1 = require("../../library/sensors/imagesnap");
const inquirer_1 = __importDefault(require("inquirer"));
const config_1 = require("../config");
const init_cli_app_1 = require("../init-cli-app");
const fs_1 = __importDefault(require("fs"));
const os_1 = __importDefault(require("os"));
const runner_downloader_1 = require("./runner-downloader");
const gstreamer_1 = require("../../library/sensors/gstreamer");
const commander_1 = __importDefault(require("commander"));
const express = require("express");
const http_1 = __importDefault(require("http"));
const socket_io_1 = __importDefault(require("socket.io"));
const sharp_1 = __importDefault(require("sharp"));
const library_1 = require("../../library");
const get_ips_1 = require("../get-ips");
const prophesee_1 = require("../../library/sensors/prophesee");
const RUNNER_PREFIX = '\x1b[33m[RUN]\x1b[0m';
const BUILD_PREFIX = '\x1b[32m[BLD]\x1b[0m';
let audioClassifier;
let imageClassifier;
let configFactory;
//BUTTON
let buttonState;
let buttonReset;
//BUTTON
const packageVersion = JSON.parse(fs_1.default.readFileSync(path_1.default.join(__dirname, '..', '..', '..', 'package.json'), 'utf-8')).version;
commander_1.default
.description('Edge Impulse Linux runner ' + packageVersion)
.version(packageVersion)
.option('--model-file <file>', 'Specify model file (either path to .eim, or the socket on which the model is running), ' +
'if not provided the model will be fetched from Edge Impulse')
.option('--api-key <key>', 'API key to authenticate with Edge Impulse (overrides current credentials)')
.option('--download <file>', 'Just download the model and store it on the file system')
.option('--list-targets', 'List all supported targets and inference engines')
.option('--force-target <target>', 'Do not autodetect the target system, but set it by hand (e.g. "runner-linux-aarch64")')
.option('--force-engine <engine>', 'Do not autodetect the inference engine, but set it by hand (e.g. "tflite")')
.option('--clean', 'Clear credentials')
.option('--silent', `Run in silent mode, don't prompt for credentials`)
.option('--quantized', 'Download int8 quantized neural networks, rather than the float32 neural networks. ' +
'These might run faster on some architectures, but have reduced accuracy.')
.option('--enable-camera', 'Always enable the camera. This flag needs to be used to get data from the microphone ' +
'on some USB webcams.')
.option('--dev', 'List development servers, alternatively you can use the EI_HOST environmental variable ' +
'to specify the Edge Impulse instance.')
.option('--verbose', 'Enable debug logs')
.allowUnknownOption(true)
.parse(process.argv);
const devArgv = !!commander_1.default.dev;
const cleanArgv = !!commander_1.default.clean;
const silentArgv = !!commander_1.default.silent;
const quantizedArgv = !!commander_1.default.quantized;
const enableCameraArgv = !!commander_1.default.enableCamera;
const verboseArgv = !!commander_1.default.verbose;
const apiKeyArgv = commander_1.default.apiKey;
const modelFileArgv = commander_1.default.modelFile;
const downloadArgv = commander_1.default.download;
const forceTargetArgv = commander_1.default.forceTarget;
const forceEngineArgv = commander_1.default.forceEngine;
const listTargetsArgv = !!commander_1.default.listTargets;
process.on('warning', e => console.warn(e.stack));
const cliOptions = {
appName: 'Edge Impulse Linux runner',
apiKeyArgv: apiKeyArgv,
cleanArgv: cleanArgv,
devArgv: devArgv,
hmacKeyArgv: undefined,
silentArgv: silentArgv,
connectProjectMsg: 'From which project do you want to load the model?',
getProjectFromConfig: async () => {
if (!configFactory)
return undefined;
let projectId = await configFactory.getLinuxProjectId();
if (!projectId) {
return undefined;
}
return { projectId: projectId };
}
};
let firstExit = true;
let isExiting = false;
const onSignal = async () => {
if (!firstExit) {
process.exit(1);
}
else {
isExiting = true;
console.log(RUNNER_PREFIX, 'Received stop signal, stopping application... ' +
'Press CTRL+C again to force quit.');
firstExit = false;
try {
if (audioClassifier) {
await audioClassifier.stop();
}
if (imageClassifier) {
await imageClassifier.stop(); //IMPORTANT
}
process.exit(0);
}
catch (ex2) {
let ex = ex2;
console.log(RUNNER_PREFIX, 'Failed to stop inferencing', ex.message);
}
process.exit(1);
}
};
process.on('SIGHUP', onSignal);
process.on('SIGINT', onSignal);
// tslint:disable-next-line: no-floating-promises
(async () => {
try {
let modelFile;
if (listTargetsArgv && modelFile) {
throw new Error('Cannot combine --list-targets and --model-file');
}
let modelPath;
// no model file passed in? then build / download the latest deployment...
if (!modelFileArgv) {
const init = await init_cli_app_1.initCliApp(cliOptions);
const config = init.config;
configFactory = init.configFactory;
const { projectId, devKeys } = await init_cli_app_1.setupCliApp(configFactory, config, cliOptions, undefined);
await configFactory.setLinuxProjectId(projectId);
if (listTargetsArgv) {
const targets = await config.api.deployment.listDeploymentTargetsForProjectDataSources(projectId);
console.log('Listing all available targets');
console.log('-----------------------------');
for (let t of targets.targets.filter(x => x.format.startsWith('runner'))) {
console.log(`target: ${t.format}, name: ${t.name}, supported engines: [${t.supportedEngines.join(', ')}]`);
}
console.log('');
console.log('You can force a target via "edge-impulse-linux-runner --force-target <target> [--force-engine <engine>]"');
process.exit(0);
}
const downloader = new runner_downloader_1.RunnerDownloader(projectId, quantizedArgv ? 'int8' : 'float32', config, forceTargetArgv, forceEngineArgv);
downloader.on('build-progress', msg => {
console.log(BUILD_PREFIX, msg);
});
modelPath = new runner_downloader_1.RunnerModelPath(projectId, quantizedArgv ? 'int8' : 'float32', forceTargetArgv, forceEngineArgv);
// no new version? and already downloaded? return that model
let currVersion = await downloader.getLastDeploymentVersion();
if (currVersion && await checkFileExists(modelPath.getModelPath(currVersion))) {
modelFile = modelPath.getModelPath(currVersion);
console.log(RUNNER_PREFIX, 'Already have model', modelFile, 'not downloading...');
}
else {
console.log(RUNNER_PREFIX, 'Downloading model...');
let deployment = await downloader.downloadDeployment();
let tmpDir = await fs_1.default.promises.mkdtemp(path_1.default.join(os_1.default.tmpdir(), 'ei-' + Date.now()));
tmpDir = path_1.default.join(os_1.default.tmpdir(), tmpDir);
await fs_1.default.promises.mkdir(tmpDir, { recursive: true });
modelFile = path_1.default.join(tmpDir, await downloader.getDownloadType());
await fs_1.default.promises.writeFile(modelFile, deployment);
await fs_1.default.promises.chmod(modelFile, 0o755);
console.log(RUNNER_PREFIX, 'Downloading model OK');
}
if (downloadArgv) {
await fs_1.default.promises.mkdir(path_1.default.dirname(downloadArgv), { recursive: true });
await fs_1.default.promises.copyFile(modelFile, downloadArgv);
console.log(RUNNER_PREFIX, 'Stored model in', path_1.default.resolve(downloadArgv));
return process.exit(0);
}
}
else {
if (downloadArgv) {
throw new Error('Cannot combine --model-file and --download');
}
configFactory = new config_1.Config();
modelFile = modelFileArgv;
await fs_1.default.promises.chmod(modelFile, 0o755);
}
const runner = new linux_impulse_runner_1.LinuxImpulseRunner(modelFile);
const model = await runner.init();
// if downloaded? then store...
if (!modelFileArgv && modelPath) {
let file = modelPath.getModelPath(model.project.deploy_version);
if (file !== modelFile) {
await fs_1.default.promises.mkdir(path_1.default.dirname(file), { recursive: true });
await fs_1.default.promises.copyFile(modelFile, file);
await fs_1.default.promises.unlink(modelFile);
console.log(RUNNER_PREFIX, 'Stored model version in', file);
}
}
let param = model.modelParameters;
if (param.sensorType === 'microphone') {
console.log(RUNNER_PREFIX, 'Starting the audio classifier for', model.project.owner + ' / ' + model.project.name, '(v' + model.project.deploy_version + ')');
console.log(RUNNER_PREFIX, 'Parameters', 'freq', param.frequency + 'Hz', 'window length', ((param.input_features_count / param.frequency) * 1000) + 'ms.', 'classes', param.labels);
if (enableCameraArgv) {
await connectCamera(configFactory);
}
let audioDevice;
const audioDevices = await library_1.AudioRecorder.ListDevices();
const storedAudio = await configFactory.getAudio();
if (storedAudio && audioDevices.find(d => d.id === storedAudio)) {
audioDevice = storedAudio;
}
else if (audioDevices.length === 1) {
audioDevice = audioDevices[0].id;
}
else if (audioDevices.length === 0) {
console.warn(RUNNER_PREFIX, 'Could not find any microphones...');
audioDevice = '';
}
else {
let inqRes = await inquirer_1.default.prompt([{
type: 'list',
choices: (audioDevices || []).map(p => ({ name: p.name, value: p.id })),
name: 'microphone',
message: 'Select a microphone',
pageSize: 20
}]);
audioDevice = inqRes.microphone;
}
await configFactory.storeAudio(audioDevice);
console.log(RUNNER_PREFIX, 'Using microphone ' + audioDevice);
audioClassifier = new audio_classifier_1.AudioClassifier(runner, verboseArgv);
audioClassifier.on('noAudioError', async () => {
console.log('');
console.log(RUNNER_PREFIX, 'ERR: Did not receive any audio.');
console.log('ERR: Did not receive any audio. Here are some potential causes:');
console.log('* If you are on macOS this might be a permissions issue.');
console.log(' Are you running this command from a simulated shell (like in Visual Studio Code)?');
console.log('* If you are on Linux and use a microphone in a webcam, you might also want');
console.log(' to initialize the camera with --enable-camera');
await (audioClassifier === null || audioClassifier === void 0 ? void 0 : audioClassifier.stop());
process.exit(1);
});
await audioClassifier.start(audioDevice);
audioClassifier.on('result', (ev, timeMs, audioAsPcm) => {
if (!ev.result.classification)
return;
// print the raw predicted values for this frame
// (turn into string here so the content does not jump around)
// tslint:disable-next-line: no-unsafe-any
let c = ev.result.classification; //IMPORTANT
for (let k of Object.keys(c)) {
c[k] = c[k].toFixed(4);
}
console.log('classifyRes', timeMs + 'ms.', c);
if (ev.info) {
console.log('additionalInfo:', ev.info);
}
});
}
else if (param.sensorType === 'camera') {
console.log(RUNNER_PREFIX, 'Starting the image classifier for', model.project.owner + ' / ' + model.project.name, '(v' + model.project.deploy_version + ')');
console.log(RUNNER_PREFIX, 'Parameters', 'image size', param.image_input_width + 'x' + param.image_input_height + ' px (' +
param.image_channel_count + ' channels)', 'classes', param.labels);
let camera = await connectCamera(configFactory);
imageClassifier = new image_classifier_1.ImageClassifier(runner, camera); //IMPORTANT
//BUTTON
//buttonState = button.getter;
//buttonReset = button.resetIt;
//BUTTON
//Button----------------------------------------------------
console.log("Button state ", button.getter());
await imageClassifier.start(); //IMPORTANT
//Button----------------------------------------------------
let webserverPort = await startWebServer(model, camera, imageClassifier);
console.log('');
console.log('Want to see a feed of the camera and live classification in your browser? ' +
'Go to http://' + (get_ips_1.ips.length > 0 ? get_ips_1.ips[0].address : 'localhost') + ':' + webserverPort);
console.log('');
imageClassifier.on('result', async (ev, timeMs, imgAsJpg) => {
console.log("Starting again-----------------");
if (ev.result.classification) {
if (button.getter() == 1) {
console.log("Starting again-----------------");
if (stillContinue == true){
await imageClassifier.start(); //IMPORTANT
stillContinue = false;
port.write("101");
}
// print the raw predicted values for this frame
// (turn into string here so the content does not jump around)
// tslint:disable-next-line: no-unsafe-any
let c = ev.result.classification;
for (let k of Object.keys(c)) {
c[k] = c[k].toFixed(4);
}
console.log('classifyResLine271', timeMs + 'ms.', c, 'Button State ', button.getter());
//SMOOTHINGDATA---------------------------------------------------------
ALAverage = ALAverage + Number(c["AL"]);
PLAAverage = PLAAverage + Number(c["PL"]);
counter ++;
//console.log("Dividing" + Number(ALAverage), "and" + typeof counter);
//console.log("Dividing" + Number(PLAAverage), "and" + typeof counter);
if (counter >= 20){
ALAverage = Number (ALAverage)/20;
PLAAverage = Number (PLAAverage)/20;
//console.log("Dividing" + typeof ALAverage, "and" + typeof counter);
//console.log("Dividing" + typeof PLAAverage, "and" + typeof counter);
//console.log("Dividing" + ALAverage, "and" + counter);
//console.log("Dividing" + PLAAverage, "and" + counter);
//console.log(ALAverage, PLAAverage);
//COMMUNICATION---------------------------------------------------------
if(ALAverage >=.70){
//if(oldValue != newValue){
var whatToSend = Math.floor(ALAverage.toString() * 100) +"_"+ Math.floor(PLAAverage.toString() * 100);
port.write(whatToSend);
console.log("Sending " + whatToSend);
//oldValue = newValue;
button.resetIt()
await imageClassifier.stop();
//}
} else if (PLAAverage >= .70){
// if(oldValue != newValue){
var whatToSend = Math.floor(ALAverage.toString() * 100) +"_"+ Math.floor(PLAAverage.toString() * 100);
port.write(whatToSend);
console.log("Sending " + whatToSend);
//oldValue = newValue;
button.resetIt()
await imageClassifier.stop();
//}
} else{
button.resetIt()
port.write("0");
await imageClassifier.stop();
// oldValue = true;
// newValue = false;
}
//COMMUNICATION---------------------------------------------------------
ALAverage = 0;
PLAAverage = 0;
counter = 0;
}
//SMOOTHINGDATA---------------------------------------------------------
}
else if (button.getter() == 0){
console.log ("Not running");
await imageClassifier.stop();
stillContinue = true;
}
} // if (ev.result.classification) ends here
else if (ev.result.bounding_boxes) {
console.log('boundingBoxes', timeMs + 'ms.', JSON.stringify(ev.result.bounding_boxes));
}
if (ev.info) {
console.log('additionalInfo:', ev.info);
}
});
}
else {
throw new Error('Invalid sensorType: ' + param.sensorType);
}
}
catch (ex) {
console.warn(RUNNER_PREFIX, 'Failed to run impulse', ex);
if (audioClassifier) {
await audioClassifier.stop();
}
if (imageClassifier) {
await imageClassifier.stop();
}
process.exit(1);
}
})();
async function connectCamera(cf) {
let camera;
if (process.env.PROPHESEE_CAM === '1') {
camera = new prophesee_1.Prophesee(verboseArgv);
}
else if (process.platform === 'darwin') {
camera = new imagesnap_1.Imagesnap(verboseArgv);
}
else if (process.platform === 'linux') {
camera = new gstreamer_1.GStreamer(verboseArgv);
}
else {
throw new Error('Unsupported platform "' + process.platform + '"');
}
await camera.init();
let device;
const devices = await camera.listDevices();
if (devices.length === 0) {
throw new Error('Cannot find any webcams');
}
const storedCamera = await cf.getCamera();
if (storedCamera && devices.find(d => d === storedCamera)) {
device = storedCamera;
}
else if (devices.length === 1) {
device = devices[0];
}
else {
let inqRes = await inquirer_1.default.prompt([{
type: 'list',
choices: (devices || []).map(p => ({ name: p, value: p })),
name: 'camera',
message: 'Select a camera',
pageSize: 20
}]);
device = inqRes.camera;
}
await cf.storeCamera(device);
console.log(RUNNER_PREFIX, 'Using camera', device, 'starting...');
await camera.start({
device: device,
intervalMs: 100,
});
camera.on('error', error => {
if (isExiting)
return;
console.log(RUNNER_PREFIX, 'camera error', error);
process.exit(1);
});
console.log(RUNNER_PREFIX, 'Connected to camera');
return camera;
}
function buildWavFileBuffer(data, intervalMs) {
// let's build a WAV file!
let wavFreq = 1 / intervalMs * 1000;
let fileSize = 44 + (data.length);
let dataSize = (data.length);
let srBpsC8 = (wavFreq * 16 * 1) / 8;
let headerArr = new Uint8Array(44);
let h = [
0x52, 0x49, 0x46, 0x46,
// tslint:disable-next-line: no-bitwise
fileSize & 0xff, (fileSize >> 8) & 0xff, (fileSize >> 16) & 0xff, (fileSize >> 24) & 0xff,
0x57, 0x41, 0x56, 0x45,
0x66, 0x6d, 0x74, 0x20,
0x10, 0x00, 0x00, 0x00,
0x01, 0x00,
0x01, 0x00,
// tslint:disable-next-line: no-bitwise
wavFreq & 0xff, (wavFreq >> 8) & 0xff, (wavFreq >> 16) & 0xff, (wavFreq >> 24) & 0xff,
// tslint:disable-next-line: no-bitwise
srBpsC8 & 0xff, (srBpsC8 >> 8) & 0xff, (srBpsC8 >> 16) & 0xff, (srBpsC8 >> 24) & 0xff,
0x02, 0x00, 0x10, 0x00,
0x64, 0x61, 0x74, 0x61,
// tslint:disable-next-line: no-bitwise
dataSize & 0xff, (dataSize >> 8) & 0xff, (dataSize >> 16) & 0xff, (dataSize >> 24) & 0xff,
];
for (let hx = 0; hx < 44; hx++) {
headerArr[hx] = h[hx];
}
return Buffer.concat([Buffer.from(headerArr), data]);
}
function checkFileExists(file) {
return new Promise(resolve => {
return fs_1.default.promises.access(file, fs_1.default.constants.F_OK)
.then(() => resolve(true))
.catch(() => resolve(false));
});
}
function startWebServer(model, camera, imgClassifier) {
const app = express();
app.use(express.static(path_1.default.join(__dirname, '..', '..', '..', 'cli', 'linux', 'webserver', 'public')));
const server = new http_1.default.Server(app);
const io = socket_io_1.default(server);
// you can also get the actual image being classified from 'imageClassifier.on("result")',
// but then you're limited by the inference speed.
// here we get a direct feed from the camera so we guarantee the fps that we set earlier.
let nextFrame = Date.now();
let processingFrame = false;
camera.on('snapshot', async (data) => {
if (nextFrame > Date.now() || processingFrame)
return;
processingFrame = true;
let img;
if (model.modelParameters.image_channel_count === 3) {
img = sharp_1.default(data).resize({
height: model.modelParameters.image_input_height,
width: model.modelParameters.image_input_width
});
}
else {
img = sharp_1.default(data).resize({
height: model.modelParameters.image_input_height,
width: model.modelParameters.image_input_width
}).toColourspace('b-w');
}
io.emit('image', {
img: 'data:image/jpeg;base64,' + (await img.jpeg().toBuffer()).toString('base64')
});
nextFrame = Date.now() + 50;
processingFrame = false;
});
imgClassifier.on('result', async (result, timeMs, imgAsJpg) => {
io.emit('classification', {
modelType: model.modelParameters.model_type,
result: result.result,
timeMs: timeMs,
additionalInfo: result.info,
});
});
io.on('connection', socket => {
socket.emit('hello', {
projectName: model.project.owner + ' / ' + model.project.name
});
});
return new Promise((resolve) => {
server.listen(Number(process.env.PORT) || 4912, process.env.HOST || '0.0.0.0', async () => {
resolve((Number(process.env.PORT) || 4912));
});
});
}
//# sourceMappingURL=runner.js.map
Image-classifier.js
Code used to classify the images from the camera.
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.ImageClassifier = void 0;
const tsee_1 = require("tsee");
const sharp_1 = __importDefault(require("sharp"));
class ImageClassifier extends tsee_1.EventEmitter {
/**
* Classifies realtime image data from a camera
* @param runner An initialized impulse runner instance
* @param camera An initialized ICamera instance
*/
constructor(runner, camera) {
super();
this._stopped = true;
this._runningInference = false;
this._runner = runner;
this._camera = camera;
}
/**
* Start the image classifier
*/
async start() {
console.log("image classifier started!!!!!!!!!!!!!!!!!!!!!!!");
let model = this._runner.getModel();
if (model.modelParameters.sensorType !== 'camera') {
throw new Error('Sensor for this model was not camera, but ' +
model.modelParameters.sensor);
}
this._stopped = false; //IMPORTANT
let frameQueue = [];
this._camera.on('snapshot', async (data) => {
// are we looking at video? Then we always add to the frameQueue
if (model.modelParameters.image_input_frames > 1) {
let resized = await this.resizeImage(model, data);
frameQueue.push(resized);
}
// still running inferencing?
if (this._runningInference) {
return;
}
// too little frames? then wait for next one
if (model.modelParameters.image_input_frames > 1 &&
frameQueue.length < model.modelParameters.image_input_frames) {
return;
}
this._runningInference = true;
try {
// if we have single frame then resize now
if (model.modelParameters.image_input_frames > 1) {
frameQueue = frameQueue.slice(frameQueue.length - model.modelParameters.image_input_frames);
}
else {
let resized = await this.resizeImage(model, data);
frameQueue = [resized];
}
let img = frameQueue[frameQueue.length - 1].img;
// slice the frame queue
frameQueue = frameQueue.slice(frameQueue.length - model.modelParameters.image_input_frames);
// concat the frames
let values = [];
for (let ix = 0; ix < model.modelParameters.image_input_frames; ix++) {
values = values.concat(frameQueue[ix].features);
}
let now = Date.now();
if (this._stopped) {
//return; commented out to not stop the program
console.log("this._stopped is true");
}
let classifyRes = await this._runner.classify(values);
let timeSpent = Date.now() - now;
this.emit('result', classifyRes, classifyRes.timing.dsp + classifyRes.timing.classification + classifyRes.timing.anomaly, await img.jpeg({ quality: 90 }).toBuffer());
}
finally {
this._runningInference = false;
}
});
}
/**
* Stop the classifier
*/
async stop() {
console.log("image classifier stopped");
this._stopped = true;
//await Promise.all([
//this._camera ? this._camera.stop() : Promise.resolve(),
//this._runner.stop() //Commented to stop program from exiting
//]);
}
async resizeImage(model, data) {
// resize image and add to frameQueue
let img;
let features = [];
if (model.modelParameters.image_channel_count === 3) {
img = sharp_1.default(data).resize({
height: model.modelParameters.image_input_height,
width: model.modelParameters.image_input_width,
});
let buffer = await img.raw().toBuffer();
for (let ix = 0; ix < buffer.length; ix += 3) {
let r = buffer[ix + 0];
let g = buffer[ix + 1];
let b = buffer[ix + 2];
// tslint:disable-next-line: no-bitwise
features.push((r << 16) + (g << 8) + b);
}
}
else {
img = sharp_1.default(data).resize({
height: model.modelParameters.image_input_height,
width: model.modelParameters.image_input_width
}).toColourspace('b-w');
let buffer = await img.raw().toBuffer();
for (let p of buffer) {
// tslint:disable-next-line: no-bitwise
features.push((p << 16) + (p << 8) + p);
}
}
return {
img: img,
features: features
};
}
}
exports.ImageClassifier = ImageClassifier;
//# sourceMappingURL=image-classifier.js.map
ButtonAndLED.js
Code used to get a button press from the button.
var Gpio = require('onoff').Gpio; //include onoff to interact with the GPIO
var LED = new Gpio(4, 'out'); //use GPIO pin 4 as output
var pushButton = new Gpio(17, 'in', 'both'); //use GPIO pin 17 as input, and 'both' button presses, and releases should be handled
var ready = true;
var go = 0;
function sleep(ms) {
return new Promise((resolve) => {
setTimeout(resolve, ms);
});
}
function hi()
{
console.log("hi");
}
function getter(){
console.log("starting getter");
//console.log("go:" + go);
return go;
}
function resetIt(){
console.log("starting reset");
ready = true;
go = 0;
LED.writeSync(0);
}
console.log("Button and LED intidated");
pushButton.watch(function (err, value) { //Watch for hardware interrupts on pushButton GPIO, specify callback function
if (err) { //if an error
console.error('There was an error', err); //output error message to console
return;
}
if (value == 1 && ready == true){
console.log("button clicked");
//LED.writeSync(value); //turn LED on or off depending on the button state (0 or 1)
if (ready == true){
console.log("ready is tru");
ready = false;
LED.writeSync(1);
go = 1;
console.log("go equals");
console.log(go);
//setTimeout(reset, 5000);
}
}
});
function unexportOnClose() { //function to run when exiting program
LED.writeSync(0); // Turn LED off
LED.unexport(); // Unexport LED GPIO to free resources
pushButton.unexport(); // Unexport Button GPIO to free resources
};
//returnState();
process.on('SIGINT', unexportOnClose); //function to run when user closes using ctrl+c
module.exports = {getter,resetIt, hi};
Communicate.ino
Communication between Arduino and Raspberry Pi.
/*
* serial_usb_simple_arduino - For communicating over USB serial. Send it a '1' (character one)
* and it will make the builtin LED start blinking every one second. Send it a '0'
* (character zero) and it will make it stop blinking.
*
* Each time it receives one of the commands, it sends back an 'A' for acknowledge.
* But send it a command it doesn't recognize and it sends back an 'E' for error.
*/
//bool blinking = false;
//bool led_on = false;
//int target_time;
#include <Wire.h>
#include <LiquidCrystal_I2C.h>
// Include the Servo library
#include <Servo.h>
// Declare the Servo pin
int servoPin = 3;
// Create a servo object
Servo Servo1;
const unsigned long eventInterval = 100;
unsigned long previousTime = 0;
boolean servoCheck = false;
// Set the LCD address to 0x27 for a 16 chars and 2 line display
LiquidCrystal_I2C lcd(0x27, 16, 2);
void setup() {
Servo1.attach(servoPin);
Servo1.write(90);
lcd.begin();
lcd.backlight();
lcd.clear();
Serial.begin(115200);
while (!Serial) {
; // wait for serial port to connect. Needed for native USB
}
pinMode(LED_BUILTIN, OUTPUT);
pinMode(12, OUTPUT);
pinMode(13, OUTPUT);
}
void loop() {
String cc;
String al;
String pl;
if (Serial.available() > 0) {
unsigned long currentTime = millis();
cc = Serial.readString();
int x = cc.indexOf("_");
al = cc.substring(0,x);
pl = cc.substring(x+1);
Serial.println(cc);
if (cc == "r"){
lcd.clear();
}
if (al.toInt() > pl.toInt())
{
lcd.clear();
lcd.setCursor(0,0);
lcd.print("Aluminum");
lcd.setCursor(0,1);
lcd.print("AL: " + al + " " + "PL: " + pl);
Servo1.write(0);
servoCheck = true;
}
else if (al.toInt() < pl.toInt())
{
lcd.clear();
lcd.setCursor(0,0);
lcd.print("Plastic");
lcd.setCursor(0,1);
lcd.print("AL: " + al + " " + "PL: " + pl);
Servo1.write(180);
servoCheck = true;
}
else
{
if (cc.toInt() > 100){
lcd.clear();
lcd.setCursor(0,0);
lcd.print("Classifying...");
} else {
lcd.clear();
lcd.setCursor(0,0);
lcd.print("Try Again,");
lcd.setCursor(0,1);
lcd.print("Unrecognized");
}
}
if (servoCheck == true){
delay(900);
Servo1.write(90);
servoCheck = false;
}
}
}
/*
if(c=='n'){
Servo1.write(90);
lcd.clear();
//Serial.write("A", 1);
}else if (c=="a"){
Servo1.write(0);
lcd.setCursor(0,0);
lcd.clear();
lcd.print("Aluminum!");
// Serial.write("A", 1);
delay(900);
Servo1.write(90);
} else if (c=='p'){
Servo1.write(180);
lcd.setCursor(0,0);
lcd.clear();
lcd.print("Plastic!");
//Serial.write("A", 1);
delay(900);
Servo1.write(90);
} else {
lcd.clear();
lcd.setCursor(0,1);
lcd.print(c);
//Serial.write("E", 1);
Serial.print(c);
}
*/
//switch (c) {
//case '0':
//
// stop blinking
//blinking = false;
//if (led_on) {
//digitalWrite(LED_BUILTIN, LOW);
//digitalWrite(13, LOW);
//digitalWrite(12, LOW);
//}
// Servo1.write(180);
// lcd.clear();
// Serial.write("A", 1);
// break;
// case '1'://
//Aluminum
// start blinking
//if (blinking == false) {
//blinking = true;
//digitalWrite(LED_BUILTIN, HIGH);
//digitalWrite(12, HIGH);
//digitalWrite(13, LOW);
//led_on = true;
//target_time = millis() + 100; // turn off in 1 tenth of a second (100 milliseconds)
// //}
//
// Servo1.write(90);
// lcd.setCursor(0,0);
// lcd.clear();
// lcd.print("Aluminum!");
// Serial.write("A", 1);
// break;
// case '2'://
//Plastic
// start blinking
//if (blinking == false) {
//blinking = true;
//digitalWrite(LED_BUILTIN, HIGH);
//digitalWrite(13, HIGH);
//digitalWrite(12, LOW);
//led_on = true;
//target_time = millis() + 100; // turn off in 1 tenth of a second (100 milliseconds)
//}
// Servo1.write(0);
// lcd.setCursor(0,0);
// lcd.clear();
// lcd.print("Plastic!");
// Serial.write("A", 1);
// break;
// default:
// Serial.write("E", 1);
// break;
/*
else if (blinking) {
if (millis() >= target_time) {
if (led_on) {
digitalWrite(LED_BUILTIN, LOW);
led_on = false;
target_time = millis() + 100; // turn on in 1 tenth of a second (100 milliseconds)
} else {
digitalWrite(LED_BUILTIN, HIGH);
led_on = true;
target_time = millis() + 100; // turn off in 1 tenth of a second (100 milliseconds)
}
}
}
*/