Event Classification Via Audio for Pogona Vitticeps in Terrarium
by Freaker99 in Circuits > Raspberry Pi
In this Instructable I present the concept and implementation of a system for monitoring a bearded dragon's activity in its terrarium.
If you are a reptile owner like me, you already know that animals living in terrariums have very special needs. It is important to provide the animal with optimal conditions, similar to those in its natural environment. By optimal conditions, I mean the right temperature and humidity level in the terrarium. Beyond that, some species also need a correct daylight cycle and, most importantly, they must be fed the right amount of food.
With most other pets, like dogs and cats, we do not have a hard time keeping track of everything, as they remind us audibly. With reptiles it gets a bit problematic, because these animals tend to suffer quietly. This is a big problem, as there are many inexperienced reptile keepers out there who are overwhelmed by their reptiles' special needs. As a result, innocent reptiles are released into the wild, placed in animal shelters or, in the worst case, simply die. To give my bearded dragon a voice that will be heard, I decided to install a microphone in its terrarium to record the animal's activity in real time. The recorded samples are analyzed to identify the type of the lizard's activity at any given time. As a result, I can keep an eye on my animal's behavior more accurately during the day and night. This is especially useful when I am away from home and have no other way to check on my pet.
The task of the system is to record acoustic signals from the terrarium at set time intervals and to classify them as predefined events. Analyzed activities are saved to a text file along with the time of recording, so that I can check on my pet's activity during the day and night. Initially, it was necessary to record various activities of the animal so that an event-classifier model could be prepared. For my project I distinguished three acoustic events (walking, eating, and non-activity, i.e. the acoustic background), which I was able to tell apart and classify using only the acoustic signals from the microphone. The main part of the project was writing the program code responsible for data management, training the GMM models, analyzing signals and determining the classifier's percentage performance with a confusion (error) matrix. The finished system was tested in the terrarium with the bearded dragon.
In the future, this project could be extended with other components, such as temperature and humidity sensors or a camera, to track the animal's activity in the terrarium even more closely.
Prerequisites
Here is the parts list for my project:
- Raspberry Pi 3 Model B,
- I2S MEMS microphone (INMP441),
- at least 6 jumper wires (female to male),
- solderless breadboard,
- soldering iron (for soldering the microphone's pins),
- RJ45 cable (optional).
Building the Microphone
For this project I decided to use an I2S MEMS microphone for the Raspberry Pi (INMP441). The first thing I had to do was solder the contact points on the microphone so that I could connect it to the Raspberry Pi with jumper wires. See the attached photo for how to connect the wires properly. For connecting the GND and L/R pins I used the solderless breadboard. Make sure your wires are long enough that you can place the microphone in a suitable spot in the terrarium.
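Before the microphone shows up as a recording device, I2S has to be enabled on the Raspberry Pi. The exact lines depend on your OS version and setup; the snippet below is a typical configuration for an I2S MEMS mic, not taken from this project's files, and the card number in the test command is an assumption:

```shell
# Add to /boot/config.txt, then reboot, to enable the I2S interface:
dtparam=i2s=on

# Quick test recording once the mic is detected
# (the card number after "plughw:" may differ on your Pi):
arecord -D plughw:1 -c1 -r 48000 -f S32_LE -t wav -d 3 test.wav
```

Play back `test.wav` on another machine to verify that the microphone picks up sound before mounting it in the terrarium.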
Sources and Communication With the Desktop Computer
Now that you have assembled everything, the next step is the coding part. I make no secret of the fact that this was the longest and most difficult step to complete.
Since my terrariums are located in the same room as my desktop computer, I decided to use the SSH (Secure Shell) protocol to connect the Raspberry Pi to my university's public server. This allows me to send data of interest, such as sound pressure levels over time, to my website.
After analyzing the characteristics of the input data, I decided to classify the events using GMM (Gaussian Mixture Model) models. This choice was supported primarily by the relatively small amount of data and the stationary nature of the classified activities. Additional benefits of GMM models are their fast training time and the ease of tuning the input data parameters.
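To make the idea concrete, here is a minimal sketch of the GMM-per-event approach using scikit-learn's `GaussianMixture`. The event names match this project, but the feature matrices are placeholders; the real scripts feed in spectral features extracted from the terrarium recordings, and the number of mixture components is an assumption:

```python
# One GMM per event class; classification picks the model that
# assigns the highest average log-likelihood to the input features.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_event_models(features_by_event, n_components=8):
    """Fit one GMM per event on its (n_frames, n_features) matrix."""
    models = {}
    for event, feats in features_by_event.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0)
        gmm.fit(feats)
        models[event] = gmm
    return models

def classify(models, feats):
    """Return the event whose GMM scores the features highest."""
    scores = {event: gmm.score(feats) for event, gmm in models.items()}
    return max(scores, key=scores.get)
```

Trained models can be saved to a file with `joblib.dump`, which matches the save-to-file step described above.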
I started the implementation of the system by recording three different events, which consisted of:
1) agama running around the terrarium,
2) agama catching food and eating,
3) acoustic background, i.e. the animal resting.
I pre-processed the files and then began preparing Python scripts whose tasks were to divide the base recordings into equal-length samples, extract selected spectral features from each prepared fragment, and train the GMM models, saving each model to a file. Model training was preceded by splitting the input data into training and evaluation sets.
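The segmentation and feature-extraction steps can be sketched roughly like this. The frame length, hop size and the plain log-magnitude spectrum are illustrative assumptions; the project's actual spectral features and parameter values were tuned as described below:

```python
# Cut a long recording into equal-length segments, then compute a
# simple per-frame spectral feature (log magnitude spectrum).
import numpy as np

def split_into_segments(signal, segment_len):
    """Drop the tail and reshape the signal into equal-length segments."""
    n = len(signal) // segment_len
    return signal[:n * segment_len].reshape(n, segment_len)

def spectral_features(segment, frame_len=512, hop=256):
    """Windowed frames -> log magnitude spectra, one row per frame."""
    frames = [segment[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(segment) - frame_len + 1, hop)]
    spectra = np.abs(np.fft.rfft(np.asarray(frames), axis=1))
    return np.log(spectra + 1e-10)  # small offset avoids log(0)
```

Each segment's feature matrix is what gets fed to the GMM training and scoring steps.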
With the models trained, I tested the performance of the classifier on the evaluation data. I first examined the effect of sample length on the classifier's performance and found that performance decreased as the length of the signal increased, that is, as the number of samples available for training decreased. With this knowledge, I set the signal length used for training to the value that gave the highest classification accuracy. I repeated the same procedure for the window length, the window offset and other parameters.
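The evaluation step boils down to comparing predicted and true labels and filling in the error (confusion) matrix. A minimal version, with the three event classes from this project:

```python
# Build a confusion matrix (rows: true event, columns: predicted
# event) and compute overall accuracy from it.
import numpy as np

EVENTS = ["walking", "eating", "background"]

def confusion_matrix(true_labels, predicted_labels, classes=EVENTS):
    idx = {c: i for i, c in enumerate(classes)}
    mat = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        mat[idx[t], idx[p]] += 1
    return mat

def accuracy(mat):
    """Fraction of samples on the diagonal, i.e. classified correctly."""
    return mat.trace() / mat.sum()
```

Off-diagonal entries show which pairs of events the classifier confuses, which is what guided the parameter tuning described above.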
The result of my work was two simultaneously running Python scripts. The first is responsible for recording acoustic signals from the terrarium at set intervals, pre-processing the audio and saving the files accordingly. The second script extracts the selected spectral features and analyzes them in real time. The system classifies the recorded signals as one of the predefined events and saves the results to a text file. As an additional feature, the sound pressure level is determined for each prepared sample and the results are presented on the website.
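The sound pressure level sent to the website can be derived from each sample roughly as below. Since the source does not give the reference value or any microphone calibration, the result here is a relative level in dB, an assumption rather than the project's exact formula:

```python
# Relative sound pressure level in dB from the RMS of a sample block
# (samples assumed normalized to the range [-1, 1]).
import numpy as np

def sound_pressure_level_db(samples, ref=1.0):
    rms = np.sqrt(np.mean(np.square(samples.astype(float))))
    # Clamp the RMS so silence does not produce log10(0).
    return 20.0 * np.log10(max(rms, 1e-12) / ref)
```

A calibrated absolute SPL would additionally need a microphone sensitivity offset measured against a reference source.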
Testing Out the System
I conducted several tests of the finished system. These consisted of recording 1-second sound signals from the terrarium with a 15-second pause between each start of the recording process (acquisition.py). The recorded signals were analyzed by the event classifier script (analysis_and_classification.py). Before running a test, I prepared a list of the events scheduled to occur; a stopwatch was very helpful for capturing the event of interest. During the tests I used VNC Viewer, which gave me convenient access to the Raspberry Pi's desktop from my desktop computer over SSH.
Using the cron time daemon, it would have been possible to run the acquisition process at exactly equal time intervals instead of putting a time delay into the loop. However, the solution I used is simpler and safer. The resulting timing drift is imperceptible and does not affect the usefulness of the information collected by the system.
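The sleep-in-a-loop scheduling described above looks roughly like this; `record_sample` stands in for the actual I2S capture in acquisition.py and is a placeholder, as are the default timing values:

```python
# Record a sample, pause, repeat; `cycles=None` runs indefinitely.
import time

def acquisition_loop(record_sample, record_seconds=1,
                     pause_seconds=15, cycles=None):
    done = 0
    while cycles is None or done < cycles:
        record_sample(record_seconds)   # capture one audio sample
        time.sleep(pause_seconds)       # wait before the next capture
        done += 1
```

Because the sleep starts after the recording finishes, each cycle is slightly longer than `record_seconds + pause_seconds`; that drift is the imperceptible delay mentioned above.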
Analyzing the test results, it turned out that 13 of the 16 recorded samples were classified correctly. Based on this test, the real-time accuracy of the classifier is 81%. This is a satisfactory result that testifies to the success of the designed event classification system. The biggest difficulty was staging the feeding event so that the animal stood still and the program started recording at the right moment. In addition, my animal often paused in place for a moment while running, causing some samples to be misclassified.
To improve the system, more training data would be needed. That would yield higher classification accuracy and allow the use of samples longer than 1 second.
The monitoring system's code can be found on my GitHub.
Have fun with this instructable!