Friendly Robot (ICARUS Robot)



The structure of the ICARUS Documentation 


  1. Material selection
     • The materials we chose
     • Why we chose them
  2. Introduction
     • How we thought about this project
     • How we selected the project
     • How we made the storyboard
     • How we dealt with issues
  3. Software selection & machines used
  4. CAD design
     • Designing the body in SolidWorks or Fusion 360 (preferred)
     • Preparing the body (STL) file in Cura
     • Printing the body
  5. Circuit design
     • Wiring diagram using Fritzing
     • PCB design in Eagle (a HAT for the Raspberry Pi 3 Model B)
  6. Computer vision coding
     • Definition of computer vision
     • The OpenCV library and its uses
     • Installing the important libraries
     • Face recognition explanation
     • Hand recognition explanation
     • The whole code
  7. Google Assistant code
  8. Mobile app

Supplies



Component selection is the main and most important process of all, because if you don't select your components carefully and accurately, the whole project will fall apart.

  • The main and most important component is the microprocessor (Raspberry Pi 3 Model B)
  • Raspberry Pi camera for face recognition and camera tracking (computer vision)
  • 2 metal gear motors with a 1:1000 gear ratio, because they have enough torque to carry the whole body and still move fast
  • OLED screen to display FABY's emotions and messages so it interacts well with humans
  • LiPo battery, 1300 mAh, with enough capacity to supply the motors, servo, and OLED screen
  • Servo motors for the head mechanism of FABY, so it can track a person and look up and down for better interaction
  • Ultrasonic sensor to detect obstacles in front of it and avoid them
  • Infrared sensor to detect whether there is ground beneath it, so it doesn't fall off an edge
  • USB microphone to recognize speech
  • Speakers so the robot can speak
  • Caster wheel to support the body


Software Selection & Machines Used

  • Fusion 360 or SolidWorks: either one will do for designing the body
  • Eagle: for designing the PCB of the FABY Robot
  • Cura by Ultimaker: the slicer used to set the infill and slice the parts for 3D printing
  • PyCharm: for programming in Python
  • 3D printer
  • PCB milling machine
  • Fritzing: for the wiring diagram

 

Introduction

(Image caption: "Vector by Anki: A Giant Roll Forward for Robot Kind.")

 

As usual, before I go into the details of this project, I will discuss with you what happened before building it, the process I love to call the selection process.

The selection process covers:

  • how we thought about this project
  • how we selected the project
  • how we made the storyboard
  • how we dealt with issues
  • technical selection

I studied the project using what is known as project management: an organized set of procedures that lets you study a project carefully and accurately. I will try to give you an idea of what happened behind the scenes.

How we thought about this project


When we decided to make a project (at first, we hadn't yet decided which one we would choose), we intended to make something like the Roomba (a cleaning robot).

We wanted to make an automatic, smart robot.

After buying a Roomba, we reverse engineered it (examined every part of the Roomba and determined the function of each one).

Then we watched an ad for a robot called Vector.

What made us admire this project is that it's an interactive robot, small and very cute.

So we were torn over which one to build, and then we got a wonderful idea.


Selection of the project


As I said earlier, we had a smart idea: the team and I decided to make a combination of both of them.

We decided to make our own ICARUS ROBOT.

The ICARUS ROBOT is not exactly like the Roomba or Vector; it has some features of the Roomba and others of Vector.

We were excited to start building the board and studying the project, and guess what, that's exactly what we did.

Storyboard  


Simply put, what we want this robot to do is:

Be interactive

Connect with a mobile application

Connect to Wi-Fi

Connect with Google Assistant

So, we started to warm up and make some prototypes to see how it worked, and of course we faced issues and problems.

And as I do in all of my documentation, I will tell you about our failures before our successes, so you can avoid everything we did wrong.


CAD Design


In this part, we will learn how to design the robot in CAD software.

How did we think about the body?

Before you think about anything else, you should know this: you must take the measurements of your components to know what the body dimensions will be, Figures (1), (2), (3).

The smaller it is, the better.

Once we finished the measurements and prepared our dimensions, we could picture the body. We wanted the body to be small and artistic, and we found one that looked exactly as we imagined, Figure (4).

So, we will make our body look very much like that.

We had the measurements and a vision for the body, so let's build our FABY Robot.

Let's begin with the head. The head must include the camera and the OLED screen, so we will design the head to contain all of these components, Figure (5).

Now we need a mechanism that allows the head to move up and down; luckily for you, we will give you this mechanism in Figure (6).

So let's see our mechanism on the head, Figure (7).

Let's prepare our components and put them in the body, Figure (8).

Let's see our body now, Figure (9).

Finally, design the wheel and put everything into the final shape, Figure (10).

Don't worry, I will leave all the files below so you can download them.

After finishing the body design, it's time to fabricate it.

To fabricate the body, the main thing you need is the filament you will print with; I prefer PLA filament, Figure (11).

So, after you save the parts as STL files, open Cura and import them. I will give you two examples.

The first one is the head mechanism, Figure (12).

Then press "Save to File", copy the file to a flash drive, and put it in the 3D printer to begin printing, Figure (13).

Let's move on to the second part we will print, which is the body of the wheel without the tire. Again, repeat the same steps and import the file into Cura, Figure (14).

Edit the filament, support, and draft settings as you see fit and press Slice; you will see the estimated time the part will take to print, Figure (15).

Then save the file, put it on the flash drive, upload it to the 3D printer, and leave it to print, Figures (16), (17).

Then add the rubber part, which we made ourselves, to the wheel, Figures (18), (19), (20), (21).

CIRCUIT DESIGN


First, before we go to the PCB design, we should make a wiring diagram to know how we will connect the components to each other.

Open Fritzing, insert all of the components, and start wiring (Figure 1).

Complete the wiring (Figure 2).

The final schematic wiring diagram is shown in (Figure 3).

After making the wiring diagram, it's time to move to Eagle CAD; we will have some fun.

Now we will design a HAT for the Raspberry Pi 3 Model B.

  1. Select your components.
  2. Arrange them in the most suitable way so that the pins are as close to each other as possible, to avoid overlapping wires and routes.
  3. Export your schematic to the board and arrange your components.

*Since we're using the Raspberry Pi board as the controller, we need to make sure that the board will not interfere with the Pi and will fit well over it.

that's why we used the component "RASPBERRYPI-40-PIN-GPIO_PTH_REFERENCE".


Components:

  • 7.4 V LiPo battery
  • Step-down DC-DC converter
  • 2 IR sensors
  • L293D motor driver IC
  • 2 metal gear DC motors
  • Raspberry Pi 3 Model B
  • 1 ultrasonic sensor

 

Shield board components :

  • 5 × 2-wire 5 mm terminal blocks
  • 3 × 3-wire 5 mm terminal blocks
  • 6-pin header for the screen

After that, we will send it to the lab to be fabricated. Note that the minimum track width is 0.5 mm and the minimum hole diameter is 0.8 mm.

After we sent it to the FAB LAB it was fabricated, and the result was fantastic. Again, don't worry, I will leave the board and schematic files below; just click and download them.

After finishing the PCB fabrication, it's time to solder the components onto the PCB.

Note: Thanks to Alaa Saber for providing us with all the resources; the PCB design was done with his help, so thank you again.

Computer Vision


This part is not easy; it needs a lot of concentration and code to understand how face recognition and hand recognition work.

So pay attention to everything I say; I will try to give you the conclusion, not the whole idea.

Note: special thanks to our lovely Amani, who handled the computer vision all by herself. She explained all the procedures wonderfully and documented all the errors she faced. I recommend reading her article on computer vision; you will find interesting videos with simple explanations, just click the link.

Computer Vision: Computer Vision is the broad parent name for any computations involving visual content – that means images, videos, icons, and anything else with pixels involved. But within this parent idea, there are a few specific tasks that are core building blocks:

In object classification, you train a model on a dataset of specific objects, and the model classifies new objects as belonging to one or more of your training categories.

For object identification, your model will recognize a specific instance of an object – for example, parsing two faces in an image and tagging one as Tom Cruise and one as Katie Holmes

To do face and hand recognition, we need to install an important library called OpenCV.

What is the OpenCV library?

It is an open-source software library for computer vision, digital image processing, and machine learning.

If you don't know what face detection is and haven't dealt with it before, I highly recommend visiting the websites listed below:

Face detection with OpenCV and deep learning

OpenCV Face Recognition

These two websites explain every small detail of using deep learning with the OpenCV library for face detection. I won't go into detail in this documentation because it would become very long and heavy; instead, I will give you a short summary.

How do you install the OpenCV library?

I worked in the Command Prompt on Windows and just typed this one line:

pip install opencv-python               

I assume that you have OpenCV installed on your system.
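If you want to double-check that the install worked, a quick sanity check in Python (a minimal sketch; the printed version will simply be whatever pip installed) is:

import cv2
print(cv2.__version__)   # prints the installed OpenCV version, e.g. 4.x.y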


Dlib and the face_recognition packages.

Note: For the following installs, ensure you are in a Python virtual environment if you're using one. I highly recommend virtual environments for isolating your projects; it is a Python best practice. If you've followed the OpenCV install guides (and installed virtualenv + virtualenvwrapper), then you can use the workon command prior to installing dlib and face_recognition.

Installing dlib without GPU support

If you do not have a GPU you can install dlib using pip by following this guide:

Face recognition with OpenCV, Python, and deep learning

$ workon # optional
$ pip install dlib


Or you can compile from the source:

Face recognition with OpenCV, Python, and deep learning


$ workon <your env name here> # optional
$ git clone https://github.com/davisking/dlib.git
$ cd dlib
$ mkdir build
$ cd build
$ cmake .. -DUSE_AVX_INSTRUCTIONS=1
$ cmake --build .
$ cd ..
$ python setup.py install --yes USE_AVX_INSTRUCTIONS


Installing dlib with GPU support (optional)

If you do have a CUDA-compatible GPU you can install dlib with GPU support, making facial recognition faster and more efficient.

For this, I recommend installing dlib from source as you’ll have more control over the build:

Face recognition with OpenCV, Python, and deep learning

$ workon <your env name here> # optional
$ git clone https://github.com/davisking/dlib.git
$ cd dlib
$ mkdir build
$ cd build
$ cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1
$ cmake --build .
$ cd ..
$ python setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA


Install the face_recognition package

The face_recognition module is installable via a simple pip command:

Face recognition with OpenCV, Python, and deep learning

$ workon <your env name here> # optional
$ pip install face_recognition

How does face recognition work?

In order to build our OpenCV face recognition pipeline, we'll be applying deep learning. Reviewing the entire FaceNet implementation is outside the scope of this tutorial, but the gist of the pipeline can be seen in Figure 1.

Face alignment, as the name suggests, is the process of (1) identifying the geometric structure of the faces and (2) attempting to obtain a canonical alignment of the face based on translation, rotation, and scale. While optional, face alignment has been demonstrated to increase face recognition accuracy in some pipelines.


After we’ve (optionally) applied face alignment and cropping, we pass the input face through our deep neural network:

  The FaceNet deep learning model computes a 128-d embedding that quantifies the face itself.

That’s the way face recognition works.

Here is a sample of the face recognition code:
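This is a minimal sketch rather than our exact script: it uses the face_recognition package and OpenCV installed above, and the reference photo name "known_person.jpg" and the window title are placeholders you would replace with your own.

import cv2
import face_recognition

# Load a reference photo and compute its 128-d embedding
# (assumes exactly one face is visible in the photo)
known_image = face_recognition.load_image_file("known_person.jpg")  # placeholder photo
known_encoding = face_recognition.face_encodings(known_image)[0]

video = cv2.VideoCapture(0)  # 0 = default camera (Pi camera or USB webcam)

while True:
    ret, frame = video.read()
    if not ret:
        break

    # face_recognition expects RGB, OpenCV delivers BGR
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Find all faces and their embeddings in this frame
    locations = face_recognition.face_locations(rgb)
    encodings = face_recognition.face_encodings(rgb, locations)

    for (top, right, bottom, left), encoding in zip(locations, encodings):
        match = face_recognition.compare_faces([known_encoding], encoding)[0]
        name = "Known" if match else "Unknown"
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.putText(frame, name, (left, top - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

    cv2.imshow("FABY face recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()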

And here is a video of Amani trying the code, and it works:

Amani video


Hand tracking and counting

For hand tracking we also use Python's built-in math module (it ships with Python, so there is nothing extra to install); I will leave links and code below.


How does hand tracking work?

In order to detect fingertips, we are going to use the Convex Hull technique. In mathematics, the convex hull is the smallest convex set that contains a set of points, and a convex set is a set of points such that, if we trace a straight line between any pair of points in the set, that line also lies inside the region. The result is a nice, smooth region that is much easier to analyze than our raw contour, which contains many imperfections.

To detect the fingers and count them:

  • Find the ROI (region of interest)
  • Hand segmentation: convert the video frame from BGR to HSV (or grayscale)
  • Perform a Gaussian blur
  • Perform a threshold
  • Find the biggest contour (this will be our hand)
  • Perform a convex hull and mark the ROI (region of interest)
  • Count the number of contours (in practice, the convexity defects between fingers)
  • Display it

A minimal sketch of these steps follows below.
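The sketch assumes OpenCV 4 and Python's math module; the ROI coordinates and the HSV skin-colour range are placeholders you will need to tune for your own camera and lighting.

import cv2
import math
import numpy as np

video = cv2.VideoCapture(0)

while True:
    ret, frame = video.read()
    if not ret:
        break

    # 1. Region of interest (placeholder coordinates, tune for your setup)
    roi = frame[100:400, 100:400]
    cv2.rectangle(frame, (100, 100), (400, 400), (255, 0, 0), 2)

    # 2. Hand segmentation: BGR -> HSV, keep skin-coloured pixels (placeholder range)
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 20, 70]), np.array([20, 255, 255]))

    # 3 + 4. Gaussian blur, then threshold
    blur = cv2.GaussianBlur(mask, (5, 5), 0)
    _, thresh = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY)

    # 5. Biggest contour = the hand (OpenCV 4 returns (contours, hierarchy))
    contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)

        # 6 + 7. Convex hull, then count the convexity defects (gaps between fingers)
        hull = cv2.convexHull(hand, returnPoints=False)
        defects = cv2.convexityDefects(hand, hull)
        fingers = 0
        if defects is not None:
            for i in range(defects.shape[0]):
                s, e, f, _d = defects[i, 0]
                start, end, far = hand[s][0], hand[e][0], hand[f][0]
                a = np.linalg.norm(end - start)
                b = np.linalg.norm(far - start)
                c = np.linalg.norm(far - end)
                # angle at the defect point; a small angle sits between two fingers
                cos_angle = (b ** 2 + c ** 2 - a ** 2) / (2 * b * c + 1e-6)
                if math.acos(max(-1.0, min(1.0, cos_angle))) <= math.pi / 2:
                    fingers += 1
        # n defects between fingers usually means n + 1 raised fingers
        cv2.putText(frame, str(fingers + 1), (50, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

    # 8. Display it
    cv2.imshow("FABY hand counter", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()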

Samples

We found a source that explains this very well; you can find the original code here. Then we found another source that illustrates how the finger-counting code works; you can find it here.

Now for the final step: merge the two codes (face recognition and hand tracking) together.

Just take the face recognition code and merge it with the hand tracking code into one big program; don't forget to rename the variables so they don't clash.

Sample of code (a structural sketch follows below):
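This is only a structural sketch, assuming you have wrapped the two snippets above into helper functions; the names recognize_faces, count_fingers, and the faby_vision module are placeholders, not our actual file layout.

import cv2

# recognize_faces(frame) and count_fingers(roi) are the two routines sketched
# above, refactored into functions; both names and the module are hypothetical.
from faby_vision import recognize_faces, count_fingers

video = cv2.VideoCapture(0)

while True:
    ret, frame = video.read()
    if not ret:
        break

    frame = recognize_faces(frame)        # draw face boxes and names
    roi = frame[100:400, 100:400]         # same placeholder ROI as before
    fingers = count_fingers(roi)          # convex hull finger count
    cv2.putText(frame, "Fingers: " + str(fingers), (50, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)

    cv2.imshow("FABY combined", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()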

                                                                                       

And here is a video of the mixed code running, recorded by Amani after she finished the whole program.

Google Assistant Software


In this part, we will learn about Google Assistant and how to equip our FABY Robot with it.

Before anything, it's preferable to work on the Raspberry Pi's own system (Raspberry Pi OS, the Linux distribution formerly called Raspbian).

Make sure you plug the microphone into the Raspberry Pi.

Around the end of 2019 and the beginning of 2020, Google posted a tutorial on how to implement Google Assistant on a Raspberry Pi: https://developers.google.com/assistant/sdk/guides/library/python. I followed all the tutorial steps and managed to make it work. In this documentation I will go through these steps, so you can follow along here or go straight to the Google tutorial.

Configure and Test the Audio

Test the microphone and speaker to make sure they are working well. To do so, first open the terminal and run "aplay -l" (for the speaker) and "arecord -l" (for the microphone) to find the card number and device number allocated to each. Don't forget to write down these two numbers for both the microphone and the speaker, as we will need them later.

Second, create the .asoundrc file by running "sudo nano /home/pi/.asoundrc" in the terminal.

Third, replace all the text inside it with the following:

pcm.!default {
  type asym
  capture.pcm "mic"
  playback.pcm "speaker"
}

pcm.mic {
  type plug
  slave {
    pcm "hw:,"
  }
}

pcm.speaker {
  type plug
  slave {
    pcm "hw:,"
  }
}

Fourth, fill in, after each "hw:", the card number and device number that you wrote down earlier.

Fifth, save and exit the file by pressing (Ctrl + X), then (Y + Enter).

Sixth, type "alsamixer" in the terminal and raise the volume of the speakers.

Seventh, test the speakers by typing "speaker-test -t wav" in the terminal; after pressing Enter you should hear "Front Left" from the speakers, then press (Ctrl + C) to stop it.

Eighth, test the microphone by recording some sound: type "arecord --format=S16_LE --duration=5 --rate=16000 --file-type=raw out.raw" in the terminal.

Ninth, play back the sound you recorded to make sure the microphone works well by typing "aplay --format=S16_LE --rate=16000 out.raw" in the terminal.



Configure a Developer Project and Account Settings and Register the Device Model

First, open your internet browser, go to "console.actions.google.com", then select (New Project) and enter the name of your project.

Then, after clicking on (REGISTER MODEL), click on (Download OAuth 2.0 credentials) and save the JSON file, as we will need it later; then skip the specific traits options.

Next, go to "console.developers.google.com/apis", click on (Enable APIs and Services), search for the Google Assistant API, and enable it.

After that, go to the OAuth consent screen as shown in the figure below, select (External), then (CREATE), confirm your email, and save the settings.

Finally, go to "myaccount.google.com/activitycontrols" and make sure all of the following are turned on:

Web & App Activity

Location History

Device Information

Voice & Audio Activity

Step 4: Install the SDK and Sample Code

First, open the terminal and type "sudo apt-get update" then "sudo apt-get upgrade", install the Python 3 virtual environment tools by typing "sudo apt-get install python3-dev python3-venv", then run "python3 -m venv env" to create the virtual environment.

Second, update pip by typing "env/bin/python -m pip install --upgrade pip setuptools wheel", then activate the Python virtual environment using the source command: "source env/bin/activate".

Third, type "sudo apt-get install portaudio19-dev libffi-dev libssl-dev" in the terminal, then install the Google Assistant SDK by typing "python -m pip install --upgrade google-assistant-sdk[samples]".

Fourth, copy the JSON file that you downloaded into the "/home/pi" directory, then copy its path.

Fifth, back in the terminal, make sure the virtual environment is activated, then type "python -m pip install --upgrade google-auth-oauthlib[tool]".

Sixth, type in the terminal "google-oauthlib-tool --scope https://www.googleapis.com/auth/assistant-sdk-prototype --save --headless --client-secrets /home/pi/<credential-file-name>.json". Don't forget to replace <credential-file-name> with the name of your JSON file.

Seventh, you should now see a URL in the terminal; open it in a browser, copy the authorization code, and paste it back into the terminal.

Eighth, to activate Google Assistant, type "googlesamples-assistant-pushtotalk --project-id <project-id> --device-model-id <model-id>" in the terminal. Don't forget to replace both <project-id> and <model-id> with their values from the Actions dashboard (on the project settings page).


Mobile Application


For Mobile App design, I used MIT App Inventor, because it's easy to learn and easy to use. MIT App Inventor is a great starter program for app building.

Several challenges had to be addressed in the app:

1.   Making an attractive design.

2.   Converting voice into words for commands.

3.   Sending a command to take a photo or record a video.

4.   Controlling the robot's motion.

5.   Connecting to the Google Assistant API.

So, I watched several videos on YouTube, such as:

The design did not work well at first, so there were several versions, as follows:

  • Version 1:
  • The First Screen:
  • A voice button [Button3] was added, with the speech recognizer as a non-visible component, beside a text box that shows the recognized words.
  • ON [Button4] and OFF [Button5] buttons were added to control the Raspberry Pi GPIO pins, by adding AndroidThingsBoard1 and AndroidThingsGPIO1 as non-visible components.
  • A Forward button [Button2] was added to go to Screen 2.
  • The Second Screen:
  • A voice button [Speak Button] was added, with the speech recognizer as a non-visible component, beside a text box that shows the recognized words.
  • A Backward button [Button1] was added to go back to Screen 1, and a Forward button [Button2] to go to Screen 3.
  • The Third Screen:
  • A voice button [Button1] was added, with the speech recognizer as a non-visible component, beside a text box that shows the recognized words.
  • Bluetooth:
  • A Bluetooth picture [ListPicker1 button] was added to open the phone's Bluetooth and connect to the robot, controlled by adding BluetoothClient1 and a Clock as non-visible components.
  • A Backward button [Button2] was added to go back to Screen 1.
  • The cons of this design:
  • It converts speech into words, but it is rather boring, with a poor design.
  • The video testing


  • Version 2:
  • Here I downloaded the Vector app to follow its design as an ideal. But the home icons were not fully organized, and sending the commands still did not work.

 

  • Version 3:
  • It covers all the previous challenges.
  • The First Screen {Welcome screen}:
  • Involves one button to go to the home screen [Screen 2].
  • The Second Screen {Home screen}:
  • Includes six buttons to open the Camera, Stats, Entertainment, Question and Answer mode, Interact, and About screens.
  • The Third Screen {Camera screen}:
  • Connected with the Firebase database (added as a non-visible component): pressing the camera picture [Snap button] sends code {1} to open the robot camera and take a picture, and pressing the video picture [Video button] sends code {2} to open the robot camera and record a video (a minimal Pi-side sketch of reacting to these codes follows after this list).
  • Firebase was also tested by entering a name and age in the text boxes and pressing Save to store them in the database.
  • A camera test button was added to show the Firebase response, which was done by adding Ev3TouchSensor1 and Camera1 as non-visible components.
  • The Fourth Screen {Entertainment screen}:
  • Includes four arrows {left, right, up, and down} to control the robot's motion.
  • The Fifth Screen {Question and Answer mode screen}:
  • This will be connected with the Google Assistant API.
  • The Sixth Screen {Interact screen}:
  • Contains the button for speaking [Hi_FABY], with the speech recognizer as a non-visible component, beside a text box that shows the recognized words.
  • The Seventh Screen {About}:
  • Includes information about the robot, the working team, and the place where it was fabricated.
  • The video testing
  • The video testing the commands sent to Firebase.
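To give an idea of how the robot side could react to these command codes, here is a minimal sketch using the Firebase Realtime Database REST API and the picamera module on the Raspberry Pi. The database URL and the /command path are placeholders for whatever you set up in your own Firebase project, not our actual database.

import time
import requests
from picamera import PiCamera

# Placeholder URL: replace with your own Firebase Realtime Database node
FIREBASE_URL = "https://your-project.firebaseio.com/command.json"

camera = PiCamera()

while True:
    # The app writes {1} for a photo and {2} for a video into this node
    value = requests.get(FIREBASE_URL).json()

    if value == 1:
        camera.capture("/home/pi/photo.jpg")           # take a still picture
        requests.put(FIREBASE_URL, json=0)             # reset the command
    elif value == 2:
        camera.start_recording("/home/pi/video.h264")  # record a short clip
        time.sleep(10)
        camera.stop_recording()
        requests.put(FIREBASE_URL, json=0)

    time.sleep(1)   # poll once per second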