Raspberry Pi Robot With Google Assistant Expressing Robotic Emotions (EWON Remix)

by Réunion974 in Circuits > Raspberry Pi



20220514_203132.jpg
20220515_170014.jpg

I really like sharathnaik's EWON robot and really enjoyed building it.

I had to make a few adaptations to get it working: the screen originally used was no longer available, and the one I ordered needed very different wiring and programming, so I had to redesign many of the 3D files.

Some modifications were required in the code for different reasons:

  • the wake word engine (Snowboy) is no longer available
  • some changes in Google Assistant
  • a different hardware display is used

So, following sharathnaik's advice, I'm writing my first Instructable here :)

The documentation for the software was written by Zach.

Parts Required

  • Raspberry Pi Zero W
  • Servo SG90 (x4)
  • Servo MG995 – standard (x2)
  • PCA9685 16-Channel Servo Driver
  • USB Sound card
  • Microphone
  • Speaker
  • Male and female pin header connectors
  • 3.5" GPIO screen for the Pi (link below)

https://fr.aliexpress.com/item/4001112465559.html?spm=a2g0o.order_list.0.0.2cdb5e5bH8WgXD&gatewayAdapt=glo2fra

FASTENERS AND BEARINGS

I used some screws for plastic salvaged from a broken printer, but here is the list from the original project, which will work fine.

  • M3*10mm (x10)
  • M3*8mm (x10)
  • M3 Nuts (x20)
  • Bearing
  • OD: 15mm ID: 6mm Width: 5mm (x2)
  • OD: 22mm ID: 8mm Width: 7mm (x2)

When I started the project, I did not have the bearings, so I 3D printed them.

This is my favourite design for 608 bearings; it requires some 4mm BB gun balls, which I found in a toy shop:

https://www.printables.com/fr/model/66530-608-bearing-using-bbs

Many other designs are available.

Printing 3D Files

You have to print all the files I uploaded on this page: Raspberry Pi Robot With Google Assistant Expressing Robotic Emotions (EWON Remix).

  • Files named "x2" or "x4" must be printed 2 or 4 times.
  • "Directly reused" means the design is identical to the files from the original EWON project.

Assembly 1: the Base

step1.png
build plate.png
Neck_base_1 (directly reused).png
Base_cap 4x (directly reused).png
stand-off_70mm FF x4.png

1 - assembly of the base

Use the following printed parts:

  • Neck_base_2 (directly reused).stl
  • Base_plate (directly reused).stl
  • 4x stand-off_70mm FF x4.stl
  • 4x Base_cap 4x (directly reused).stl

Position a base cap under the base plate, and the 70mm standoff on the other side. Assemble with a screw. Repeat 4x.

Attach the MG995 Servo to the Neck_base_2 using 4x screws and 4x bolts.

Assemble both parts using 4x screws, as in the picture above.


Assembly (continued)

20220514_135410.jpg
20220514_135417.jpg
20220514_135459.jpg
20220514_164357.jpg
20220514_164412.jpg
20220514_164431.jpg
20220514_181617.jpg
20220515_170023.jpg
20220515_170036.jpg
20220515_170046.jpg
20220515_170113.jpg

I will provide a step-by-step assembly guide later.

You can check my pictures and also the pictures from Sharu's guide.

Please install the "PCA9685 16-Channel Servo Driver board" inside the head of the robot.

Mount the GPIO screen.

The Pi and the speaker go in the bottom part, inside the body of the robot.


Electronic Wiring

display.jpeg
raspberry-pi-zero-pinout.jpg
1.jpg
2.jpg
3.jpg
Screen wiring.gif

Connecting the Pi to the GPIO screen.

Your Pi will be located at the bottom of the robot, while the screen connector is at the top of the head, so you can use 30 cm male/female Dupont cables to connect them.

The cables go from the GPIO screen connector (pic 1) to the hole in the neck of the robot (pic 2), then cross the two horizontal discs separating the head from the body (pic 3).

For the wiring, check the numbers in the Pi Zero pinout picture. On the screen side, the numbers are not printed, but the mapping is straightforward: if you were connecting the screen directly to the Pi, it would be pin 1 -> pin 1, pin 2 -> pin 2, etc.

The wiring is detailed in the attached Excel spreadsheet.

Install Pi Software

  • Go to www.raspberrypi.com/software/
  • Download the imager by clicking the download button when you scroll down
  • Open the App, click "Operating System," click Raspberry Pi OS (Other)
  • Scroll down and click Raspberry Pi OS Lite (32-bit or legacy)
  • Insert SD Card
  • Click Storage and click on your Micro SD card.
  • Click Write
  • Insert the SD card into the Raspberry Pi
  • Connect a good power supply to the Raspberry Pi.

You will need Bullseye (v11) or a more recent version of the OS for the next step.

Connect to the Raspberry Pi With SSH

Wifi Confirmation

Confirm that your computer and Raspberry Pi are connected to the same network.


Connect Via SSH


Make sure the computer you are working with has Google Chrome and install the Secure Shell Extension.




Launch the Secure Shell Extension


If you’re using Chrome on a Windows, Mac, or Linux computer, you should see the Secure Shell Extension icon in the toolbar (look to the right of the address bar). Click that icon and then select Connection Dialog in the menu that appears.



Connect to the Raspberry PI

In the top field, type pi@192.168.0.0, replacing those numbers with the real IP address of your Raspberry Pi. After typing this in, click on the port field. The [ENTER] Connect button should light up.

Click [ENTER] Connect to continue.


Give Permissions

Click Allow.

This gives permission to the SSH extension to access remote computers like your Raspberry Pi.

You will only need to do this the first time you add the extension to Chrome.

Continue connecting

At this point, the SSH extension has connected to your Raspberry Pi and is asking you to verify that the host key it printed out matches what is stored on the Raspberry Pi. Since this is the first time your Raspberry Pi has been turned on, the data listed above this prompt is brand new, and it's safe to continue.

When you answer yes here, the SSH extension will save this information to your browser and verify it is correct every time you reconnect to your Raspberry Pi.

At the prompt, type yes and press enter to continue.

Enter the Raspberry Pi’s password

Enter the Raspberry Pi’s password at the prompt. The default, case-sensitive password is raspberry.


When you type, you won’t see the characters.


If it’s typed wrong, you’ll see “Permission denied, please try again” or “connection closed.” You’ll need to restart your connection by pressing the R key to select the (R) option.


Confirm Connection

If the password was entered correctly, you’ll see a message about SSH being enabled by default and the pi@raspberrypi shell prompt will be green.



Install Google Assistant

Login To Google Cloud Platform


In order to make use of the Google Assistant APIs, you need to get credentials from Google's developer console.


On your computer (not the Raspberry Pi), go to https://console.cloud.google.com/ and log in with your Google account.


If it's your first time, you'll need to agree to the terms of service.



Select a project

First, we have to create a project to track all of the APIs we want to use.

From the top bar, click Select a project.



Create a New Project


A dialog like the image to the left will appear.

Click New Project in the top right corner of the dialog.




Enter a Project Name


Enter a project name and click Create. (You can leave the Location option alone.)

My son wanted to name his robot Levi, so that is what I went with.


Open the Project



Now that we've created the project, we need to select it so we can turn on the APIs we want to use.

Click the Dashboard link in the left navigation. Then select the project you just created.

This opens the dashboard view for your project (you can tell by the dropdown at the top of the screen; it should display the name you chose in the previous step).


Open The Library


If the left navigation is not already visible, click on the three-line menu icon at the top left of the page to open it. Hover your mouse over APIs & Services, and then click Library.



Enable Google Assistant API


In the search box, type "google assistant" and click on the card labeled Google Assistant API.




Click on the results and select Enable.




Create Credentials


Now we need to get the credentials to access our API. Locate the Create Credentials button at the top right-hand side.



Add Credentials to the Project


You should be directed to the Credentials helper page.

For "Which API are you using?", select Google Assistant API.

For "What data will you be accessing?", select User data.

Then click Next.


Fill Out Consent Screen


Because the app requires user data, you need to add information for a user consent screen.

The information in the consent screen is intended for end-users when publishing a production product that uses Google APIs. But because this project is just for your personal use, you can keep it simple.

Enter something for the App name, such as "Ewon Project."

For the User support email, select your account email from the drop-down.

You can leave the App logo empty, and type your email for the Developer contact information.

Then click Save and continue.




Create OAuth Client ID

Skip the Scopes section by scrolling down and clicking Save and continue.

Next, you'll see the OAuth Client ID section. For the Application type, select Desktop app, and enter a name for the client, such as "Ewon Client."

Then click Create.

Create Test User

In the left navigation, click OAuth consent screen.

Scroll down to Test users and click Add users.

In the form that appears, enter your Google Account email address (it must be the account that you'll use when signing in from the Voice Kit), and click Save.

You should then see your email address listed under Test users (below the button).

Download the Credentials


Now go get the client credentials by clicking Credentials in the left nav.

Find your client name under OAuth 2.0 Client IDs and click the download button on the far right. A dialog appears where you must then click Download JSON to get the credentials in a .json file.

Copy the .json file

Find out where you saved the JSON file and open it up. It is the file whose name starts with client_secret.


Once open, copy the entire string. Ctrl-A and then Ctrl-C is the easiest way.


Add it to the Raspberry Pi


Go back to the Secure Shell Extension and then type the following command:

nano assistant.json

This command starts the nano text editor so we can paste the contents of the JSON file we downloaded earlier into a file on disk.

 

With the credentials still copied, right-click in the terminal and the text will be pasted.

To save the file, press Ctrl-O

A prompt appears to verify the file name you already specified: assistant.json. Press Enter.

Press Ctrl-X to exit.
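
If you want to be sure the paste worked, a quick optional check (not part of the original guide, just plain Python with nothing extra to install) is to parse the file; it will fail immediately if the JSON was cut off:

python3 -c "import json; json.load(open('assistant.json')); print('assistant.json looks valid')"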

Create Actions for the Project

Now we need to set up our actions. Go to the Google Actions Console and create a new project.


You will then be prompted to enter the project information. You should see the project we already created. Select it and hit Import project.



** This is important! It is a pain to redo this step **


Scroll to the bottom where it says Are you looking for device registration? 



Select the click here link.


Register Your Model


If all goes well, you should be presented with this screen:



Click on Register Model


You will be prompted to enter more information. These settings are not as important, but they all have to be filled out. I selected speaker, but you could probably select another type.



Once the fields are filled out, select Register Model.


Hit continue and skip the traits for now. You should now have your model ID. Save this for later.


Confirm Updates

Type the following commands into the terminal:


sudo apt-get update

sudo apt-get upgrade

This will update the Raspberry Pi’s package list and upgrade the installed packages.


If upgrades are required, it will ask you if it is okay to continue. Just press Enter and it will update.


Install Dependencies


Now type in the following to install the dependencies for Google Assistant:


sudo apt install python3-dev python3-venv python3-pip libssl-dev libffi-dev libportaudio2


Setup the Virtual Environment


Type in the following command to create the virtual environment:


python3 -m venv env


Update More Packages…


We need to make sure we have the latest versions of pip and setuptools. Run the following command:

env/bin/python3 -m pip install --upgrade pip setuptools



Enable the Environment


Type in the following to access the new environment:


source env/bin/activate

If it worked, your console prompt should now be prefixed with (env).
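
If you want an extra check, you can ask Python which environment it is running from; with the venv active it should print the path of the env folder you just created:

python3 -c "import sys; print(sys.prefix)"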



Install Google Assistant Library


Now that everything is prepared, we can run the commands to install the library!


python3 -m pip install --upgrade google-assistant-library

python3 -m pip install --upgrade google-assistant-sdk[samples]


If you get an error installing google-assistant-library, this may be the solution:

Tenacity version 4.1.0 gets installed along with the Google Assistant SDK samples, which causes a SyntaxError.

Install the latest version of tenacity (8.1.0) instead.

If you still get an error saying that arr.tostring() is deprecated, you can change that call to arr.tobytes().
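
For context, tostring() on Python array objects was removed in Python 3.9, while tobytes() returns exactly the same data, so the swap is safe. A minimal illustration of the change (not the SDK's actual file, just the idea):

import array

buf = array.array("h", [0, 1, 2])   # 16-bit samples, like audio data
# data = buf.tostring()             # removed in Python 3.9, raises AttributeError
data = buf.tobytes()                # drop-in replacement, returns the same bytes
print(len(data))                    # 6 bytes for three 16-bit values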

When you enter the command googlesamples-assistant-pushtotalk --project-id xxxxx --device-model-id xxxxxx:

If you get the error "not found --project id", you can fix it by running the googlesamples-assistant-pushtotalk command with no arguments.


Get Authorization


Now we need to work on the authorizations. To do this, type in the following:


google-oauthlib-tool --scope https://www.googleapis.com/auth/assistant-sdk-prototype --save --headless --client-secrets ./assistant.json


If all goes well, you should be given a URL and asked for an authorization code.



Copy the URL by highlighting the text. Go to the URL and log in with the Google account that you set up previously as a tester.



You will be alerted that Google has not verified the app.



Select continue.


Now you will have to grant permissions. Just select Continue.


You will now be presented with the authorization code. Copy it and paste it into the terminal.



You should now be connected to your API!
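
The tokens saved by google-oauthlib-tool end up in ~/.config/google-oauthlib-tool/credentials.json. To give an idea of how the robot code talks to the Assistant from here, below is a minimal sketch based on the Google Assistant Library hotword sample; the device model ID is a placeholder, and the real Ewon code adds servo and display reactions on top of these events:

import json
import os.path

import google.oauth2.credentials
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

# Credentials saved earlier by google-oauthlib-tool --save
creds_path = os.path.expanduser("~/.config/google-oauthlib-tool/credentials.json")
with open(creds_path) as f:
    credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))

DEVICE_MODEL_ID = "your-model-id"  # the model ID you saved during registration

with Assistant(credentials, DEVICE_MODEL_ID) as assistant:
    for event in assistant.start():
        # Every event describes what the Assistant is doing; this is where
        # Ewon plugs in its emotions (servo moves + eye animations).
        print(event)
        if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
            print("Listening...")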


Configure Your Pi to Control the Servo

Configuring Your Pi for I2C (text from Sharu)

The next step is using the triggered emotion to run the corresponding facial expression. With Ewon, a facial expression is nothing but moving its ears and neck with servos and changing the display to animate the eyes.

First, the servos. Running them is fairly easy: you can follow this tutorial to set up the Adafruit servo library.

Link: https://learn.adafruit.com/adafruit-16-channel-se...


Then we assign the maximum and minimum values for each servo. This is done by manually moving each servo and checking its limits. You can do this once you have assembled Ewon.

The calibration will be described later; the sketch below shows the general idea.
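
As an illustration of what the calibration looks like in code, here is a minimal sketch using the Adafruit ServoKit library for the PCA9685; the channel number, pulse range, and angles are placeholders to replace with the values you find on your own build:

from adafruit_servokit import ServoKit

# The PCA9685 board exposes 16 channels; Ewon's servos use some of them.
kit = ServoKit(channels=16)

EAR_LEFT = 0  # placeholder channel number
kit.servo[EAR_LEFT].set_pulse_width_range(500, 2500)  # widen/narrow if the servo buzzes at its limits

# Step the servo through angles and note where the mechanism starts to bind:
# those angles become the min/max limits used by the emotion animations.
kit.servo[EAR_LEFT].angle = 90    # centre
kit.servo[EAR_LEFT].angle = 30    # example "minimum" limit
kit.servo[EAR_LEFT].angle = 150   # example "maximum" limit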

Install the Wake Word Engine

tbc

Import My Code

OK, the software installation should now be complete. Time to customize it.

You can get my code from GitHub:

https://github.com/liouma/Modifications-on-Ewon-project

Download all the files and place them in your file tree. Rename the original files before overwriting them with mine.