Voice Kit

Do-it-yourself intelligent speaker. Experiment with voice recognition and the Google Assistant.

Meet your kit

Welcome! Let’s get started

The AIY Voice Kit from Google lets you build your own natural language processor and connect it to the Google Assistant. All of this fits in a handy little cardboard cube, powered by a Raspberry Pi.

These instructions show you how to assemble your AIY Voice Kit, connect to it, and run the Google Assistant demo application.

Time required: 1.5 hours

If you have any issues while building the kit, check out our help page or contact us at support-aiyprojects@google.com.

Check your kit version

These instructions are for Voice Kit 2.0. Check your kit version by looking on the back of the white box sleeve in the bottom-left corner.

If it says version 2.0, proceed ahead! If it doesn’t have a version number, follow the assembly instructions for the earlier version.

Gather additional items

You’ll need some additional things, not included with your kit, to build it:

  • 2mm flat screwdriver: For tightening the screw terminals
  • Micro USB power supply: The best option is a USB power supply that can provide 2.1 amps via a micro USB B connector. The second-best choice is a phone charger that also provides 2.1 A (sometimes called a fast charger). Don't try to power your Raspberry Pi from your computer: it won't be able to provide enough power, and it may corrupt the SD card, causing boot failures or other errors.
  • Wi-Fi connection

Below are two different options for connecting your kit to Wi-Fi so that you can communicate with it wirelessly.

Option 1: Use the AIY Projects app

Choose this option if you have access to an Android smartphone and a separate computer.

You’ll need:

  • Android smartphone
  • Windows, Mac, or Linux computer

Option 2: Use a monitor, mouse, and keyboard

Choose this option if you don’t have access to an Android smartphone.

You’ll need:

  • Windows, Mac, or Linux computer
  • Mouse
  • Keyboard
  • Monitor or TV (any size will work) with an HDMI input
  • Normal-sized HDMI cable and mini HDMI adapter
  • Adapter to attach your mouse and keyboard to the kit. Below are two different options.

Adapter option A: USB On-the-go (OTG) adapter cable to convert the Raspberry Pi USB micro port to a normal-sized USB port. You can then use a keyboard/mouse combo that requires only one USB port.

Adapter option B: Micro USB Hub that provides multiple USB ports to connect to any traditional keyboard and mouse.

Get to know the hardware

Open your kit and get familiar with what’s inside.

Missing something? Please send an email to support-aiyprojects@google.com and we'll help you find a replacement.

In your kit

  1. Voice Bonnet (×1)
  2. Raspberry Pi Zero WH (×1)
  3. Speaker (×1)
  4. Micro SD card (×1)
  5. Push button (×1)
  6. Button nut (×1)
  7. Button harness (×1)
  8. Standoffs (×2)
  9. Micro USB cable (×1)
  10. Speaker box cardboard (×1)
  11. Internal frame cardboard (×1)

Build your kit

Fold the internal frame

Orient the frame

Let’s start by folding the internal frame, which will hold the electrical hardware inside the box.

Lay the frame on the table in front of you like the photo. The left slit should be closer to the cardboard edge.

Fold long flap

Fold the long flap away from you and downwards along the two creases.

Fold left and right flaps

Fold the two highlighted flaps toward you.

Fold the bottom flap

Fold the bottom flap upward toward you.

Align the bottom

The slits on the bottom flap will align with two notches.

Insert the notches

Insert the notches into the slits.

Check frame alignment

Look at the bottom of your frame. The slit closer to the edge should be on the left.

If yours looks different, you probably folded the frame the wrong way. Unfold it and check step 4 in fold the internal frame.

Set the frame aside

The internal frame is built. Set it aside for now; we’ll need it in the next phase.

Connect the boards

Gather your boards and speaker

Now it’s time to connect all the electrical hardware together. Gather your:

  • Raspberry Pi
  • Voice Bonnet board
  • Standoffs (x2)
  • Speaker

What are these boards for? The Raspberry Pi circuit board is a small but mighty computer. The Voice Bonnet adds two microphones, a speaker connector, an LED button connector, and a special audio codec processor.

Orient your Raspberry Pi

Find your Raspberry Pi board and orient it so that the 40-pin header is on the left edge of the board, like the photo.

WARNING: First make sure your Raspberry Pi is disconnected from any power source and other components. Failure to do so may result in electric shock, serious injury, death, fire or damage to your board or other components and equipment.

Insert standoffs

Holding the board in your hand, insert the standoffs into the holes on the right edge of the board, opposite the header.

It will take a firm push to get the standoffs to click in, so you should support the board from underneath so it doesn’t bend too much.

It’s always best to hold the board by the edges (not by the top and bottom surfaces).

Connect the boards

Grab your Voice Bonnet board. It has a header connector on the bottom of the board.

Align the header connector with the pin header on the Raspberry Pi, then press down to connect the headers. Firmly push on the other side of the Voice Bonnet to snap the standoffs into place. Push near the standoffs, not in the center of the board.

Check connections

Make sure the standoffs have snapped into the boards and that the 40-pin header is pushed all the way down so that there is no gap between the two boards.

WARNING: Failure to securely seat the Voice Bonnet may cause electric shock, short, or start a fire, and lead to serious injury, death, or damage to property.

Loosen screws

Grab your 2mm screwdriver and loosen the two bottom screws of the screw terminal so that you can insert the speaker wires.

Insert speaker wires

Take your speaker and find the red and black wires attached to it. Insert the red wire into the bottom slot of the left-most terminal and gently push it in as far as you can. Do the same for the black wire in the second terminal from the left.

What are the speaker wires for? The red and black wires transmit electrical signals which are converted to sound by the speaker.

Secure wires

Secure the wires by using your 2mm screwdriver to turn the screws clockwise.

WARNING: Failure to secure the wires or leaving wires exposed may cause electric shock, short, or start a fire, and lead to serious injury, death, or damage to property.

Add boards to the frame

Now that your speaker is connected to the terminals, we can put the boards into the internal frame.

Slide the boards into the internal frame you folded earlier. The board slides into a slot that looks like a mouth :o

Check board alignment

The microphones on the left and right edges of the board (circled in white) should be outside the internal frame.

Insert speaker

Slide the speaker down into the internal frame.

WARNING: The speaker contains magnets and should be kept out of reach of children and may interfere with medical implants and pacemakers.

Check that the speaker is secure

Make sure the speaker is snug and secure.

Now that your electrical hardware is secure on the cardboard frame, let’s build the box it goes into.

Put it all in the box

Open the speaker box

Find the speaker box cardboard and pop it open by squeezing along the edges of the box.

Fold flap A

Fold flap A down into the box.

Fold flaps B and C

Then fold flaps B and C down into the box.

Fold flap D

Lastly, fold flap D down, pressing until it locks into place.

Check the box bottom

The bottom should be secure. In the next step we’ll bring it all together.

Slide the internal frame inside

Slide the internal frame into the speaker box. Make sure the speaker is lined up with the side of the cardboard box that has circular holes.

Check alignment of internal frame

Once it’s in, check the sides of the internal frame. There should be more space between the internal frame and the right side of the speaker box.

Yours looks different? If the inside of your box looks different, you might need to fold the internal frame the other way. Check step 10 in fold the internal frame for more help.

Check connectors

Make sure the connectors are lined up with the cardboard cutouts. The connectors are used to plug in things like your SD card and power.

WARNING: Forcing connectors into misaligned ports may result in loose or detached connectors. Loose wires in the box can cause electric shock, shorts, or start a fire, which can lead to serious injury, death or damage to property.

Yours looks different? If your connectors don’t line up, you may have inserted the internal frame the wrong way. Check step 29 in put it all in the box for more help.

Add the button

Gather your pieces

It’s time to add the button. From your kit, round up the:

  • Push button
  • Button nut
  • Button harness

Insert the push button

Insert the push button into the hole on the top of your cardboard box.

Orient the button

On the other side, orient the button so that the side with four prongs is on the top (check the photo).

Screw on the button nut

Screw on the button nut to secure the button in place.

Make sure the wider, flanged side of the nut is facing the cardboard flap.

Make sure your button is still oriented so the four prongs are on top.

Plug in your wires

Now it’s time to plug in the wires on the button harness. Take the button harness and find the end with six individually colored wires.

Plug each of those wires in the correct slot by matching its color to the image.

  • Top row: blue and green
  • Middle row: grey and black
  • Bottom row: red and orange

Check wires

Double-check to make sure your wires are plugged in the same way as the image.

Find the 8-pin connector on the Voice Bonnet

To plug in the button harness, first find the 8-pin connector on your Voice Bonnet board, outlined in a white rectangle in the photo.

With the speaker facing away from you, plug the black wire into the top-right slot. Plug the blue wire into the top-left slot. Double check to make sure this is correct.

WARNING: Failure to securely seat the connector may cause electric shock, short, or start a fire, and lead to serious injury, death, or damage to property.

Fold the top flaps

Close your box by folding down the top flaps.

Tuck in the tab

Secure the box by tucking in the tab.

Insert the SD card

The SD card is pre-loaded with all the software you need.

With the arrow side facing up, insert your SD card into the silver slot on the Raspberry Pi, which you can find through the cardboard cutout labeled SD Card.

Congrats, you’ve just assembled the Voice Kit hardware!

Now you’re ready to connect your kit to the Google Assistant.

Connect to your kit

Select an option

There are two ways to connect your kit to Wi-Fi and get an IP address, depending on what you have available. (An Internet Protocol address is a four-segment number that identifies a device on a network. Every device on your network, such as your computer, phone, and Voice Kit, has a unique IP address, and one device can use this address to talk to another.) Both options are introduced in Meet your kit.

Follow instructions for one connection option, either with the AIY Projects app or with a monitor, mouse, and keyboard.

Option 1: AIY Projects App

Plug your Voice Kit into a power supply

Plug your Voice Kit into your power supply through the port labeled Power.

See Meet your kit for power supply options. Do not plug your Voice Kit into a computer for power.

Let it boot up

To confirm that it’s connected to power, look into the hole in the cardboard labeled SD Card. You’ll see a green LED flashing on the Raspberry Pi board.

Wait about four minutes to allow your device to boot. You’ll know when it’s booted when the LED light stops flashing and remains lit.

Download the AIY Projects app

From your Android device, go to the Google Play Store and download the AIY Projects app.

This app will allow you to connect your Voice Kit to a Wi-Fi network, and display an IP address to communicate with your Voice Kit wirelessly via a separate computer and SSH.

Follow app instructions

Open the app and follow the onscreen instructions to pair with your Voice Kit.

Take note of the IP address — you’ll need it later. The app will also remember and display it on the home screen.

Move to the next section, Log in to your kit.

Not working? Make sure your Voice Kit is still connected to a power supply.

If you run into errors, quit the app and try again.

If the device won’t pair, make sure the green LED on the Voice Bonnet is flashing. If it’s not flashing, it may have timed out. Press and hold the Voice Bonnet button for 5 seconds, and try again. If that doesn’t work, try restarting your phone.

Option 2: With Monitor, Mouse, and Keyboard

Gather your peripherals

Use this connection option if you don’t have access to a second computer and an Android smartphone, or if you prefer to connect directly to your Raspberry Pi.

You’ll need a set of peripherals to interact with your Raspberry Pi, including a monitor, keyboard and mouse. Check here for suggestions.

Connect peripherals

Before plugging into power, plug your monitor into the HDMI port and your keyboard and mouse into the port labeled Data on your Voice Kit, using one of the adapters described in Meet your kit.

Plug your Voice Kit into a power supply

Connect your power supply to the Raspberry Pi. To confirm that it’s connected to power, look into the hole in the cardboard labeled SD Card. You’ll see a green LED flashing on the Raspberry Pi board.

Let it boot up

When you plug it in, you’ll see the Raspberry Pi logo in the top left corner of the screen.

Wait about four minutes to allow your device to boot. You’ll know when it’s booted when the LED light stops flashing and remains lit.

Acknowledge the warning

You’ll see a desktop with the AIY logo on the background. A warning pop-up will tell you the password for the Raspberry Pi user is set to the default. This is important if you plan to use this kit in other projects or expose it to the internet, but for now, it's safe to click OK.

Do I need to change my password? You'll want to change the Raspberry Pi user's password if you plan on using this kit in a project that is exposed to the open internet. It's not safe to expose it with a password everybody knows. If you plan on doing this, you'll want to use the passwd program. This is an advanced step, so for the purposes of this guide, we will assume you haven't changed the password for the Raspberry Pi user.

Note If you do change the password, make sure you keep your password written down somewhere safe in case you forget. It's not easy to recover it if you change it.

Connect to Wi-Fi

Using your mouse, click on the Wi-Fi connection icon in the bar at the top right of the screen. Look for your Wi-Fi network name (also known as an SSID) and click on it. If your network has a password, a dialog will appear asking for the pre-shared key. (That’s just a fancy way of saying password, borrowed from the Wi-Fi security standards; pre-shared means the password was given to you before you attempted to connect.) Enter your password here and click OK.

Check Wi-Fi

Watch the Wi-Fi icon you just clicked on to bring up the menu. It should begin flashing. Once it's gone solid, you are connected.

Find IP address

Find the Raspberry Pi’s IP address by hovering over the Wi-Fi icon. It will look something like 192.168.0.0 or 10.0.0.0, prefixed with wlan0. (Linux uses names like wlan0 for network devices connected to your computer; the wlan part stands for Wireless Local Area Network, and the 0 means it’s the first such device Linux identified. We want the IP address assigned to the wlan0 device, which is why we look for it in the tooltip.)

Write the IP address down; you’ll need it to connect via SSH.
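
Prefer the terminal? As an optional alternative (not part of the official steps), you can open a terminal on the Raspberry Pi’s desktop and run this short Python 3 sketch to print the same address:

import socket

# Open a UDP socket toward a public address; no packets are actually sent.
# The OS picks the outbound interface (wlan0 here), whose address we print.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(('8.8.8.8', 80))
print(s.getsockname()[0])
s.close()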

Move to the next section, Log in to your kit.

Set up the Assistant

Log In to Your Kit

Check your Wi-Fi

Make sure your computer is on the same Wi-Fi network as the kit.

Now we’re going to connect to your kit through SSH (SSH stands for secure shell; it’s a way to securely connect from one computer to another) and set things up so it can talk to the Google Assistant in the cloud.

Get your terminal ready

We’re going to connect your computer to the Raspberry Pi using SSH in a terminal. (A terminal is a text window where you can issue commands to your Raspberry Pi; SSH lets you do so from a separate computer.)

If you’re familiar with using a terminal, start an SSH session with pi@192.168.0.0 (but using your Raspberry Pi's real IP address from above), then skip to step 62.

If you're not familiar with a terminal, download and install the Chrome browser and Secure Shell Extension, and proceed to the next step.

Open the Secure Shell Extension

Once the extension is installed, open it.

If you’re using Chrome on a Windows, Mac, or Linux computer, you should see the Secure Shell Extension icon in the toolbar (look to the right of the address bar). Click that icon and then select Connection Dialog in the menu that appears.

If you’re using Chrome on a Chromebook, go to the app menu and type "secure shell app extension".

Connect to the Raspberry Pi

In the top field, type pi@192.168.0.0, but replace those numbers with the real IP address of your Raspberry Pi. After typing this in, click on the port field. The [ENTER] Connect button should light up.

Click [ENTER] Connect to continue.

Can’t connect? If you can’t connect, check to make sure the IP address you wrote down earlier is correct and that your Raspberry Pi is connected to the same Wi-Fi access point as your computer.

Note If you rewrite or replace your SD card, you will need to remove and add the Secure Shell Extension from Chrome. You can do this by right clicking on the icon in your toolbar and selecting Remove, then re-add it by following the instructions above.

Give the extension permission

Click Allow.

This gives permission to the SSH extension to access remote computers like your Raspberry Pi.

You will only need to do this when you add the extension into Chrome.

Continue connecting

At this point, the SSH extension has connected to your Raspberry Pi and is asking you to verify that the host key it printed out matches what is stored on the Raspberry Pi. (The SSH extension is designed to be secure, so it needs to confirm that the computer you’re connecting to is actually the one you expect. To make this easier, each computer generates a long number, the host key, and presents it to the extension for verification; the extension saves this key somewhere safe so it can check that the computer you’re speaking to is the right one.) Since this is the first time your Raspberry Pi has been turned on, the data listed above this prompt is brand new, and it’s safe to continue.

When you answer yes here, the SSH extension will save this information to your browser and verify it is correct every time you reconnect to your Raspberry Pi.

At the prompt, type yes and press enter to continue.

Enter the Raspberry Pi’s password

Enter the Raspberry Pi’s password at the prompt. The default, case-sensitive password is raspberry

When you type, you won’t see the characters.

If it’s typed wrong, you’ll see “Permission denied, please try again” or “connection closed.” You’ll need to restart your connection by pressing the R key to select the (R)econnect option.

Note Your IP address will be different than the one shown in the example.

It’s okay if you see the warning line. It’s letting you know that the host key has been saved, and the extension will do the hard work of comparing what it just stored with what the Raspberry Pi provides automatically next time.

Confirm you’re connected

If the password was entered correctly, you’ll see a message about SSH being enabled by default, and the pi@raspberrypi shell prompt will be green. (A shell is a program that waits for instructions from you and helps you make your computer work for you. The prompt is the shell’s signal that it’s ready to receive commands; it shows your current working directory, the tilde ~ in this case, and ends in a $ where you type your command.)

Congrats! Your computer is now connected to the Raspberry Pi.

What is the ~ in the prompt? ~ is just shorthand for /home/pi.

Check your audio

Check that your audio is working. Type the following command into your prompt and press enter:

/home/pi/AIY-projects-python/checkpoints/check_audio.py

Copy and paste in the Secure Shell Extension: Copying and pasting in the Secure Shell Extension is a little different from other applications you may be used to. To copy some text, highlight what you want by clicking and dragging with the left mouse button; as soon as you let go, it’s copied. To paste, click the right mouse button. On a touchpad this can be a little tricky, so try tapping or pressing in the lower right of the touchpad, or tapping with two fingers.

Confirm audio is working

If the audio is working, you’ll hear it say “Front! Center!”

The command will ask if you’ve heard it. If so, type y and press enter. It will then ask you to press enter to record some audio and play it back. Do so, and it will ask if you heard your own voice. If so, type y and press enter.

You’ll then see The audio seems to be working, followed by Press Enter to close.... Go ahead and press the enter key to return to the shell prompt.

Audio not working? Check to make sure that your speakers are connected to the right terminals.
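
You can also exercise the speaker directly from Python. Here’s a minimal sketch, assuming the aiy.audio module preinstalled on the kit’s SD card image; save it as say_test.py and run it with python3 say_test.py:

import aiy.audio

# Synthesize a short phrase through the speaker wired to the Voice Bonnet.
aiy.audio.say('If you can hear this, the speaker is wired correctly.')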

Adjust the volume

Too loud? Too quiet? Let’s adjust the volume. Type the following command into your prompt and press enter:

alsamixer

This starts the volume control program, as shown in the picture to the left. Use the up and down arrows on your keyboard to adjust the volume. Press the Esc key to exit the program.

To test the new volume level, run check_audio.py again:

/home/pi/AIY-projects-python/checkpoints/check_audio.py

Quick tip Press the up arrow key at the prompt to scroll through a history of commands you've run. You can edit the command if needed, then press enter.

Get Credentials

Head to the Google Cloud Platform

In order to make use of the Google Assistant and Cloud Speech APIs, you need to get credentials from Google's developer console.

On your computer (not the Raspberry Pi), go to https://console.cloud.google.com/.

Login using your Google account

Log in using your Google account.

Don’t have a Google account? Sign up for one here.

Agree to the Terms of Service

Read the Terms of Service. If you agree, click Accept.

Welcome to the Cloud Platform. This is the control panel where you can configure your applications to make use of Google's developer APIs.

Select a project

First, we have to create a project to track all of the APIs we want to use on the Voice Kit. From the top bar, click Select a project.

Create a new project

A dialog like the image to the left will appear.

Click New Project in the top right corner of the dialog.

Enter and create a project name

Enter a project name into the bar and click Create. (You can leave the Location option alone.)

We’ve used "Voice Kit" for a project name, but you can enter any name you like.

Go back to the home page

Now that we've created the project, we need to select it so we can turn on the APIs we want to use.

From the overview click Select a project again. Then select the project you just created and click Open.

This opens the dashboard view for your project (you can tell by the dropdown at the top of the screen; it should display the name you chose in the previous step).

Navigate to the library

Now we need to turn on the Cloud Speech and Google Assistant APIs. (An API is a collection of functions that programs can call to make use of extra functionality. It’s kind of like an ice cream maker: you put things in and get a delicious result back!)

If the left navigation is not already visible, click on the three-line menu icon at the top left of the page to open it. Hover your mouse over APIs & Services, and then click Library.

Turn on the Google Assistant API

In the search box, type "google assistant" and click on the card labeled Google Assistant API.

Enable the Google Assistant API

Click Enable.

This turns on the Google Assistant API for your project, and gives applications you write access to its features.

Create credentials

You'll be directed to a Dashboard page for the Google Assistant API. From here you can get more details on how you're using the API and see how many requests you're making.

For now, though, we’ll create a credentials file so that the demos can tell Google who they are, and which project they're a part of.

Click the Create credentials button.

Add credentials to your project

You should be directed to the Credentials helper page.

Under "Which API are you using?", select Google Assistant API.

In the "Where will you be calling the API from?" field, select Other UI (e.g. Windows, CLI tool).

Finally, under "What data will you be accessing?", choose User data. (We choose this because we’ll be using the Google Assistant, which requires access to the user’s data.)

Once you've done all that, click the What credentials do I need? button.

Create an OAuth 2.0 client ID

Enter a client ID. We suggest using the same project name that you used previously.

Click the Create client ID button.

Set up the OAuth 2.0 consent screen

Enter your email and a product name shown to users (we suggest something like "Voice Kit Demo").

Click Continue. This will generate the credentials in Google's servers and prepare the APIs for use.

It might take a few seconds to complete.

Download the credentials

Click Download, which will download a .json file onto your computer. (A .json file is a plain text file containing JavaScript-formatted data; here it holds the information the demo scripts will present to Google’s servers to identify themselves.) We will copy this to your Raspberry Pi, and your Voice Kit will use it to authorize the demos and your scripts to use the Google Assistant API.

Find the .json file in Downloads

On a Mac: open Downloads, then choose Show in Finder

On Windows: open File Explorer > Downloads

Open the file

Right click on the file in the Downloads folder.

For Mac: Open With > TextEdit

For Windows: Open With > More applications > Notepad, then uncheck Always use.

Copy the text

We’re going to copy the text from the text file so that we can paste it into a new file on your Raspberry Pi.

From Notepad or TextEdit, select Edit > Select All, then Edit > Copy

Go back to Secure Shell Extension

Back in Secure Shell Extension, type the following command and press enter:

nano assistant.json

This command starts the nano text editor so we can paste the contents of the JSON file we downloaded earlier into a file on disk.

Paste the text

Right click to paste the text.

Note Your client id will be a different number.

Write it to a file

To save the file, press Ctrl-O (that’s O as in Open, not the number zero).

A prompt appears to verify the file name you already specified: assistant.json. Press Enter.

Hint: nano has quite a few options you can use to write programs with later. Type Ctrl-G to find out more.

Exit

Type Ctrl-X to exit. This will bring you back to the shell prompt.

Confirm the file was created

Type ls and press enter. (ls is shorthand for LiSt and prints out all of the files in the current working directory. It’s a great way to look around and see what changed on disk.) Hint: that’s an “l” as in lemon, not a number 1.

This shows you all of the files in your current directory. You should see assistant.json here in white.

Congrats! Now you have the credentials you need.
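
If you want to double-check that the paste was complete, an optional sanity check (not part of the official steps) is to parse the file with Python. Type the following command and press enter:

python3 -c "import json; json.load(open('assistant.json')); print('assistant.json is valid JSON')"

If the paste was cut off, Python prints an error instead of the confirmation message.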

Move to the demos folder

Now we’re going to try out some demos. Let’s change directories to the folders where the demos live.

Type the following command line and press enter:

cd AIY-voice-kit-python

What’s cd? cd stands for change directory. Think of it as clicking through file folders. You should see the path in the command line in blue. Capitalization matters: it’s cd, not CD. Practice using cd and ls to navigate around!

What’s python? Python is a programming language that we use for the majority of our demos and scripts. It's a simple language and is very easy to learn. You can find out more about Python at https://www.python.org/.

Take a look around

Now that you've changed directories, type ls and press enter to see what's inside the AIY-voice-kit-python directory. (You might have heard the terms "folder" and "directory" before; they're synonyms for the same thing: a data structure that lists filenames and the location of their contents on disk. Think of it like a table of contents: each time you run ls, you're listing the contents of one of these directories.)

Learn more about working in the terminal Check out some guides from our friends at the Raspberry Pi Foundation: Conquer the Command Line and Linux Commands.

Run the Assistant Demo

Run the assistant demo

Now we’re going to run the assistant demo and try out those APIs we turned on earlier.

Type the following command line and press enter:

src/examples/voice/assistant_grpc_demo.py

Give permission

You’ll see a message about going to a URL to authorize your application.

Copy and paste the link you see in the terminal into your browser. Left click to select and copy in the terminal.

The demo application makes use of the Google Assistant, so the Assistant needs to access your Google account’s data safely. To do this, you have to authorize it by going to the URL it printed out and giving it an authorization code.

Confirm your account

To allow the project to access your account, click Allow.

Copy and paste the code line

You’ll see a line of numbers, letters and symbols. This is the authorization code that the script needs. Copy it and paste it into the Secure Shell Extension. You may have to press enter after you paste.

Note On a Mac or Windows, highlight the code in the text field and copy it. On a Mac, press Command + C to copy, and on Windows press Ctrl + C.

To paste in the Secure Shell Extension, simply right click. This can be a little tricky on a touchpad. Try tapping or clicking in the lower right of the touchpad, or tapping with two fingers.

Optional: Give permissions to the Google Assistant API

If your kit tells you Give me permission, it means you’ve never interacted with the Google Assistant before, and you need to give it permission to work with your account.

Download the Google Assistant app on a device (this can be your smartphone or tablet, and is available for both Android and iOS devices) and run it.

See the demo running

The demo is running. You’ll see Press the button and speak and Listening.

Remember Press here means a tap of the button (hold for no more than a second).

See machine learning at work

Tap and release the push button. Then ask a question, or give the Google Assistant a command.

For example you can say: “Sing me a song.”

As you speak, you’ll see what the machine learning model is "hearing".

Hint It might take a few seconds for the Google Assistant to respond (it’s gotta think). If the Voice Kit is having trouble hearing you, make sure you’re not too close to the device. Try speaking from a little further back.

Experiment with the Google Assistant

Here are our favorite things to say to the Google Assistant:

“Beam me up Scotty!”

“How much wood could a woodchuck chuck...”

“I’m one of your engineers.”

“How do you say "this tastes delicious" in Korean?”

“How far away is the moon?”

“Beatbox!”

“Remind me to call my friend tomorrow morning.”

Finish testing the demo

When you’re done testing the demo, press the button and say “Goodbye,” or type Ctrl-C to interrupt and stop the demo. (Pressing Ctrl-C in the Secure Shell Extension, or in any terminal, interrupts the command you previously ran. If you don’t see a prompt, try pressing Ctrl-C a few times; it’s safe to use at the prompt.)

If it freezes, type Ctrl-C, then press the up arrow to bring back the src command line and press enter to run it again.

Close the demo

When you’re done with your Voice Kit for the day, it’s important to shut it down properly to make sure you don’t corrupt the SD card.

Type the following command and press enter:

sudo poweroff

Once you see the prompt to (R)econnect, (C)hoose another connection, or E(x)it? and the green LED on the Raspberry Pi has turned off, you can unplug the power supply from your kit safely. Remember when you reconnect your power supply to wait until the LED stops blinking before reconnecting your kit via SSH.

Important The demo will stop working if you close the Secure Shell Extension or unplug your kit.

Reconnecting your Kit

To reconnect to your kit, plug it back into the power supply and wait for it to boot up (about 4 minutes).

Then go ahead and reconnect via SSH.

Note: You may have to re-pair your kit via the app.

What's Next?

Congrats! You’ve set up your very own intelligent speaker.

Now that you’ve got a taste for what the Voice Kit can do, we’d love to see what you do with it. In the next section, we’ve included hardware, APIs, and tools to help you get your own intelligent speaker projects up and running.

Share your creations with the maker community at #aiyprojects

Maker’s guide

Software extensions

Below are some options to change the device behavior and suggestions for extensions if you want to hack further.

Source code

If you’re using the SD card image provided, the source for the AIY libraries is already installed on your device. You can browse the Python source code at $HOME/AIY-projects-python/src/

Alternately, the project source is available on GitHub: https://github.com/google/aiyprojects-raspbian.

Python API Reference

Please see the table below for a list of modules available for developer use. The full APIs are available on GitHub: https://github.com/google/aiyprojects-raspbian/tree/aiyprojects/src/aiy.

aiy.voicehat
APIs provided: get_button(), get_led(), get_status_ui()
For controlling the arcade button and the LED. See uses in any demo app.

aiy.audio
APIs provided: get_player(), get_recorder(), record_to_wave(), play_wave(), play_audio(), say()
For controlling the microphone and speaker. It can speak some text or play a wave file. See uses in assistant_grpc_demo.py and cloudspeech_demo.py.

aiy.cloudspeech
APIs provided: get_recognizer()
For accessing the Google Cloud Speech APIs. See uses in cloudspeech_demo.py.

aiy.i18n
APIs provided: set_locale_dir(), set_language_code(), get_language_code()
For customizing the language and locale. Not used directly by the demo apps, but some APIs depend on it; for example, aiy.audio.say() uses this module for speech synthesis.

aiy.assistant.grpc
APIs provided: get_assistant()
For accessing the Google Assistant APIs via gRPC. See uses in assistant_grpc_demo.py.

google.assistant.library
The official Google Assistant Library for Python. See the online documentation.

Create a new activation trigger

An activation trigger is a condition in the code that starts a conversation with the Google Assistant. The assistant demo from above uses a press on the top button as the activation trigger, but you can implement different triggers. For example, you can use a motion detector (not included) as your activation trigger as shown here:

# =========================================
# Makers! Implement your own actions here.
# =========================================

import aiy.audio
import aiy.cloudspeech


def main():
    '''Start voice recognition when motion is detected.'''
    # MotionDetector is a placeholder: wire up your own motion sensor driver.
    my_motion_detector = MotionDetector()
    recognizer = aiy.cloudspeech.get_recognizer()
    aiy.audio.get_recorder().start()
    while True:
        my_motion_detector.WaitForMotion()
        text = recognizer.recognize()
        if text:
            aiy.audio.say('You said ' + text)


if __name__ == '__main__':
    main()

Use the Google Assistant library with a button

In the User's Guide, you learned to use the Google Assistant library to make the Voice Kit into your own Google Home. Sometimes we also want to use an external trigger to start a conversation with the Google Assistant. Example external triggers include the default button (the GPIO trigger, demonstrated in cloudspeech_demo.py and assistant_grpc_demo.py), a motion sensor, or a clap trigger.

This section shows how to start a conversation with a button press. It is a little trickier because of the way the assistant library works. If you are new to programming, you may skip the "Design" subsection and jump to "Implementation".

Design

Each Python app has a main thread, which executes your code in main(). For example, all our demo apps contain the following code:

 
if __name__ == '__main__':
    main()

It executes the main() function in the main thread. The assistant library runs an event loop:

 
...
for event in assistant.start():
   process_event(event)

The button driver has a method called on_press, so you can tell it to run a function you provide every time the button is pressed. You may wonder why the following does not work with the assistant library:

 
...
def on_button_press(_):
    assistant.start_conversation()


...
aiy.voicehat.get_button().on_press(on_button_press)
for event in assistant.start():
   process_event(event)

Save it as my_demo.py, run it in the terminal, and press the button. Nothing happens. This is because the assistant library's event loop blocks the main thread, so the internal event loop inside the button driver never gets to run. For more details, take a look at how the button driver works (see src/aiy/_drivers/_button.py).

To summarize: the button driver runs an internal event loop (from the stock GPIO driver) in the main thread, and the assistant library also runs an event loop that blocks the main thread. To let both event loops run, we use Python's powerful threading library and run the assistant library's event loop in a separate thread. For more information on Python threading, take a look at the official Python threading documentation.
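
To see the idea in isolation, here's a toy sketch in plain Python (no AIY APIs involved) where a background thread runs one loop while the main thread stays free to run another:

import threading
import time

def background_loop():
    # Stands in for the assistant library's event loop.
    for _ in range(3):
        print('assistant event loop tick')
        time.sleep(1)

# Run the background loop in its own thread.
task = threading.Thread(target=background_loop)
task.start()

# The main thread stays free, just as the button driver's event loop needs.
for _ in range(3):
    print('button event loop tick')
    time.sleep(1)

task.join()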

Implementation

The source code for a working demo is at: src/examples/voice/assistant_library_with_button_demo.py

We created a class MyAssistant to capture all the logic. In its constructor, we created the thread that will be used to run the assistant library event loop:

 
...
class MyAssistant(object):
    def __init__(self):
        self._task = threading.Thread(target=self._run_task)

The "_run_task" function specified as the target will be run when you start the thread. In that function, we created an assistant library object and ran the event loop. This event loop is executed in the thread we created, separate from the main thread:

 
...
def _run_task(self):
    credentials = aiy.assistant.auth_helpers.get_assistant_credentials()
    with Assistant(credentials) as assistant:
        # Save assistant as self._assistant, so later the button press handler can use
        # it.
        self._assistant = assistant
        for event in assistant.start():
            self._process_event(event)

We have yet to hook up the button trigger at this point, because we want to wait until the Google Assistant is fully ready. In the "self._process_event" function, we enabled the button trigger when the API tells us it is ready to accept conversations:

 
...
def _process_event(self, event):
    ...
    if event.type == EventType.ON_START_FINISHED:
        # The Google Assistant is ready. Start the button trigger.
        aiy.voicehat.get_button().on_press(self._on_button_pressed)

This is the simplest demo of utilizing the button trigger. You may connect your own trigger with the assistant library the same way to start a conversation, mute/unmute the assistant, and do many other things.
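
Putting the pieces together, here is a condensed sketch of the whole pattern. The complete, authoritative version is src/examples/voice/assistant_library_with_button_demo.py; this sketch omits details such as logging and status-LED handling:

import threading

import aiy.assistant.auth_helpers
import aiy.voicehat
from google.assistant.library import Assistant
from google.assistant.library.event import EventType


class MyAssistant(object):
    def __init__(self):
        self._task = threading.Thread(target=self._run_task)
        self._assistant = None

    def start(self):
        # Run the assistant event loop in its own thread, keeping the
        # main thread free for the button driver's event loop.
        self._task.start()

    def _run_task(self):
        credentials = aiy.assistant.auth_helpers.get_assistant_credentials()
        with Assistant(credentials) as assistant:
            self._assistant = assistant
            for event in assistant.start():
                self._process_event(event)

    def _process_event(self, event):
        if event.type == EventType.ON_START_FINISHED:
            # The Google Assistant is ready; start the button trigger.
            aiy.voicehat.get_button().on_press(self._on_button_pressed)

    def _on_button_pressed(self):
        if self._assistant:
            self._assistant.start_conversation()


if __name__ == '__main__':
    MyAssistant().start()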

Build on Android Things

Android Things is Google's managed operating system (OS) for internet-of-things (IoT) devices. It's a powerful OS that helps you build connected devices on a variety of embedded hardware systems. Android Things does not support the Raspberry Pi Zero that's included in the V2 Voice Kit, but it does support the AIY Voice Bonnet when connected to a Raspberry Pi 3.

So if you also have a Raspberry Pi 3, follow this codelab to build a voice assistant on Android Things, or download the sample code on GitHub.

Learn more about Android Things

Custom Voice User Interface

Change to the Cloud Speech API

Want to try another API? Follow the instructions below to try the Cloud Speech API, which recognizes your speech and converts it into text. The Cloud Speech API supports 80 languages, long audio clips, and phrase hints for processing audio.

Turn on billing

Why do I need to turn on billing?

The voice recognizer cube uses Google’s Cloud Speech API. If you use it for less than 60 minutes a month, it’s free. Beyond that, the cost is $0.006 per 15 seconds. Don’t worry: you’ll get a reminder if you go over your free limit.

  1. In the Cloud Console, open the navigation menu
  2. Click Billing
  3. If you don’t have a billing account, click New billing account and go through the setup
  4. Return to the main billing page, then click the My projects tab
  5. Find the name of your new project and make sure it’s connected to a billing account
  6. To connect or change the billing account, click the three-dot button, then select Change billing account

Enable the API

  1. In the console, open the navigation menu and click APIs & Services
  2. Click ENABLE API
  3. Enter “Cloud Speech API” into the search bar, then click the name
  4. Click ENABLE to turn on the API

Create a service account and credentials

  1. Go to the left-hand navigation menu, click APIs & Services, and then click Credentials
  2. Click Create credentials, then click Service account key in the list
  3. From the “Service account” dropdown, click New service account
  4. Enter a name that reminds you this is for your voice recognizer, like “Voice credentials”
  5. Select the Project viewer role
  6. Use the JSON key type
  7. Click Create
  8. Your credentials will download automatically; the file name contains your project name and some numbers, so locate it in your Downloads folder
  9. Open your workstation’s terminal and copy the file into the pi user’s home directory as cloud_speech.json by entering one of the following:

(using the local file system)
cp /path/to/downloaded/credentials.json ~/cloud_speech.json

(from another machine)
scp /path/to/downloaded/credentials.json pi@raspberrypi.local:~/cloud_speech.json
Start the demo app

On your desktop, double-click the Start Dev Terminal icon. Then start the app: src/examples/voice/cloudspeech_demo.py

Check that it works correctly

On your desktop, double-click the Check Cloud icon. Follow along with the script. If everything is working correctly, you’ll see this:

The cloud connection seems to be working

If you see an error message, follow the details and try the Check Cloud script again.

Voice commands

To issue a voice command, press the arcade button once to activate the voice recognizer and then speak loudly and clearly.

  • “turn on the light”: the LED turns on and stays solid
  • “turn off the light”: the LED turns off
  • “blink”: the LED starts blinking
  • “goodbye”: the app exits

Create a new voice command (or action)

You can create new actions and link them to new voice commands by modifying src/examples/voice/cloudspeech_demo.py directly.

Example: repeat after me

To add a voice command, first tell the recognizer explicitly which phrase to expect. This improves the recognition rate:

 
recognizer.expect_phrase('repeat after me')

Then add the code to handle the command. We will use aiy.audio.say to repeat the recognized transcript:

 
...
# In the process loop. 'text' contains the transcript of the voice command.
if 'repeat after me' in text:
    # Remove the command from the text.
    to_repeat = text.replace('repeat after me', '', 1)
    aiy.audio.say(to_repeat)

The modified cloudspeech_demo.py looks like:

 
"""A demo of the Google CloudSpeech recognizer."""

import os

import aiy.audio
import aiy.cloudspeech
import aiy.voicehat


def main():
    recognizer = aiy.cloudspeech.get_recognizer()
    recognizer.expect_phrase('turn off the light')
    recognizer.expect_phrase('turn on the light')
    recognizer.expect_phrase('blink')
    recognizer.expect_phrase('repeat after me')

    button = aiy.voicehat.get_button()
    led = aiy.voicehat.get_led()
    aiy.audio.get_recorder().start()

    while True:
        print('Press the button and speak')
        button.wait_for_press()
        print('Listening...')
        text = recognizer.recognize()
        if text is None:
            print('Sorry, I did not hear you.')
        else:
            print('You said "', text, '"')
            if 'turn on the light' in text:
                led.set_state(aiy.voicehat.LED.ON)
            elif 'turn off the light' in text:
                led.set_state(aiy.voicehat.LED.OFF)
            elif 'blink' in text:
                led.set_state(aiy.voicehat.LED.BLINK)
            elif 'repeat after me' in text:
                to_repeat = text.replace('repeat after me', '', 1)
                aiy.audio.say(to_repeat)
            elif 'goodbye' in text:
                os._exit(0)


if __name__ == '__main__':
    main()

You may add more voice commands. Several ideas include a "time" command to make it speak out the current time or commands to control your smart light bulbs.
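
For instance, here is a sketch of a "time" command helper (a suggestion, not part of the shipped demo). To wire it in, add recognizer.expect_phrase('what time is it') next to the other expect_phrase calls, and call the helper from a new elif 'what time is it' in text: branch:

import time

import aiy.audio


def say_current_time():
    # time.strftime formats the current local time, e.g. "It's 03 52 PM."
    aiy.audio.say(time.strftime("It's %I %M %p."))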

Run your app automatically

Imagine you have customized an app with your own triggers and the Google Assistant library: an AIY version of a personalized Google Home. Now you want the app to run automatically when your Raspberry Pi starts. All you have to do is make a system service (like the status-led service mentioned in the user's guide) and enable it.

Assume your app is src/my_assistant.py and that we want to make a system service called "my_assistant". First, it is always a good idea to test your app and make sure it works as you expect. Then you need a systemd config file. Open your favorite text editor and save the following content as my_assistant.service:

 
[Unit]
Description=My awesome assistant app

[Service]
Environment=XDG_RUNTIME_DIR=/run/user/1000
ExecStart=/bin/bash -c 'python3 -u src/my_assistant.py'
WorkingDirectory=/home/pi/AIY-projects-python
Restart=always
User=pi

[Install]
WantedBy=multi-user.target

The config file is explained below.

  • Description=: a textual description of the service.
  • ExecStart=: the target executable to run. In this case, it runs the python3 interpreter on your my_assistant.py app.
  • WorkingDirectory=: the directory your app will be working in. By default, we use /home/pi/AIY-projects-python; if you are working as a different user, update the path accordingly. Note that systemd unit files do not expand $HOME, so you have to explicitly use /home/pi/.
  • Restart=: specifies that the service should always be restarted should there be an error.
  • User=: the user to run the script as. By default we use the "pi" user; if you are working as a different user, update accordingly.
  • WantedBy=: part of the dependency specification in systemd configuration. You just need to use this value here.

For more details on systemd configuration, please consult its manual page.

We also need to move the file to the correct location, so systemd can make use of it. To do so, move the file with the following command:

sudo mv my_assistant.service /lib/systemd/system/

Now your service has been configured! To enable your service, enter:

sudo systemctl enable my_assistant.service

Note how we are referring to the service by its service name, not the name of the script it runs. To disable your service, enter:

sudo systemctl disable my_assistant.service

To manually start your service, enter:

sudo service my_assistant start

To manually stop your service, enter:

sudo service my_assistant stop

To check the status of your service, enter:

sudo service my_assistant status

Use TensorFlow on device

Help your fellow makers experiment with on-device TensorFlow models by donating short speech recordings. This small web app will collect short snippets of speech, and upload them to cloud storage. We'll then use these recordings to train machine learning models that will eventually be able to run on-device, no Cloud needed.

Hardware extensions

GPIO Pinout and Expansions

If you plan to take your project beyond the cardboard box, you might be wondering which GPIO pins are available for your other hardware. Figure 1 shows exactly which pins from the Raspberry Pi are used by the Voice Bonnet.

Figure 1. GPIO pins used by the Voice Bonnet (highlighted pins are used)

The Voice Bonnet also includes a dedicated microcontroller (MCU) that enables the following GPIO features:

  • Control of four additional GPIO pins, freeing up the Pi GPIOs for other uses
  • PWM support for servo/motor control without taxing the Raspberry Pi's CPU
  • Analog input support via on-board analog-to-digital converter (ADC)
  • Control of the two LEDs on the bonnet

The extra GPIO pins are provided on the top of the Voice Bonnet (see figure 2). You can control the GPIOs and LEDs with the gpiozero library, using pin names PIN_A, PIN_B, PIN_C, PIN_D, LED_1, and LED_2.

The gpiozero-compatible pin definitions are provided by the aiy.pins package. You can use these definitions to construct standard gpiozero devices like LEDs, Servos, and Buttons. For example, the following shows how to set up a Servo using these pins:

from gpiozero import Servo
from aiy.pins import PIN_A
from aiy.pins import PIN_B

# Create a default servo that will not be able to use quite the full range.
simple_servo = Servo(PIN_A)
# Create a servo with the custom values to give the full dynamic range.
tuned_servo = Servo(PIN_B, min_pulse_width=.0005, max_pulse_width=.0019)

See more examples using GPIOs here.

Figure 2. Voice Bonnet top
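
Similarly, here's a minimal sketch, assuming the same aiy.pins definitions, that blinks one of the bonnet's on-board LEDs through the MCU:

from gpiozero import LED

from aiy.pins import LED_1

# LED_1 is driven by the bonnet's MCU, not by a Raspberry Pi GPIO.
bonnet_led = LED(LED_1)
bonnet_led.blink()  # toggles on and off in a background thread

# Keep the script alive so you can watch the LED blink.
input('Blinking LED_1; press enter to exit...')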

Google Actions + Particle Photon (via Dialogflow)

Want to learn how to use your Voice Kit to control other IoT devices? You can start here with a Particle Photon (a Wi-Fi development kit for IoT projects) and Dialogflow (a tool for creating conversational interfaces). This tutorial will show how to make your Voice Kit communicate with Dialogflow (and Actions on Google) to control an LED light with the Photon by voice.

Get all the code for this example here.

What's included

This example ties together multiple technology platforms, so there are a few separate components included in this repo:

  • dialogflow-agent - an agent for Dialogflow
  • dialogflow-webhook - a web app to parse and react to the Dialogflow agent's webhook
  • particle-photon - a Photon app to handle web requests, and to turn the light on and off

We've included two separate web app implementations. Choose (and build on) the one that best suits your preferences.

This should be enough to get you started and on to building great things!

What you'll need

We’ll build our web app with Node.js, relying on some libraries to make life easier.

On the hardware side, you will need a Particle Photon.

It's handy to have a breadboard, some hookup wire, and a bright LED, and the examples will show those in action. However, the Photon has an addressable LED built in, so you can use just the Photon itself to test all the code presented here if you prefer.

You'll also need accounts with:

  • Dialogflow (for understanding user voice queries)
  • Google Cloud (for hosting the webhook webapp/service)
  • Particle Cloud (for deploying your Photon code and communicating with the Particle API)

If you're just starting out, or if you're already comfortable with a microservices approach, you can use the 1-firebase-functions example — it's easy to configure and requires no other infrastructure setup. If you'd prefer to run it on a full server environment, or if you plan to build out a larger application from this, use the 2-app-engine example (which can also run on any other server of your choosing).

If you've got all those (or similar services/devices) good to go, then we're ready to start!

Getting started

Assuming you have all the required devices and accounts as noted above, the first thing you'll want to do is to set up apps on the corresponding services so you can get your devices talking to each other.

Local setup

First, you'll need to clone this repo, and cd into the newly-created directory.

git clone git@github.com:google/voice-iot-maker-demo.git
cd voice-iot-maker-demo

You should see three directories (alongside some additional files):

  • dialogflow-agent - the contents of the action to deploy on Dialogflow
  • dialogflow-webhook - a web application to parse the Google Actions/Dialogflow webhook (with server-based and cloud function options)
  • particle-photon - sample code to flash onto the Particle Photon

Once you‘ve taken a look, we’ll move on!

Dialogflow

Using the Dialogflow account referenced above, you‘ll want to create a Dialogflow agent. We'll be setting up a webhook to handle our triggers and send web requests to the Particle API.

  1. Create a new agent (or click here to begin). You can name it whatever you like
  2. Select Create a new Google project as well
  3. In the Settings section (click the gear icon next to your project name), go to Export and Import
  4. Select Import from zip and upload the zip provided (./dialogflow-agent/voice-iot-maker-demo.zip)

You've now imported the basic app shell — take a look at the new ledControl intent (viewable from the Intents tab). You can have a look there now if you're curious, or continue on to fill out the app's details.

  1. Head over to the Integrations tab, and click Google Assistant.
  2. Scroll down to the bottom, and click Update Draft
  3. Go back to the General tab (in Settings), and scroll down to the Google Project details.
  4. Click on the Google Cloud link and check out the project that's been created for you. Feel free to customize this however you like.
  5. Click on the Actions on Google link, and go to 2 - App information
  6. Click Add, and fill in the details of your project there
    1. Add some sample invocations, as well as a pronunciation of your Assistant app's name
    2. Fill out the other required fields (description, picture, contact email, etc.)
  7. Scroll down to the bottom, and click Test Draft

You can now test out the conversational side of the app in the Actions on Google simulator, or by talking to your application on any Assistant-enabled device that you’re signed into.

However, if you’re following along step-by-step, it won't turn any lights on yet — we still have to set up the web service and the Photon app. Onward then!

Google Cloud

Depending on which hosting environment you want to use, cd into either ./dialogflow-webhook/1-firebase-functions or ./dialogflow-webhook/2-app-engine, and continue the setup instructions in that directory's README.md file.

IMPORTANT: Regardless of what hosting/deployment method you choose, make sure you return to the Dialogflow panel and go into the Fulfillment tab to update the URL field. Also, check that the DOMAINS field is set to "Enable webhook for all domains". Without doing these things, Dialogflow won't be able to talk to your new webhook.

Particle

Make sure the Photon is correctly set up and connected. (If it’s not configured yet, follow the steps in the Particle docs.)

You can upload your code to your Photon via the Particle web editor, the Particle Desktop IDE (based on Atom), or the Particle command-line tools.

We'll be using the CLI for this example, which you can install like so:

sudo npm i particle-cli -g

To deploy via the command line, first make sure you’re logged in:

particle login

You can find out the ID of your device by running:

particle list

Then upload the code using that ID:

particle flash [YOUR-DEVICE-ID] particle-photon/particle-blink-demo.ino

The Photon should blink rapidly while the upload is in process, and when it's done (and calmly pulsing cyan), you're ready to go.

Note: Make sure you generate a Particle access token and add that token (along with your Photon's device ID) to your config.js file.

You can make sure it all works by running the following from your terminal:

curl https://api.particle.io/v1/devices/[YOUR-DEVICE-ID]/led -d access_token=[YOUR-ACCESS-TOKEN] -d led=on

If everything is configured properly, you should see something like the following:

{
    "id": "[YOUR-DEVICE-ID]",
    "last_app": "",
    "connected": true,
    "return_value": 1
}

You should see the Photon's light come on (along with an LED on the breadboard, if you've wired one up)! Doing the same with led=off will return a 0 instead of a 1, and will (you guessed it) turn the light off.

Note: If you ever see a "return_value":-1, that's an error message — something has gone wrong somewhere.

Putting it all together

Once you’ve uploaded all the code and each service is configured, it’s time to give it all a try! You can confirm that everything went to plan by going to either your Assistant-enabled device or the Google Actions simulator, asking to talk to your app ("talk to [APP-NAME]"), and typing "turn the light on". If all goes well, your LED should turn on!

Further reading

This application is just a taste of what's possible — how far you take this framework is up to you! Here are a few resources to help you continue on your journey:

Appendix

Log Data and Debugging

You can view logs to get a better sense of what’s happening under the (cardboard) hood if you’re running the voice-recognizer as a service.

Logs

With the voice-recognizer running manually or as a service, you can view all log output using journalctl.

sudo journalctl -u voice-recognizer -n 10 -f

Example logs
[2016-12-19 10:41:46,220] INFO:audio:audio opened
Clap your hands then speak, or press Ctrl+C to quit...
[2016-12-19 10:41:54,425] INFO:trigger:clap detected
[2016-12-19 10:41:54,426] INFO:main:listening...
[2016-12-19 10:41:54,427] INFO:main:recognizing...
[2016-12-19 10:41:55,048] INFO:oauth2client.client:Refreshing access_token
[2016-12-19 10:41:55,899] INFO:speech:endpointer_type: START_OF_SPEECH
[2016-12-19 10:41:57,522] INFO:speech:endpointer_type: END_OF_UTTERANCE
[2016-12-19 10:41:57,523] INFO:speech:endpointer_type: END_OF_AUDIO
[2016-12-19 10:41:57,524] INFO:main:thinking...
[2016-12-19 10:41:57,606] INFO:main:command: light on
[2016-12-19 10:41:57,614] INFO:main:ready...
  1. Any lines before and including this one are part of the initialization and are not important
  2. Here is where the main loop starts
  3. Each successful trigger is logged
  4. Once a trigger is recognized the audio recording will be activated
  5. … and a new session with the Cloud Speech API is started
  6. For this a new token is generated to send the recognition request
  7. Feedback from the recognizer that it is listening (our request was accepted)

    See https://cloud.google.com/speech/reference/rest/v1beta1/EndpointerType

  8. Same as line 7

  9. Same as line 7
  10. Back in the application, where we dispatch the command
  11. The command that has been dispatched
  12. The app is ready and waits for a trigger again

Project complete!

You did it! Whether this was your first hackable project or you’re a seasoned maker, we hope this project has sparked new ideas for you. Keep tinkering, there’s more to come.