Voice Kit

Project Overview

This project demonstrates how to get a natural language recognizer up and running and connect it to the Google Assistant, using your AIY Projects voice kit. Along with everything the Google Assistant already does, you can add your own question and answer pairs. All in a handy little cardboard cube, powered by a Raspberry Pi.

Don’t own a kit? You can also integrate the Google Assistant into your own hardware by following the official Google Assistant SDK guides, or you can read below for links to purchase the AIY kit.

Assembling the kit and setting up the Google Assistant SDK should take about an hour and a half.

Get the kit

No more stock? Join the waitlist and we'll let you know as soon as they are back.

List of Materials

Open the box and verify you have all of the necessary components in your kit. You’ll also need a couple of tools for assembly.

In your kit

  1. Voice HAT accessory board (×1)
  2. Voice HAT microphone board (×1)
  3. Plastic standoffs (×2)
  4. 3” speaker (wires attached) (×1)
  5. Arcade-style push button (×1)
  6. 4-wire button cable (×1)
  7. 5-wire daughter board cable (×1)
  8. External cardboard box (×1)
  9. Internal cardboard frame (×1)
  10. Lamp (×1)
  11. Micro-switch (×1)
  12. Lamp holder (×1)

Not included

  1. Raspberry Pi 3 (×1)
  2. SD card (×1)
  3. Size “00” Phillips screwdriver (×1)
  4. Scotch tape (×1)

Assembly Guide

This guide shows you how to assemble the AIY Projects voice kit.

The kit is composed of a simple cardboard form, a Raspberry Pi board, the Voice HAT (an accessory board for voice recognition), and a few common components.

By the end of this guide, your voice project will be assembled with the Raspberry Pi board and other components connected and running. Then you’ll move on to the User’s Guide to bring it to life!


Get the Voice Kit SD Image

You’ll need to download the Voice Kit SD image using another computer. Both of the next steps can take several minutes for your computer to complete, so while you're waiting, get started on "Assemble the hardware" in the next step.

  1. Get the Voice Kit SD image

  2. Write the image to an SD card using a card writing utility (Etcher.io is a popular tool for this)

Building on Android Things

Our default platform and instructions are for Raspbian Linux. However, you can also build on Android Things, an IoT solution using Android APIs and Google services. Skip down to the Maker's Guide for instructions.


Assemble the hardware

Assemble the hardware image 1

Find your Raspberry Pi 3 and the two plastic standoffs that came with your kit.

Insert the standoffs into the two yellow holes opposite the 40-pin box header on your Raspberry Pi 3. They should snap into place.

Assemble the hardware image 2

Take your Voice HAT accessory board and attach it to the Raspberry Pi 3 box header.

Gently press down to make sure the pins are secure. On the other side, press down to snap the spacers into place.

Assemble the hardware image 3

Find the speaker with the red and black wires attached. Insert the speaker’s red wire end into the “+” terminal on the Voice HAT blue screw connector.

Do the same for the black wire end into the “-” terminal. At this point, they should be sitting there unsecured.

Assemble the hardware image 4

Now screw the wires in place with a Phillips “00” screwdriver.

Gently tug on the wires to make sure they’re secure.

Assemble the hardware image 5

Find the 4-wire button cable: it has a white plug on one end and four separate wires with metal contacts on the other.

Insert the plug into one of the white connectors on the Voice HAT board.

Assemble the hardware image 6

Find the Voice HAT Microphone board and the 5-wire daughter board cable from your kit (pictured).

Insert the 5-wire plug into the Microphone board.

Assemble the hardware image 7

Connect the Microphone board to the Voice HAT board using the other white connector on the Voice HAT board.

Step complete

Well done! Set aside your hardware for now.


Fold the cardboard

3.1. Build the box

Build the box image 1

Now let’s build the box. Find the larger cardboard piece with a bunch of holes on one side (pictured).

Fold along the creases, then find the side with four flaps and fold the one marked FOLD 1.

Build the box image 2

Do the same for the other folds, tucking FOLD 4 underneath to secure it in place.

Easy! Now set it aside.

3.2. Build the frame

Build the box image 1

Find the other cardboard piece that came with your kit (pictured). This will become the inner frame that holds the hardware.

Fold the flaps labeled 1 and 2 along the creases.

Build the box image 2

The flap above the 1 and 2 folds has a U-shaped cutout. Push it out.

Build the box image 3

Then fold the rest of the flap outward.

Fold the section labeled FOLD UP so that it’s flush with the surface you’re working on. There’s a little notch that folds behind the U-shaped flap to keep it in place.

Build the box image 4

The U-shaped flap should lie flush with the box side.

At this point, the cardboard might not hold its shape. Don’t worry: it’ll come together once it’s in the box.

Build the box image 5

Find your speaker (which is now attached to your Raspberry Pi 3).

Slide the speaker into the U-shaped pocket on the cardboard frame.

Build the box image 6

Turn the cardboard frame around.

Take the Pi + Voice HAT hardware and slide it into the bottom of the frame below flaps 1 + 2 (pictured).

The USB ports on the Pi should be exposed from the cardboard frame.


Put it all together


If your SD card is already inserted into the Pi, remove the SD card before sliding the hardware into the cardboard or it may break.

Put it all together image 1

Let’s put it all together!

Take the cardboard box you assembled earlier and find the side with the seven speaker holes.

Slide the cardboard frame + hardware into the cardboard box, making sure that the speaker is aligned with the box side with the speaker holes.

Put it all together image 2

Once it’s in, the Pi should be sitting on the bottom of the box.

Make sure your wires are still connected.

Put it all together image 3

Check that your ports are aligned with the cardboard box holes.

Put it all together image 4

Find your arcade button. There should be a button, a spacer, and a nut.

If they’re connected, unscrew the nut and spacer from the button.

Put it all together image 5

Insert the button into the top flap of the cardboard box.

The pushable button side should face outward.

Put it all together image 6

Screw on the spacer and then the nut to secure the button in place.

Put it all together image 7

Next, find your button lamp components:

  • Lamp
  • Black micro-switch
  • Black lamp holder

Put it all together image 8

Insert the lamp into the black lamp holder.

Put it all together image 9

Then attach the lamp holder to the micro-switch.

Put it all together image 10

Insert the completed lamp into the button.

Put it all together image 11

Secure the lamp in place by carefully rotating it clockwise. It may take some force to lock it in place.

Put it all together image 12

Find the four colored wires with metal contacts that you previously connected to the Voice HAT board.

Following the picture above, attach the four metal contacts to the micro-switch.

Important: Wire color matters! Make sure each wire is attached to the same terminal as shown in the picture.

Put it all together image 13

Find the Voice HAT Microphone board.

The Microphone board sits below the button on the top flap.

Before you tape it down, check the other side of the cardboard flap to align the microphones with the two cardboard holes (see the picture in the next step).

Using some trusty Scotch tape, tape the board to the top flap of the cardboard.

Put it all together image 14

Turn it around and double check that your microphones are aligned with the cardboard holes.

Put it all together image 15

That’s it! Close the box up.

Put it all together image 16

Look at that! The device is assembled and ready to be used. Next you’ll connect it and boot it up.


Connect and boot the device

5.1. Connect peripherals

Plug peripherals in

Now that your box is assembled, plug your peripherals in:

  1. USB Keyboard
  2. USB Mouse
  3. HDMI Monitor

5.2. Boot the device


The SD card can be tricky to remove after it’s been inserted. We recommend using small needle-nose pliers to remove it, or attaching tape to the SD card before inserting it so you can pull on the tape to remove it.

Build the box image 1

Insert your SD card (the one with the Voice Kit SD image) into the slot on the bottom side of the Raspberry Pi board. The SD card slot should be accessible through a cutout provided in the external cardboard form.

With the SD card in place and peripherals connected, plug in the power supply and the Raspberry Pi will begin booting up.

If you don’t see anything on your monitor, or you see "Openbox Syntax Error", check the troubleshooting guide in the appendix.

5.3. Connect to the internet

Build the box image 1

Click the network icon in the upper right corner of the Raspberry Pi desktop. Choose your preferred WiFi access point.


Verify it works

Once booted, the small red LED in the center of the Voice HAT and the LED inside the arcade button should both indicate the device is running by emitting a slow pulse. If you don’t see the LED pulse, check the troubleshooting guide in the appendix.

6.1. Check audio

Check audio image 1

This script verifies the audio input and output components on the Voice HAT accessory board are working correctly. Double-click the Check Audio icon on your desktop.

The script runs through each step listed below. Note: some of the steps require voice input, which you will be prompted for, so watch closely!

Check audio image 2

Follow along with the script. If everything is working correctly, you’ll see a message that says The audio seems to be working.

If you see an error message, follow the message details to resolve the issue and try again.

6.2. Check WiFi

Check WiFi image 1

This script verifies that your WiFi is configured and working properly on the Raspberry Pi board. Double-click the Check WiFi icon on your desktop.

When you double-click the script, it will check that your Raspberry Pi is connected to the internet over WiFi.

Check WiFi image 2

If everything is working correctly, you’ll see a message that says The WiFi connection seems to be working.

If you see an error, click on the network icon at the top right and verify you are connected to a valid access point.
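Under the hood, a connectivity check like this can be as simple as trying to open a socket to a well-known host. The sketch below is purely illustrative (it is not the kit's actual Check WiFi script):

```python
import socket

def internet_reachable(host='8.8.8.8', port=53, timeout=3.0):
    """Return True if we can open a TCP connection to a well-known
    host (here, a public DNS server), False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if internet_reachable():
    print('The WiFi connection seems to be working')
else:
    print('No internet connection detected')
```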

Wrap up

Congratulations on assembling the voice recognizer device and verifying the components are set up properly. Now you’ll need to connect the device to Google Cloud Platform.

To do that, open the User’s Guide and follow the instructions provided.



Troubleshooting Tips

  1. A red LED on the Raspberry Pi near the power connector should light. If it doesn't, unplug the power, unplug the connector to the microphone, and power-up again. If it lights after powering-up without the microphone, then the microphone board may be defective.
  2. If the lamp in the button doesn't light up, it might be the wrong way around. Take the lamp out of the button (undo steps 8 to 11), turn it 180°, and put it all back together. If it still doesn't light up, check that the wire colors are the same as the picture in step 12.
  3. If you don't see anything on your monitor, make sure the HDMI and power cables are fully inserted into the Raspberry Pi.
  4. If you see "Openbox Syntax Error", you'll need to rewrite the image to the SD card and try booting the device again.

User’s Guide

Congrats on assembling your voice recognizer device. Now, let’s bring it to life!

The voice recognizer uses the Google Assistant SDK to recognize speech, along with a local Python application that evaluates local commands. You can also use the Google Cloud Speech API. By the end of this guide, your voice recognizer will let you talk to the Google Assistant. Then check out the Maker’s guide for creative extensions to inspire you to use voice capabilities in your future projects.


Setting up your device

1.1. Connect to Google Cloud Platform

To try the Google Assistant API, you need to first sign into Google Cloud Platform (GCP) and then enable the API.

Log into GCP

Log into GCP 1

Using your voice recognizer device, open up an internet browser and go to the Cloud Console

I’ve never used Google Cloud Platform before

Use your Google account to sign in. If you don’t have one, you’ll need to create one. The Google Assistant API is free for personal use.

Create a project

GCP uses projects to organize things. Create one for your voice recognizer box.

Create a project 1

In the Cloud Console, click the drop-down button to the right of the “Google Cloud Platform” logo

Create a project 2

From the dropdown, click Create project

Create a project 3

Enter a project name and click Create

Create a project 4

After your project is created, make sure the drop-down has your new project name displayed

1.2. Turn on the Google Assistant API

Turn on the Google Assistant API image 1

In the Cloud Console, enable the "Google Assistant API".

Turn on the Google Assistant API image 2

In the Cloud Console, create an OAuth 2.0 client by going to API Manager > Credentials

Turn on the Google Assistant API image 3

Click Create credentials and select OAuth client ID

  • If this is your first time creating a client ID, you’ll need to configure your consent screen by clicking Configure consent screen. You’ll need to name your app (this name will appear in the authorization step)

Turn on the Google Assistant API image 4

Select Other, enter a name to help you remember your credentials, then click Create.

Turn on the Google Assistant API image 5

A dialog window will pop up. Click OK. In the Credentials list, find your new credentials and click the download icon (Download icon) on the right.

Note: if you don't see the download icon, try expanding the width of your browser window or zooming out.

Turn on the Google Assistant API image 6

Find the JSON file you just downloaded (client_secrets_XXXX.json) and rename it to assistant.json. Then move it to /home/pi/assistant.json

Turn on the Google Assistant API image 7

On your desktop, click Start dev terminal and enter sudo systemctl stop voice-recognizer

Turn on the Google Assistant API image 8

Go to the Activity Controls panel. Make sure to log in with the same Google account as before.

  • Turn on the following:
    1. Web and app activity
    2. Location history
    3. Device information
    4. Voice and audio activity

  1. You’re ready to turn it on: follow the manual start instructions under Using your device below

    • You can also SSH from another computer. You’ll need to use ssh -X to handle authentication through the browser when starting the example for the first time.
  2. Authorize access to the Google Assistant API, when prompted

    • Make sure you're following the manual start instructions the first time; if you run as a service, you won't be prompted for authorization.
  3. Try an example query like "how many ounces in 2 cups" or "what's on my calendar?" and the Assistant should respond!

    • If the voice recognizer doesn't respond to your button presses or queries, you may need to restart.
    • If the response is Actually, there are some basic settings that need your permission first..., perform step 8 again, being sure to use the same account that you used for the authorization step.

Using your device

The voice recognizer doesn't run automatically by default. You can either run it as a service in the background or, if you'd like to make changes to the code, run it manually. Running it manually is required when using the Assistant API for the first time, and it lets you see some diagnostic output as well.

3.1. Manually start the application

For the device to begin listening for your queries, start the voice recognizer app by double-clicking "Start dev terminal" on the Desktop and entering:

python3 src/main.py

When you are done, press Ctrl-C to end the application.

3.2. Manage the service

As an alternative to running the application manually, you can run it as a system service. However, running it manually may be better when you're making code changes, since restarts are faster.

You start the service by entering sudo systemctl start voice-recognizer. You can stop the service by entering sudo systemctl stop voice-recognizer.

If you started with the preloaded system image (on an SD card), the voice recognition service will not start on boot. If you would like the service to start automatically on boot, run sudo systemctl enable voice-recognizer once.

To learn about other commands, like stop and disable, see the systemctl command manual.

3.3. LED status codes

Your box has a range of responses that it displays through the bright LED inside the arcade-style button mounted on top of the device. The LED signals can be configured to your preference by modifying ~/voice-recognizer-raspi/src/led.py.

Verify your device is up and running when it displays a slow pulse pattern. Once you see this, you’re ready to start speaking queries to the device.

LED signal Description
Pulse The device is starting up, or the voice recognizer has not been started yet
Blink (every few seconds) The device is ready to be used
On The device is listening
Pulse The device is thinking or responding
Pulse → off The device is shutting down
3 blinks → pause There’s an error

Extending the project

There’s a lot you can do with this project beyond the Assistant API. If you’re the curious type, we invite you to explore the Maker’s Guide for more ideas on how to hack this project, as well as how to use the Cloud Speech API as an alternative to the Assistant API.

Maker’s Guide

This is a hackable project, so we encourage you to make this project your own! We’ve included a whole section on replacing the Google Assistant SDK with the Cloud Speech API to give you even more options. This guide gives you some creative extensions, settings, and even a different voice API to use.

We hope this project has sparked some new ideas for you.


Software extensions

Below are some options to change the device behavior and suggestions for extensions if you want to hack further.

1.1. Source code

If you’re using the SD card image provided, the source for the voice-recognizer app is already installed on your device. You can browse the Python source code at $HOME/voice-recognizer-raspi/

Alternatively, the project source is available on GitHub at aiyprojects-raspbian.

1.2. Config files

The application can be configured by adjusting the properties found in $HOME/.config/voice-recognizer.ini. That file lets you configure the default activation trigger and which API to use for voice recognition. Try adding additional properties for your own extensions! And don’t worry: If you mess it up, there’s a backup copy kept in $HOME/voice-recognizer-raspi/config.
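If you want to experiment with the file from Python, the standard configparser module can read it. The section and key names below are made up for illustration; check your own voice-recognizer.ini for the real ones:

```python
import configparser

# Hypothetical sketch: the section and property names here are
# illustrative, not the actual keys in voice-recognizer.ini.
sample = """
[settings]
trigger = clap
cloud-speech = false
"""

config = configparser.ConfigParser()
config.read_string(sample)
print(config.get('settings', 'trigger'))              # clap
print(config.getboolean('settings', 'cloud-speech'))  # False
```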

1.3. Change the activation trigger

By default, the voice recognizer activates after a single button press (see ~/voice-recognizer-raspi/src/triggers/*.py), but you can change the activation trigger when you manually start the application by including the -T flag. As another example, we’ve included an activation trigger that responds to a single clap or snap of your fingers.

python3 src/main.py -T {trigger-name}

  • The -T parameter selects a trigger

trigger-name Description
gpio Activates by pressing the arcade button
clap Activates from a single clap or snap

1.4. Create a new activation trigger

You can add additional triggers beyond these examples by modifying the code with your own ideas. To add a new activation trigger, you’ll need to create a new source file in the trigger folder, implement a subclass of Trigger (see ~/voice-recognizer-raspi/src/triggers/trigger.py) and add it to the command-line options.

Below is the code for the GPIO trigger for reference.

gpio trigger

import time

import RPi.GPIO as GPIO

from triggers.trigger import Trigger


class GpioTrigger(Trigger):

    '''Detect edges on the given GPIO channel and call the callback.'''

    DEBOUNCE_TIME = 0.05

    def __init__(self, channel, polarity=GPIO.FALLING,
                 pull_up_down=GPIO.PUD_UP):
        super(GpioTrigger, self).__init__()
        self.channel = channel
        self.polarity = polarity
        if polarity not in [GPIO.FALLING, GPIO.RISING]:
            raise ValueError('polarity must be GPIO.FALLING or GPIO.RISING')
        self.expected_value = polarity == GPIO.RISING
        self.event_detect_added = False
        GPIO.setup(channel, GPIO.IN, pull_up_down=pull_up_down)

    def start(self):
        if not self.event_detect_added:
            GPIO.add_event_detect(self.channel, self.polarity,
                                  callback=self.debounce)
            self.event_detect_added = True

    def debounce(self, _):
        '''Check that the input holds the expected value for the debounce
        period, to avoid false triggers on short pulses.'''
        start = time.time()
        while time.time() < start + self.DEBOUNCE_TIME:
            if GPIO.input(self.channel) != self.expected_value:
                return
        # The input held steady for the whole debounce period: fire.
        self.callback()
Build on Android Things

Get the Google Assistant running on Android Things with these instructions.


Custom Voice User Interface

3.1. Change to the Cloud Speech API

Want to try another API? Follow the instructions below to try the Cloud Speech API, which recognizes your voice speech and converts it into text. The Cloud Speech API supports 80 languages, long audio clips, and the ability to add phrase hints for processing audio.

Turn on billing

Why do I need to turn on billing?

The voice recognizer cube uses Google’s Cloud Speech API. If you use it for less than 60 minutes a month, it’s free. Beyond that, the cost is $0.006 per 15 seconds. Don’t worry: you’ll get a reminder if you go over your free limit.

  1. In the Cloud Console, open the navigation menu Navigation menu
  2. Click Billing
  3. If you don’t have a billing account, then click New billing account and go through the setup
  4. Return to the main billing page, then click the My projects tab.
  5. Find the name of your new project. Make sure it’s connected to a billing account.
  6. To connect or change the billing account, click the three-dot button Navigation menu, then select Change billing account

Enable the API

  1. In the console, open the navigation menu and click API Manager
  2. Click ENABLE API
  3. Enter “Cloud Speech API” into the search bar, then click the name
  4. Click ENABLE to turn on the API

Create a service account and credentials

  1. Go to the left-hand navigation menu, click API Manager and then click Credentials
  2. Click Create credentials and then click Service account key from the list
  3. From the “Service account” dropdown, click New service account
  4. Enter a name so that you’ll know this is for your voice recognizer stuff, like “Voice credentials”
  5. Select the Project viewer role
  6. Use the JSON key type
  7. Click Create
  8. Your credentials will download automatically. The file name contains your project name and some numbers; locate it and rename it to cloud_speech.json
  9. Open your workstation’s terminal. Move your credentials.json file to the correct folder by entering the following:

    (using the local file system)
    cp /path/to/downloaded/credentials.json ~/cloud_speech.json

    (from another machine)
    scp /path/to/downloaded/credentials.json pi@raspberrypi.local:~/cloud_speech.json

Check that it works correctly

On your desktop, double-click the Check Cloud icon. Follow along with the script. If everything is working correctly, you’ll see this:

The cloud connection seems to be working

If you see an error message, follow the details and try the Check Cloud script again.

3.2. Voice commands

To issue a voice command, press the voice recognizer button once to activate the voice recognizer and then speak loudly and clearly. You’ll know the device is listening for a voice command when the LED in the arcade button is steadily lit.

We’ve included a few example voice commands in our local dictionary as a starting point, but we encourage you to explore the code and add your own.

Voice command Response
Hello Hello to you too
What time is it? It is <time>. E.g. "It is ten to nine."
Tell me a joke (listen for the joke response)
Volume up Increase the volume by 10% and say the new level
Volume down Decrease the volume by 10% and say the new level
Max volume Increase volume to 100%

3.3. Create a new voice command (or action)

You can create new actions and link them to new voice commands in ~/voice-recognizer-raspi/src/action.py.

Example: connect and control another LED

To control an LED that you've connected to GPIO 4 (Driver0), add the following class to ~/voice-recognizer-raspi/src/action.py below the comment "Implement your own actions here":

# =========================================
# Makers! Implement your own actions here.
# =========================================

import RPi.GPIO as GPIO

class GpioWrite(object):

    '''Write the given value to the given GPIO.'''

    def __init__(self, gpio, value):
        GPIO.setup(gpio, GPIO.OUT)
        self.gpio = gpio
        self.value = value

    def run(self, command):
        GPIO.output(self.gpio, self.value)

Then add the following lines to ~/voice-recognizer-raspi/src/action.py below the comment "Add your own voice commands here":


    # =========================================
    # Makers! Add your own voice commands here.
    # =========================================

    actor.add_keyword('light on', GpioWrite(4, True))
    actor.add_keyword('light off', GpioWrite(4, False))
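The keyword dispatch that actor.add_keyword performs can be approximated in plain Python. The sketch below illustrates the pattern only; it is not the actual implementation in action.py, and the class and method names besides add_keyword are made up:

```python
class SpeakAction(object):
    '''Toy action that records what it would say.'''
    def __init__(self, text):
        self.text = text
        self.said = None
    def run(self, command):
        self.said = self.text

class Actor(object):
    '''Minimal keyword dispatcher: run every action whose keyword
    appears in the recognized command.'''
    def __init__(self):
        self.keywords = []
    def add_keyword(self, keyword, action):
        self.keywords.append((keyword.lower(), action))
    def handle(self, command):
        handled = False
        for keyword, action in self.keywords:
            if keyword in command.lower():
                action.run(command)
                handled = True
        return handled

actor = Actor()
hello = SpeakAction('Hello to you too')
actor.add_keyword('hello', hello)
print(actor.handle('hello there'))  # True
print(hello.said)                   # Hello to you too
```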

3.4. Use TensorFlow on device

Help your fellow makers experiment with on-device TensorFlow models by donating short speech recordings. This small web app will collect short snippets of speech, and upload them to cloud storage. We'll then use these recordings to train machine learning models that will eventually be able to run on-device, no Cloud needed.



Hardware extensions

4.1. Connecting additional sensors


GPIO description
Function GPIO Description
Button 23 button is active low
LED 25 LED is active high
Driver0/GPIO4 4 500mA drive limit, can be used as GPIO
Driver1/GPIO17 17 500mA drive limit, can be used as GPIO
Driver2/GPIO27 27 500mA drive limit, can be used as GPIO
Driver3/GPIO22 22 500mA drive limit, can be used as GPIO
Servo0/GPIO26 26 25mA drive limit, can be used as GPIO
Servo1/GPIO6 6 25mA drive limit, can be used as GPIO
Servo2/GPIO13 13 25mA drive limit, can be used as GPIO
Servo3/GPIO5 5 25mA drive limit, can be used as GPIO
Servo4/GPIO12 12 25mA drive limit, can be used as GPIO
Servo5/GPIO24 24 25mA drive limit, can be used as GPIO
I2S 20, 21, 19 used by Voice HAT ALSA driver, not available to user
Amp Shutdown 16 used by Voice HAT ALSA driver, not available to user
I2C 2, 3 available as GPIO or I2C via Raspbian drivers
SPI 7, 8, 9, 10, 11 available as GPIO or SPI via Raspbian drivers
UART 14, 15 available as GPIO or UART via Raspbian drivers


5.1. Log Data and Debugging

You can view logs to get a better sense of what’s happening under the (cardboard) hood.

With the voice-recognizer running manually or as a service, you can view all log output using journalctl.

sudo journalctl -u voice-recognizer -n 10 -f

Example logs
Clap your hands then speak, or press Ctrl+C to quit...
[2016-12-19 10:41:54,425] INFO:trigger:clap detected
[2016-12-19 10:41:54,426] INFO:main:listening...
[2016-12-19 10:41:54,427] INFO:main:recognizing...
[2016-12-19 10:41:55,048] INFO:oauth2client.client:Refreshing access_token
[2016-12-19 10:41:55,899] INFO:speech:endpointer_type: START_OF_SPEECH
[2016-12-19 10:41:57,522] INFO:speech:endpointer_type: END_OF_UTTERANCE
[2016-12-19 10:41:57,523] INFO:speech:endpointer_type: END_OF_AUDIO
[2016-12-19 10:41:57,524] INFO:main:thinking...
[2016-12-19 10:41:57,606] INFO:main:command: light on
[2016-12-19 10:41:57,614] INFO:main:ready...
  1. Any lines before and including this one are part of the initialization and are not important
  2. Here is where the main loop starts
  3. Each successful trigger is logged
  4. Once a trigger is recognized the audio recording will be activated
  5. … and a new session with the Cloud Speech API is started
  6. For this a new token is generated to send the recognition request
  7. Feedback from the recognizer that it is listening (our request was accepted)

    See https://cloud.google.com/speech/reference/rest/v1beta1/EndpointerType

  8. Same as line 7

  9. Same as line 7
  10. Back in the application, where we dispatch the command
  11. The command that has been dispatched
  12. The app is ready and waits for a trigger again
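Each log line follows a fixed pattern (timestamp, level, tag, message), so it is easy to pick apart programmatically. A small, hypothetical parsing sketch:

```python
import re

# Pattern matching lines like:
# [2016-12-19 10:41:57,606] INFO:main:command: light on
LOG_RE = re.compile(
    r'\[(?P<timestamp>[^\]]+)\] (?P<level>\w+):(?P<tag>\w+):(?P<message>.*)')

line = '[2016-12-19 10:41:57,606] INFO:main:command: light on'
match = LOG_RE.match(line)
print(match.group('level'))    # INFO
print(match.group('tag'))      # main
print(match.group('message'))  # command: light on
```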

Project complete!

You did it! Whether this was your first hackable project or you’re a seasoned maker, we hope this project has sparked new ideas for you. Keep tinkering, there’s more to come.