AR Service Dog Simulation

The Whole Process

Very soon I will own a service dog, but I have never been around or interacted with large dogs before. Although service dogs come in all shapes and sizes, Seizure Response Dogs (SRDs) tend to be on the larger side of the scale. The breeds best suited for people with seizure disorders include Golden Retrievers, Labrador Retrievers, Poodles, Goldendoodles, German Shepherds, Collies, and Newfoundlands. A seizure response dog is trained to perform various behaviors in response to a seizure. The possible tasks are nearly endless, and you can teach your dog as much or as little as you like. Since every person is different, their needs are different, and therefore the tasks their dog performs differ as well. Still, some behaviors are commonly desired: many seizure response dogs are trained to bark to alert others when a seizure occurs, to move in a way that protects the person having a seizure, or to activate an alarm or other pre-programmed device that calls for help. Some dogs are trained to lie next to their owners during a seizure to keep them from injuring themselves, while others stand next to their owners to break their fall and prevent injury; these dogs can even catch their owners as they lose control and fall over. With all that in mind, I made a large AR dog that I can command.

Not everyone has experienced owning a large dog or is comfortable being around one; this AR experience can help people prepare themselves before they get or meet one. So the how-might-we statement for this project is the following: “How might we use mixed reality to help people become more comfortable around larger dogs?”

Since learning about ARKit in class, I’ve been thinking about what it could mean for pet lovers or prospective owners who want to see what a large dog looks like before getting one. Falling in love with a dog is super easy. Sharing your home with a canine friend can bring you much joy, but one must be aware that owning a dog takes a lot of time, money, and commitment: over 15 years’ worth, in many instances. Whether your canine friend is big, medium, or small, dog ownership can be extremely gratifying, but it’s also a big responsibility. Thus, for my final project I created this AR app, which lets the user see firsthand what a large dog would look like, from a virtual standpoint.

“So why am I doing this?” I want to offer the most realistic experience of having a large dog, which should be helpful for people who are thinking about getting a dog or owning a large service dog. By interacting with a virtual one first, they will learn firsthand all the joys and responsibilities of having a large pet. Then, when they decide to get a real dog, they will do so with full knowledge of what it entails.

A) Starting With Frank:

In the past weeks I made Lil’Bob and Lil’Snowman in Cinema 4D; I used the same techniques for Lil’Frank, my practice dummy before getting deep into the project.

B) Testing other platforms besides Unity:

I tried experimenting with different SDKs, but unfortunately nothing seemed to work with Lil’Frank.

C) Purchasing Athena from the Asset Store:

I purchased the dog from the Asset Store because I wanted it to behave like a real-life dog: one with a somewhat realistic, friendly disposition that is obedient when clicked. Although I have worked in Cinema 4D for a while, I could not achieve the level of rendering and refinement of the models available on the Asset Store. Moreover, going into this project I wanted the user to command the dog with speech, but unfortunately I couldn’t make that work (I will talk more about that in the challenges I faced), so I scripted for a bit, and the dog’s behavior is now completely driven by the way each user interacts with it, all with the click of buttons.

D) Scripting for UI Buttons:

  • Step 1: In the Hierarchy, right-click > scroll to UI > add a Button.
  • Step 2, Making a Button: The Button control responds to a click from the user and is used to initiate or confirm an action. Familiar examples include the Submit and Cancel buttons used on web forms.
  • Step 3, Button Image: The UI Image component is the main graphic element of Unity’s UI system, used for everything from button and panel backgrounds to slider handles and speedometers. For this part I colored my buttons pink, with a darker pink assigned for when a button is clicked.
  • Step 4, “On Click” Scripting: Select the button in your Hierarchy; there should be a Button component in the Inspector. Add a new entry (the + symbol) in the On Click() list. Drag the GameObject with the attached script into the new entry field (None (Object)); after that you will be able to select a function that is called every time you click the button. Copy and paste the following script:
// To use this example, attach this script to an empty GameObject.
// Create three buttons (Create>UI>Button). Next, select your
// empty GameObject in the Hierarchy and click and drag each of your
// Buttons from the Hierarchy to the Your First Button, Your Second Button
// and Your Third Button fields in the Inspector.
// Click each Button in Play Mode to output their message to the console.
// Note that click means press down and then release.

using UnityEngine;
using UnityEngine.UI;

public class Example : MonoBehaviour
{
    // Make sure to attach these Buttons in the Inspector
    public Button m_YourFirstButton, m_YourSecondButton, m_YourThirdButton;

    void Start()
    {
        // Calls the TaskOnClick/TaskWithParameters/ButtonClicked method when you click the Button
        m_YourFirstButton.onClick.AddListener(TaskOnClick);
        m_YourSecondButton.onClick.AddListener(delegate { TaskWithParameters("Hello"); });
        m_YourThirdButton.onClick.AddListener(() => ButtonClicked(42));
        m_YourThirdButton.onClick.AddListener(TaskOnClick);
    }

    void TaskOnClick()
    {
        // Output this to console when Button1 or Button3 is clicked
        Debug.Log("You have clicked the button!");
    }

    void TaskWithParameters(string message)
    {
        // Output this to console when Button2 is clicked
        Debug.Log(message);
    }

    void ButtonClicked(int buttonNo)
    {
        // Output this to console when Button3 is clicked
        Debug.Log("Button clicked = " + buttonNo);
    }
}

*Link to script in resources
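Step 3’s pink tinting can also be done from code rather than in the Inspector. This is a minimal sketch, not part of the original project: the `m_DogButton` field and the exact color values are assumptions, and the Button’s ColorBlock is a struct, so it must be copied, modified, and reassigned.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Attach to any GameObject and assign the button in the Inspector.
public class ButtonTint : MonoBehaviour
{
    public Button m_DogButton; // assign in the Inspector (placeholder name)

    void Start()
    {
        // ColorBlock holds the tint used for each button state.
        ColorBlock colors = m_DogButton.colors;
        colors.normalColor = new Color(1f, 0.75f, 0.8f);    // pink
        colors.pressedColor = new Color(0.8f, 0.3f, 0.5f);  // darker pink when clicked
        m_DogButton.colors = colors; // reassign, since ColorBlock is a value type
    }
}
```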

Step 5, Scripting for Animator.Play: Create a new C# script, paste the code below, and add the script to the object that owns the animation. Then select that object in the Hierarchy, go to your UI button in the Inspector, and add an entry to its On Click() list that calls the function enabling the Animator. As written, the first time you click the button the object is animated, but the second time it won’t be; to fix this, add another function to the list so that every time you click the button it plays the animation.

using UnityEngine;

// Press the space key in Play Mode to switch to the Bounce state.

public class Move : MonoBehaviour
{
    private Animator anim;

    void Start()
    {
        anim = GetComponent<Animator>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            if (anim != null)
            {
                // Play Bounce, but start a quarter of the way through
                anim.Play("Bounce", 0, 0.25f);
            }
        }
    }
}

*Link to script in resources
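To tie Steps 4 and 5 together, the same Animator.Play call can be driven from a button’s On Click() list instead of the space key. A minimal sketch; the class name, the public method, and the "Sit" animation state are placeholders for whatever your dog’s Animator states are actually called:

```csharp
using UnityEngine;

// Attach to the dog (the object that owns the Animator), then drag this
// object into the button's On Click() list and pick DogCommands.PlaySit.
public class DogCommands : MonoBehaviour
{
    private Animator anim;

    void Start()
    {
        anim = GetComponent<Animator>();
    }

    // Called by the UI button's On Click() event; must be public to appear there.
    public void PlaySit()
    {
        if (anim != null)
        {
            anim.Play("Sit"); // play the "Sit" state on the base layer
        }
    }
}
```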

E) ARKit

  • Step 1: In the Hierarchy, right-click > scroll to XR (if you don’t see it, install AR Foundation through Window > Package Manager) > add an AR Session and an AR Session Origin > the AR Session Origin comes with an AR Camera > delete the original Main Camera > assign the AR Session Origin’s camera as the main camera.
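Once the AR Session and AR Session Origin are in place, a common next step is tap-to-place: raycast from the touch position against the planes ARKit detects and position the model there. This is a hedged sketch using AR Foundation’s ARRaycastManager; the `dogPrefab` field and the script itself are illustrations, not part of the original project.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Attach to the AR Session Origin (alongside an ARRaycastManager component).
public class TapToPlaceDog : MonoBehaviour
{
    public GameObject dogPrefab; // assign the dog model in the Inspector (placeholder)

    private ARRaycastManager raycastManager;
    private GameObject spawnedDog;
    private static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Awake()
    {
        raycastManager = GetComponent<ARRaycastManager>();
    }

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Raycast the touch point against detected planes.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose hitPose = hits[0].pose;
            if (spawnedDog == null)
                spawnedDog = Instantiate(dogPrefab, hitPose.position, hitPose.rotation);
            else
                spawnedDog.transform.position = hitPose.position; // move the dog
        }
    }
}
```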

F) ARFoundation : Transferring Project to Xcode

This is a quick and simple video that I used; it demonstrates the whole process. To learn more about transferring your project to Xcode, Click Here.

At first I proposed to make an AR dog that I could command with my speech to sit, lie down, walk, run, and so on, but that didn’t happen. I tried downloading IBM’s Watson API and followed four different YouTube tutorials as well as the instructions in IBM’s GitHub repository, and unfortunately failed. However, despite the trial and error, I later found out after much research that:

IBM’s Watson is no longer supported in Unity (you can still download it, create an account, and go through the whole process presented in the YouTube videos, but you might bump into many issues along the way).

If you want to try it out yourself, this is the GitHub repository

Augmented reality is a technology that superimposes a computer-generated image on a user’s view of the real world, providing a composite view: the integration of digital information with the user’s environment in real time. Augmented reality offers many benefits for any business, project, or idea, including the following:

  • Sparking or enhancing creativity
  • Providing a new kind of experience
  • Previewing plants, animals (wild and domestic), or products visually
  • Building real-time data experiences
  • Enjoying immersive, experiential moments
  • Enhancing user as well as brand experiences

Despite what happened with the IBM API, my goal for the final product is to make the app as fun and interactive as possible. In the near future, I hope Unity brings back support for the IBM Watson SDK so I can enhance the experience by making the dog able to recognize speech.

Besides prepping people for a large dog, this experience could lead to real-world gains for people with phobias of, or anxiety around, larger dog breeds. I believe it has the potential to work as well as traditional exposure therapy, which slowly subjects patients to what causes their anxiety. Encouraging patients to interact with a large virtual dog standing in front of them could help lower the anxiety they feel toward larger dog breeds.

*(I just want to mention that I have no phobias or anxiety toward larger dogs; this project is not only about me. The whole immersive experience is meant to be inclusive.)

Don’t like reading? Here’s a short video presentation of my project… :)

Strategic Communication | Designer | Design Thinker | Researcher | www.giaalmuaili.com