AR Service Dog Simulation
UI Buttons & Animation
Hello and welcome back! There’s a lot to talk about here, so buckle up.
My Attempt with IBM’s Unity SDK (Voice Recognition & Speech to Text)
(The IBM Cloud Unity SDK Core is the core project for Unity SDKs generated using the IBM OpenAPI SDK generator.)
First things first, I tried downloading IBM’s Watson API and followed four different YouTube tutorials as well as the instructions in IBM’s GitHub repository, and unfortunately failed. Despite all the trial and error, after much research I later found out that:
A) IBM’s Watson is no longer supported by Unity (you can still download it, create an account, and go through the whole process shown in the YouTube videos, but you will run into many issues, especially if your version of Unity is up to date).
B) It won’t work properly on macOS Catalina; I don’t have an explanation for this.
C) …I’ve squinted my eyes way too much during the whole coding process.
The Steps I Took:
I added the SDK Core to the Assets directory of my Unity project.
I configured the build settings in Unity (File > Build Settings) to target any platform except Web Player/WebGL, since the IBM Watson SDK for Unity does not support the Unity Web Player.
I am using Unity 2019.3.9, so I had to set the Scripting Runtime Version and Api Compatibility Level in the Build Settings to .NET 4.x Equivalent, because the SDK needs access to security options in order to enable TLS 1.2. As soon as I did that, the SDK started right up and I got super excited (but little did I know).
I created an IBM Cloud account, entered my credentials in Unity, and coded as usual. It seemed to work fine at first, but whenever I said something, it wouldn’t pick up my speech and it wouldn’t animate the dog like it did in the videos. I’m not sure whether that’s because the YouTube videos are outdated, but I did all I could. Still, I won’t stop just because it isn’t working.
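For reference, here is a minimal sketch of the kind of credential and service setup the tutorials walk through, based on the pattern the IBM Watson Unity SDK examples use. The exact namespaces and class names may differ between SDK versions, and the API key and service URL are placeholders you would copy from your own IBM Cloud dashboard:

```csharp
using System.Collections;
using UnityEngine;
using IBM.Cloud.SDK.Authentication.Iam;
using IBM.Watson.SpeechToText.V1;

public class WatsonSetup : MonoBehaviour
{
    // Placeholder credentials — these come from the IBM Cloud service dashboard
    [SerializeField] private string apiKey = "YOUR_IAM_APIKEY";
    [SerializeField] private string serviceUrl = "YOUR_SERVICE_URL";

    private SpeechToTextService speechToText;

    private IEnumerator Start()
    {
        // Authenticate against IBM Cloud with the IAM API key
        var authenticator = new IamAuthenticator(apikey: apiKey);

        // Wait until the token request has completed
        while (!authenticator.CanAuthenticate())
            yield return null;

        // Create the Speech to Text service and point it at the instance URL
        speechToText = new SpeechToTextService(authenticator);
        speechToText.SetServiceUrl(serviceUrl);

        Debug.Log("Watson Speech to Text service is ready.");
    }
}
```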
- If you want to try it out yourself, this is the GitHub repository
So where’s your project going now?
Definitely not south… it’s not over yet. I still want to give my users an interactive experience, so what I want to do instead at this point is utilize Unity’s UI package.
If I can’t use my voice to command the AR dog, then having a few buttons for people to tap while face to face with it is at least a start. Perhaps in the near future, if Watson support comes back to Unity, I’ll be able to add voice commands.
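To give a sense of what I mean, here is a rough sketch of how a Unity UI button can be wired to a command at runtime. The component and method names are my own placeholders, not anything final from the project:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: hook a UI Button up to a command method at runtime.
public class CommandButton : MonoBehaviour
{
    [SerializeField] private Button button;   // assigned in the Inspector

    private void Awake()
    {
        // Run OnCommandPressed whenever the button is clicked
        button.onClick.AddListener(OnCommandPressed);
    }

    private void OnDestroy()
    {
        button.onClick.RemoveListener(OnCommandPressed);
    }

    private void OnCommandPressed()
    {
        // Placeholder: this is where a command like "sit" or "walk"
        // would be issued to the AR dog
        Debug.Log("Command button pressed");
    }
}
```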
Where I am at right now
A) I will not be Practicing on Frank Anymore:
I’ve been Frankenstein-ing Frank for a bit over the previous weeks. Frank can now Walk, Jump, Run, and Spin (aka The Frank Dance), but this will be it for Lil’Frank. I have decided to purchase an already-rigged dog from the Asset Store and use it for my project instead, as I want my users to experience a realistic-looking dog, not a cartoon one. In the end, this whole project is about helping people feel more confident around larger dogs, and although Frank is cute, he doesn’t look like a realistic large dog. Goodbye, Lil’Frank, you’ve done well :’)
What I’ve accomplished so far
I was able to animate the dog from the Unity Asset Store. However, I have no control over the dog’s behavior. I don’t want it to cycle through the animations in the order I’ve set; I want to be able to click a button to trigger each animation. So the next and final step for this project is to create buttons for my AR experience.
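My rough plan for that is to expose one public method per animation and fire an Animator trigger from it, so each UI button’s OnClick event can drive a single animation. Here is a sketch of that idea; the trigger names Walk, Sit, and Jump are assumptions and would have to match whatever trigger parameters are set up in the dog’s Animator Controller:

```csharp
using UnityEngine;

// Sketch of a controller that lets UI buttons trigger individual dog animations.
// Assumes the Animator Controller has "Walk", "Sit", and "Jump" trigger parameters
// transitioning out of an idle state.
[RequireComponent(typeof(Animator))]
public class DogAnimationController : MonoBehaviour
{
    private Animator animator;

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    // Each method below is meant to be assigned to a UI Button's OnClick event.
    public void Walk() => animator.SetTrigger("Walk");
    public void Sit()  => animator.SetTrigger("Sit");
    public void Jump() => animator.SetTrigger("Jump");
}
```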
The Prototype:
The Final AR Experience should look something like this.
How I will achieve this
YouTube videos and tutorials have been a great help throughout this whole process, so I will definitely be watching more of them.